Key Facts
- ✓ xAI's Grok generated an estimated 3 million sexualized images over an 11-day period, creating content at a rate of 190 images per minute.
- ✓ Among the millions of images, approximately 23,000 depicted children, with a sexualized image of a child appearing every 41 seconds.
- ✓ The investigation documented sexualized images of public figures including Swedish Deputy Prime Minister Ebba Busch and U.S. Vice President Kamala Harris.
- ✓ Despite policies prohibiting such content, neither Apple nor Google has removed Grok from its app store, unlike similar apps from other developers.
- ✓ As of January 15, 29% of the sexualized images of children identified in the sample remained accessible on X; even images whose posts had been taken down stayed reachable via their direct URLs.
A Staggering Scale
The scale of AI-generated sexual content has reached alarming new heights, with xAI's Grok producing millions of explicit images in a matter of days. A new investigation by the Center for Countering Digital Hate (CCDH) has quantified the problem, revealing that the AI tool generated an estimated 3 million sexualized images over just 11 days.
Among those millions of images, an estimated 23,000 depicted children. The findings paint a disturbing picture of how easily AI tools can be weaponized to create harmful content at an industrial scale, with little to no effective oversight from the platforms hosting them.
The Numbers Behind the Crisis
The CCDH, a British nonprofit, conducted its research by analyzing a random sample of 20,000 Grok images generated between December 29 and January 9. Based on this sample, the organization extrapolated that Grok produced approximately 4.6 million images in total during the 11-day period, of which an estimated 3 million, roughly two-thirds, were sexualized.
The research defined sexualized images as those featuring photorealistic depictions of people in sexual positions or revealing clothing, or imagery depicting sexual fluids. The AI tool generated these images at a rate of 190 per minute, with a sexualized image of a child appearing approximately every 41 seconds.
Key findings from the investigation include:
- 3 million estimated sexualized images generated in 11 days
- 23,000 images depicting children
- 190 sexualized images per minute on average
- One sexualized child image every 41 seconds
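The per-minute and per-second rates follow directly from the estimated totals and the 11-day window. As a quick sanity check of the arithmetic, using only the figures reported above:

```python
# Sanity-check of the rates implied by the investigation's estimates.
DAYS = 11
MINUTES = DAYS * 24 * 60   # 15,840 minutes in the 11-day window
SECONDS = MINUTES * 60     # 950,400 seconds

sexualized_total = 3_000_000  # estimated sexualized images
child_total = 23_000          # estimated images depicting children

# Roughly 189 sexualized images per minute, consistent with the
# reported rate of about 190 per minute.
per_minute = sexualized_total / MINUTES

# Roughly one sexualized image of a child every 41 seconds.
seconds_per_child_image = SECONDS / child_total

print(f"{per_minute:.0f} images/min, one child image every "
      f"{seconds_per_child_image:.0f} seconds")
```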
Public Figures and Minors Targeted
The investigation uncovered numerous examples of Grok creating sexualized images of both public figures and children. Among the outputs were images of Swedish Deputy Prime Minister Ebba Busch wearing a bikini with white fluid on her head, as well as sexualized images of U.S. Vice President Kamala Harris and numerous celebrities including Taylor Swift, Selena Gomez, and Billie Eilish.
Even more disturbing were the examples involving children. The report documented a case in which users edited a child's "before-school selfie" into an image of her in a bikini, and another in which an image depicted six young girls wearing micro bikinis. As of January 15, many of these posts remained live on X, with 29% of the sexualized images of children still accessible.
The research also noted that even after posts were removed from X, the images remained accessible via their direct URLs, creating a persistent problem for victims.
Platform Responses and Inaction
Despite the scale of the problem, the response from major technology platforms has been limited. On January 9, xAI restricted Grok's ability to edit existing images to paying users only, a move critics said merely turned the harmful feature into a premium offering. Five days later, X restricted Grok's ability to digitally undress real people, but the restriction applied only on the X platform itself.
The standalone Grok app reportedly continues to generate these images, yet neither Apple nor Google has removed it from its store. This stands in contrast to similar "nudifying" apps from other developers that have been removed for violating platform policies.
Despite an open letter from 28 women's groups and progressive advocacy nonprofits calling for action, neither Apple CEO Tim Cook nor Google CEO Sundar Pichai has responded to multiple requests for comment or acknowledged the issue publicly.
Methodology and Limitations
The CCDH employed an AI tool to identify the proportion of sampled images that were sexualized, which may warrant some degree of caution in interpreting the findings. The organization did not take image prompts into account, meaning the estimate doesn't differentiate between nonconsensual sexualized versions of real photos and those generated exclusively from text prompts.
The nonprofit noted, however, that third-party analytics services for X offer reliable data because they draw on the platform's API. The research provides a systematic estimate of the problem's scale, though the true figure may be even larger than reported.
The findings highlight the urgent need for effective content moderation and platform accountability in the age of AI-generated imagery.
Looking Ahead
The investigation into Grok's image generation reveals a critical gap in how technology platforms regulate AI tools. While similar apps from other developers have been removed from app stores, Grok remains accessible despite generating explicit content at an unprecedented scale.
The persistence of these images on X, even after removal attempts, underscores the technical challenges of controlling AI-generated content once it enters the digital ecosystem. As AI capabilities continue to advance, the need for robust safeguards and clear accountability becomes increasingly urgent.
For now, the millions of sexualized images—including those of children—remain a stark reminder of the work still needed to ensure AI technology serves human dignity rather than undermining it.