X Monetizes Grok Image Abuse Despite UN Concerns

January 9, 2026 • 6 min read • 1,121 words

Key Facts

  • ✓ X now allows only 'verified' users to create images with Grok.
  • ✓ Experts say the change represents the 'monetization of abuse'.
  • ✓ Anyone can still generate images on Grok's standalone app and website.

In This Article

  1. Quick Summary
  2. The New Paywall for Image Creation
  3. Loopholes and Accessibility
  4. International Concerns and the UN
  5. The Future of AI Safety ⚠️

Quick Summary

X has recently altered access to its Grok AI image generation tool. The platform now restricts the creation of images to users who have paid for verification.

This policy shift has drawn sharp criticism from technology experts. They argue that the company is effectively putting a price tag on the ability to generate harmful content, a practice critics have dubbed the 'monetization of abuse'.

Despite these restrictions on the main X platform, the ability to generate images remains widely available. The standalone Grok application and website do not appear to enforce the same verification requirements.

The United Nations has previously issued warnings regarding the dangers of unregulated AI image generation. The focus of these warnings has been the creation of non-consensual intimate imagery.

The New Paywall for Image Creation 🤖

The recent change on the X platform specifically targets the Grok AI image generator. Previously, the tool was available to a broader range of users, which led to significant misuse.

By limiting image generation to 'verified' users, X has effectively implemented a paywall. Verification on the platform typically requires a subscription fee.

Experts analyzing the situation describe this strategy as a direct monetization of abuse. This implies that the company is capitalizing on a feature that has been historically used to violate personal boundaries.

The core issue remains the generation of non-consensual intimate imagery. This type of content is a major concern for privacy advocates and international bodies like the UN.

"monetization of abuse"

β€” Experts

Loopholes and Accessibility 🌐

While the main X platform has tightened controls, the Grok ecosystem remains porous. The restrictions do not appear to be uniform across all of the company's offerings.

Specifically, the standalone Grok app and its associated website still allow users to generate images. These platforms reportedly do not require the same 'verified' status that the main X app now demands.

This discrepancy creates a significant loophole. Individuals seeking to create harmful images can simply bypass the X platform's paywall by using these alternative access points.

The persistence of this accessibility undermines the stated goal of preventing abuse. It suggests that the change amounts to a financial barrier rather than a comprehensive safety overhaul.

International Concerns and the UN 🌍

The rise of AI-generated non-consensual imagery has become a global priority. The United Nations has been vocal about the need for regulation and ethical standards in AI development.

Reports and statements from the UN highlight the devastating impact of deepfake technology on individuals. The unauthorized use of a person's likeness for explicit content is viewed as a severe violation of human rights.

The actions taken by X are being scrutinized in this international context. Critics argue that simply charging for access does not address the fundamental ethical problems posed by the technology.

Without robust content moderation and strict enforcement across all platforms, the potential for harm remains high. The international community continues to search for effective solutions.

The Future of AI Safety ⚠️

The controversy surrounding Grok highlights a broader challenge in the tech industry. Balancing innovation with user safety is a complex and ongoing struggle.

As AI tools become more powerful and accessible, the potential for misuse grows. Companies are under increasing pressure to implement effective safeguards before releasing new features.

The debate over whether to restrict access or to improve detection of harmful content continues. The current situation with X suggests that financial barriers are not a sufficient solution to ethical dilemmas.

Future developments will likely involve more sophisticated technical guardrails and regulatory oversight. The goal is to prevent the monetization of abuse while allowing for legitimate use of AI technology.

Original Source

Wired

Originally published: January 9, 2026 at 03:19 PM
