Key Facts
- ✓ Tech entrepreneur Siqi Chen released an open-source plugin called 'Humanizer' for Anthropic's Claude Code AI assistant on Saturday.
- ✓ The plugin instructs the AI model to avoid 24 specific language and formatting patterns identified as common giveaways of AI-generated text.
- ✓ The source for these patterns is a guide created by WikiProject AI Cleanup, a volunteer group of Wikipedia editors founded by French editor Ilyas Lebleu.
- ✓ Since its inception in late 2023, the project has tagged over 500 Wikipedia articles for review due to suspected AI generation.
- ✓ The Humanizer plugin was published on GitHub and quickly gained popularity, accumulating over 1,600 stars within days of its release.
- ✓ WikiProject AI Cleanup published its formal list of AI writing patterns in August 2025, providing a structured resource for detection.
Quick Summary
A new tool has emerged that directly uses a community-curated guide to AI detection as a blueprint for evasion. On Saturday, tech entrepreneur Siqi Chen released an open-source plugin for Anthropic's Claude Code AI assistant. The plugin, named "Humanizer," is designed to stop the AI model from writing like an AI model.
The tool is not based on proprietary research but on a publicly available list of linguistic tells. It feeds Claude a specific set of 24 language and formatting patterns that Wikipedia editors have identified as chatbot giveaways. The development marks a notable turn: a guide created to detect AI-generated content is being repurposed to help AI cover its tracks.
A Plugin with a Purpose
The Humanizer plugin operates on a simple but effective premise. It provides the AI assistant with a direct list of behaviors to avoid during the writing process. The plugin's core is a curated list of 24 distinct patterns that have been observed in AI-generated text. By instructing the model to steer clear of these specific markers, the plugin aims to produce output that is less recognizable as machine-generated.
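The plugin's actual rule file is not reproduced here, but the underlying technique is straightforward to sketch: prepend the avoid-list to the model's system prompt so every generation is steered away from the cataloged tells. The following is a minimal illustration using Anthropic's Python SDK; the sample rules and the model name are assumptions, not the plugin's actual contents.

```python
import anthropic

# Illustrative stand-ins, not the plugin's actual 24 patterns.
AVOID_RULES = [
    "Do not open with throat-clearing like 'In today's fast-paced world'.",
    "Do not use em-dashes as all-purpose punctuation.",
    "Do not end with a summary paragraph that restates every point.",
]

# Fold the avoid-list into the system prompt that governs the model's output.
SYSTEM_PROMPT = (
    "You are a writing assistant. Avoid the following patterns, "
    "which readers associate with machine-generated text:\n"
    + "\n".join(f"- {rule}" for rule in AVOID_RULES)
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model name
    max_tokens=1024,
    system=SYSTEM_PROMPT,
    messages=[{"role": "user", "content": "Draft a short product announcement."}],
)
print(response.content[0].text)
```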
Chen published the plugin on GitHub, where it quickly gained traction within the developer community. As of Monday, the repository had accumulated over 1,600 stars, indicating significant interest in a tool that can refine AI-generated text. Chen highlighted the utility of the source material in a post on X.
"It's really handy that Wikipedia went and collated a detailed list of 'signs of AI writing.'"
He added, "So much so that you can just tell your LLM to ... not do that." The plugin's success demonstrates a practical application for the extensive work done by online communities to identify and catalog the nuances of artificial text.
"It's really handy that Wikipedia went and collated a detailed list of 'signs of AI writing.' So much so that you can just tell your LLM to ... not do that."
— Siqi Chen, Tech Entrepreneur
The Source of the Tells
The foundation for the Humanizer plugin is a comprehensive guide developed by WikiProject AI Cleanup, a dedicated group of Wikipedia editors. This volunteer team has been actively hunting for AI-generated articles on the online encyclopedia since late 2023. Their work involves identifying, tagging, and reviewing content that appears to be produced by language models rather than human writers.
The project was founded by French Wikipedia editor Ilyas Lebleu. Through their collective efforts, the volunteers have tagged over 500 articles for review. In August 2025, the group formalized their observations by publishing a detailed list of the patterns they consistently encountered. The list serves as a practical manual for spotting AI writing, covering a range of linguistic and formatting quirks, including:
- Repetitive sentence structures
- Overly formal or generic phrasing
- Characteristic formatting habits
- Unusual word choice patterns
This meticulous cataloging effort transformed scattered observations into a structured guide, which has now become the core instruction set for an AI evasion tool.
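To see how a catalog like this becomes actionable, consider a deliberately naive pattern flagger. The phrases below are illustrative examples of the kinds of tells editors describe, not entries copied from the project's actual guide, which relies on far more context than simple phrase matching.

```python
import re

# Illustrative tells only; the real guide is longer and more nuanced.
TELL_PATTERNS = {
    "stock phrasing": r"stands as a testament|rich tapestry",
    "hedged filler": r"it is important to note|it is worth noting",
    "summary boilerplate": r"\bin (conclusion|summary),",
}

def flag_tells(text: str) -> list[tuple[str, str]]:
    """Return (category, matched phrase) pairs found in the text."""
    hits = []
    for category, pattern in TELL_PATTERNS.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            hits.append((category, match.group(0)))
    return hits

sample = "In conclusion, the festival stands as a testament to the town's rich tapestry."
for category, phrase in flag_tells(sample):
    print(f"{category}: {phrase!r}")
```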
An Arms Race in Text
The release of the Humanizer plugin illustrates a new phase in the ongoing dynamic between AI detection and AI generation. For years, researchers and online communities have worked to build tools and guides to identify machine-generated content. The work of WikiProject AI Cleanup represents a significant, crowdsourced effort in this detection space.
Now, that same work is being leveraged to improve the very technology it sought to identify. The plugin does not invent new evasion techniques; instead, it systematically applies the knowledge gained from the detection side. This creates a feedback loop where the act of defining what makes text "AI-like" directly informs how to make it appear more human.
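The dual-use nature of the list is easy to picture in code: the same catalog of tells can power a detector or be inverted into generation instructions. A schematic sketch, with hypothetical names, makes the feedback loop concrete.

```python
# Schematic only: one catalog of tells, two opposite uses.
TELLS = ["stands as a testament", "rich tapestry", "it is important to note"]

def detect(text: str) -> bool:
    """Detection side: flag text containing any cataloged tell."""
    lowered = text.lower()
    return any(tell in lowered for tell in TELLS)

def evasion_instructions() -> str:
    """Evasion side: invert the same catalog into writing guidance."""
    return "Never use these phrases: " + "; ".join(TELLS)
```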
The dynamic suggests that as detection methods become more sophisticated, so too will the methods for circumvention. The community-driven list of tells is no longer just a tool for editors—it has become a training manual for AI models aiming to blend in. The line between detection and deception continues to blur.
Key Takeaways
This development underscores the evolving relationship between human curators and artificial intelligence. The collaborative effort of Wikipedia editors has produced a resource valuable enough to be repurposed for a completely different, and somewhat oppositional, goal. It highlights the power of open-source knowledge and its potential applications.
The story is not just about a new plugin but about the broader implications of AI literacy. As AI-generated text becomes more prevalent, the ability to both detect and refine it will be increasingly important. The Humanizer plugin is a tangible example of this dual-use reality, where the same information can be used to either flag content or polish it.
Ultimately, the plugin's rapid adoption signals a clear demand for tools that can smooth the edges of AI-generated text. It represents a practical step toward a future where human and machine writing styles may become increasingly difficult to distinguish, guided by the very lists created to tell them apart.