In a major policy update, Wikipedia has introduced stricter guidelines restricting the use of artificial intelligence (AI) in article creation and editing. The move comes amid growing concerns about the trustworthiness and policy compliance of AI-generated content.
According to recent reports, the platform has revised its rules to restrict the use of large language models (LLMs) for drafting and rewriting articles. The decision follows repeated observations that such content often fails to meet Wikipedia's established editorial standards.
“Text generated by large language models (LLMs) often violates some of Wikipedia’s core content policies,” the platform said in updated guidance. Those policies emphasize verifiability, neutrality, and reliance on authoritative sources, areas where AI-generated text frequently falls short.
However, Wikipedia has stopped short of banning AI tools entirely. Instead, it has carved out limited exceptions that allow responsible use of the technology. Editors may use AI for basic copyediting of their own writing, provided the tool introduces no new information, and any changes must undergo careful human review before publication.
The platform urges caution when using these tools and highlights the risks in their output. “Caution should be taken, as LLMs may change the meaning of the text beyond the user’s request and in ways that are not supported by the cited sources,” Wikipedia warned.
In addition to copyediting, AI tools may be used to translate articles from other language editions of Wikipedia into English. This, however, comes with strict conditions: the editor must know the source language well enough to verify the translation's accuracy and ensure it faithfully reflects the original text.
Interestingly, Wikipedia also acknowledges the difficulty of identifying AI-generated content. It noted that some human editors naturally write in a style that resembles machine-generated text, and therefore emphasized that stylistic similarity alone should not be grounds for penalties.
“It’s best to consider the text’s compliance with core content policies and recent edits by the editor in question,” it added, underscoring the importance of context and editorial judgment.
This policy update follows months of internal discussion among Wikipedia contributors about AI's growing influence on content creation. As an initial step to curb abuse, the platform introduced a provision to “quickly delete” articles suspected of being AI-generated, many of which were poorly written.
In addition, volunteer editors have launched initiatives such as WikiProject AI Cleanup, which aims to identify and either improve or remove AI-generated content that falls short of quality standards.
Overall, the updated guidelines reflect Wikipedia’s attempt to balance technological advances with a long-standing commitment to accuracy, transparency, and human-driven knowledge creation.
