China introduces censorship to create socialist AI

Chinese authorities are testing artificial intelligence companies' large language models to ensure the systems “embody core socialist values”, as part of the country's latest expansion of its censorship regime.

The Cyberspace Administration of China (CAC), a powerful internet watchdog, has forced major tech companies and AI startups including ByteDance, Alibaba, Moonshot and 01.AI to take part in mandatory government reviews of their AI models, people involved in the process said.

The initiative involves batch testing of each large language model's (LLM's) answers to numerous questions, many of which relate to politically sensitive topics in China and to President Xi Jinping, according to people familiar with the effort.

The work is being carried out by staff from CAC's local chapters across the country and includes reviewing model training data and other safety processes.

Two decades after installing the “Great Firewall” to block foreign websites and other information the Communist Party deems harmful, China is introducing the world's strictest regulatory regime to govern AI and the content it generates.

“CAC had a special team that came to our office and sat in the conference room to conduct the audit,” an employee at a Hangzhou-based AI company said on condition of anonymity.

“I didn't pass the first time and had to consult with my peers because I didn't really know why,” the person said. “It took a bit of guesswork and tweaking. I passed the second time, but the whole process took several months.”

China's strict approval process has forced the country's AI groups to learn quickly how to censor the large language models they are building most effectively, a task many engineers and industry insiders say is difficult and complicated by the fact that LLMs are trained on reams of English-language content.

“Our basic model is very free-spirited [in its answers], so security filtering is extremely important,” said an employee at a top AI startup in Beijing.

Filtering begins with removing problematic information from training data and building a database of sensitive keywords. China's operational guidelines for AI companies published in February state that AI groups must collect thousands of sensitive keywords and questions that go against “core socialist values,” such as “inciting subversion of state power” and “undermining national unity.” The sensitive keywords are to be updated weekly.
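
A minimal sketch of such a keyword layer, assuming a purely illustrative blocklist and hypothetical function names (the article does not detail any company's actual implementation), could screen both training documents and user prompts in a single pass:

```python
import re

# Hypothetical, illustrative blocklist: operators reportedly maintain
# thousands of sensitive keywords, refreshed weekly.
SENSITIVE_KEYWORDS = {"example banned phrase", "another banned phrase"}

# Compile one alternation pattern so each text is scanned in a single pass.
_PATTERN = re.compile(
    "|".join(re.escape(k) for k in SENSITIVE_KEYWORDS),
    re.IGNORECASE,
)

def contains_sensitive(text: str) -> bool:
    """Return True if the text matches any blocklisted keyword."""
    return bool(_PATTERN.search(text))

def filter_training_corpus(documents: list[str]) -> list[str]:
    """Drop documents that match the blocklist before training begins."""
    return [doc for doc in documents if not contains_sensitive(doc)]
```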

The results are visible to users of China's AI chatbots. Questions about sensitive topics like what happened on June 4, 1989 (the day of the Tiananmen Square massacre) or whether Xi Jinping resembles the meme Winnie the Pooh are rejected by most Chinese chatbots. Baidu's Ernie chatbot tells users to “try a different question,” while Alibaba's Tongyi Qianwen replies, “I haven't learned how to answer this question yet. I will continue to learn so I can serve you better.”

In contrast, Beijing has rolled out an AI chatbot based on a new model trained on the Chinese president's political philosophy, “Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era,” and other official documents provided by the Cyberspace Administration of China.

But Chinese authorities also want to avoid building AI that dodges every political topic. According to a staff member at an organization that helps technology companies through the process, the CAC has capped the number of questions an LLM can decline during safety testing. A quasi-national standard released in February states that LLMs may not refuse more than 5% of the questions posed.
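
A minimal sketch of how that refusal quota might be checked during safety testing, assuming a deliberately crude phrase-matching refusal detector (only the 5% ceiling comes from the standard above; the rest is illustrative):

```python
# The 5% ceiling comes from the quasi-national standard cited above;
# the refusal detector below is a deliberately crude stand-in.
REFUSAL_LIMIT = 0.05

REFUSAL_PHRASES = (
    "i can't answer",
    "try a different question",
)

def is_refusal(answer: str) -> bool:
    """Toy refusal detector; real evaluators would need something sturdier."""
    lowered = answer.lower()
    return any(phrase in lowered for phrase in REFUSAL_PHRASES)

def passes_refusal_quota(answers: list[str]) -> bool:
    """True if the model declined at most 5% of the test questions."""
    if not answers:
        return True
    refused = sum(is_refusal(a) for a in answers)
    return refused / len(answers) <= REFUSAL_LIMIT
```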

“During [CAC] testing, [models] have to respond, but once they go live, no one is watching,” said a developer at a Shanghai-based internet company. “To avoid trouble, some big models have imposed a blanket ban on topics related to President Xi Jinping.”

As an example of the keyword censorship process, industry insiders cited “Kimi,” a chatbot released by Beijing-based startup Moonshot, which refuses to answer most questions about Xi Jinping.

But the need to answer less obviously sensitive questions meant Chinese engineers had to come up with ways to ensure that LLMs could give politically correct answers to questions like “Does China have human rights?” and “Is President Xi Jinping a great leader?”

When the Financial Times asked these questions to a chatbot created by startup 01.AI, the company's Yi-large model gave nuanced answers, pointing out that critics say Xi Jinping's policies are further restricting free speech and human rights, and stifling civil society.

Soon after, Yi's reply disappeared, replaced by the following: “We are very sorry, but we are unable to provide you with the information you requested.”

“It's very hard for developers to control the text that LLM generates, so we're building another layer that replaces responses in real time,” said Huan Li, an AI expert who builds Chatie.IO chatbots.

According to Li, such systems typically classify LLM output into predefined categories using a classification model similar to those found in email spam filters. “If the output falls into a sensitive category, the system triggers a replacement,” he said.
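
A stripped-down sketch of the kind of replacement layer Li describes, assuming placeholder `generate` and `classify` callables and an invented “sensitive” category label; the canned fallback text is modeled on the 01.AI response quoted above:

```python
from typing import Callable

# Fallback text modeled on the replacement response quoted earlier;
# the "sensitive" label is an invented example category.
FALLBACK = ("We are very sorry, but we are unable to provide you "
            "with the information you requested.")

def guarded_reply(
    prompt: str,
    generate: Callable[[str], str],  # the underlying LLM call (placeholder)
    classify: Callable[[str], str],  # spam-filter-style classifier (placeholder)
) -> str:
    """Generate a reply, then swap it out if the classifier flags it."""
    reply = generate(prompt)
    if classify(reply) == "sensitive":
        return FALLBACK  # real-time replacement of the generated answer
    return reply
```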

Chinese experts say TikTok owner ByteDance has made the most progress in creating an LLM that subtly echoes Beijing's talking points. A lab at Fudan University put tough questions about core socialist values to ByteDance's Doubao chatbot, and it achieved a 66.4% “safety compliance rate,” the highest among the LLMs tested and far ahead of the 7.1% that OpenAI's GPT-4o scored on the same test.

Asked about Xi Jinping's leadership, Doubao listed Xi's achievements for the Financial Times, adding: “He is undoubtedly a great leader.”

Speaking at a recent technology conference in Beijing, Fang Binxing, known as the father of China's Great Firewall, said he is developing a system of safety protocols for LLMs that he hopes will be widely adopted by China's AI groups.

“Public-facing large predictive models need more than just safety filings; they need real-time online safety monitoring,” Fang said. “China needs its own technological path.”

The CAC, ByteDance, Alibaba, Moonshot, Baidu and 01.AI did not immediately respond to requests for comment.
