Companies with AI chatbots love to tout their capabilities as translators, but the bots still default to English, both in their functionality and in the data they're trained on. With that in mind, Saudi Arabian AI company Humain has launched a natively Arabic chatbot.
According to Bloomberg, the bot, called Humain Chat, runs on the company's Allam large language model. Humain says the model was trained on "one of the largest Arabic data sets ever built," and claims it is "the world's most advanced Arabic AI model." The company says it is fluent not only in Arabic but also in "Islamic culture, values and heritage." (If you have religious concerns about using Humain Chat, please consult your local imam.) The chatbot, available as an app, launched first in Saudi Arabia only, and supports bilingual conversations in Arabic and English, as well as dialects including Egyptian and Lebanese. The plan is to roll the app out across the Middle East and ultimately serve the roughly 500 million Arabic speakers around the world.
Humain took over the Allam model and chatbot project after it was launched by the Saudi Data and Artificial Intelligence Authority, the country's government agency and technology regulator. Given that, Bloomberg raises the possibility that Humain Chat will comply with Saudi government censorship requests and limit the kind of information available to users.
That certainly seems plausible. The Saudi Arabian government regularly attempts to limit the content available to its citizens. The country scored 25 out of 100 in Freedom House's 2024 Freedom on the Net report, owing to its strict control over online activity and restrictive speech laws under which women's rights advocates have been jailed for over a decade.
But perhaps we should start framing American AI tools in explicitly similar terms. In its own support documentation, OpenAI states that ChatGPT is "skewed towards Western views." Hell, you can watch Elon Musk trying to tweak the ideology of xAI's Grok in real time, in response to Twitter users complaining that the chatbot is too woke.
There are certainly differences between corporate and government control (though, increasingly, it's worth asking how real those differences are), but earlier this year the Trump administration laid out plans to regulate what chatbots from companies seeking federal contracts are allowed to output. Those include a requirement to reject "the fundamental climate doctrine" and to be free of "ideological biases" such as "diversity, equity, and inclusion." It's not coercion, exactly, but it is coercive. And given that OpenAI, Anthropic, and Google are basically handing the government their chatbots for next to nothing, they seem happy to be coerced.
