What does Trump's “woke AI” executive order mean for technology? : NPR



President Trump answers questions at the White House on July 11, 2025.


Win McNamee/Getty Images



The Trump administration on Wednesday intensified its attacks on “woke” artificial intelligence systems, a move driven by the belief among conservatives that AI models have drifted to the left.

“For all Americans to adopt and realize the benefits of AI, these systems must be built to reflect truth and objectivity, not top-down ideological bias,” Michael Kratsios, the head of the White House Office of Science and Technology Policy, told reporters on a call Wednesday.

The Trump administration said more action on the issue of “woke” AI was expected, including an executive order later Wednesday.

The White House plans to amend Biden-era federal guidelines on AI safety to remove references to diversity, equity and inclusion, climate change and misinformation, according to the Trump administration's AI Action Plan.

And soon, the federal government will “work only with tech companies that ensure that free speech and expression are allowed to flourish,” Kratsios said on the briefing call, echoing language from the administration's policy documents directing technology companies to strip liberal bias from their AI models.

It is the latest example of the Trump administration turning the screws on DEI initiatives and railing against popular AI chatbots. Trump supporters have increasingly criticized the technology, saying its safety guardrails censor conservative views.

“The AI industry is deeply concerned about this,” said Neil Sahota, an engineer who advises the United Nations on artificial intelligence. “They are already in a global AI arms race. Now they may be deemed ‘woke,’ so they are being asked to take some very vague steps to roll back their protections,” he said. “It's making tech companies crazy.”

One way AI companies might respond, Sahota said, is to release versions of their chatbots with fewer safeguards in order to win the federal government's business.

“If you're a tech company with a lot of government contracts, this order is a troublesome hurdle,” Sahota said.

While some studies have found that popular chatbots' answers to certain policy questions lean left of center, experts say the results often depend on how the questions are framed and what material the systems are summarizing.

AI scholars say there is no evidence that the major chatbots were intentionally designed to favor liberal answers or censor conservative views.

“What often happens with these criticisms is that the chatbot doesn't align with someone's individual perspective, so they want to hold the model accountable,” said Chinasa Okolo, a fellow at the Center for Technology Innovation at the Brookings Institution, a think tank in Washington, D.C.

“Woke AI”: From rallying cry to government policy

Turning “woke AI” into a rallying cry echoes earlier conservative campaigns against Silicon Valley, built on the belief that social media platforms' content rules were designed to suppress right-wing viewpoints.

Last year, howls that chatbots had gone “woke” erupted when Google's Gemini image generator depicted Black and Asian men in response to prompts for images of the U.S. Founding Fathers. Google executives apologized, explaining that Gemini had overcorrected for diversity, including in “cases that should clearly not show a range.”

Crafting policy to counter those episodes has been a focus of White House AI czar David Sacks and Sriram Krishnan, a senior policy adviser in the Trump administration.

It is a notable reversal from the Biden administration's approach to the technology, which called for ways to keep AI from perpetuating bias and for guardrails on systems that could violate people's civil rights.

Now, new energy is being poured into making AI part of a larger culture war.

Conservative activists seized on the Google Gemini snafu, but right-wing commentators had little to say when Elon Musk's Grok chatbot went off the rails earlier this month and launched into antisemitic tirades.

A few days later, Musk's xAI, the maker of Grok, was awarded a Department of Defense contract worth up to $200 million, along with Google, Anthropic and OpenAI.

“Musk's original vision for xAI was a kind of ‘anti-woke AI,’ but disabling safeguards while exercising poor control over data quality gets you something like the recent Nazi episode,” Linger said.

xAI blamed outdated software code for the meltdown. In particular, instructions telling Grok to be “maximally based,” slang for holding strong opinions even when they are provocative, were reinforced by other instructions given to the chatbot. The company said the issue has been fixed.

Most popular chatbots have basic guardrails that protect against things like slurs, harassment and hate speech, and those guardrails could come under new scrutiny from the Trump administration.

“Most of the cases conservatives cite as evidence that AI is ‘woke’ involve an LLM refusing to confirm conspiracy theories or racist claims,” Linger said.

For Okolo at the Brookings Institution, the fight over whether chatbots perpetuate left-wing or right-wing views obscures another fight: over the acceptance of provable facts.

“Unfortunately, some people believe that basic facts backed by scientific evidence are left-leaning or ‘woke,’” she said.

As AI systems are reworked to comply with the White House executive order, Sahota, the engineer, is concerned about where the lines will be drawn, and how drawing them could set off all sorts of political and cultural fires.

“What counts as politically charged? In this era, if someone says something about the importance of measles vaccinations, is that now a politically charged argument?” he said. “But when future federal contracts potentially worth hundreds of billions of dollars are on the line, a company may feel it has to do something, because it has serious revenue at risk.”


