What Trump's AI action plan gets right – and where it's lacking

AI News


Usama Fayyad sees strong potential in the plan's focus on upskilling and open-source tools, but he points to areas that could benefit from clearer guidance and broader collaboration.

President Donald Trump announced his AI Action Plan earlier this week. (AP Photo/Julia Demaree Nikhinson)

President Donald Trump announced his long-awaited AI Action Plan this week, outlining a range of regulatory changes aimed at accelerating the development of American artificial intelligence.

The 23-page plan is organized around three broad pillars: innovation, infrastructure, and international diplomacy and security. By removing what the Trump administration calls unnecessary regulatory barriers that hinder the private sector, it aims to ensure that the United States achieves “global dominance in artificial intelligence.”

Usama Fayyad, executive director for AI and data strategy at Northeastern University, says there is a lot to like in the plan, particularly its focus on worker upskilling and its support for open-source AI models.

“We're talking about educating AI users, and that includes small businesses,” he says. “That part is good. It's the whole idea that we have to be deliberate about how AI is applied, and that we have to educate our population to figure out how to use it faster and faster. I also like the fact that, in addition to accelerating and changing business, the plan thinks about how AI actually accelerates and changes science.”

Fayyad says many of the plan's recommendations are “sensible.” Still, he says several areas could be refined to improve their impact. For example, he argues that framing AI development as a global competition could be counterproductive.

“Using language like ‘we dominate’ will alienate our allies and perhaps even galvanize our adversaries,” he says. “It gives all the ‘bad guys,’ or anyone who is not currently in the U.S. circle of approval, a reason to point to it and say, ‘Look, things are heading in the wrong direction.’”

He also notes that the plan recommends removing references to misinformation, climate change, and diversity, equity and inclusion from the National Institute of Standards and Technology's AI Risk Management Framework. These recommendations are in line with Trump's recent executive orders regarding the government's use of AI.

Usama Fayyad, executive director for AI and data strategy at Northeastern University, shares his thoughts on Trump's AI plan. Photo: Matthew Modoono/Northeastern University

Regarding misinformation, Fayyad says it remains an important issue.

“We need to get better at filtering it, because it's a priority for humanity regardless,” he says. “A bad actor producing many kinds of convincing misinformation is one of AI's biggest threats.”

He says the same applies to climate change.

“I think AI can help combat climate change and mitigate some of its impacts through decarbonization or reduced carbon use,” he says. “These are problems that involve a lot of data, many measurements from many sensors. AI technology and AI algorithms are very good at dealing with these very large data sets.”

Fayyad says he has fewer concerns about the removal of diversity, equity and inclusion, which he believes can be addressed through social and legal processes.

“However, no legislature or lawmaker can pass a law that says, ‘Stop temperatures from rising on Earth and start a reversal,’” he says. “You can mandate all the measurements you want.”

He also appreciates the goal of building “neutral and fair” AI models, but questions whether those standards should be set at the presidential level.

Fayyad points out that the plan's focus on large data centers could lead to increased energy demand.

“This is an area where the U.S. is missing the mark,” he says. “There were big lessons to learn from the DeepSeek episode: what many small models can do is actually far cheaper than these very large frontier and foundation models.”

As with previous efforts, implementation will be what matters, Fayyad says. He said the same thing when former President Joe Biden issued an executive order on AI safety and security in 2023, which Trump later rescinded.

“The devil will be in the details,” he says, referring to whether the plan's recommendations will be funded and implemented.

“It's a comprehensive action plan, and parts of it are very encouraging, but there are a few missteps,” Fayyad says.
