AI Action Plan highlights innovation and education, with room for refinement, Northeastern expert says

Usama Fayyad sees strong potential in the plan’s focus on upskilling and open-source tools, while noting areas that could benefit from clearer guidance and broader collaboration.

President Donald Trump unveiled his AI Action Plan earlier this week. (AP Photo/Julia Demaree Nikhinson)

President Donald Trump unveiled his long-anticipated AI Action Plan this week, outlining a range of regulatory changes aimed at accelerating artificial intelligence development in the United States.

The 23-page plan takes a three-pronged approach focused on innovation, infrastructure, and international deployment and security. It emphasizes the need for the U.S. to “achieve global dominance in artificial intelligence” by removing what the Trump administration calls “unnecessary regulatory barriers that hinder the private sector.”

Usama Fayyad, Northeastern University’s senior vice provost for AI and data strategy, says there’s a lot to like about the plan — particularly its emphasis on worker upskilling and its support for open-source AI models.

“It talks about educating users of AI, and that includes small businesses,” he says. “That part is good — the whole idea that we must pay attention to how AI is applied, and we must educate our population to figure out how to use it faster, better. I also like the fact that they also thought about how AI could actually accelerate and change science, in addition to accelerating and changing business.”

Fayyad says many of the plan’s recommendations are “reasonable.” Still, he says there are opportunities to refine some areas to improve impact. For example, he says framing AI development as a global competition could be counterproductive.

“Using language like ‘We will dominate’ I believe will disorient our allies and probably polarize our enemies further. It gives all ‘the bad guys’ or people who are not currently in the approval circle of the U.S. reason to point to it and say, ‘See, things are heading in the wrong direction.’”

He also notes the plan recommends removing references to misinformation, climate change and diversity, equity and inclusion from the National Institute of Standards and Technology’s AI Risk Management Framework. These recommendations align with Trump’s recent executive orders on governmental use of AI.

Usama Fayyad, Northeastern University’s senior vice provost for AI and data strategy, shares his thoughts on Trump’s AI plan. Photo by Matthew Modoono/Northeastern University

On misinformation, Fayyad says it remains a critical issue.

“We need to get better at filtering it out because it’s a priority for humanity anyway,” he says. “One of the biggest threats of AI — bad actors being able to generate lots of kinds of corroborated misinformation.”

He says the same applies to climate change.

“I think AI can do a lot to help us counter climate change, to help us counteract some of its effects on decarbonization or carbon use reduction,” he says. “Those are problems that involve a lot of data, lots of measurements from a lot of sensors. AI technology and AI algorithms are really good at helping humans cope with these very large data sets.”

Fayyad says he has fewer concerns about diversity, equity and inclusion, which he believes can be addressed through social and legal processes.

“But Congress or any legislator can’t ever issue a law that says, ‘Well the rise in temperatures on the planet shall cease and start reversing.’ You can say that all you want. It’s not going to happen. You have to measure it so you can figure out how to manage it.”

He also appreciates the goal of building “neutral and unbiased” AI models, though he questions whether those standards should be set at the presidential level.

Fayyad notes the plan’s focus on large-scale data centers could lead to increased energy demands.

“This is the area where the U.S. is missing out,” he says. “There was a big lesson to be learned from the DeepSeek episode, and the fact that many of the small models can actually do more and are much cheaper than these very big frontier or foundation models. This trend of going bigger, bigger, bigger with more energy consumption isn’t showing a corresponding benefit.”

As with previous efforts, Fayyad says, implementation will be key. When former President Joe Biden issued an executive order on AI safety and security in 2023 — which Trump later rescinded — Fayyad made a similar point.

“The devil will be in the details,” he says, referring to whether the new plan’s recommendations will be funded and carried out.

“It’s a comprehensive practical plan that has several very encouraging things, but a few misses,” Fayyad says.