Half of U.S. adults report using at least one “major AI tool,” but public attitudes about artificial intelligence regulation remain divided nationwide, according to a new survey.
The 50-state report, published as part of the multiuniversity Civic Health and Institutions Project (CHIP50), found that views about how and whether to rein in AI tools don’t follow typical red-blue state divides. Missouri and Washington, for example, expressed the strongest views about a lack of regulatory oversight, while New York and Tennessee were most worried about government overreach.
But concerns about workplace disruption are nearly universal. Majorities in all 50 states expect AI to impact their jobs within five years, especially in tech-heavy and Sun Belt states such as California, Massachusetts, Texas and Georgia. Meanwhile, regions like the Corn Belt and Rust Belt anticipate less immediate disruption.
John Wihbey, an associate professor of media innovation and technology at Northeastern University and co-author of the study, says the findings provide some insight into the public’s view of a technology that has already become part of many Americans’ daily lives.
“At a time when state-level regulation for AI and public opinion is central to the national debate, this is perhaps the first look at how the states compare on usage, preferences and regulation,” Wihbey says.
Wihbey; Ata Uslu, a network science doctoral student; David Lazer, a university distinguished professor of political science and computer sciences; Mauricio Santillana, a professor of physics and electrical and computer engineering; and Hong Qu, a network science doctoral student, all collaborated on the study.
The researchers used data from a nationally representative online survey of nearly 21,000 respondents, collected from April 10 to June 5. The study homed in on how the general public is “encountering AI in daily life,” as well as on attitudes toward the emerging technologies.
“It really stood out to us that, in every single state, people expect AI to impact their jobs,” Uslu says. “And that expectation is showing up in state legislatures too. The federal government can and should treat these state-level bills and citizens’ perceptions as a kind of policy lab: a way to leverage American federalism to ensure safe deployment of AI while also staying globally competitive in the AI race.”
The findings also point to deep demographic gaps in AI use. Adoption is increasingly led by younger, higher-income adults with college educations, while older, rural and lower-income adults lag behind.
The study found that among AI tools, ChatGPT stands out, with 65% of Americans recognizing the name and 37% reporting they’ve used it. Gemini was next at 26%, then Microsoft Copilot at 18%. Notably, actual usage rates lag far behind name recognition: 65% of respondents recognize ChatGPT, for example, but only a little over half that share report using it.
But frequent everyday use remains concentrated among a small slice of users, and awareness of AI consistently outpaces actual use across all platforms, the study says.
The question of how to regulate AI is ultimately a federalism policy debate, Wihbey says: a struggle playing out in real time over who gets to shape and control the technology. He points out that the Trump administration has pushed for a top-down regulatory approach, which he notes is “a little out of step” with conservatives’ broader skepticism of federal regulatory power.
“The White House would say the big questions are unbridled innovation, which would allow for AI dominance over adversaries to ensure national security and prosperity, and this notion of ‘woke’ AI,” Wihbey says.
A proposed moratorium on states’ ability to regulate AI was included as a provision of President Donald Trump’s sweeping Big Beautiful Bill before the Senate voted the measure down 99-1. The administration also recently unveiled an AI Action Plan, which identifies over 60 federal policy actions designed to bolster innovation in AI tech.
In the wake of the federal moratorium’s defeat, state lawmakers have begun proposing their own frameworks. States like California and Michigan have introduced bills that would increase transparency requirements, strengthen whistleblower protections and require third-party auditing.
Wihbey notes there’ve been hundreds of bills under consideration across the country.
“Many of these bills want to set up a commission to study the impact of AI at the state level, and many address issues of bias, and the use of AI tools for hiring, health screening or other areas where bias and functional discrimination could be a result,” Wihbey says.
“There’s also some real questions about deepfakes, which is a huge issue — especially in the political arena,” he says.
“This isn’t abstract, and it’s no longer just about political campaigns or celebrities,” Uslu says. “With Elon Musk’s recent promotion of Grok’s new Imagine feature for example, anyone can now turn a photo into a video that follows their prompts.”
Uslu continues: “On their phone, in under a minute, for free. And this is just the beginning. When these kinds of tools become widely accessible, we need to know how prepared and aware the public is. That’s what this kind of research helps us measure.”