Companies that introduce new AI tools into their operations will face regulatory challenges. Congress has been slow to act on AI, but individual U.S. states have enacted their own laws regulating the use of the technology, and companies must comply with each of them.
Colorado recently became the first U.S. state to pass a comprehensive AI bill that applies to both developers and deployers of AI systems. California is moving forward with a state AI bill of its own, and the Connecticut Senate approved a comprehensive bill in April to regulate the deployment of AI systems in the private sector. These states are not the first governments to target AI technology. New York City passed an anti-bias law, which took effect in 2023, requiring employers to audit recruiting tools that use AI. New York Gov. Kathy Hochul also proposed new AI regulatory measures this year.
In the past five years, 17 states have passed 29 bills focused on regulating AI, said Ayanna Howard, dean of the Ohio State University College of Engineering, who spoke at a hearing on AI held by the Joint Economic Committee this week.
Howard said that if AI regulation isn't addressed at the federal level, states will continue to make their own rules.
“That's a problem,” she said.
In fact, Alla Valente, an analyst at Forrester Research, said a comprehensive federal AI law, rather than numerous state AI laws, would ease the compliance burden on companies. While initial compliance with state laws will be difficult, Valente said the real challenge comes from change management.
When companies operate regionally or nationally without a federal mandate, they must comply with each state's regulations.
In the absence of federal standards, states are moving to enact comprehensive AI laws
The Colorado AI Act applies to technology companies that develop AI systems and to deployers of AI systems doing business in Colorado. The state's law primarily targets high-risk AI systems, i.e., AI used to make high-stakes decisions in education, finance, employment and healthcare.
The law requires companies deploying such systems to complete impact assessments and adopt AI risk management policies and programs, requirements that don't go into effect until February 2026.
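The Colorado-style screen described above can be sketched as a few lines of code. This is a hypothetical illustration only, not legal logic: the domain list and the `needs_impact_assessment` helper are assumptions made for the example, while the covered domains and the February 2026 effective date come from the article.

```python
from datetime import date

# Illustrative only: domains the article lists as "high-stakes"
# under Colorado's high-risk AI provisions.
HIGH_RISK_DOMAINS = {"education", "finance", "employment", "healthcare"}

# Per the article, the impact-assessment and risk-management
# requirements take effect in February 2026.
EFFECTIVE_DATE = date(2026, 2, 1)

def needs_impact_assessment(domain: str, today: date) -> bool:
    """Return True if, under this simplified screen, a deployer would
    need an impact assessment and an AI risk management program."""
    return domain.lower() in HIGH_RISK_DOMAINS and today >= EFFECTIVE_DATE

print(needs_impact_assessment("employment", date(2026, 3, 1)))  # True
print(needs_impact_assessment("gaming", date(2026, 3, 1)))      # False
```

A real compliance determination would of course depend on the statute's definitions, not a domain keyword; the sketch only shows how the law's two triggers (covered domain, effective date) combine.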
According to Gartner analyst Avivah Litan, Colorado's AI law mirrors many of the requirements of the European Union's AI Act, which divides AI systems into risk categories and sets out different requirements for each category.
“I think it's going to be a real shock to companies when they realize they have to comply with this law,” she said of Colorado's AI law.
Meanwhile, California's SB 1047 would establish safety standards for the development of AI systems and create an enforcement agency within the California Department of Technology called the Frontier Models Division to hold companies accountable. In Connecticut, SB 2 would establish requirements for both the development and deployment of AI systems and would ban the distribution of certain AI-generated media.
“When a company operates in a state that has certain regulations, they become responsible for that state's specific regulations, especially if something is missing or absent at the federal level,” Valente said.
Valente said that if Congress were to pass a federal bill, it would, to some extent, supersede state AI laws, and states would have to harmonize their laws with federal requirements.
But Litan said she doesn't expect federal AI legislation to pass. Indeed, while congressional leaders such as Sen. Chuck Schumer (D-N.Y.) have been discussing AI for months, comprehensive AI legislation has yet to be introduced.
Litan said AI systems are fraught with risks and need regulation, and while state AI laws would be a “compliance nightmare,” she said there will eventually be some regulation of AI systems.
“Even if it was just California, New York and Colorado, that would cover 90 percent of the large companies operating in the U.S.,” Litan said. “Those few key states are enough to make it federal law by default.”
Businesses need to prepare for state AI laws
Forrester's Valente said companies can take steps to prepare for new AI laws by meeting existing best practices. For example, the National Institute of Standards and Technology has released a risk management framework specifically for AI.
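One way to operationalize Valente's advice is a readiness checklist keyed to the four core functions of NIST's AI Risk Management Framework (Govern, Map, Measure, Manage). The function names are real NIST AI RMF 1.0 functions; the individual task strings and the `readiness_gaps` helper are illustrative assumptions, not NIST requirements.

```python
# Hypothetical readiness checklist keyed to the NIST AI RMF's four
# core functions. Task names are made up for illustration.
AI_RMF_CHECKLIST = {
    "Govern": ["AI acceptable-use policy approved", "accountability roles assigned"],
    "Map": ["AI systems inventoried", "intended use and context documented"],
    "Measure": ["bias and performance metrics defined", "impact assessments completed"],
    "Manage": ["risk treatment plans in place", "incident response process tested"],
}

def readiness_gaps(completed: set[str]) -> dict[str, list[str]]:
    """Return outstanding tasks per RMF function, so that when a new
    state law lands, most of the groundwork is already done and the
    change-management delta stays small."""
    return {
        function: [task for task in tasks if task not in completed]
        for function, tasks in AI_RMF_CHECKLIST.items()
    }

done = {"AI systems inventoried", "AI acceptable-use policy approved"}
print(readiness_gaps(done)["Map"])  # ['intended use and context documented']
```

The design point is the one Valente makes: tracking gaps against an existing best-practice framework, rather than against each new statute, minimizes rework as laws accumulate.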
“Hopefully, as states enact these additional laws, your efforts are already up to the mark for that particular law,” she said. “What you're trying to do is minimize that change management.”
Valente said it's important for companies to assemble a team to evaluate all state bills that target AI as they deploy their technology, so they aren't caught off guard when something becomes law. Additionally, she said, company leaders need to stay on top of existing laws governing issues like consumer privacy and security. Agencies like the Federal Trade Commission and Department of Justice have made strong statements about their ability to enforce existing laws against companies' use of AI.
“Many organizations are violating existing regulations through their use of AI,” Valente said.
Gartner's Litan said companies should set aside budgets and assemble teams to address compliance with state AI laws. Preparing acceptable use policies, data classifications and policy enforcement systems is also an important part of compliance preparation, she said.
“Build the foundational building blocks before deploying risky AI applications,” Litan said.
Mackenzie Holland is a senior news writer covering big tech companies and federal regulation. Prior to joining TechTarget Editorial, she was a reporter at the Wilmington Star-News and a crime and education reporter at the Wabash Plain Dealer.