Some important themes are emerging.

Sarah Rogers/MITTR | Getty
This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter on power, politics, and Silicon Valley.
Last week, Senate Majority Leader Chuck Schumer (Democrat of New York) announced a grand strategy for AI policymaking in a speech in Washington, D.C., ushering in a new era for US tech policy. He outlined some of the key principles of AI regulation and argued that Congress should quickly introduce new legislation.
Schumer’s plan follows a flurry of smaller policy measures. On June 14, Senators Josh Hawley (Republican, Missouri) and Richard Blumenthal (Democrat, Connecticut) introduced a bill that would exclude generative AI from Section 230, the law that shields online platforms from liability for user-generated content. Last Thursday, the House Science Committee hosted several AI companies to ask questions about the technology and the various risks and benefits it poses. House Democrats Ted Lieu and Anna Eshoo, along with Republican Ken Buck, have proposed creating a national AI commission to manage AI policy, while a bipartisan group of senators has proposed creating a federal office charged with, among other things, keeping the US competitive with China.
While this flurry of activity is noteworthy, US lawmakers aren’t really starting from scratch on AI policy. “While we see many offices formulating separate views on specific parts of AI policy, most of it falls within the scope of existing problems,” said Alex Engler, a researcher at the Brookings Institution. Agencies such as the FTC, the Department of Commerce, and the US Copyright Office have responded quickly to the generative AI boom of the past six months, releasing policy statements, guidelines, and warnings.
Of course, when it comes to Congress, talk doesn’t always mean action. But US lawmakers’ thinking on AI does reflect some new principles. Here are three main themes running through these conversations that you should know to understand where US AI legislation is headed.
- The United States is the home of Silicon Valley and prides itself on protecting innovation. Many of the biggest AI companies are American, and Congress won’t let the EU forget it. Schumer called innovation the “north star” of US AI strategy, which suggests that regulators will be asking tech CEOs how they’d like to be regulated. It will be interesting to watch the tech lobby at work here. This language is partly a response to the EU’s recent regulations, which some tech companies and critics say stifle innovation.
- Technology, especially AI, should align with “democratic values.” We hear this from government officials like Schumer and President Biden. The subtext here is the narrative that US AI companies are different from Chinese AI companies. (China’s new guidelines require that generative AI output reflect “communist values.”) Expect the US to package AI regulation alongside efforts to control exports of the chips that power AI systems, continuing its escalating trade war with China.
- What will happen to Section 230? The big unresolved question about AI regulation in the United States is whether Section 230 reform will happen. Section 230, passed in the 1990s, prevents technology companies from being sued over content on their platforms. But should tech companies get the same get-out-of-jail-free card for AI-generated content? That would require such content to be identified and labeled, which is a large undertaking. Given that the Supreme Court recently declined to rule on Section 230, the debate has likely been handed back to Congress. Whether and how lawmakers amend the law could have a significant impact on the AI landscape.
So where is this all headed? Well, lawmakers are on summer recess, so nothing will happen in the short term. But starting this fall, Schumer plans to convene invitation-only discussion groups in Congress to examine specific aspects of AI.
In the meantime, Engler said, there may be discussions about banning certain applications of AI, such as sentiment analysis and facial recognition, mirroring some of the EU’s regulations. Lawmakers could also try to revive existing proposals for comprehensive technology legislation, such as the Algorithmic Accountability Act.
For now, all eyes are on Schumer’s big swing. “The idea is to come up with something very comprehensive and do it quickly.”
What else I’m reading
- Everyone is talking about “Bidenomics,” the current president’s signature economic policies. High tech is at the heart of Bidenomics, which is pouring billions of dollars into American industry. For a glimpse of what that means on the ground, it’s worth reading this story from the Atlantic about a new semiconductor factory being built in Syracuse.
- AI detection tools attempt to identify whether text or images online were created by AI or by humans. But there’s a problem: they don’t work very well. New York Times journalists tested a range of these tools and ranked them by performance. What they found makes for sobering reading.
- Google’s advertising business is having a tough week. About 80% of Google’s ad placements appear to violate its own policies, according to a new study reported by The Wall Street Journal; Google disputes the findings.
What I learned this week
We may be more likely to believe AI-generated disinformation, according to new research highlighted by my colleague Rhiannon Williams. Researchers at the University of Zurich found that people were 3% less likely to identify inaccurate tweets generated by AI than ones written by humans.
This is just one study, but if further research bears it out, it would be an alarming finding. As Rhiannon writes: “The generative AI boom puts powerful and accessible AI tools in the hands of everyone, including bad actors. It can be used to quickly and cheaply generate false narratives for conspiracy theorists and disinformation campaigns.”
