Justin Hendrix is CEO and Editor of Tech Policy Press. Paul M. Barrett is Associate Director of the New York University Stern Center for Business and Human Rights.
Co-published by Just Security.
Last Wednesday, Senate Majority Leader Chuck Schumer (D-N.Y.) announced the SAFE Innovation Framework, a set of policy goals for an “all-out effort” to address artificial intelligence (AI). He calls this a “revolutionary moment” that will bring “profound and dramatic changes,” predicting that “in just a few short years the world could be completely unrecognizable from the one we live in today.” And he is bringing in experts to advise Congress.
Given the hype surrounding generative AI systems such as OpenAI’s ChatGPT, it is no surprise that U.S. policymakers are keen to come up with shiny new legislative proposals. To his credit, Senator Schumer is calling for a serious and systematic approach to setting priorities and developing legislation to sustain what he calls “innovation, our north star.” But while the “insight forums” he proposes to convene this fall will undoubtedly be interesting, the reality is that much of what Congress most needs to do is fairly basic, and Congress can take those steps today.
A new report from New York University’s Stern Center for Business and Human Rights, “Protecting AI: Addressing the Risks of Generative Artificial Intelligence,” argues that the U.S. government’s first step should be to apply existing consumer protection, competition, and privacy laws to the AI business. AI does not deserve a pass on complying with already established laws just because it is new and “innovative.” Moreover, many of the most important legislative interventions are already under consideration.
Privacy is a prime example of an agenda item largely ignored in the new framework, which Sen. Schumer seems so eager to announce amid the media hype. The word “privacy” does not appear in the framework itself, though Schumer has said the eventual “insight forums” will likely address “privacy and liability.” Yet many of the worst abuses of AI technology, from algorithmic bias to the delivery of highly personalized disinformation, are compounded by the lack of protections for personal data. A major federal privacy proposal, the American Data Privacy and Protection Act (ADPPA), already exists, but last year Senator Schumer reportedly declined to bring it to a floor vote.
The Federal Trade Commission (FTC) already has relevant authority to address many of the potential harms of AI systems. But the agency is underfunded and understaffed, particularly when it comes to technically skilled personnel. Despite this reality, the FTC is doing its best to get ahead of AI, issuing blog posts warning of potential harms and putting companies on notice that it will investigate abuses. The agency also has a significant opportunity, through its just-concluded request for information (RFI) on the subject, to address the role of cloud computing infrastructure in shaping AI and to engage questions of competition and antitrust. Scaling up these efforts with more funding and a tighter mandate may not be a novel idea, but it would likely have significant impact.
In his framework, Senator Schumer appears to acknowledge that the way to lead on AI innovation is not only through the technology itself, but also through “security, transparency and accountability.” Mandating transparency is a key priority, especially when it comes to AI, and here, too, legislation already on the table provides a roadmap. The bipartisan Platform Accountability and Transparency Act in the Senate and the Democratic-sponsored Digital Services Oversight and Safety Act in the House offer useful models for enabling independent researchers to evaluate technology platforms while protecting user privacy and trade secrets. These bills were written primarily with social media in mind; they should be revised as needed to address AI-specific concerns.
There are many other existing proposals relevant to AI. Anna Lenhart, a Knight Policy Fellow at the Institute for Data, Democracy, and Politics at George Washington University, recently compiled a list of federal legislative proposals that would govern the processing of data, including by the generative AI tools currently capturing the public imagination, and that address other concerns such as market power, discrimination, and the spread of harmful content. AI-related bills may lend themselves to more bipartisan compromise than was possible when social media was the sole focus. But few of these proposals advanced in the last Congress, and Senator Schumer’s effort begins on a tight timeline ahead of a presidential election year.
Finally, when Senator Schumer hosts these high-profile “insight forums” at the Capitol, he should be careful about the mix of experts he invites. High-profile industry leaders, including OpenAI CEO Sam Altman, have publicly called for AI regulation, but behind the scenes they often resist its details. We have seen this movie before: Mark Zuckerberg praised social media regulation in front of lawmakers even as an army of lobbyists worked to water it down. Senator Schumer should be particularly wary of advice from big tech companies; if executives have too much say in setting the rules, the rules may favor incumbents over new entrants.
Senator Schumer has already set an industry-friendly “north star” for his effort, but the oaths he and his colleagues took say nothing about protecting corporate interests. Let the CEOs hype the technology; the Senate should remain the forum for getting the fundamentals of regulation right.