The White House on Thursday unveiled a series of measures to meet the challenges of artificial intelligence, fueled by the sudden popularity of tools such as ChatGPT and growing concerns about the technology’s potential risks of discrimination, misinformation, and privacy violations.
The U.S. government will introduce policies that shape how federal agencies procure and use AI systems, according to the White House. Those policies could affect how Americans interact with AI at government websites, security checkpoints, and other settings.
The National Science Foundation will also spend $140 million to advance AI research and development, the White House added. The administration said the funds will be used to set up research centers that seek to apply AI to issues such as climate change, agriculture and public health.
The plan was announced the same day Vice President Kamala Harris and other government officials met with the CEOs of Google, Microsoft, ChatGPT creator OpenAI, and Anthropic to stress the importance of ethical and responsible AI development. It also coincides with a UK government investigation into the risks and benefits of AI that was launched Thursday.
A senior Biden administration official, speaking to reporters on a conference call ahead of the meeting, said technology companies have fundamental obligations to ensure their products are safe and secure, and to protect people’s rights, before those products are deployed or made public.
Officials on the call cited a range of risks the public faces from the widespread adoption of AI tools, including AI-created deepfakes and misinformation that could undermine democratic processes. Job losses tied to increasing automation, biased algorithmic decision-making, physical dangers from autonomous vehicles, and the threat of malicious AI-powered hackers are also on the White House’s list of concerns.
A person familiar with the situation told CNN that President Joe Biden stopped by for a surprise visit while Thursday’s meeting was underway. White House officials said Biden has been briefed extensively on ChatGPT and that he has even tested it himself.
In her conversation with the tech executives, Harris reminded the companies that they have an “ethical, moral, and legal responsibility to ensure the safety and security of their products,” and that they can be held accountable under existing U.S. law.
Harris also hinted at the potential for additional future regulation of the rapidly evolving industry.
“Governments, the private sector and the rest of society must work together to address these challenges,” Harris said in a statement. “As such, we are committed to doing our part, including promoting potential new regulation and supporting new legislation.”
White House Press Secretary Karine Jean-Pierre told reporters after the meeting that the conversation was “honest” and “candid.”
“There were four CEOs here, meeting with the vice president and the president,” she said. “It shows how serious we are.”
Jean-Pierre said that greater transparency from AI companies, including letting the public evaluate their products, will be key to ensuring the safety and reliability of AI systems.
The meeting is the latest example of the federal government acknowledging concerns about the rapid development and deployment of new AI tools and trying to find ways to address some of the risks.
In testimony before Congress, members of the Federal Trade Commission argued that AI could “accelerate” fraud and scams. FTC Chair Lina Khan wrote in a New York Times op-ed this week that the U.S. government already has sufficient legal authority to regulate AI by relying on its mandate to protect consumers and competition.
Last year, the Biden administration released its Blueprint for an AI Bill of Rights, which calls on developers to respect principles of privacy, safety, and equal rights when creating new AI tools.
Earlier this year, the Department of Commerce released voluntary AI risk management guidelines to help organizations and businesses “govern, map, measure, and manage” potential hazards at each stage of the development cycle. In April, the department also said it was seeking public input on the best policies for regulating AI, including through audits and industry self-regulation.
The U.S. government isn’t the only one grappling with AI oversight. European officials expect to roll out an AI law as early as this year that could have a major impact on AI companies around the world.