Against the backdrop of the rapid emergence and adoption of artificial intelligence (AI) tools and systems, the Biden administration convened the CEOs of major AI companies at the White House on May 4, 2023, to discuss what it calls “responsible” AI, and announced several initiatives to promote responsible innovation.
The White House also signaled that these technologies remain open to further regulation. “[T]he private sector has an ethical, moral, and legal responsibility to ensure the safety and security of its products,” Vice President Harris said following the meeting, adding that every company must comply with existing laws and do its part to protect the American people. President Biden stopped by the meeting, which included Alphabet’s Sundar Pichai, Anthropic’s Dario Amodei, Microsoft’s Satya Nadella, and OpenAI’s Sam Altman, telling the executives that there is “great potential and great danger” in what they are doing.
New AI Initiatives
Coinciding with the CEO meeting, the White House announced three AI initiatives: funding responsible AI research, providing independent community assessments of existing AI systems, and beginning the process of establishing AI policies for use across the U.S. government.
Investment in AI R&D. The National Science Foundation is investing $140 million to launch seven new National AI Research Institutes (“Institutes”), bringing the total number of Institutes to 25. These Institutes act as catalysts for collaboration among institutions of higher education, federal agencies, industry, and others. They pursue transformative AI advances that are ethical, trustworthy, responsible, and serve the public good, driving breakthroughs in areas such as climate, agriculture, energy, public health, education, and cybersecurity. This investment adds to the billions of dollars that private companies are pouring into advancing the technology.
Community testing of existing generative AI systems. Leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI, have agreed to participate in a public evaluation of existing AI systems at DEFCON 31, a hacker convention taking place in August 2023, consistent with principles of responsible disclosure. Notably, the AI models will be evaluated and tested by the community, independent of the government and of the companies that developed them. The assessments will measure the models against the principles outlined in two policy documents the Biden administration released in 2022 and early 2023: the Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework. Learnings from these evaluations will allow AI developers to improve their systems as needed.
Draft Policy Guidance. In summer 2023, the U.S. Office of Management and Budget (OMB) plans to release for public comment draft policy guidance on the use of AI systems by the U.S. government. This guidance will help establish specific policies for federal departments and agencies to follow, ensuring that the development, procurement, and use of AI systems centers on safeguarding the rights and safety of individuals.
Existing Agency Guidance and Administration Policies
The White House described these initiatives as “build[ing]” on the many steps the government has already taken to promote the responsible development of AI. Those earlier efforts include the aforementioned Blueprint for an AI Bill of Rights, which identifies principles that should guide the design, use, and deployment of automated systems to protect the American public; the AI Risk Management Framework, which addresses managing risks related to validity and reliability, safety, security and resilience, explainability and interpretability, privacy, and fairness and bias; and the roadmap for launching the National AI Research Resource (NAIRR), released earlier this year by the NAIRR Task Force.
Other agencies are rapidly releasing AI guidance as well. For example, in April 2023, four federal agencies, the Consumer Financial Protection Bureau (CFPB), the Federal Trade Commission (FTC), the U.S. Department of Justice (DOJ), and the U.S. Equal Employment Opportunity Commission (EEOC), issued a joint statement committing to actively use their respective authorities to protect against discrimination and bias in automated systems. Also in April, the Department of Homeland Security announced the launch of a task force dedicated to using AI to advance critical homeland security missions. And in May, the FTC published a blog post warning companies against using generative AI tools to manipulate consumer behavior. The post explained that manipulative uses of generative AI may be unlawful even if not all customers are harmed and even if those harmed do not comprise a class of people protected by anti-discrimination laws.
Key Takeaways
The announced initiatives and White House statements signal what companies should do to develop and deploy AI tools in a way that minimizes regulatory risk.
First, according to meeting materials released by the White House, the President “underscore[d]” that companies have a fundamental responsibility to ensure the safety and security of their products before they are deployed or released to the public. Entities should therefore validate the safety and security of AI systems before deploying them for widespread use.
Second, the White House emphasized that it is imperative to mitigate “the current and potential risks that AI poses to individual, societal and national security.” The administration stressed the importance of CEOs’ personal leadership, calling on them to “model responsible behavior” and to “take action to ensure responsible innovation and adequate protection.” AI companies should therefore design their systems with trust, safety, and risk mitigation in mind. (A previous client alert discussed the top 10 business and legal risks of generative AI.)
Third, among the announced initiatives, the OMB guidance is likely to be the most consequential, given that the U.S. government is the world’s largest purchaser and that agencies will need to weigh how government-procured AI systems protect individual rights and safety in their decisions. The guidance is therefore likely to have a significant impact on the commercial market.
This is a rapidly developing area, and we are happy to answer any questions you may have. You can also subscribe to WilmerHale’s Privacy and Cybersecurity blog to stay up to date.
