President Biden receives national security memo outlining prohibited uses of AI and innovation opportunities

President Joe Biden is expected to receive a national security memorandum on Friday outlining specific risks that artificial intelligence technologies could pose to the U.S. national security posture, and sources familiar with the contents told Nextgov/FCW that the memo will walk the line between encouraging experimentation with AI systems and limiting the circumstances under which they can be deployed.

Directed by President Biden's October 2023 executive order on AI, the upcoming memorandum aims to “develop a coordinated executive branch approach to managing AI security risks” and will build on guidance previously issued by the Office of Management and Budget, as well as on international efforts such as the Bletchley Park summit in November 2023 and the G7 meetings.

“[The memorandum] focuses on national security systems that exist in the military and intelligence communities, but it will also cover some FBI and Department of Homeland Security systems,” said a person familiar with the memo's expected contents.

On the government contracting side, the memorandum does not impose any changes to AI procurement procedures, but it is likely to have a “significant impact” on how cloud service providers and frontier model developers understand how best to deploy these technologies responsibly.

Ensuring U.S. leadership in AI innovation and standardization is also likely to be a focus of the memorandum, which is expected to address domestic workforce challenges.

“It emphasizes a strategic focus on talent to maintain technology leadership, including developing domestic talent and attracting top talent to the U.S.,” said a second person with knowledge of the memo. “This is seen as important to increasing the nation's competitiveness in AI technologies.”

The memo is also expected to address the energy demands of AI computing and how to best balance those demands with policies that promote clean energy.

The memo is further expected to address how AI should not be used in government operations, and the first source said it would likely include a short list of “prohibited uses” of AI systems, such as tracking constitutionally protected activity like free speech or controlling nuclear weapons operations.

The memo also outlines “high-impact” use cases for AI: deployments that, while not prohibited, are risky enough to require greater oversight, such as real-time biometric tracking or identifying individuals as threats to national security.

“These high-impact uses are subject to different governance and risk management practices similar to those described in the OMB memo, but differ from them in some respects,” the first source said.

The memo will initially be classified, but the Biden administration wants to later declassify as much of it as possible to make it more widely available, said a second source familiar with the memo.

National security experts say the memorandum is important in setting the tone for how the government responds to both the risks and the benefits of AI technology.

“What we're looking at here is the government's ability to really impact fundamental freedoms and rights – who to investigate, who to surveil, who to let into the country, who to designate as a threat to national security or public safety. These are things that really matter to individuals and really affect their lives,” Faiza Patel, co-director of the Liberty and National Security Program at New York University's Brennan Center for Justice, told Nextgov/FCW. “So it's a very important document, but I don't think it's gotten as much attention as other AI efforts.”

Patel noted that national security organizations often need internal mechanisms to enforce the implementation of safeguards. Stronger external oversight of AI deployments would also benefit federal agencies as they work to protect civil rights while integrating AI, she said.

“I would be happy to see strong guardrails for high-risk systems. I would be happy to see a robust list of high-risk systems, but I wonder whether there are effective mechanisms in government to check whether those rules and safeguards are actually being followed,” Patel said.
