US AI advisory committee calls for more disclosures on the use of facial recognition by federal law enforcement

FedScoop reports that the U.S. National AI Advisory Committee (NAIAC) is preparing a recommendation that would require the country's federal law enforcement agencies to publish an annual summary of their use of facial recognition and other high-risk AI tools.

The requirement would be added to the inventory of AI applications that agencies must already submit under a memo released by the Office of Management and Budget in March. Committee members said greater transparency would provide better information about the scope and quality of AI use, and would help reassure Americans.

The Miami Police Department told commission members during a fact-finding hearing earlier this year that it uses facial recognition about 40 times a year.

“If we learned that a government agency was using facial recognition in a way that, for example, marks a fundamental shift towards a kind of surveillance state, one might expect that people's movements would be tracked wherever they go,” NAIAC Law Enforcement Subcommittee Chair Jane Bambauer told FedScoop. “Versus, 'No, it's not used that often, and it's only used in situations where there are restrictions on its use.'”

Benji Hutchinson, a committee member and industry executive, said the reporting should be easy for government agencies to produce, but that coordination and standardization could be more difficult. Transparency efforts can be complicated by the different tiers of law enforcement, data-sharing agreements between them, and memorandums of understanding already in place, he says.

The law enforcement subcommittee's recommendations also include federal investment in pre-deployment performance testing of AI technology and a state-level repository of body camera footage that academic researchers can access, according to the report.

Brookings podcast foreshadows recent developments

In light of current developments in AI and public policy in America, the Brookings Institution's TechTank podcast is revisiting a 2022 episode that addresses the impact of AI on civil rights.

Lisa Rice, president and CEO of the National Fair Housing Alliance, said that “centuries of discriminatory laws” have created inherent biases that remain influential, and that AI is being introduced into an environment already marked by racial disparity. The history of these laws is not taught in schools, and more people are unaware that they existed than know about them, she says.

Rice also mentioned the National Fair Housing Alliance's then-new framework for equity assessments based on “purpose, process, and monitoring” as a potential tool for auditors.

Renée Cummings, a criminologist and Data Activist in Residence at the University of Virginia, argues that accountability and transparency are needed to stop the use of “Black people as data points of danger.”

According to Cummings, when AI is introduced into smart cities and law enforcement applications, communities with low trust in technology and institutions end up under surveillance, while the expected reductions in crime rates have yet to materialize.

Rice said laws protecting civil rights are in place, but violations cannot be litigated one case at a time, and “unfortunately, federal regulators have not kept up with this technology.” Various mechanisms are therefore needed to ensure the accountability, transparency, explainability and auditability that Cummings refers to.

Additional reporting responsibilities for federal agencies that use AI could contribute to that change.

When host Dr. Nicol Turner Lee asked whether the EU's special designation of high-risk AI applications could serve as a good example for U.S. regulation, Cummings answered that it could, noting that the regulation of AI “is something we are still trying to figure out” and is not yet properly understood.

EPIC failure

The Electronic Privacy Information Center (EPIC) briefly reviews the Government Accountability Office's recent report on concerns about biometric technology, highlighting that the majority of impacts reported to the GAO are negative and stating that there is evidence of bias even in the best facial recognition algorithms.

Although more stakeholders reported negative impacts than positive ones to the GAO, the agency is clearly cautious about how these reports should be understood.

“However, information on positive and negative impacts is limited, as stakeholders primarily provided anecdotes, first-hand experiences, or examples related to potential impacts,” the report says.

EPIC also declares that the GAO found that “racial and gender bias persists in controlled laboratory testing, even with the best algorithms.”

That claim apparently refers to a more nuanced part of the GAO report concerning National Institute of Standards and Technology (NIST) testing. “For example, the accuracy of facial recognition has improved significantly over the past four years, and the best-performing systems show little variation in false negative rates across different populations in laboratory tests,” the GAO writes. “This is not the case for false positive rates, where the differences in performance have decreased but remain.”

NIST said in 2022 that the difference in false positives for the best algorithms is “undetectable.”

NIST's most recent assessment of false positive differentials in 1:N facial recognition algorithms shows that the false positive (or “false match”) rate for every demographic group is below 0.005 for dozens of algorithms.
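For readers unfamiliar with these metrics, the sketch below shows how per-group false match and false non-match rates of the kind NIST reports are computed. It is a minimal illustration using synthetic comparison scores and a hypothetical decision threshold; the group names, score distributions, and threshold are all assumptions for demonstration, not NIST's evaluation code or data.

```python
import numpy as np

# Minimal illustration of per-group error rates; all numbers are synthetic.
rng = np.random.default_rng(0)

# Hypothetical similarity scores for two demographic groups:
# "impostor" = non-mated pairs (different people), "genuine" = mated pairs.
groups = {
    "group_a": {"impostor": rng.normal(0.30, 0.10, 100_000),
                "genuine": rng.normal(0.80, 0.10, 10_000)},
    "group_b": {"impostor": rng.normal(0.32, 0.10, 100_000),
                "genuine": rng.normal(0.78, 0.10, 10_000)},
}

THRESHOLD = 0.6  # hypothetical decision threshold, fixed across groups

for name, scores in groups.items():
    # False match rate (FMR): share of impostor pairs at or above threshold.
    fmr = float(np.mean(scores["impostor"] >= THRESHOLD))
    # False non-match rate (FNMR): share of genuine pairs below threshold.
    fnmr = float(np.mean(scores["genuine"] < THRESHOLD))
    print(f"{name}: FMR={fmr:.5f}  FNMR={fnmr:.5f}")
```

Comparing the per-group FMR values at a fixed threshold is what reveals the kind of false positive differential discussed above; NIST's 1:N evaluations do this at far larger scale and with real imagery.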
