Connecticut legislators are moving to regulate the use of artificial intelligence and algorithms by state agencies, to ensure automated systems aren’t making key decisions based on biased or discriminatory processes.
Last month, members of the General Law Committee unanimously passed a bill creating two new positions in state government: an Artificial Intelligence Officer within the Office of Policy and Management and an Artificial Intelligence Implementation Officer within the Department of Administrative Services.
The goal is to have an inventory of automated systems used by state agencies, draft policies on their use, and ensure compliance by the end of this year.
Without this bill, supporters worry, lawmakers would have no say over what metrics AI would use if it were allowed to make important decisions about residents, such as those involving employment, health care programs, housing, and utility bills.
At a recent meeting, Milford Democratic Senator James Maloney, who co-chairs the General Law Committee, recalled an example from Indiana, where a fully automated process was used to determine whether residents were eligible for Medicaid coverage.
“A woman was going through cancer treatment,” Maloney said. “She had been denied her treatment by this algorithm.”
Maloney said the example was extreme and not something he expected to see here, but it illustrates the concerns of the bill’s supporters.
“We want to proactively prevent problems and make sure we are implementing AI ethically,” he said. “We see a lot of uses that are more efficient and help streamline processes. I’m not saying don’t use it — just that you need to test it to be sure.”
The legislative committee is not alone in having these concerns. Earlier this year, the U.S. Commission on Civil Rights’ Connecticut Advisory Committee released a preliminary report on the use of algorithms by state agencies.
“[T]he Committee is concerned that the use of algorithms and computers for decision-making may limit individual opportunities such as employment and credit; reflect and reproduce existing inequalities in protected areas; and/or embed new and harmful biases and discrimination, for example through inaccurate language translation,” the report said.
The bill, introduced by the General Law Committee, has bipartisan support. House Minority Leader Vincent Candelora testified in favor of the proposal at a hearing in February, applauding the section of the bill that extends recently adopted data privacy policies to state agencies to curb identity theft.
“Routinely, state agencies process sensitive information about residents and businesses,” Candelora wrote. “It is imperative that government agencies and many vendors operating on behalf of our residents do everything possible to protect consumer data.”
However, the bill was opposed by OPM and DAS, the two agencies most directly affected. DAS chief information officer Mark Raymond and OPM senior policy adviser Adel Ebeid said in written testimony that the bill could have unintended consequences and goes “beyond what is currently achievable.”
“We believe OPM and DAS can establish the necessary policies and procedures to ensure that AI is integrated in an ethical and equitable manner, and to protect the rights of state residents and those doing business in the state, without the need for such a task force,” they wrote.