Researchers at the UConn Humanities Institute are part of a new initiative investigating the issue of bias in AI technology.
Spam filters, Face ID, Netflix recommendations – these everyday services and many others are powered by artificial intelligence (AI).
The rapid development of AI technology in recent years has raised important ethical questions about how these tools are built and used.
Design Justice AI, a new multi-institutional initiative that includes the UConn Humanities Institute, brings humanities scholars from around the world together to address issues of bias and the growing impact of technologies now operating in realms previously reserved for humans.
Lauren Goodlad, chair of the Critical AI Initiative at Rutgers University, is leading the effort, which is supported by a $250,000 grant from the Andrew W. Mellon Foundation. The group includes international collaborators from the University of Pretoria in South Africa and the Australian National University.
The UConn team includes Michael P. Lynch, Director of the Humanities Institute and Distinguished Professor of Philosophy, and Yohei Igarashi, Associate Professor of English and Director of Digital Humanities and Media Studies at the Humanities Institute.
The UConn Humanities Institute’s participation in this work demonstrates the institute’s position as a global leader in digital humanities.
“The Institute has a long history of research into the ethics of AI,” says Lynch. “It’s a way of trying to address the changes that algorithms are making to our society, especially the way we think, treat each other, and distribute resources.”
The rise of “generative AI” such as ChatGPT, which can produce strikingly human-like text, and the image generator DALL-E has also sparked debates about the nature of creativity, culture, knowledge, and learning.
AI technology is typically trained on data scraped indiscriminately from the internet, which means it absorbs the human biases inherent in that data. This led, for example, to Microsoft’s 2016 “TayTweets” chatbot learning to spout hate speech on Twitter.
“Because these models are being trained on the internet, there are all sorts of issues of bias and mediocrity,” says Igarashi. “That raises one of the key questions: What do humanists have to contribute to making artificial intelligence work positively for us?”
Rather than rejecting generative AI outright, Design Justice AI explores how these technologies can positively impact human communication and creativity, and will fund up to 20 interdisciplinary scholars to pursue these questions.
“One of the things we urgently need to try to do is find ways to actually use this technology in ways that better reflect who we are, not the undemocratic, non-inclusive part, the worst part of us,” says Lynch.
Researchers at the University of Pretoria will help advance the goal of thinking about the relationship between under-resourced languages, such as those spoken on the African continent, and technology developed primarily by English-speaking engineers. The aim of this work is to expose and think critically about the kinds of assumptions a Global North developer makes when designing AI technology.
Design Justice AI is a fully interdisciplinary effort that fosters dialogue between researchers in the humanities and STEM fields.
Funded researchers will disseminate their findings through Critical AI’s public blog, multidisciplinary peer-reviewed publications, and other channels.
The effort will conclude at a conference next summer at the University of Pretoria.
Lynch said he sees the effort as the beginning of a new collaboration between researchers studying issues that will become increasingly important as AI technology becomes more ubiquitous and complex.
“It’s not the end. It’s the beginning of something,” says Lynch. “My hope is to form a stable and sustainable research network among these universities on these topics.”