AI as a broken machine

SJ Bennett, one of the researchers on the six-person AI Ethics and Society team at the University of Edinburgh, perhaps best summed up the spirit of last Friday's symposium on Artificial Intelligence, Brokenness and Justice in his introductory remarks:

“Today, we want to make disruption an entry point into the critical examination of technology by flipping the dominant narrative of disruptive innovation and shifting the focus to what is left broken in this process of destruction,” Bennett told an audience of thinkers and researchers working at the intersection of contemporary technology and morality. “Disruption invites us to think about the social and technological mechanisms that operate selectively, and to ask: for whom are these systems disruptive? Scholars and activists have long urged us to pay attention to how AI and other data-intensive systems can perpetuate marginalization, displacement, and violence.”

The symposium's talks covered many of the serious negative impacts that artificial intelligence (its creation, advancement, and human use) will have on people around the world, but also how we can collectively care, repair, and resist.

The first of the day's three panel discussions, Margins, Data Patchworks, and Justice, brought together Morgan Currie, Natassa Filimonos, and Slaviya Chandiramouli. Though their interests and fields vary, a major theme running through their work is how contemporary technology renders workers invisible in inequitable ways. Chandiramouli, for example, discussed how marginalized and often displaced communities in the global South perform the labor that makes AI work (often referred to as “ghost work”), such as creating and annotating datasets for AI training and refinement, and how hiring practices and rhetoric further impede economic and personal independence for people in India, particularly women.

The second panel, Errors, Uncertainties, and Classifications, featured Alexander Campolo, Cindy Lin, and Benjamin Jacobsen. The discussion centered on the role of errors, or outliers, in machine learning models. What can “error” tell us about ourselves as a society? What does it mean to treat errors as anomalies to be iteratively reduced and eventually eliminated by humans? “I think there's this engineering idea that we just need to eliminate errors,” Campolo explained, but he later suggested that “‘error’ is a concept that can reveal what our culture values as truth.” Relatedly, Lin said that “errors can help us create a context for understanding machine learning,” prompting questions like, “What do errors prioritize as truth?” […] “What voices are seen and what voices are not seen?” Ultimately, machine learning “errors” can themselves reveal rich insights, not only about deep-rooted biases, preconceptions, and discriminatory views and behaviors, but also about us as a society and the technologies we develop.


The final panel, Care, Repair and Crafting, featured Ann Lee Steele, Alex Taylor, and SJ Bennett. Given the brokenness of artificial intelligence, what might repair look like? How can we be more considerate and compassionate towards one another amid this breakage? A recurring theme of the discussion, and one proposed way of responding, was relationship building and community organizing. As a way of bringing public voices to the fore, Taylor touched on her current work with BRAID (Bridging Responsible AI Divides), a programme led by the University of Edinburgh in partnership with the Ada Lovelace Institute and the BBC. As part of the BRAID research project, Taylor and her team are conducting fieldwork in multicultural Leith to hear and understand first-hand the thoughts and opinions of local residents. As Taylor explained, “We're asking the people of Leith: What is AI? What will it change for you?”

The day's dialogues and questions offered a much-needed rethinking of our current predicament with artificial intelligence. While AI development can feel like a speeding train with no brakes, a turbulent force that cannot and will not slow down, AI is not, at least for now, a fully autonomous entity. Humans stand behind every level of AI and its implementation: its creation, its use, and its evolution. And within that, there is potential. As people who use, interact with, and live with AI, we can use our collective voices, efforts, and actions not only to care for those affected by AI's harms, but also to reach those behind AI's development and help fix the broken machine.




