AI and Technological Colonialism


Code Dependent: Living in the Shadow of AI

By Madhumita Murgia


Publisher: Picador

Pages: 336

Price: ₹699

If you're reading this, you've probably been exposed to artificial intelligence (AI) to some degree, even if you didn't realise it. ChatGPT and other large language models have been integrated into a variety of applications, and Alexa, Google, and Siri have become familiar presences. You've hailed an Uber, played games against AI opponents, perhaps even encountered neural-net trading programs. You may have received a credit card or personal loan offer generated by an AI.

This book focuses on the impact of AI on everyday life, of which it is quickly becoming a part. AI uses the data we throw off in unimaginable, almost magical ways — often to our benefit, but just as often not. The book looks at humans through the lens of data, but it is ultimately about people and how AI's processing of that data shapes their lives. The technology is addressed in that context.

The emergence of AI represents a new stage in “technocolonialism,” in which cheap Third World labour performs the chores of labelling and annotating data, while the big tech companies that run the algorithms reap huge profits. One of Madhumita Murgia's examples is a Nigerian company that annotates data for OpenAI, and how it treats its employees.

It is both a strength and a weakness of AI that it does things its creators don't understand. This can produce remarkable breakthroughs, such as AI cracking hard problems like protein folding or learning to manage the magnetic fields of a nuclear fusion reactor. It can also lead to absurd results, where AI latches on to ridiculous or even obviously harmful correlations — such as an algorithm sifting medical data on pneumonia and COVID-19 that categorised patients only by differences in age.

This “black box” nature makes AI a very dangerous tool for profiling people, because AI is poor at explaining how it reaches its conclusions. One example cited is ProKid, algorithmic profiling software used by the Dutch police to predict a “tendency to commit crimes” from data on past contacts with police, addresses, relationships, and “role as witness or victim.” It resulted in hundreds of innocent young people being flagged. A teenage girl from a low-income family in Argentina was flagged in a database because an AI determined she was at risk of pregnancy. Boys of colour and immigrants are criminalised by AI profiling. Similarly, when fed credit-score and academic data, AI amplifies existing biases around gender, race, and caste.

The book contains a great deal of new material drawn from different disciplines, along with testimony from interviews with those affected. The author met gig workers, tech workers, health professionals, teenagers, and activists, including members of marginalised communities at the fringes of the AI technology value chain in Nigeria, Bulgaria, Kenya, China, and elsewhere. Non-technical writing about AI and its impacts ranges from the highly optimistic to the apocalyptic. Indeed, AI could cause nuclear war, enable genocide, or facilitate oppression on a horrific scale, as it has been weaponised in Gaza and against the Uighur community in China's Xinjiang Uighur Autonomous Region. It could also solve many of our problems around climate change and healthcare.

But the day-to-day impacts of AI are much more mundane than nuclear war — consider, for example, the seismic shifts that AI could cause in employment patterns. The book serves the reader well by focusing on the less obvious, and while the tone is generally pessimistic, it's not overly pessimistic.

The chapters are organised to cover a broad range of stories, as headings such as “Your Life”, “Your Body”, “Your Health” and “Your Freedom” indicate. Regulation, and routes to it, are discussed, while the “victim” perspective is presented in a personalised way.

Anecdotes matter because they can evoke empathy in ways that data cannot. Interviews with Uber drivers, doctors, researchers, teenagers, and mothers yield nuanced stories about the harm AI can cause. Women have had their lives destroyed by deepfake pornography. Gig workers, delivery drivers, and similar platform workers are deceived at worst and underpaid at best. Repressive and would-be repressive regimes are using facial recognition as a tool to target activists; the Indian farmer protests feature in this context.

“Surveillance capitalism,” “data colonialism,” and an exploration of labor within the AI/IT industry and its growing impact on macro labor dynamics are all important themes. Other concerns documented by Murgia are the feedback loops that AI can use to reinforce existing biases and the power amplification this provides to police states.

I have mixed feelings about the epilogue. It repeats many of the earlier points at excessive length, but it also raises important questions for the reader to ponder. The book is unbalanced in the sense that it dwells more on the mundane potential harms than on the potential benefits. But it does force reflection on the possibility that the harms arising from the introduction of AI may outweigh the benefits. The book is well researched, well written, and definitely worth a read.

