“The Atomic Human” Drafts a Blueprint for Trustworthy AI

Neil D. Lawrence, the DeepMind Professor of Machine Learning at the University of Cambridge, has written a new book, “The Atomic Human,” which may be one of the most important works for understanding what we call AI today. The book explains a complex subject through far-reaching analogies, without equations, and is a must-read for anyone trying to make sense of the hype, the caution, and the opportunity around AI. Most importantly, it breaks down what makes human intelligence special — the “atoms” — and explains how we can build on them in a trustworthy way rather than replacing or obliterating them, as some fear.

The book details the effort to build machine intelligence and how that effort was accelerated and shaped by World War II. It opens with the role that planning, communication, and collective decision-making played on D-Day. It is only fitting that the book was published 80 years after Eisenhower, acting on intelligence from Bletchley Park, sent a million troops across the English Channel.

This book is about our interactions with what we call AI, but it is actually much deeper than that. When Lawrence signed my copy, he told me that the book focuses on explaining AI rather than making recommendations or prescriptions about it, which I found quite strange at first.

The clue to this distinction comes from his explanation early in the book:

But the artificial intelligence we are being sold, the technology we are actually using, is simply a combination of very large datasets and computers, a combination of advanced mathematics and statistics.

He argues that this is a better frame for understanding the technology than the superintelligence annihilation scare stoked by Elon Musk, Bill Gates, and Stephen Hawking. That scare starts with a category error that he details: we must be careful to distinguish between intelligence as an entity and intelligence as a property. There is a big gap between what Lawrence is actually building and the questions he is asking. To cross this gap, we need to explore who we are and how we can use machine intelligence as a mirror of our own perception. He explains:

By using machines as a mirror to reflect who we are, we can better understand what we want, and with this understanding we can determine how we want our society to function in this new age of AI, and how individuals within that society can play their role, balancing personal freedom with the societal need for consistency and safety.

Reflective and reflexive

One key thread is how humans exercise both reflective and reflexive intelligence, often through machine interfaces. Reflective intelligence is the part that makes a plan or writes a story. Reflexive intelligence is the part that quickly registers uncertainty and automatically steers your car to deftly avoid a cyclist on the road. It also covers the learned feel for the car's controls and the instant recognition that a cyclist is about to cross your path.

In World War II, reflective intelligence enabled Bletchley Park to build machines for logical operations, along with the supporting human processes, to decipher German military communications; these became the seeds of Alan Turing's later work on the general-purpose computer. Those innovations built on earlier efforts to develop mathematical systems of logic, reasoning, and planning. But when faced with uncertainty, plans can fall apart.

Reflexive intelligence enabled engineers to build automatic control systems to steer ships and planes. During the war, Norbert Wiener developed an anti-aircraft control system that could adapt to the uncertainty of a pilot's evasive maneuvers. This was the seed of cybernetics and the start of modern research into autonomous systems that adapt to uncertainty.

The human-machine interface is another important piece that is often left out of discussions about AI. Lawrence details the challenges faced by the engineers who built aircraft control systems that let pilots navigate the chaotic turbulence of large clouds.

Lawrence describes how these strands came together in the first Moon landing, where control was shared between mission control, the on-board computers, and the human pilot. Reflective intelligence planned and programmed the flight path to the Moon. Reflexive, automated systems steered the mission in space. But after a computer failure during the descent, control returned to the humans on board to decide how to proceed. And because the original plan had not accounted for a large rock at the intended landing site, Neil Armstrong had to steer the spacecraft manually to a safe spot in the Sea of Tranquility.

The dangers of System Zero

Another section details the dangers that arise when an AI system that has passed laboratory safety tests is pushed onto humans without accounting for the new hazards that appear in a wider context. The section begins with a cautionary tale about the first human trial of a drug called theralizumab, carried out on eight volunteers in 2006. Although the drug had passed all safety checks in monkeys, every patient who received a “low dose” suffered significant organ damage within hours as their immune systems began to attack their own bodies.

Lawrence likens this to how early AI systems appeared safe and drove engagement on social media, yet could also breed distrust, as when Russian agents exploited weaknesses in those systems to influence the 2016 U.S. election. He details many other ways in which automating seemingly inconsequential decisions can have negative consequences.

The framing builds on System 1 and System 2 thinking, popularized by Daniel Kahneman in his book Thinking, Fast and Slow. System 1 is the fast, reflexive thinking that drives quick, automatic decisions, from steering a car to an expert's snap judgments in their field. System 2 is the slower, reflective thinking involved in problem solving and learning new skills.

System Zero thinking is a slower, more centralized process that resembles how the immune system learns the patterns of pathogens and trains immune cells in the thymus to attack them. The digital equivalent is the modern machine learning pipeline, in which models are trained on centralized infrastructure and deployed across the web to automate decision-making. Lawrence argues that this creates new dangers we are not prepared to deal with.

Machines are able to anticipate human actions and exploit blind spots in sensory-motor intelligence. This is becoming a new level of cognition that can be thought of as System Zero.

These systems process data millions of times faster than humans and manipulate us, using statistical techniques to restrict our view of the world. Disrupting our environment in small, carefully selected ways resembles the virtual world projected onto the enslaved humans in “The Matrix,” but it is far more subtle and efficient. Today's digital System Zero does not understand social context, bias, or empathy. Lawrence wonders whether we want to end up like the men who took part in the theralizumab trial, who believed the intervention being administered to them was safe.

Through social media and the next generation of AI-generated technologies, we are administering System Zero to ourselves and testing it against the health of society, something that is not easily quantifiable, which makes the results of this Phase 1 trial of System Zero hard to measure.
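To make the shape of this concrete, here is a minimal, hypothetical Python sketch (my own illustration, not code from the book) of the kind of centralized train-and-deploy loop described above: a model is fitted to logged behavior, deployed to rank what people see, and then retrained on the engagement it provokes. The objective is a proxy metric (clicks); nothing in the loop represents social context, bias, or harm.

# Hypothetical sketch of a "System Zero"-style engagement loop (illustrative only).
# A centrally trained model ranks content by predicted engagement, the ranking
# shapes what users see, and the resulting clicks feed the next round of training.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_features = 500, 8

# Item features and users' "true" preferences are unknown to the platform;
# it only observes clicks on whatever it chose to show.
items = rng.normal(size=(n_items, n_features))
true_pref = rng.normal(size=n_features)

weights = np.zeros(n_features)  # the centrally trained ranking model

for round_id in range(20):
    # Deploy: rank all items by predicted engagement and surface the top slice.
    scores = items @ weights
    shown = np.argsort(scores)[-50:]

    # Observe: clicks are noisy responses to the items the system chose to show.
    clicks = (items[shown] @ true_pref + rng.normal(scale=1.0, size=50)) > 0

    # Retrain: a least-squares fit on the logged data from this round only.
    X, y = items[shown], clicks.astype(float)
    weights = np.linalg.lstsq(X, y, rcond=None)[0]

    # A rough proxy for how varied the surfaced items are; selection by
    # predicted engagement concentrates what gets shown.
    print(round_id, "spread of shown items:", round(float(np.std(items[shown], axis=0).mean()), 3))

The point of the sketch is only the shape of the feedback loop: each individual step looks innocuous, but the system as a whole keeps steering what it surfaces toward whatever it currently predicts will be clicked, with no other value anywhere in its objective.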

Building a foundation of trust

The book concludes with an analysis of what is needed to ensure that these systems foster trust rather than undermine it. The common analogy that AI is like a nuclear weapon that could spin out of control and destroy humanity is unhelpful and mischaracterizes the problem we face. A more useful metaphor is the threat humans already pose to natural ecosystems.

The real problem is the information asymmetry between three layers of information processing: natural ecosystems, human cultures and institutions, and the new machine systems for sharing and categorizing information. Our ecosystems are shaped by the complex interplay of natural intelligences that have evolved over billions of years, but genetic information is exchanged at a relatively slow pace, and that layer can be undermined by the clumsy behavior of the human layer above it. Our new computer systems can similarly undermine the diversity and complexity of the world around us.

By operating on such short timescales, artificial intelligence has access to vast amounts of information from humans and has the potential to do as much damage to our cultural ecosystems as human actions have done to natural ecosystems.

One problem is the power asymmetry the new tools create between the gatekeepers and the rest of us. A second is the tendency to trust confident-sounding tools and to automate more and more decisions without understanding their limitations.

New institutions that collectivize data rights and act as pressure groups could help us reap the benefits of data without ceding too much control to digital oligarchs. On the control side, we need software tools and engineering practices that let more people take part in building software and in the automated decisions it makes. Describing his work with the Data Science Africa project, he explains:

By empowering users to design their own software and delivering it in an explainable, maintainable system, we can shift the balance of power away from the software guilds and towards the people and organizations that make up an open society.

My take

A few weeks ago I attended the opening of Salesforce's new AI centre in London, which promises to make AI more accessible to non-technical users. I was struck by the beauty of the Tate Modern outside the window, knowing little about its past at the time beyond the fact that it was once a power station and had a chimney.

The Atomic Human linked another generation of history to the gallery's site for me: Albion Mill. Albion Mill came to dominate grain milling in London, bankrupted local millers, and burned down under suspicious circumstances; it may have been destroyed by disgruntled millers or by its own operational flaws. But it was the adoption of the new machine-intelligence feedback loops developed by James Watt that made the steam-powered mill so powerful and profitable.

Looking at this building today, it is hard to imagine its contentious past, or that such a mechanical giant could have been destroyed by discontent or by its own failings. Either way, the building that stands there now is an institution that offers hope for a shared new view of the world. It seems fitting that Salesforce is trying to do something similar for AI in its new offices overlooking the site. That gives me hope that, despite the problems and dangers we see with AI today, there is still an opportunity to lay the foundation for a better future.


