Sign up for Big Think Books
A dedicated space to explore the books and ideas that shape our world.
Much of the ongoing debate surrounding AI falls along two broad lines of thought. One concerns practical questions: How do large language models (LLMs) affect the job market? How do we prevent bad actors from using them to generate false information? How do we reduce risks related to surveillance, cybersecurity, privacy, copyright, and the environment?
The other is much more theoretical. Are technological constructs capable of emotion and experience? Will machine learning lead to a singularity, a hypothetical point at which progress accelerates at an unimaginable rate? Can AI be considered as intelligent as humans?
The answer to many of these questions may hinge on that last one. If you ask Blaise Aguera y Arcas, he will answer it with a resounding yes.
Aguera y Arcas is Google's CTO of Technology and Society and founder of the company's interdisciplinary Paradigms of Intelligence team, which studies the “fundamental building blocks” of intelligence. His new book, aptly titled What Is Intelligence?, makes the bold and thought-provoking claim that LLMs like Gemini, Claude, and ChatGPT don't just resemble the human brain: they operate in a way that is functionally indistinguishable from it. Starting from the premise that intelligence is inherently predictive and computational, he argues that AI is not a disruption or an aberration, but a continuation of the evolutionary process that runs from the first single-celled life forms to 21st-century humans.
Big Think recently spoke with Aguera y Arcas about the challenges of writing critically about AI for a general audience, how attitudes in Silicon Valley have changed over the course of his career, and why old approaches to machine learning may lead to a dystopian future.
Big Think: Scientific writing often relies on metaphors, but metaphors can be a double-edged sword. In describing the unfamiliar in terms of the familiar, writers may gloss over meaningful differences. What is your view?
Aguera y Arcas: I try to minimize my use of metaphors, precisely because of the issues you allude to. They can lead to incorrect assumptions.
When I say things like “the brain is a computer” or “life is computation,” some people take it as metaphor, the way we once talked about the brain as an engine or a telephone exchange. I don't mean it figuratively. I mean it literally.
Big Think: In your preface, you mention two groups of readers: the seasoned researcher with little patience for pop science, and the casual passerby with little specialized knowledge. How did you keep your writing engaging and readable for both?
Aguera y Arcas: That was the biggest challenge of writing this book. I tried to minimize the background knowledge I had to bring in. For example, I needed to explain some thermodynamics, but I thought hard about how to do it in a way that wasn't superficial, yet wasn't boring for people who already know thermodynamics. In many cases, I tried to add a twist that would give experts a new perspective on a familiar problem.
Big Think: AI means different things in different professions. As both a writer and a researcher, do your personal attitudes toward AI – your expectations, hopes, and concerns – change depending on which hat you're wearing?
Aguera y Arcas: My views on many things, AI included, change depending on how far I zoom out. On the day-to-day level, it's easy to get caught up in what we see in the news, and some of it can be genuinely disheartening.
But when you zoom out and look at history, at what life was like for people in, say, 1900, it's hard not to see some striking positive trends, even with many bumps along the way. I try to spend a healthy amount of time zoomed out, not only because it's a more cheerful vantage point, but because history is accelerating. These days, zooming out barely counts as zooming out anymore.
Big Think: Your career has spanned several cycles of AI optimism, stagnation, and breakthrough. What discoveries or personal experiences led you to the “subversive” premise of your book?
Aguera y Arcas: I would love to say that I was always wise and used that wisdom to steer a middle path while others swung [between] extreme AI optimism and extreme AI pessimism. But of course my own thinking has changed significantly over the past few years.
In the early days of the Internet and personal computing, there were great thinkers who genuinely believed these technologies would be liberating and inherently democratizing. They became deeply disillusioned when they discovered that countries could build giant firewalls and use the Internet for surveillance and the spread of disinformation on a massive scale.
It's a bit like the question of time scales. When you're infatuated with something and see only its possibilities, you tend toward hyperbole. Then, when the double-edged nature of the technology reveals itself, you suddenly find things aren't as simple as you thought, and you swing to the opposite extreme. But none of these stories are simple. They are all complicated.
So is it true that the Internet, computers, and smartphones haven't been liberating? No; certainly many people around the world have experienced them that way, in many different circumstances.
Big Think: In the book, you cite David Graeber, the prominent anthropologist and critic of capitalism and bureaucratic culture, who described the technological disillusionment of the late 20th century as a “secret shame”: a “broken promise” of progress that never materialized.
What do you remember of that time? It seems so different from today, when progress appears to be accelerating again.
Aguera and Arcas: I love David Graeber and I miss his voice. He died a few years ago, too young. Although I didn't agree with all of his opinions, I thought he was a fresh and innovative thinker.
The quote comes from The Utopia of Rules, [which] was published with ironic timing: the AI revolution, or at least the neural-net part of it, was already well underway. It was right in the middle of what Jeff Dean called the “golden age of deep learning.”
What Graeber was writing overlapped considerably with what economists were writing about: the significant slowdown in technological acceleration since 1970. When The Lord of the Rings author J.R.R. Tolkien was born, cavalry charges were still in vogue; by the time he died, the hydrogen bomb existed. That kind of upheaval was unprecedented, [but] the generations after Tolkien did not experience the same level of technological change. There was a real slowdown after 1970.
I think we have entered a new period of acceleration that began around 2020. AI matters not only as a technology in its own right, but as a meta-technology, one that accelerates the development of other technologies. As I said, it's ironic that Graeber was writing at what in hindsight looks like the end of that period of deceleration.
Big Think: Speaking of that period, you write that the people working on rudimentary AI in the early 2010s didn't believe they were working on AI at all. Why not? Is it because the importance of their work only became apparent in retrospect?
Aguera y Arcas: When The Utopia of Rules was published, tasks such as visual category recognition (getting an AI to recognize a picture of a banana as a banana) were already working reliably. Progress was also being made on problems like handwriting and speech recognition. In 2016, AI even beat a human champion at Go, a game that had long resisted classical computer-science methods.
All of this progress was made possible by neural networks, a brain-inspired architecture quite unlike earlier approaches, what is now called Good Old-Fashioned AI (GOFAI). That was the source of our optimism: we were seeing real progress toward true AI using a brain-like approach.
Still, the idea that general intelligence (the ability to use language, understand concepts, and reason in general ways) could emerge from neural nets trained on narrowly specific problems seemed far-fetched. Those systems had only one goal: to get a 100% score on a particular test. I, like many others, thought that before we could reach “real AI” we needed a more fundamental insight into what general intelligence is.
What was surprising was that training neural nets in an unsupervised, open-ended way, rather than on specific tasks, produced something approaching general intelligence. That was a big shock.
Big Think: For those looking in from the outside, the question is not necessarily “how” or “why” but “so what.” If you argue, as you do, that intelligence amounts to prediction and that AI genuinely possesses it, what follows?
Aguera y Arcas: I think one of the biggest “so whats” has to do with the fact that the old ways of thinking about artificial intelligence (optimizing something, maximizing test scores, and so on) turned out to be incorrect. And that was really good news.
In the old paradigm, we were optimizing: transcribing correctly, recognizing image categories correctly, maximizing test scores. If you think artificial intelligence is about optimizing scores, that's a very utilitarian way of thinking. It's like assuming that people, companies, and other organizations are all about maximizing money or happiness.
The problem is that almost anything we tell an intelligent system to optimize ends up going wrong. This is the lesson of the Swedish philosopher Nick Bostrom's paperclip maximizer. You give a system an innocuous goal like “make paperclips,” and in maximizing paperclips it sacrifices everything else. That's true for pretty much any goal you set: optimizing or maximizing anything, pushed far enough, creates a horrifying dystopia. That premise is the basis of Bostrom's Superintelligence (2014), a very scary book about how superintelligence could mean the end of humans, the Earth, and even the universe.
In many ways, this is the theme of What Is Intelligence?: intelligence and value maximization are not the same thing. In fact, we achieved general intelligence when we abandoned supervised learning and instead did open-ended modeling of human output. That's why I feel optimistic about AI. I see it as part of the existing ecosystem, part of human intelligence, not an alien monster applying strange, inhuman reasoning to optimize whatever problems we give it. I don't think intelligence works that way.
