Jean Mary Zarate: 00:04
Hello, and welcome to Tales From the Synapse, a podcast brought to you by Nature Careers in partnership with Nature Neuroscience. I’m Jean Mary Zarate, a senior editor at the journal Nature Neuroscience.
And in this series, we speak to brain scientists all over the world about their life, their research, their collaborations, and the impact of their work.
In this final episode, we speak to a researcher who is determined to understand how brain matter creates intelligence, and why the future of AI depends on it.
Jeff Hawkins: 00:42
I’m Jeff Hawkins. And I started a company called Numenta, which is a research company. It’s sort of like a private lab. And we do research into brain theory, specifically understanding how the neocortex works, and the things it’s connected to.
We also take that knowledge we’ve learned about brains and how they work and apply it to improving AI, artificial intelligence.
So it’s sort of a dual mission for the company. And I’m the chief scientist there, as well as one of the founders.
Jeff Hawkins: 01:20
So, I have a recent book called A Thousand Brains: A New Theory of Intelligence. The focus of the book is really the theories we’ve developed, the knowledge we’ve gained, about how the neocortex works.
And we’ve made some very significant progress. We’ve written that up in journal papers, peer reviewed journal papers. But to reach a broader audience, and to just sort of spread the word broadly, I wrote a book about it.
So it covers essentially the basics of how the neocortex works, how we think, how we understand the world. We describe our discoveries in there.
I also extended the book beyond that. It’s in three sections. The first section is all about the brain. The second section is about AI, and how the future of artificial intelligence is going to play out, especially in light of what we’ve learned about the brain.
And then the third section is really a bit more philosophical about the future of humanity and the future of intelligence and intelligent machines. And how we might think about humans’ future more broadly. So it covers a lot of ground.
Jeff Hawkins: 02:31
I think you could say what we’re doing is reverse engineering the brain. But you know, what it means is we have this thing in front of us, we know it works, we have a lot of data on it. We don’t have all the data. We have a lot of things we can’t measure about the brain. But we have a lot, a tremendous amount of data.
And so we’re just trying to figure out: how does this thing work? What do the different pieces do, and how do they work together? It’d be no different than if I gave you a computer and you didn’t know anything about what computers were, and I said, “Well, what does it do? And how does it do it?”
Well, it’d be very difficult to figure out. But after many years, teams of people working on it would probably be able to figure that out. And that would be reverse engineering it: okay, now we have a theory of how computers work, now we have a theory of how the brain works. The brain, of course, is very difficult to study, because it’s living tissue, and with human brains especially, we can’t just put probes in them all the time.
So it’s very difficult to get data. But there’s a lot of it. So yeah, we’re reverse engineering the brain. That’s a simple way to phrase the whole thing.
I studied engineering in undergraduate school. And right after I got out of school I started my first job, working at Intel in the semiconductor business. And I immediately fell in love with brains. This happened because I read an article by Francis Crick, which is in Scientific American, the September ‘79 issue.
And he wrote the last article in that issue. It was a dedicated issue on the brain. And he pointed out that we had all this knowledge about the brain, all these facts. We collected facts. This is back in ’79. And yet there was no theory underlying it. It was sort of an emperor-has-no-clothes type of thing.
Like, yeah, we talk as if we understand this thing, but we have no idea how it works.
And so I said, “That’s a fascinating puzzle. Here’s a puzzle, we have all these pieces, but no overarching theory.”
And I felt like, there’s no reason we can’t do that in my lifetime. There’s no reason we can’t solve this problem, like, “What is the brain doing? And how does it do it?” In a very detailed way.
And so I just fell in love with the idea. Then I realized that I don’t think there’s anything more interesting or important to work on, because every human endeavour is based on the brain. Everything we’ve ever done in the arts and the sciences, and literature and humanities and politics. It’s all brains.
And in fact, nothing can be understood. Only brains understand things. Only brains ask questions. So it felt to me like if we don’t understand the brain, we don’t really understand anything. And so it’s just like, “Wow, we have to work on this.”
And so I dedicated my life to that. I became a graduate student at Berkeley, UC Berkeley in California. And I quickly found out that it wasn’t really possible to be a theoretical brain scientist. You had to work in a lab. I didn’t really understand this. I said, “No, I really want to work on theories.”
“No, no, you can’t do that.”
And so I really got blocked. So I decided to go back into industry for a few years; I thought it’d be four years before returning to academia. But it turned out that I ended up starting a couple of very successful computing companies, Palm Computing and Handspring.
And I worked on the first mobile computers and the first smartphones. And this became a big business. But I really had to get myself out of it, because I wanted to get back to brains.
So I picked a date. And I said, “Okay, I’m gonna leave this stuff. I have to just get back to working on brains.”
And so then I left, and the next thing I did was ask, “What am I gonna do?” I had some neuroscientist friends and they said, “Why don’t you start an institute studying neuroscience, neocortical theory?”
And I said, “That’s crazy, but why not?” So we did that.
So we created the Redwood Neuroscience Institute, which is now actually at UC Berkeley. And it’s still ongoing. But I ran that for a few years. And then I decided the best approach for me to achieve my personal scientific goals was to start an independent lab. And that’s Numenta. And we did that. And we’ve been doing that now for 17 years.
Okay, well, so you know, the human nervous system is a complex thing. It’s not one thing. It’s all made up of neurons, but it comprises everything from your spinal cord to your big brain, and so on.
So it’s multiple organs. If we want to think about intelligence, our ability to understand the world and see the world, there are some parts that are more important than others. And I’m going to annoy some neuroscientists in this answer, because everyone cares about their little piece, but I’m gonna give the sort of big picture.
The largest part of the human brain, and of most mammals’ brains, is the neocortex, which is the thing you see on top.
It’s a big wrinkly thing, about two and a half to three millimetres thick. It’s a big sheet of tissue that’s folded. You see all those folds because it has to get wrinkled to fit inside your skull.
And the neocortex is associated with all of our perception, vision, and hearing, and language. It creates our language. When we think about things, when we plan things, it’s the neocortex. It is really the organ of intelligence. There are other parts that are very important too, the hippocampus and the entorhinal cortex, and so on.
But our main focus has been on the neocortex. The largest, by far. It occupies 75% of the volume of your brain, although it doesn’t have the most cells.
So that organ, fortunately, the neocortex is very regular in its structure. Although it’s quite large, it’s about the size of a dinner napkin, and three millimetres thick. It has a very repetitive structure to it.
So it’s not just a blob of cells. It has this very detailed architecture. If you look within the three millimetres you’ll see very specific types of cells connected in very specific ways. It’s quite complicated. But that complication is pretty much the same everywhere in the cortex. Which, for scientists who want to understand how something works, is a great, wonderful discovery, because we don’t have to think that each part of the neocortex is doing something completely different.
It’s really like, “No, they’re all doing something the same.” And the first person to point this out was a famous neurophysiologist named Vernon Mountcastle.
And he literally wrote this famous essay back in the ’50s, I think it was (oh no, ’70s, excuse me), where he said, “It looks like this is the same algorithm operating everywhere. So vision and hearing, and touch and language, these are all actually the same underlying process, the same complex algorithm. Somehow they’re all the same. And if we understand what that process is, we’ll understand everything.”
Now, this was such a revolutionary idea that a lot of people didn’t believe it. And a lot of people did believe it. But no one could really imagine what that algorithm was, what was going on.
Vernon Mountcastle went further and he said the neocortex is divided into what are called columns and mini-columns.
It’s confusing, but you can think of columns as being somewhere between half a millimetre and a millimetre in diameter, and they span the full thickness. So they’re about two and a half to three millimetres deep and a millimetre wide.
And that seems to be an element that’s repeated. There are about 150,000 columns in a human brain, and this becomes the element we want to understand. There are about 100,000 neurons in a cortical column. And that’s sort of the element of processing.
It’s confusing because these are subdivided even further into what are called mini-columns, which are very hairlike structures, and which really relate to how the brain develops in utero, where the neurons come along as little strings. And there are about 120 cells in each mini-column.
And Mountcastle said these are functional too. But no one understood what they were. So you have this big sheet of cells that’s responsible for everything, it looks the same, pretty much the same everywhere. Not exactly, but pretty close.
And this is divided into 150,000 columns, and the columns are divided into a couple of hundred mini-columns. And this is like a great puzzle. We should be able to figure this out.
Of course, you can’t understand the neocortex in isolation. There are many other things it relates to which we could go into, but they get pretty detailed: the thalamus and how it interacts with the entorhinal cortex and the hippocampus.
But to a very top-level, first-order approximation, we can say that if we understand the neocortex, we can understand basically how the brain understands the world and how it thinks, because that’s the organ that does that.
So our thinking about how the brain works is evolving and has evolved. And still, the field of neuroscience hasn’t coalesced around one idea here, although we’re proposing, I’m proposing, a very specific way of thinking about it.
And that’s the contents of my book A Thousand Brains. I think the most dominant way to think about it right now is sort of this hierarchical processing.
So imagine you’re looking at something and there’s an image on your retina at the back of your eyes.
It’s upside down, which is interesting, but doesn’t really matter. And there are these cells in the retina which project what they’re sensing to part of the neocortex.
It almost goes straight there. It makes a stop in the thalamus, but it gets right to the neocortex. And the idea then is that it’s sort of like this picture being projected onto part of your neocortex. And then what happens is one part of the neocortex processes this input from the eyes.
And then it passes its output to another section of the neocortex, which processes it, does something, and then sends it to another section of your cortex.
And after you do this about four times, the brain knows what it is you’re looking at.
So there’s this hierarchical processing. And this has been the dominant view, for many people, about how the brain works, how the neocortex works, for a long time. There’s a lot of evidence supporting it.
Today’s neural networks in AI, when people do vision recognition, they kind of work in this fashion.
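Purely to illustrate this hierarchical, feedforward view, here is a minimal Python sketch in which each “region” transforms its input and passes the result to the next. The region sizes, random weights and four-stage depth are illustrative assumptions, not anything specified in the interview.

```python
# Illustrative sketch of the classic feedforward, hierarchical view:
# each region transforms its input and passes the result up the hierarchy.
# Sizes and weights are made up; this is not a model of any real cortex.
import numpy as np

rng = np.random.default_rng(0)

def make_region(n_in, n_out):
    """One 'region': a random linear transform followed by a nonlinearity."""
    w = rng.standard_normal((n_in, n_out)) * 0.1
    return lambda x: np.maximum(0.0, x @ w)

# Roughly: retina -> region 1 -> region 2 -> region 3 -> region 4.
hierarchy = [make_region(256, 128), make_region(128, 64),
             make_region(64, 32), make_region(32, 16)]

retinal_input = rng.random(256)      # stand-in for the pattern on the retina
activity = retinal_input
for region in hierarchy:             # each region processes and passes it on
    activity = region(activity)

print("top-level representation:", activity.shape)   # (16,)
```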
But I think it’s fundamentally wrong. It’s not that the hierarchy doesn’t exist. What’s really the core of our work, and the core of the theories we’re promoting, is that the brain does not look at static images.
We don’t do anything statically. We’re constantly moving; we move our eyes three times a second, and the inputs to the brain are constantly changing.
And it’s not like we think in between those changes. It’s not like we take snapshots, snapshots, snapshots.
When we look around the world, we actually see things in different places, at different locations. And so movement is critical. Think about when you touch something: you can’t understand it at all without moving your fingers. So we move our fingers over a surface, and again, it’s not just the changing pattern on our fingers. What we say is that the brain is a sensory motor system, meaning that it’s sensing things while it’s moving.
So it’s motor and sensing at the same time. And the cortex has to know where its sensors are in the world and how they’re moving.
And so you can’t understand the world just by thinking about it as taking a bunch of snapshots or pictures. You have to understand that the brain is constantly, in incredible detail, tracking where in space it’s looking and where it’s feeling, and how your fingers and eyes are moving.
So it’s a sensory motor system. This idea of sensory motor learning and sensory motor inference is not new, but it hasn’t really filtered into the neuroscience literature. I mean, people say, “Oh yes, it has to do that.” But theories of the brain and how it works really haven’t incorporated that.
There are some exceptions in work that’s been done on what are called place cells and grid cells, but that’s not in the neocortex.
And it just really hasn’t filtered in like, “Hey, we have to understand how the brain works as a sensory motor system.”
So in a nutshell, what we’ve discovered is that earlier I talked about these cortical columns. Each one of those cortical columns is a complete sensory motor learning system.
Each one is tracking where its input is in space, in the world. It is literally keeping track of the location in space, and there are different spaces, that its input is coming from.
And as the input changes, it knows where those sensations occurred in space. And it builds up a model of the thing that’s sending them. So the cortex is a model-building system; it essentially learns a model of the world and a model of everything in it.
Everything we know and everything we interact with, we have a model of it in our head. But the cortex does this through movement, and by incorporating space, in addition to just sensation.
This is a very big change to how most people think about how the brain works. No scientist will say, “Oh, that’s not true.” But it just hasn’t really filtered through the neuroscience, almost none of it, I’d say. It’s a big field, so I can’t say none, but the vast majority of neuroscientists don’t think about it this way.
So we’ve actually figured out a lot of the details about how this actually happens, and how the neurons do this. How do they know where they are in space? How do they build these models? How do they compensate for things like when you’re tilting your head, and so on? It’s really fascinating.
But the brain is really a sensory motor processing system. And each cortical column is actually doing complete sensory motor modeling.
And now the hierarchy, remember I talked earlier about the hierarchy, well, that’s really there. Information does get passed from region to region of the cortex. But we’re not just passing patterns, we’re passing entire models. We’re saying, I can recognize, let’s say, a letter, and then something else puts letters together into words. And then you create sentences.
It’s not just spatial patterns, it’s like three dimensional models, or, like, “My computer has keys and a screen and a mouse. And these all have positions in space, and so on.”
So the brain is always processing. It is building models of objects, three dimensional, two dimensional, one dimensional, multiple dimensions, and building them in a sort of hierarchical way. And that’s our model of the world. And it’s all based on movement.
So that’s a lot to absorb. But if you just want to remember one thing, the brain is a sensory motor processing system.
And we have to give equal measure to how movement is understood by the brain, not just what’s coming from the sensors.
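As a toy illustration of the sensory motor idea described above, here is a Python sketch of a single “column” that stores (location, feature) pairs for each object it learns, and recognizes an object by accumulating evidence as a sensor moves from location to location. The objects, features and coordinates are invented for the example; this is not Numenta’s actual implementation.

```python
# Toy sketch of sensory-motor inference: a "column" stores (location, feature)
# pairs per object, and recognizes an object by moving a sensor and checking
# which stored model matches what is sensed at each location.
from collections import defaultdict

class ToyColumn:
    def __init__(self):
        self.models = defaultdict(dict)   # object -> {location: feature}

    def learn(self, obj, location, feature):
        self.models[obj][location] = feature

    def infer(self, observations):
        """observations: iterable of (location, feature) gathered by moving."""
        candidates = set(self.models)
        for loc, feat in observations:           # each movement adds evidence
            candidates = {o for o in candidates
                          if self.models[o].get(loc) == feat}
        return candidates

column = ToyColumn()
column.learn("coffee_cup", (0, 0), "curved")
column.learn("coffee_cup", (1, 0), "handle")
column.learn("soda_can", (0, 0), "curved")
column.learn("soda_can", (1, 0), "smooth")

# Move the "finger" to two locations; the second touch disambiguates.
print(column.infer([((0, 0), "curved"), ((1, 0), "handle")]))  # {'coffee_cup'}
```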
Jeff Hawkins: 16:41
You know, it’s fun visiting neuroscience labs, because some of them have, you know, the electron microscopes, some have these sequencing machines, some have, you know, all these animals running around and doing all these crazy things, you know.
A theorist’s lab is pretty boring, actually, relative to that. We have computers, we have whiteboards. Our whiteboards have lots of fun writing on them, you know, some mathematical, a lot of anatomy and structural diagrams of the brain, and so on.
So I think a typical thing you might find in our office is brains. There’s a lot that’s known about the anatomy of the brain, detailed anatomy. In the neocortex, there are literally thousands of papers that have been written about what the cell types are, and where they’re located, and how they’re connected together, and how they might work, and so on.
And we have pictures of those all over the place. And we draw them by hand, trying to tease apart what’s going on in these different things.
But other than that it is pretty boring. I suppose it’s like visiting a theoretical physicist’s lab, you know, a pad of paper and a pencil, and some writing on the whiteboard. It’s kind of like that.
One thing we did do, which is unusual, very unusual I think: one of our team members once said, “Why don’t we record our lab meetings and post them on YouTube?”
And these are unfiltered lab meetings. We get together and we argue with each other, and, you know, draw on each other’s pictures and debate things.
And so we started doing that. And I said, “Who’s gonna want to watch this?”
Well, it turns out a lot of people wanted to watch it. And I think what was interesting about it is, I’m not aware of any other lab doing that. Most people think of it as airing your dirty laundry; you know, we want to come across like we know everything.
But, you know, we’ll be honest, we don’t have this stuff all figured out.
But we did that for a while. So that’s something interesting we did. We haven’t done it recently, because the work more recently has been on AI-type stuff. But you can actually see what it’s like to be in our office. Those are on YouTube, and you can dig them up.
Artificial intelligence has had a real resurgence in the last 10 to 15 years, and it’s not so much new algorithms, although there is a lot of that. It’s also because computers have gotten very big, and we’ve been able to collect lots of data.
And so today’s AI is dominated by artificial neural networks, sometimes called deep learning networks. This idea has been around for 50 years.
And they work kind of like I mentioned earlier: they take a series of artificial neurons, they pass information into them, it’s processed, and then it’s passed to another layer, and then another layer, and another layer.
And if I was going to do an image recognition system, I might do this maybe 100 times. And then I can classify the image, say what it is. They have now extended these networks to natural language and generating images and generating language. The results are very impressive, what people have done recently. It’s really amazing, the progress that’s been made.
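As a much-reduced illustration of this layer-after-layer approach, here is a sketch of a small classifier using PyTorch (assumed to be available); real image-recognition systems stack far more layers and are trained on large datasets, and the sizes here are arbitrary.

```python
# A minimal sketch of a deep, layered classifier: information is passed
# through layer after layer and then classified. Untrained, so the output
# is meaningless; this only shows the structure being described.
import torch
from torch import nn

model = nn.Sequential(
    nn.Flatten(),                          # e.g. a 28x28 image -> 784 values
    nn.Linear(28 * 28, 256), nn.ReLU(),    # layer 1
    nn.Linear(256, 128), nn.ReLU(),        # layer 2
    nn.Linear(128, 64), nn.ReLU(),         # layer 3
    nn.Linear(64, 10),                     # classify into 10 categories
)

fake_image = torch.rand(1, 1, 28, 28)      # stand-in for a real photo
scores = model(fake_image)
print("predicted class:", scores.argmax(dim=1).item())
```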
But the leaders in AI, most of them, I would say the majority of the real founding people in AI, do not believe that these networks are intelligent. They know these networks not only do not work the way the brain does, but also have severe limitations.
They’re very good at mimicking things, but they really have no idea what they’re doing. They have a lot of problems; they’re not flexible, and they make errors that humans would never make.
They do not generalize. You know, you can’t give them some open-ended unstructured question or problem to work on. They can fool people, but they don’t really have any idea what they’re doing.
And so there’s a belief among quite a few senior AI scientists that we need new approaches.
We’re going to hit limits here, despite how good it is, and how valuable it might be. It’s not really, truly AI. It’s not really intelligent.
And so many people really do want to build truly intelligent machines, ones we don’t have to make excuses for and say, well, it really doesn’t understand what it’s doing.
Now, that doesn’t take anything away; again, I want to emphasize that the current AI systems are much better than humans at many things, you know.
But we no longer marvel at a computer that’s better at arithmetic than we are, and we shouldn’t really be marveling at a computer that can recognize images better than we can. You know, with big computers, lots of power, lots of data, you can do things.
But it’s missing a lot. And that’s not just my opinion; it’s others’ too. I’ve felt this way my entire life: “Hey, if it’s not working like a brain, it’s not going to be intelligent.” And the systems that we use, these artificial neural networks, are quite primitive.
But that’s the state of the art today. It’s exciting. It’s great. But it’s also limited. And quite a few people think we need to go beyond that. And I think our work is going to be seminal in that effort to go beyond it.
Jeff Hawkins: 21:37
So let’s stick with AI and think about what it is today, and what we want it to be in the future.
So today’s AI is very valuable. We’ve been able to take brain principles, things we’ve learned about cells in the brain, parts of our theory, not the entire theory, and apply them to existing AI systems, existing artificial neural networks.
The way artificial neural networks treat neurons is very primitive. But if we take more biological approaches to that, we can make traditional AI systems much better.
So we have a whole team doing that right now. They can take these language models, that is, natural language models, that are prevalent today in AI.
And we can make them run 100 times faster, use much less memory, and lower the latency. It’s dramatic progress, and it’s the same models, the same stuff that they’re doing.
But we can make them much better. Which is important, because these models are so big, they take huge amounts of energy, and they’re incredibly expensive to run.
So we’ve been able to take brain principles and accelerate that and improve that. And I can talk about how we did that. It is kind of interesting. But we’ve been able to do that.
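The interview doesn’t spell out the mechanism; Numenta’s published work emphasizes sparse representations, so the Python sketch below assumes that idea: keep only the top-k activations in a layer and zero the rest, so that downstream computation can skip most of the work. The array sizes and the value of k are arbitrary.

```python
# Sketch of one brain-inspired acceleration idea (assumed here): enforce
# sparsity by keeping only the k largest activations per sample, so later
# matrix multiplies can ignore the zeros.
import numpy as np

rng = np.random.default_rng(1)

def topk_sparsify(x, k):
    """Zero out all but the k largest entries of x along the last axis."""
    out = np.zeros_like(x)
    idx = np.argpartition(x, -k, axis=-1)[..., -k:]
    np.put_along_axis(out, idx, np.take_along_axis(x, idx, axis=-1), axis=-1)
    return out

hidden = rng.standard_normal((4, 1024))        # activations from some layer
sparse_hidden = topk_sparsify(hidden, k=64)    # ~94% of entries are now zero

nonzero = np.count_nonzero(sparse_hidden) / sparse_hidden.size
print(f"fraction of active units: {nonzero:.3f}")  # ~0.062
```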
But on the other hand, we’re not happy with that. We’re not gonna stop there. We want to really build truly intelligent machines. So we have another team of people at Numenta that is working on sensory motor modeling, sensory motor inference, the thousand brains theory, if you will. That’s more research-y, in some sense, because it’s a field that hasn’t really developed yet, but we know what we have to do.
So we’re making great progress there too. We’re saying, “Okay, the future of AI is going to be built on sensory motor modeling, sensory motor inference.”
It doesn’t have to look like a robot or something like that. But it’s built on the principle that knowledge is stored at locations in space inside the brain.
It’s complicated, but the idea is that knowledge is structured, and it’s learned by moving through and sampling space.
And so we’re building the foundational first attempts, really, at software that works this way: in a practical way, and also one that models the key principles we believe are happening in the neocortex.
So that work is a little further out, meaning it’s going to take a few more years to be really important commercially, but we’re making good progress on it. So it’s exciting.
As far as I know, we are, I am sure, one of the leading research labs in the world doing sensory motor modeling and sensory motor inference and basing it on the principles of the neocortex.
So I believe that’s going to be foundational work. And one way to think about it is to think about computers and how they evolved.
It was in the 1940s, maybe the late ’30s, that people like Alan Turing and John von Neumann came up with the principles of computing.
They didn’t know how to build a computer. They had no idea how to do that. But they came up with the idea of algorithms running on machines, and it was, “Oh my gosh, this is going to be big.”
And then it took a long time for it to develop into real computers. Well, we’re a little bit beyond that stage right now, but that’s kind of the era we’re in.
We’ve figured out how brains kind of work, and what it means to think, and how it is you understand the world. And we say, “Okay, we have to build this.”
And so we’re just starting to do that. And our progress will be much quicker than back in the early days of computing, because we have this huge technological base to work upon.
But it’s in that era. So I think about the work we’re doing now, and if I take this century: the middle part of the century is going to be totally transformed by AI. We can talk about the good and the bad of that. But it’s going to be totally transformed by AI in the same way that the latter part of the 20th century was transformed by computers.
It’s really hard to imagine the significance of this transition that’s going to occur. But that’s happening.
Jean Mary Zarate: 25:39
Now, that’s it for this episode and the series of Tales from the Synapse.
Thanks again to Jeff Hawkins.
I’m Jean Mary Zarate, a senior editor at Nature Neuroscience. The producer was Dom Byrne. I do hope you’ve enjoyed this exploration of the world of neuroscience. Thanks for listening.