How scientists are trying to unlock human minds using AI

Compared with traditional psychological models built from simple mathematical equations, Centaur did a far better job of predicting behavior. Accurate predictions of how humans respond in psychological experiments are valuable in their own right: scientists could use Centaur, for example, to pilot experiments on a computer before paying for human participants. But in their paper, the researchers propose that Centaur could be more than just a predictor. By interrogating the mechanisms that allow Centaur to replicate human behavior so effectively, they argue, scientists could develop new theories about the inner workings of the mind.

However, some psychologists doubt whether Centaur can tell us much about the mind. It does predict human behavior better than traditional psychological models, but it also has a billion times more parameters. And just because the model behaves like a human on the outside doesn't mean it works like one on the inside. Olivia Guest, an assistant professor of computational cognitive science at Radboud University in the Netherlands, compares Centaur to a calculator. “By studying a calculator, you don't know what you'd learn about human addition,” she says.

Even if Centaur captures something important about human psychology, scientists may struggle to extract any insight from the model's millions of neurons. AI researchers are working hard to make sense of the internals of large language models, but they have barely cracked open the black box. Understanding an enormous neural-network model of the human mind may prove little easier than understanding the mind itself.

Another approach is to go small. The second of the two Nature studies focuses on tiny networks, some containing only a single neuron, that can nevertheless predict behavior in mice, rats, monkeys, and even humans. Because the networks are so small, it's possible to track the activity of each individual neuron and use that data to work out how the network generates its behavioral predictions. And while there's no guarantee that these models function like the brains they were trained to mimic, they can, at a minimum, generate testable hypotheses about human and animal cognition.

That comprehensibility comes at a cost. Unlike Centaur, which was trained to mimic human behavior across dozens of different tasks, each small network can predict behavior in only one particular task. One network, for example, specializes in predicting how people choose among slot machines. “If the behavior is really complex, you need a large network,” says Marcelo Mattar, an assistant professor of psychology and neuroscience at New York University, who led the small-network study and also contributed to Centaur. “The compromise, of course, is that now it's very hard to understand.”
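To make the idea concrete, here is a minimal, hypothetical sketch of the approach (not the study's actual model or data): a simulated participant plays a two-armed "slot machine" task, and a single logistic unit is trained to predict the next choice from recent choices and rewards. All parameter values and the simulated agent are illustrative assumptions; the point is that a model this small has weights you can read off directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a "participant": a simple Q-learning agent with softmax
# choice, standing in for real behavioral data (illustrative only).
n_trials = 2000
probs = [0.8, 0.2]          # reward probability of each slot machine
alpha, beta = 0.3, 3.0      # learning rate, inverse temperature
q = np.zeros(2)
choices, rewards = [], []
for _ in range(n_trials):
    p = np.exp(beta * q) / np.exp(beta * q).sum()
    c = rng.choice(2, p=p)
    r = float(rng.random() < probs[c])
    q[c] += alpha * (r - q[c])
    choices.append(c)
    rewards.append(r)

# Tiny "network": one logistic unit reading the last k (choice, reward)
# pairs, coded as +/-1, and predicting the next choice. Small enough
# that every learned weight can be inspected and interpreted.
k = 3
X = np.array([[choices[t - i] * 2 - 1 for i in range(1, k + 1)] +
              [rewards[t - i] * 2 - 1 for i in range(1, k + 1)]
              for t in range(k, n_trials)])
y = np.array(choices[k:])

w = np.zeros(2 * k)
b = 0.0
lr = 0.1
for _ in range(200):                      # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= lr * X.T @ grad / len(y)
    b -= lr * grad.mean()

acc = ((p > 0.5) == y).mean()
print(f"one-unit network predicts the next choice with accuracy {acc:.2f}")
```

Because the whole model is one weight vector, asking "how does it predict?" reduces to reading six numbers: for instance, large weights on the previous-choice features would indicate the unit has learned that this participant tends to repeat choices.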

This trade-off between prediction and understanding is a key feature of neural-network-driven science. (I happen to be writing a book about it.) Studies like Mattar's are making some progress toward closing that gap: like Centaur, his networks can predict behavior more accurately than traditional psychological models can. So is the LLM-interpretability research under way at places like Anthropic. But for now, our understanding of complex systems, from humans to the climate to proteins, lags further and further behind our ability to predict them.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.


