New AI discoveries certainly look like the dawn of true machine reasoning



  • Although AI models have become remarkably sophisticated in a short period of time, there are still tasks at which humans remain the undisputed masters, including seemingly simple ones such as abstraction and inference.
  • But three new MIT papers introduce “abstraction libraries” that allow AI to learn new tasks in a way that loosely parallels how humans accomplish them, with the goal of improving the reasoning of large language models (LLMs).
  • Although these techniques have so far received only limited testing, they demonstrate that complex reasoning is not necessarily limited to humans.

If the goal of AI research is to one day recreate the human brain, there is still a long way to go. Large language models (LLMs) are fairly good at faking perception (and have fooled some programmers along the way), but imitating a human mind honed over millions of years of evolution is not so easy.

Consider abstraction, for example. Humans can learn new concepts without much conscious effort by forming high-level representations of complex topics that strip away unimportant details. But despite headlines about the exponential growth of AI capability, these systems still struggle with this kind of cognitive task.

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) show how everyday language can provide “a rich source of contextual information for language models,” according to an MIT press statement. To that end, they created three “abstraction libraries,” an attempt to give AI something closer to human reasoning. The scientists presented their findings in three separate papers at the International Conference on Learning Representations in Vienna earlier this month.



“Language models prefer to work with functions named in natural language,” MIT doctoral student Gabe Grand, lead author of one of the research papers, said in a press statement. “Our work makes abstractions simpler for language models, assigning each one a natural-language name and documentation, which leads to more interpretable code for programmers and improved system performance.”

Simply put, the three libraries, LILO (Library Induction from Language Observations), Ada (Action Domain Acquisition), and LGA (Language-Guided Abstraction), are each designed to supply this kind of human-like abstraction within a specific domain: computer programming, task planning, and robotics, respectively.

LILO combines neuro-symbolic techniques with an algorithm called Stitch (get it?) to identify and compress useful abstractions in code. This allows LLMs to bring a degree of common-sense knowledge to programming that was not available in previous models. A rough sense of the idea is sketched below.
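To make that concrete, here is a minimal, hypothetical Python sketch of the general library-learning idea: find a pattern that recurs across programs, factor it out into a reusable function, and give it a natural-language name and documentation. The toy corpus, the `step_and_turn` name, and the bigram-counting heuristic are all invented for illustration; the actual LILO system uses an LLM together with the Stitch compression algorithm over functional programs.

```python
# A toy sketch of library induction, NOT MIT's implementation: spot a
# repeated pattern across programs, extract it as a named, documented
# abstraction, and rewrite the programs to use it.

from collections import Counter

# Invented "corpus" of programs, each a list of primitive operations.
programs = [
    ["move", "rotate", "move", "rotate", "pen_down"],
    ["pen_up", "move", "rotate", "move", "rotate"],
    ["move", "rotate", "move", "rotate", "move", "rotate"],
]

def count_bigrams(progs):
    """Count how often each adjacent pair of operations occurs."""
    counts = Counter()
    for prog in progs:
        for a, b in zip(prog, prog[1:]):
            counts[(a, b)] += 1
    return counts

# Pick the most frequent pair as a candidate abstraction.
best_pair, freq = count_bigrams(programs).most_common(1)[0]

# Name and document the abstraction, as LILO does in natural language.
abstraction = {
    "name": "step_and_turn",              # hypothetical natural-language name
    "doc": "Move forward, then rotate.",  # hypothetical auto-generated docs
    "body": list(best_pair),
}

def rewrite(prog, abstr):
    """Replace occurrences of the abstraction's body with its name."""
    out, i = [], 0
    while i < len(prog):
        if prog[i:i + len(abstr["body"])] == abstr["body"]:
            out.append(abstr["name"])
            i += len(abstr["body"])
        else:
            out.append(prog[i])
            i += 1
    return out

compressed = [rewrite(p, abstraction) for p in programs]
print(abstraction["name"], "used", freq, "times")
print(compressed)  # shorter, more interpretable programs
```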

Ada, meanwhile, takes aim at a kind of everyday human reasoning that has proven surprisingly difficult to replicate in AI: planning.

“Making breakfast in the morning may require translating extensive knowledge about cooking and the kitchen into dozens of fine motor movements to find, crack, and fry a specific egg,” the researchers wrote in their paper. “Decades of research have developed representations and algorithms for solving narrow, short-term planning problems, but generalized, long-horizon planning remains a core, unresolved challenge for essentially all AI paradigms.”

The researchers built a framework in which a language model proposes action abstractions from a dataset, focusing on household tasks and command-based video games. When implemented on top of an existing LLM such as GPT-4, AI actions like “place a chilled wine in the cabinet” (a kitchen task) or “craft a bed” (a Minecraft one) saw task accuracy improve significantly, by 59 and 89 percent, respectively.
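Stripped to its essentials, the mechanism looks something like the hedged sketch below: a library of high-level operators, each expanding into primitive steps, so a planner can reason over short abstract plans instead of long primitive ones. The operator names and bodies here are invented for illustration; in Ada, a language model proposes and verifies such operators from language and interaction data.

```python
# A toy sketch of action abstraction for planning, not Ada's actual code.
# A high-level "operator" bundles a sequence of low-level steps; the planner
# works with the short abstract plan and expands it before execution.

# Hypothetical library of learned action abstractions.
ACTION_LIBRARY = {
    "chill_wine": ["open_fridge", "place_wine_in_fridge", "close_fridge", "wait"],
    "store_wine": ["open_cabinet", "place_wine_in_cabinet", "close_cabinet"],
}

def expand(plan):
    """Expand a high-level plan into primitive steps the agent can execute."""
    steps = []
    for action in plan:
        # Abstract actions expand to their stored body; primitives pass through.
        steps.extend(ACTION_LIBRARY.get(action, [action]))
    return steps

# "Put the chilled wine in the cabinet" becomes a three-step abstract plan.
abstract_plan = ["chill_wine", "take_wine_from_fridge", "store_wine"]
print(expand(abstract_plan))
```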



Finally, LGA helps robots complete complex tasks beyond simple image recognition. MIT News explains:

A human first provides a pre-trained language model with a description of a common task using natural language, such as “bring me a hat.” The model then converts this information into abstractions about the essential elements needed to perform this task. Finally, an imitation policy trained on several demonstrations can implement these abstractions to guide the robot to grab the desired item.
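Read as pseudocode, that pipeline might look like the toy sketch below: the task description selects which parts of the robot's observation matter, the rest is masked out, and the abstracted state is handed to an imitation-learned policy. The keyword lookup standing in for the language model, the feature names, and the policy stub are all assumptions made for illustration; the real system queries a pre-trained LLM and trains its policy on human demonstrations.

```python
# A toy sketch of language-guided abstraction (LGA), not MIT's actual code:
# use the task description to keep only task-relevant observation features,
# then feed the abstracted state to an imitation policy.

# Hypothetical mapping from task keywords to relevant state features,
# standing in for a query to a pre-trained language model.
RELEVANT_FEATURES = {
    "hat": ["hat_position", "gripper_state"],
    "bottle": ["bottle_position", "bin_position", "gripper_state"],
}

def abstract_state(observation, task):
    """Keep only the features the (toy) language model deems task-relevant."""
    for keyword, features in RELEVANT_FEATURES.items():
        if keyword in task:
            return {f: observation[f] for f in features}
    return observation  # fall back to the full observation

def imitation_policy(state):
    """Stand-in for a policy trained on a handful of demonstrations."""
    return "move_gripper_to:" + str(state.get("hat_position", "unknown"))

observation = {
    "hat_position": (0.4, 1.2),
    "bottle_position": (2.0, 0.3),
    "bin_position": (3.1, 0.0),
    "lighting": "dim",        # distractor the abstraction removes
    "gripper_state": "open",
}

state = abstract_state(observation, "bring me a hat")
print(state)                  # only hat-relevant features survive
print(imitation_policy(state))
```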

When tested on Spot, Boston Dynamics' dog-like robot, which was asked to pick up fruit or put a bottle in a recycling bin, the language model was able to produce a plan of action in what the researchers called an “unstructured environment.” This kind of navigation task could have real-world implications for self-driving cars and other autonomous technologies.

All of these techniques benefit AI development, but they also demonstrate one surprising truth: the human mind is a beautiful and powerful thing.


Darren lives in Portland and has a cat. He writes/edits about science fiction and how our world works. If you look hard enough, you can find his previous articles on Gizmodo and Paste.



