Improving AI helpers: Emulating irrational human behavior



To build AI systems that can work effectively with humans, it helps to first have good models of human behavior. However, humans tend to take suboptimal actions when making decisions.

This irrationality, which is especially difficult to model, often boils down to computational constraints: a human cannot spend decades thinking about the ideal solution to a single problem.

Researchers at MIT and the University of Washington have developed a method to model the behavior of an agent, whether human or machine, that accounts for the unknown computational constraints that may hamper the agent's ability to solve problems.

Their model can automatically infer an agent's computational constraints from just a small trace of the agent's previous actions. The result, the agent's so-called “inference budget,” can then be used to predict its future actions.

In a new paper, the researchers demonstrate how their method can be used to infer someone's navigation goals from previous routes and to predict a player's subsequent moves in a chess match. Their technique matches or outperforms other popular methods for modeling this type of decision-making.

Ultimately, this research could help scientists teach AI systems how humans behave, which could enable these systems to respond better to their human collaborators. Being able to understand human behavior, and then infer goals from that behavior, could make AI assistants much more useful, says Athul Paul Jacob, an electrical engineering and computer science (EECS) graduate student and lead author of the paper on this technique.

“If we look at how a human has acted in the past and know they are about to make a mistake, an AI agent could step in and suggest a better way to do it. The agent could also adapt to the weaknesses of its human collaborators. Being able to model human behavior is an important step toward building an AI agent that can actually help that human,” he says.

Jacob co-authored the paper with Abhishek Gupta, an assistant professor at the University of Washington, and senior author Jacob Andreas, an EECS associate professor and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the International Conference on Learning Representations.

Behavioral modeling

Researchers have been building computational models of human behavior for decades. Many traditional approaches try to account for suboptimal decisions by adding noise to the model: rather than having the agent always choose the correct option, such a model might have the agent make the correct choice 95 percent of the time.
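To make that idea concrete, here is a minimal sketch of such a noise-based model in Python; the 5 percent error rate, the move names, and the utility scores are assumptions for illustration, not details from the paper:

```python
import random

def noisy_rational_choice(options, score, error_rate=0.05):
    """Choose the best-scoring option most of the time, but make a
    uniformly random mistake with probability error_rate."""
    if random.random() < error_rate:
        return random.choice(options)   # occasional suboptimal slip
    return max(options, key=score)      # otherwise act optimally

# Example: an agent choosing among three moves with made-up utilities.
moves = ["a", "b", "c"]
utility = {"a": 1.0, "b": 0.4, "c": 0.2}
choice = noisy_rational_choice(moves, utility.get)  # "a" about 95% of the time
```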

However, these methods may not capture the fact that humans do not always behave in the same suboptimal manner.

Other researchers at MIT are also investigating more effective ways to plan and estimate goals in the face of suboptimal decisions.

To build the model, Jacob and his collaborators drew inspiration from previous research on chess players. They found that players spend less time thinking before acting when making simple moves, and that stronger players tend to spend more time planning than weaker players in difficult matches.

“Ultimately, we found that depth of planning – how long someone thinks about a problem – is a good indicator of human behavior,” Jacob says.

They built a framework that can infer the depth of an agent's planning from its previous actions and use that information to model the agent's decision-making process.

The first step of their method is to run an algorithm for a fixed amount of time to solve the problem being studied. For example, to study chess, they might let a chess-playing algorithm run for a certain number of steps. At the end, the researchers can see the decisions the algorithm made at each step.
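A minimal sketch of what this step might look like, using depth-limited search on a toy game tree as the problem-solving algorithm; the tree, its leaf values, and the zero heuristic are invented for illustration and are not the researchers' actual solver:

```python
# A toy "anytime" solver: depth-limited negamax search on a small game
# tree, recording the decision it would make at each step (depth).

TREE = {
    "left":  {"ll": 3, "lr": -2},
    "right": {"rl": 1, "rr": 4},
}

def value(node, depth):
    """Negamax value of a subtree; leaves are plain numbers.
    An unexpanded subtree at depth 0 gets a neutral heuristic of 0."""
    if isinstance(node, (int, float)):
        return node
    if depth == 0:
        return 0
    return max(-value(child, depth - 1) for child in node.values())

def decision_at_depth(tree, depth):
    """The move the solver would commit to if it stopped at this depth."""
    return max(tree, key=lambda move: -value(tree[move], depth - 1))

# The per-step record: the solver's preferred move after each search depth.
decisions_per_step = [decision_at_depth(TREE, d) for d in (1, 2)]
print(decisions_per_step)  # ['left', 'right'] - deeper search changes the choice
```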

Their model then compares these decisions to the behavior of an agent solving the same problem. It aligns the agent's decisions with the algorithm's decisions and identifies the step at which the agent stopped planning.

From this, the model can determine the agent's inference budget, or how long that agent will plan for this problem. It can then use the inference budget to predict how the agent would react when solving a similar problem.
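Continuing the toy sketch above, inferring a budget could then amount to finding the search depth whose decisions best match the agent's observed actions, and reusing that depth on new problems; this simple best-match rule is a stand-in assumption, not the paper's actual inference procedure:

```python
# Continuing the toy sketch above (reuses TREE and decision_at_depth).

def infer_budget(observed_actions, problems, max_depth=8):
    """Pick the search depth whose decisions best agree with the agent's
    observed actions across the given problems."""
    def agreement(depth):
        return sum(decision_at_depth(tree, depth) == action
                   for tree, action in zip(problems, observed_actions))
    return max(range(1, max_depth + 1), key=agreement)

def predict_action(tree, budget):
    """Predict the agent's move on a new problem: plan up to its budget."""
    return decision_at_depth(tree, budget)

# Example: one past trace suggests this agent plans two steps ahead.
budget = infer_budget(observed_actions=["right"], problems=[TREE], max_depth=2)
prediction = predict_action(TREE, budget)  # -> "right"
```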

Interpretable solution

This method is very efficient because it allows researchers to access the complete set of decisions made by the problem-solving algorithm without any additional work. This framework can also be applied to any problem that can be solved by a particular class of algorithms.

“For me, the most striking thing was the fact that this inference budget is very interpretable. It says that tougher problems require more planning, or that being a strong player means planning longer. When we first set out to do this, we didn't think our algorithm would be able to pick up on those behaviors naturally,” Jacob says.

The researchers tested their approach on three different modeling tasks: inferring navigation goals from previous routes, guessing someone's communicative intent from verbal cues, and predicting subsequent moves in human-versus-human chess matches.

Their method matched or outperformed popular alternatives in each experiment. In addition, the researchers showed that their model of human behavior matched up well with measures of player skill (in the chess matches) and task difficulty.

In the future, the researchers hope to use this approach to model planning processes in other areas, such as reinforcement learning, a trial-and-error technique commonly used in robotics. In the long term, we plan to continue this work with the larger goal of developing more effective AI collaborators.

This research was supported, in part, by the MIT Schwarzman College of Computing Artificial Intelligence for Augmentation and Productivity program and the National Science Foundation.


