New tools allow anyone to train robots | MIT News

Machine Learning


Teaching robots new skills used to require coding expertise. But a new generation of robots could potentially learn from just about anyone.

Engineers are designing robot helpers that can “learn from demonstration.” This more natural training strategy lets a person guide a robot through a task, typically in one of three ways: by remote control, such as operating a joystick to teleoperate the robot; by physically moving the robot through the motions; or by performing the task themselves while the robot watches and mimics.

A robot that learns from demonstration is usually trained with just one of these three approaches. MIT engineers, however, have now developed a single training interface that allows a robot to learn a task through any of the three methods. The interface is a handheld, sensor-equipped tool that can attach to many common collaborative robotic arms. A person can use the tool to teach a robot a task by remotely controlling the robot, physically manipulating it, or demonstrating the task themselves.

The MIT team tested the new tool, which they call a “versatile demonstration interface,” on a standard collaborative robotic arm. Volunteers with manufacturing expertise used the interface to perform two manual tasks commonly carried out on factory floors.

The researchers say the new interface offers greater flexibility in how users, or “teachers,” can interact with a robot during training. It may also allow a robot to learn a wider range of skills. For example, one person could remotely train a robot to handle toxic substances, while further down the production line, another person could physically move the robot through the motions of boxing up a product, and at the end of the line, someone else could demonstrate drawing the company logo while the robot watches and learns to do the same.

“We are trying to create highly intelligent and skilled teammates that can work effectively with humans to accomplish complex tasks,” says Mike Hagenow, a postdoc at MIT in the Department of Aeronautics and Astronautics. “We believe flexible demonstration tools can help far beyond the manufacturing floor, in other domains where we hope to see increased robot adoption, such as home and caregiving settings.”

Hagenow will present a paper detailing the new interface at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in October. His MIT co-authors are Dimosthenis Kontogiorgos, a postdoc at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); Yanwei Wang PhD ’25, who recently earned a doctorate in electrical engineering and computer science; and Julie Shah, MIT professor and head of the Department of Aeronautics and Astronautics.

Training together

Shah’s group at MIT designs robots that can work alongside humans in workplaces, hospitals, and homes. A main focus of her research is developing systems that enable people to teach robots new tasks “on the job,” naturally and intuitively. Such systems could, for instance, help factory floor workers quickly and naturally adjust a robot’s operations to improve a task in the moment, rather than pausing to reprogram the robot’s software from scratch.

The team’s new work builds on a strategy for robot learning known as “learning from demonstration,” or LfD, in which robots are designed to be trained in more natural, intuitive ways. In surveying the LfD literature, Hagenow and Shah found that existing LfD training methods fall broadly into three main categories: teleoperation, kinesthetic training, and natural teaching.
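The article does not include code, but the core idea of learning from demonstration can be sketched in a few lines: record the actions a human takes in particular states, then have the robot replay the action from the most similar recorded state. This is a hypothetical, minimal illustration (sometimes called behavioral cloning); the class and state representation are assumptions for the sketch, not part of the MIT system.

```python
# Minimal learning-from-demonstration sketch: a nearest-neighbor policy
# built from recorded (state, action) pairs. Purely illustrative.

class DemonstrationPolicy:
    """Replays the action whose recorded state is closest to the current one."""

    def __init__(self):
        self.states = []   # recorded states (tuples of floats)
        self.actions = []  # actions the demonstrator took in those states

    def record(self, state, action):
        """Store one (state, action) pair observed during a demonstration."""
        self.states.append(state)
        self.actions.append(action)

    def act(self, state):
        """Return the action from the most similar recorded state."""
        def sq_dist(s):
            return sum((a - b) ** 2 for a, b in zip(s, state))
        best = min(range(len(self.states)), key=lambda i: sq_dist(self.states[i]))
        return self.actions[best]

# A demonstrator guides the robot through two states of a toy task.
policy = DemonstrationPolicy()
policy.record((0.0, 0.0), "move_right")
policy.record((1.0, 0.0), "press_down")

# Later, in a state near the first demonstration, the robot mimics it.
print(policy.act((0.1, 0.0)))  # -> move_right
```

Real LfD systems replace the nearest-neighbor lookup with learned models that generalize across states, but the data they consume has this same shape: states paired with demonstrated actions.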

One training method may work better than the other two for a particular person or task. Shah and Hagenow wondered whether they could design a tool that combines all three methods, enabling a robot to learn more tasks from more people.

“If we can bring together these three different ways someone might want to interact with a robot, it could benefit many different tasks and many different people,” says Hagenow.

Handy tasks

With that goal in mind, the team designed the new versatile demonstration interface (VDI). The interface is a handheld attachment that fits onto the arm of a typical collaborative robotic arm. The attachment is equipped with a camera and markers that track the tool’s position and movement over time, along with a force sensor that measures the amount of pressure applied during a given task.

When the interface is attached to the robot, the entire robot can be controlled remotely, and the interface’s camera records the robot’s movements, which the robot can then use as training data to learn the task on its own. Similarly, a person can physically move the robot through a task with the interface attached. The VDI can also be detached and held by a person to perform the desired task; the camera records the VDI’s movements, which the robot can likewise use to mimic the task once the VDI is reattached.
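A key point in the passage above is that all three demonstration modes feed the same kind of training record: a timed stream of tool poses and forces. The following sketch shows what such a unified log might look like; the field names and data layout are assumptions for illustration, not the actual VDI data format.

```python
# Hypothetical unified demonstration log: whichever of the three modes a
# person uses, the robot receives the same record type. Illustrative only.

from dataclasses import dataclass, field
from typing import List, Tuple

MODES = {"teleoperation", "kinesthetic", "natural"}

@dataclass
class DemoSample:
    time_s: float
    pose: Tuple[float, float, float]   # tool position tracked via camera/markers
    force_n: float                     # pressure measured by the force sensor

@dataclass
class Demonstration:
    mode: str
    samples: List[DemoSample] = field(default_factory=list)

    def __post_init__(self):
        if self.mode not in MODES:
            raise ValueError(f"unknown demonstration mode: {self.mode}")

    def log(self, time_s, pose, force_n):
        """Append one timestamped pose/force reading to the demonstration."""
        self.samples.append(DemoSample(time_s, pose, force_n))

# A natural demonstration: the person holds the detached tool and presses down.
demo = Demonstration(mode="natural")
demo.log(0.0, (0.10, 0.00, 0.30), 0.0)
demo.log(0.5, (0.10, 0.00, 0.25), 2.5)  # force rises as the tool presses
print(demo.mode, len(demo.samples))  # -> natural 2
```

Keeping one record type across modes is what would let a downstream learner treat a teleoperated, kinesthetic, or natural demonstration interchangeably.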

To test the attachment’s ease of use, the team brought the interface, attached to a collaborative robotic arm, to a local innovation center where manufacturing experts learn about and test technologies that can improve factory floor processes. The researchers set up an experiment in which volunteers at the center were asked to use all three of the interface’s training methods to complete two common manufacturing tasks: press-fitting and molding. In press-fitting, the user trained the robot to push a peg into a hole, similar to many fastening tasks. For molding, a volunteer trained the robot to push and roll a rubbery, putty-like substance evenly around the surface of a center rod, similar to some thermoforming tasks.

For each of the two tasks, the volunteers were asked to use each of the three training methods: first teleoperating the robot with a joystick, then kinesthetically manipulating the robot, and finally detaching the attachment from the robot and performing the task themselves while the robot recorded the attachment’s forces and movements.

The researchers found that volunteers generally preferred the natural method over teleoperation and kinesthetic training. The users, all of whom were manufacturing experts, did offer scenarios in which each method might have advantages over the others. Teleoperation, for instance, may be preferable for training a robot to handle hazardous or toxic substances. Kinesthetic training could help workers adjust the position of a robot tasked with moving heavy packages. And natural teaching could be beneficial for demonstrating tasks that involve delicate, precise manipulation.

“We imagine using our demonstration interface in flexible manufacturing environments where one robot might assist across a range of tasks that benefit from specific types of demonstrations,” says Hagenow, who plans to refine the attachment’s design based on user feedback and to use the new design to test robot learning. “We see this study as demonstrating how greater flexibility in collaborative robots can be achieved through interfaces that expand the ways end users interact with robots during teaching.”

This work was supported, in part, by the MIT Postdoctoral Fellowship Program for Engineering Excellence and the Wallenberg Foundation Postdoctoral Research Fellowship.


