UVA and Toyota Research Institute

Machine Learning


UVA Link Lab Driving Simulator


Yenlin Kuo, an assistant professor of computer science, is building a driving simulator similar to this one at the University of Virginia College of Engineering's Link Lab to collect data on driving behavior. She will use that data to help the robot's AI associate what it sees with the meaning of words by observing how humans interact with their environment, or how the robot itself interacts with its environment.


Credit: Graeme Jenvey/University of Virginia School of Engineering and Applied Science

Self-driving cars are on the way, but will you really be content to sit back passively while a 2,000-pound autonomous robot drives you and your family around town?

As self-driving technology matures over the next few years, would we feel more at ease if a semi-autonomous car were able to explain its actions to the driver, for example why it suddenly applied the brakes even though the driver had not?

Even better, what if your car could help your teen not only learn to drive, but also drive more safely?

Yenlin Kuo, an Anita Jones Faculty Fellow and assistant professor of computer science at the University of Virginia's School of Engineering and Applied Science, is training machines to do all this and more using human language and reasoning. Her research is funded by a two-year Young Faculty Researcher Grant from the Toyota Research Institute.

“This project is about how artificial intelligence can understand the meaning of driver actions through language models and use that understanding to augment human capabilities,” Kuo said.

“Robots are not perfect on their own, and neither are we. We don't necessarily want to outsource our work to machines, but we can work with them to get better results.”

Eliminate the need to program every scenario

Achieving that level of cooperation requires machine learning models that imbue robots with generalizable reasoning skills.

“This is in contrast to collecting large datasets to train on every scenario, which would be expensive, if not impossible,” Kuo said.

Kuo is working with a team at the Toyota Research Institute to build a linguistic representation of driving behavior that will allow the robot to associate what it sees with the meaning of words by observing how humans interact with their environment, and by the robot interacting with the environment itself.

Let's say you're an inexperienced driver, or you grew up in Miami and moved to Boston. Wouldn't it be nice to have a car that could help you handle icy roads?

This new intelligence will be particularly important in dealing with unusual situations, such as helping less experienced drivers adapt to road conditions or helping them navigate tricky situations.

“We want to apply learned representations in shared autonomy. For example, the AI can describe the high-level intent to turn right without slipping and instruct the driver to slow down to a certain speed while turning,” Kuo said. “If the driver doesn't slow down enough, the AI will further adjust the speed. Or, if the driver turns too sharply, the AI will correct that.”
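The shared-autonomy behavior Kuo describes can be sketched in a few lines. This is a minimal illustration, not her team's implementation: the `Intent` structure, function names, and blending rule are all hypothetical stand-ins for the idea that the AI announces a language-level intent with safe limits, then nudges the driver's input toward those limits rather than overriding it outright.

```python
from dataclasses import dataclass

@dataclass
class Intent:
    description: str      # high-level intent, expressed in language
    target_speed: float   # speed (mph) the AI considers safe for the maneuver
    max_steer: float      # steering-angle limit (degrees) to avoid slipping

def shared_control(intent: Intent, driver_speed: float, driver_steer: float,
                   blend: float = 0.5) -> tuple[float, float]:
    """Blend the driver's input toward the AI's safe limits.

    blend=0 leaves the driver in full control; blend=1 clamps
    fully to the AI's limits.
    """
    speed = driver_speed
    if driver_speed > intent.target_speed:
        # Driver didn't slow down enough: nudge speed toward the target.
        speed = driver_speed + blend * (intent.target_speed - driver_speed)

    steer = driver_steer
    if abs(driver_steer) > intent.max_steer:
        # Driver turned too sharply: pull steering back toward the limit.
        limit = intent.max_steer if driver_steer > 0 else -intent.max_steer
        steer = driver_steer + blend * (limit - driver_steer)

    return speed, steer

# The AI announces its intent in language, then corrects as needed.
intent = Intent("turn right without slipping", target_speed=15.0, max_steer=25.0)
speed, steer = shared_control(intent, driver_speed=25.0, driver_steer=40.0)
print(f"{intent.description}: {speed} mph, {steer} degrees")  # 20.0 mph, 32.5 degrees
```

The blending parameter reflects the article's point that the AI works *with* the driver rather than replacing them: within safe limits the driver's input passes through unchanged, and corrections scale with how far the input strays.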

Kuo plans to develop language representations from a variety of data sources this summer, including a driving simulator she is building for her lab.

Her research has attracted attention: Kuo recently gave an invited talk on related research in the Association for the Advancement of Artificial Intelligence's 2024 New Faculty Highlights program. She also has a paper, “Learning Representations for Robust Human-Robot Interaction,” due to be published in AI Magazine.

Promoting human-centric AI

Kuo's proposal aligns closely with the Toyota Research Institute's goals of advancing human-centric AI, interactive driving and robotics.

“Once language-based representations are learned, their semantics can be used to share autonomy between humans and vehicles or robots, promoting usability and teamwork,” said Kuo's collaborator Guy Rothman, who manages the Institute's Human Aware Interaction and Learning team.

“It brings the power of language-based reasoning to driver-vehicle interaction, going far beyond existing approaches and making common-sense concepts more generalizable,” Rothman said.

So if you do end up handing over the keys to your car, the trust that Kuo's research instills should help ease any worries you may have.




