Giving Robots Social Skills

Robots can deliver food on a college campus and hit a hole-in-one on a golf course, but even the most sophisticated robots can’t carry out the basic social interactions that are vital to everyday human life.

MIT researchers have now incorporated certain social interactions into a framework for robotics, allowing machines to understand what it means to help or hinder one another, and to learn to perform these social behaviors on their own. In a simulated environment, a robot watches its partner, guesses what task it wants to accomplish, and then helps or hinders that other robot based on its own goals.

The researchers also showed that their model creates realistic and predictable social interactions. When they showed humans videos of these simulated robots interacting with one another, the human viewers mostly agreed with the model about what type of social behavior was taking place.

Enabling robots to demonstrate social skills can lead to smoother and more positive human-robot interactions. For example, a robot in an assisted living facility could use these capabilities to help create a more caring environment for elderly individuals. The new model could enable scientists to quantitatively measure social interactions, which could help psychologists study autism or analyze the effects of antidepressants.

“Robots will soon be in our world, and they really need to learn how to communicate with us on human terms. They need to understand when it’s time for them to help and when it’s time for them to see what they can do to prevent something from happening. This is very early work and we’re barely scratching the surface, but I think it’s the first serious attempt at understanding what it means for humans and machines to interact socially,” says Boris Katz, principal research scientist and head of the InfoLab Group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and a member of the Center for Brains, Minds and Machines (CBMM).

Joining Katz on the paper are co-lead author Ravi Tejwani, a research assistant at CSAIL; co-lead author Yen-Ling Kuo, a CSAIL PhD student; and Tianmin Shu, a postdoc in the Department of Brain and Cognitive Sciences. The senior author is Andrei Barbu, a research scientist at CSAIL and CBMM. The research will be presented in November at the Conference on Robot Learning.

A social simulation

To study social interactions, the researchers created a simulated environment where robots pursue physical and social goals as they move around a two-dimensional grid.

A physical goal relates to the environment. For example, a robot’s physical goal might be to navigate to a tree at a certain point on the grid. A social goal involves guessing what another robot is trying to do and then acting on that guess, such as helping another robot water the tree.
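To make the distinction concrete, here is a minimal Python sketch. It assumes a grid world with hypothetical names (`tree`, `guess_partner_goal`) rather than anything from the paper, and the nearest-goal heuristic merely stands in for the model’s actual inference about what a partner is trying to do.

```python
# Hypothetical illustration, not the authors' code: a physical goal is a
# fixed grid cell; a social goal is built on a guess about what another
# robot is trying to do. The nearest-goal rule is a deliberately simple
# stand-in for the paper's goal inference.

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

tree = (3, 5)  # a physical goal: the cell containing the tree

def guess_partner_goal(partner_position, candidate_goals):
    """Guess the partner's physical goal as the candidate nearest to it."""
    return min(candidate_goals, key=lambda g: manhattan(g, partner_position))

# A social goal acts on that guess, e.g. heading to the same tree to help.
partner_goal = guess_partner_goal((2, 4), [tree, (7, 1)])
```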

The researchers use their model to specify what a robot’s physical goals are, what its social goals are, and how much emphasis it should place on one over the other. The robot is rewarded for actions that bring it closer to accomplishing its goals. If a robot is trying to help its partner, it adjusts its reward to match that of the other robot; if it is trying to hinder, it adjusts its reward to be the opposite. The planner, an algorithm that decides which actions the robot should take, uses this continually updated reward to guide the robot toward a mix of physical and social goals.
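In code, that adjustment might look like the sketch below. This is an illustrative assumption, not the paper’s implementation: `mode`, `social_weight`, and the blending formula are hypothetical, and a real planner would recompute the reward at every step as its estimate of the partner’s goal changes.

```python
# Hypothetical sketch of blending physical and social rewards.
# "help" matches the partner's estimated reward; "hinder" negates it.

def blended_reward(physical_reward, estimated_partner_reward,
                   social_weight, mode):
    if mode == "help":
        social_term = estimated_partner_reward
    elif mode == "hinder":
        social_term = -estimated_partner_reward
    else:  # purely physical behavior
        social_term = 0.0
    # social_weight controls how much emphasis the robot places on its
    # social goal relative to its physical one.
    return (1 - social_weight) * physical_reward + social_weight * social_term
```

A planner could then score candidate actions with this blended reward, so the same machinery yields helping, hindering, or indifferent behavior depending on the mode and weight.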

“We’ve opened up a new mathematical framework for how you model the social interaction between two agents. If you’re a robot, and you want to go to location X, and I’m another robot and I see that you’re trying to get to location X, I can cooperate by helping you get to location X faster. That might mean moving X closer to you, finding another, better X, or taking whatever action you had to take at X. Our formulation allows the planner to discover the ‘how’; we specify the ‘what’ in terms of what social interactions mean mathematically,” says Tejwani.

Blending a robot’s physical and social goals is important for creating realistic interactions, since humans who help one another have limits on how far they will go. For example, a rational person probably wouldn’t just hand a stranger their wallet, says Barbu.

The researchers used this mathematical framework to define three types of robots. A level 0 robot has only physical goals and cannot reason socially. A level 1 robot has physical and social goals but assumes that all other robots have only physical goals; level 1 robots can take actions based on other robots’ physical goals, such as helping and hindering. A level 2 robot assumes other robots have social and physical goals; these robots can take more sophisticated actions, such as joining in to help together.
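One way to read this hierarchy is as increasing depth of reasoning about other robots’ goals. The following sketch, again with hypothetical names rather than the authors’ formulation, shows the distinction:

```python
# Hypothetical sketch: what a robot at each level models about a partner.

def goals_reasoned_about(level, partner):
    """Return which of the partner's goals a robot at this level models."""
    if level == 0:
        return None                      # no social reasoning at all
    if level == 1:
        return partner["physical_goal"]  # assumes the partner is purely physical
    # Level 2 also models the partner's social goal, which is what makes
    # coordinated actions like joining in to help together possible.
    return (partner["physical_goal"], partner["social_goal"])
```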

Evaluating the model

To see how their model compared with human perspectives on social interactions, the researchers created 98 different scenarios with robots at levels 0, 1, and 2. Twelve humans watched 196 video clips of the robots interacting, and were then asked to estimate the physical and social goals of those robots.

In most instances, their model agreed with what the humans thought about the social interactions taking place in each frame.

“We have this long-term interest, both in building computational models for robots, but also in digging deeper into the human aspects of this. We want to find out what features humans use to understand social interactions from these videos. Can we make an objective test of your ability to recognize social interactions? Maybe there is a way to teach people to recognize these social interactions and improve their abilities. We are a long way from that, but even being able to measure social interactions effectively is a big step forward,” says Barbu.

Toward greater sophistication

The researchers are working on developing a system with 3D agents in an environment that allows many more types of interactions, such as the manipulation of household objects. They also plan to modify their model to include environments where actions can fail.

The researchers also want to incorporate a neural network-based robot planner into the model, which learns from experience and performs faster. Ultimately, they hope to run an experiment to collect data about the features humans use to determine whether two robots are engaging in a social interaction.

“Hopefully, we’ll have a benchmark that allows all researchers to work on these social interactions, and inspire the kinds of science and engineering advances we’ve seen in other areas, such as object and action recognition,” says Barbu.

“I think this is a lovely application of structured reasoning to a complex yet urgent challenge,” says Tomer Ullman, assistant professor in the Department of Psychology at Harvard University and head of the Computation, Cognition and Development Lab, who was not involved with this research. “Even young infants understand social interactions like helping and hindering, but we don’t yet have machines that can perform this reasoning at anything like human-level flexibility. I believe models like the one proposed in this work, whose agents reason about the rewards of others and socially plan how best to thwart or support them, are a good step in the right direction.”

This research was supported by the Center for Brains, Minds and Machines; the National Science Foundation; the MIT CSAIL Systems that Learn Initiative; the MIT-IBM Watson AI Lab; the DARPA Artificial Social Intelligence for Successful Teams program; the U.S. Air Force Research Laboratory; the U.S. Air Force Artificial Intelligence Accelerator; and the Office of Naval Research.
