Stanford Vision and Learning Lab develops social robot to roam campus

Oct. 19, 2018, 2:20 a.m.

Researchers in the Stanford Vision and Learning Lab (SVL) are teaching a robot called JackRabbot 2 to understand social rules that humans use to interact with each other, with the ultimate goal of operating a robot that navigates safely and autonomously.  

SVL conducts research on computer vision, working to solve visual perception and cognition problems using automated image and video analysis, according to its website. It is currently directed by computer science professor Fei-Fei Li, Stanford Artificial Intelligence Lab Senior Research Scientist Juan Carlos Niebles and computer science associate professor Silvio Savarese.

SVL launched the first iteration of JackRabbot in March 2015, intending to build a robot for navigation and on-campus delivery. The robot would be available to help with tasks such as bringing objects to people or guiding them to their destinations.

This quarter, SVL will be teleoperating JackRabbot 2 – the second iteration of the project – on campus to record information about how people move around the University.

Unveiled in late September, the JackRabbot 2 has expanded the project’s focus to make human-robot interactions even smoother, with advancements aimed at improving its social intelligence, such as better light sensors and a 360-degree camera.

“The goal becomes much grander in that what we’re looking at is how to teach the robot to follow common sense and social norms,” Savarese said. “When we walk on the street, we follow certain rules and a certain personal distance … These are the types of rules and typical norms that are very difficult to codify in words.”

Researchers are also focusing on teaching JackRabbot 2 to use facial expressions and basic sounds to express emotions.

“Because the robot is going to be around people, we want it to be able to communicate intentions and its own desires,” said Roberto Martín-Martín, a postdoctoral research fellow working on the project. “For example, if it’s happy, or if it’s angry.”

Using its digital eyes or the loudspeakers in its head, JackRabbot 2 can convey feelings to create a degree of connection with humans. LED lights built into the robot also change colors to reflect how it feels, and a new long, black arm is useful for holding objects or pointing to provide directions.

“Now it’s [not only] a combination of the robot understanding its environment, understanding common sense [and] understanding social norms, but also of expressing those norms and intentions through different outputs,” Savarese said.

Currently, the researchers are exploring how to use the robot’s emotions and gestures to help it achieve its tasks. In training the robot, the team has to balance efficiency and social intelligence, taking into account concepts such as personal space.

“Suppose there are people talking to one another,” said Ashwini Pokle M.S. ’19. “We want the robot to recognize that there is a group of people, they are talking and they are a single unit, so it shouldn’t just walk between them even if there is space.”
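
That “don’t walk between a conversing group” rule can be approximated in software. The sketch below is a hypothetical Python heuristic, not SVL’s actual code: it clusters nearby pedestrians into groups and marks the space between group members as off-limits, so a path planner would route the robot around the group rather than through it. The distance thresholds are made-up values for illustration.

```python
# Hypothetical sketch: treat pedestrians standing near each other as one
# social unit and block the space between them. Not SVL's actual code.
from itertools import combinations
import math

GROUP_RADIUS = 2.0   # assumed max distance (meters) between people in one group
CLEARANCE = 0.5      # assumed buffer the robot would keep around the group

def group_pedestrians(positions, radius=GROUP_RADIUS):
    """Greedily cluster (x, y) pedestrian positions that stand within `radius`."""
    groups = []
    for p in positions:
        placed = False
        for g in groups:
            if any(math.dist(p, q) <= radius for q in g):
                g.append(p)
                placed = True
                break
        if not placed:
            groups.append([p])
    return groups

def blocked_segments(groups):
    """Return line segments between members of the same group; a planner that
    refuses to cross these will go around the group, not between the people."""
    segments = []
    for g in groups:
        segments.extend(combinations(g, 2))
    return segments

if __name__ == "__main__":
    people = [(0.0, 0.0), (1.2, 0.3), (8.0, 5.0)]   # two people chatting, one person alone
    groups = group_pedestrians(people)
    print(blocked_segments(groups))  # the robot should not pass between the first two
```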

Because the rules of social navigation are subtle and difficult to hard-code, the researchers are exploring different learning techniques, such as imitation learning and training from demonstrations. Toward this end, the team has already gathered data by operating the robot in simulated environments.

“We simulate different behaviors,” Pokle said. “We have random people walking around, we have random obstacles put all around … and then we ask people to navigate the robot manually, and we collect that data. And then we use deep learning to mimic that data.”
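
The pipeline Pokle describes is essentially behavioral cloning: log the observations the robot sees and the commands the human teleoperator issues, then fit a network that maps one to the other. Below is a minimal PyTorch sketch of that idea, with assumed observation and action dimensions and random placeholder tensors standing in for the logged demonstrations; it is an illustration of the technique, not the team’s implementation.

```python
# Minimal behavioral-cloning sketch (illustrative only, not SVL's code).
# Assumes each logged sample pairs an observation vector (e.g. nearby
# pedestrian positions, goal direction) with the teleoperator's velocity command.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 32, 2   # assumed sizes: sensor features -> (linear, angular) velocity

# Placeholder demonstration data; in practice this comes from the teleoperation logs.
obs = torch.randn(1024, OBS_DIM)
actions = torch.randn(1024, ACT_DIM)

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, ACT_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Supervised training: the network learns to mimic the human driver's commands.
for epoch in range(100):
    pred = policy(obs)
    loss = loss_fn(pred, actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At run time, the trained policy maps a new observation to a velocity command.
with torch.no_grad():
    command = policy(torch.randn(1, OBS_DIM))
```

In practice the observations would come from the robot’s sensors and the model would be checked against held-out demonstrations, but the core supervised loop is the same.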

Beyond this simulated data, the researchers also plan to collect real-world data so the robot can learn to handle new and unseen situations involving human behavior and interaction. For now, the team is concentrating on gathering additional data in a controlled manner via teleoperation, but it eventually hopes that JackRabbot 2 will be able to operate on its own.

“Navigating human environments in a friendly way opens a lot of possibilities,” Martín-Martín said. “Guiding people in malls or airports, bringing objects the last mile, surveillance for buildings. There are a lot of potential applications for this role.”

Savarese said he foresees a future in which robots cooperate with humans in jobs, tasks and other situations rather than replacing them.

“We want to show that it is possible for robots to intermingle with humans by learning human behavior [and] how to use social norms and common sense … so that people wouldn’t feel like the robot is an external agent, but rather is part of our community of humans,” he said.

Contact Michelle Leung at mleung2 ‘at’ stanford.edu.

Michelle Leung '22 is a writer for the Academics beat. She comes from Princeton, NJ and enjoys taking photos, making paper circuit cards and drinking tea with friends. Contact her at mleung2 'at' stanford.edu.
