The new ML framework that could make robots safer in crowds

Safety remains the biggest barrier to 'real world' robots, but researchers have developed a framework that 'outperforms' current systems.
20 October 2020

Boston Dynamics’ ‘Spot’ robot. Source: AFP

  • Safety remains the biggest barrier to moving robots into the ‘real world’ around us
  • Researchers from Stanford and Toyota Research Institute have developed a framework that ‘outperforms’ current systems 

Robots will soon be moving around us in the ‘real world’. They won’t be confined behind the closed doors of warehouses or factories; they will operate in and interact with the world around us, taxiing us around town or delivering our mail.

With organizations like Boston Dynamics and Amazon Robotics making rapid advances in the technology, 61% of executives expect their organizations to use robotics in uncontrolled environments within the next two years, according to Accenture.

While these machines will be capable of easily performing roles traditionally carried out by people, question marks still loom over their innate lack of instinct: they lack the mass of situational data humans generate and store throughout our lives, which subconsciously enables us to sense a risky maneuver on the road or somebody about to step in front of us.

Building safety into robots and autonomous vehicles remains one of the key challenges to their deployment. Developing systems that can operate safely in the real world requires test tracks, controlled road trials, and thousands of hours devoted to data-led scenario modeling and the creation of complex, unpredictable virtual environments.

But all that testing and modeling can’t prepare for every scenario.

Researchers at Stanford University and Toyota Research Institute (TRI) have developed a framework that could help prevent accidents as autonomous vehicles and robotics systems increasingly operate and interact in crowded environments. As reported by Tech Xplore, the framework combines two tools: a machine learning algorithm and a technique to achieve risk-sensitive control.

“The main goal of our work is to enable self-driving cars and other robots to operate safely among humans — human drivers, pedestrians, bicyclists — by being mindful of what these humans intend to do in the future,” Haruki Nishimura and Boris Ivanovic, lead authors of the paper, told the publication. 

The machine learning model is trained to predict the future actions of humans in the robot’s surroundings, while an algorithm estimates the risk of collision for each of the robot’s potential actions at a given time. The optimal maneuver can then be selected, minimizing the risk of the machine colliding with humans, cars, or obstacles while it carries out its task.
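In rough pseudocode, that loop looks something like the sketch below. Every name here (the constant-velocity rollout, the safety radius, the cost terms) is an illustrative assumption standing in for the authors’ learned model and controller, which the article does not detail.

```python
# Illustrative sketch of the predict -> score -> select loop described above.
# All names, dynamics, and cost terms are assumptions, not the authors' code.
import numpy as np

SAFETY_RADIUS = 0.5   # metres; assumed minimum clearance to any human
HORIZON = 20          # planning steps
DT = 0.1              # seconds per step

def rollout(position, velocity):
    """Constant-velocity rollout of one candidate robot action (toy dynamics)."""
    steps = np.arange(1, HORIZON + 1)[:, None] * DT    # (HORIZON, 1)
    return position + steps * velocity                 # (HORIZON, 2) positions

def safest_action(robot_pos, goal, candidate_velocities, sampled_human_futures,
                  risk_weight=10.0):
    """Pick the candidate action with the lowest combined cost.

    `sampled_human_futures` has shape (n_samples, n_humans, HORIZON, 2):
    many possible futures drawn from the learned, stochastic model of
    human motion described in the article.
    """
    best, best_cost = None, np.inf
    for vel in candidate_velocities:
        robot_traj = rollout(robot_pos, vel)                       # (HORIZON, 2)
        gaps = np.linalg.norm(sampled_human_futures - robot_traj, axis=-1)
        # Fraction of sampled futures in which the robot gets too close
        collision_risk = np.mean(gaps.min(axis=(1, 2)) < SAFETY_RADIUS)
        # Trade off progress toward the goal against collision risk
        progress_cost = np.linalg.norm(robot_traj[-1] - goal)
        cost = progress_cost + risk_weight * collision_risk
        if cost < best_cost:
            best, best_cost = vel, cost
    return best
```

The `risk_weight` term is one simple way to encode the trade-off between collision risk and progress that, as the researchers note below, existing methods tend to ignore.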

The researchers said this machine learning framework overcomes “oversimplifications” found in other methods of autonomous vehicle and robot navigation. 

“Firstly, [existing methods] make simplistic assumptions about what the humans will do in the future; secondly, they do not consider a trade-off between collision risk and progress for the robot. In contrast, our method uses a rich, stochastic model of human motion that is learned from data of real human motion.”

Rather than resting on a single prediction, the framework’s model predicts multiple possible outcomes in a dynamic environment, accounting for how the ensuing actions of the robot and of the people around it could influence one another.
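As a toy illustration of that idea, the hypothetical predictor below returns many sampled futures rather than a single guess, and its predictions shift depending on what the robot itself plans to do. The two hand-coded ‘modes’ (keep walking, or yield to the robot) stand in for behavior a real model would learn from data.

```python
# Toy stand-in for an interaction-aware, multi-modal human-motion predictor.
# A real model would learn the modes and their probabilities from data.
import numpy as np

rng = np.random.default_rng(0)

def sample_human_futures(human_pos, human_vel, robot_traj,
                         n_samples=100, dt=0.1, noise=0.05):
    """Draw possible futures for one human: shape (n_samples, horizon, 2).

    `robot_traj` is the robot's planned path over the same horizon, so the
    prediction is conditioned on the robot's own candidate action.
    """
    horizon = len(robot_traj)
    steps = np.arange(1, horizon + 1)[:, None] * dt
    keep_going = human_pos + steps * human_vel            # mode 1: continue
    yields = human_pos + steps * (0.2 * human_vel)        # mode 2: slow down
    # If the robot's planned path crosses the human's, yielding becomes more
    # likely: the robot's action influences the human's predicted action.
    min_gap = np.linalg.norm(keep_going - robot_traj, axis=-1).min()
    yield_prob = 0.7 if min_gap < 1.0 else 0.1
    picks = rng.random(n_samples)[:, None, None] < yield_prob
    modes = np.where(picks, yields, keep_going)
    return modes + rng.normal(scale=noise, size=(n_samples, horizon, 2))
```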

The researchers said enacting this framework means robots and self-driving systems consider “the full distribution of possible future human motions” in order to select the safest next action that still lets the robot continue with its task.

All this can happen repeatedly “in a fraction of a second”. 
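The article doesn’t say how the framework turns that full distribution into a single score for each action, but a common choice in risk-sensitive control is the entropic risk measure, sketched below purely as an illustration.

```python
# Entropic risk: one standard way to score a distribution of outcome costs
# so that rare, high-cost futures weigh more than they would in a plain
# average. Shown as an assumption; the article does not name the measure.
import numpy as np

def entropic_risk(costs, sigma=1.0):
    """Compute (1/sigma) * log E[exp(sigma * cost)] from sampled costs."""
    scaled = sigma * np.asarray(costs, dtype=float)
    m = scaled.max()                  # log-sum-exp for numerical stability
    return (m + np.log(np.mean(np.exp(scaled - m)))) / sigma
```

As `sigma` approaches zero this reduces to the ordinary expected cost; increasing it makes the controller more cautious by letting the worst sampled futures dominate the score.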

In simulation and real-world experiments using a robot called Ouijabot, the newly developed framework outperformed three commonly used collision avoidance systems in navigating a trajectory through a crowd.

To become as safe as, or safer than, human operators, robotic and autonomous vehicle systems will require masses of situational data, continuously.

Before it can be implemented at scale, the researchers said the framework will need to be trained on large databases containing videos of humans moving in crowded environments similar to those in which the robots will operate. To simplify and streamline the training process, the researchers plan to develop a method that lets robots gather training data online, while they are operating.

“We would also like for robots to be able to identify a model that fits the specific behavior of the humans in its immediate environment,” said the researchers. 

“It would be very useful, for example, if the robot could categorize an erratic driver or a drunk driver at any given moment, and avoid moving too close to that driver to mitigate the risk of collision. Human drivers do this naturally, but it is devilishly difficult to codify this in an algorithm that a robot can use.”