Study reveals new vulnerability in self-driving cars

Are self-driving cars ever going to be safe? A new Georgia Tech study suggests there's much more work to be done before the technology is street-ready.
1 October 2018

Are autonomous cars ever going to be safe? Source: AFP Photo / Charly Triballeau

There’s something amazing about sitting down and thinking about what self-driving cars could do in the future; it’s almost surreal.

But they’re not here yet, and from what cybersecurity experts, consultancies, and academics are saying, they’re not likely to be on the road anytime soon.

Tech giants and leading car companies pioneering the technology have all run into roadblocks at one point or another.

In fact, the limited trials already running have damaged public property and cost human lives, and regulators are now more vigilant than ever before.

To make matters worse, cybersecurity professionals have been sounding the alarm about the system vulnerabilities of autonomous vehicles, sparking fears among consumers, many of whom were already skeptical about the technology's use on public roads.

A new study by Georgia Tech has found that the state-of-the-art image detection systems used in self-driving cars are vulnerable to a particular type of attack known as “adversarial perturbation”.

In this type of attack, an object in the real world – like a stop sign – is intentionally altered to trick a machine learning system into identifying it as something else entirely.
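To give a rough sense of what an adversarial perturbation looks like in code, the sketch below uses the well-known fast gradient sign method (FGSM) against an off-the-shelf image classifier. This is not ShapeShifter itself, which crafts physical, printable perturbations against an object detector; it is only a minimal digital illustration, and it assumes PyTorch with a recent torchvision (the model choice, class index, and epsilon value are arbitrary).

```python
import torch
import torch.nn.functional as F
import torchvision

# Any pretrained classifier works for this illustration (assumes torchvision >= 0.13).
model = torchvision.models.resnet50(weights="DEFAULT").eval()

def perturb(images, true_labels, epsilon=0.03):
    """FGSM: nudge every pixel in the direction that increases the model's loss,
    so a visually near-identical image gets classified as something else."""
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), true_labels)
    loss.backward()
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Example: a (random) batch of one 224x224 RGB frame and an illustrative class index.
frame = torch.rand(1, 3, 224, 224)
label = torch.tensor([13])  # arbitrary class id, purely for demonstration
adversarial_frame = perturb(frame, label)
```

The attack described in the study works on the same principle, but the perturbation has to survive printing, lighting changes, and viewing angles in the physical world, which is what makes it harder to pull off and harder to defend against.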

The vulnerability was confirmed using ShapeShifter, an attack tool developed by Shang-Tse Chen, a Ph.D. student in the School of Computational Science and Engineering (CSE), and fellow researchers from CSE and Intel.

“Our motivation comes from vandalism on traffic signs. Despite real vandalism not affecting DNNs (deep neural networks) greatly, in our work we show that we can craft adversarial perturbations that look like normal vandalism. But these perturbations can drastically change the output of a DNN model causing it to malfunction and identify things incorrectly,” said Chen.

There are many different types of object detectors, and it so happens that today's leading-edge detectors use deep neural networks (DNNs) internally.

These detectors are able to recognize what objects are in an image and where they are located, unlike their simpler counterparts, image classifiers, which output a single label for an entire image.
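As a rough illustration of that difference, the sketch below runs the same camera frame through an off-the-shelf classifier and an off-the-shelf Faster R-CNN detector from torchvision (a library choice assumed here, not something the study specifies). The classifier returns one label for the whole frame; the detector returns a bounding box, a label, and a confidence score for each object it finds.

```python
import torch
import torchvision

frame = torch.rand(3, 480, 640)  # stand-in for a single camera frame, values in [0, 1]

# Image classifier: a single label for the entire frame.
classifier = torchvision.models.resnet50(weights="DEFAULT").eval()
with torch.no_grad():
    logits = classifier(frame.unsqueeze(0))  # shape [1, 1000]
print("classifier label:", logits.argmax(dim=1).item())

# Object detector (Faster R-CNN, the family ShapeShifter targets):
# a box, label, and score for every object found in the frame.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
with torch.no_grad():
    detections = detector([frame])[0]
print("boxes:", detections["boxes"].shape)
print("labels:", detections["labels"])
print("scores:", detections["scores"])
```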

Researchers at Georgia Tech created this particular attack system in order to reveal the weaknesses within image recognition systems that use object detectors, and to figure out how to defend against real attacks in the future.

“ShapeShifter tells us that self-driving cars that depend purely on vision-based input are not safe until we can defend this kind of attack. ShapeShifter was created to, and has succeeded in, attacking self-driving cars that use the state-of-the-art Faster R-CNN object detection algorithm,” said Chen.

Although it may sound like a narrow experiment, the result has big consequences for autonomous car makers and will add to the worries of regulators and consumers alike.

Georgia Tech’s study is a strong reminder that a lot of work still needs to be done before driverless cars become a mainstream reality, on New York’s streets or anywhere else in North America.