McAfee study raises the red flag on the increasing threat to AI security

The study demonstrated that a Tesla Model X and a Tesla Model S, both from 2016 and both fitted with Tesla Hardware Pack 1, could be made to autonomously accelerate to 85 miles per hour instead of 35 miles per hour.

By Autocar Pro News Desk | 21 Feb 2020

The connected electric vehicle is without doubt the megatrend in the automotive world. As AI (Artificial Intelligence) models and features continue to populate new car launches, one basic question keeps popping up: exactly how secure are they?

McAfee, an American global computer security software company that claims to be the world's largest dedicated technology security company, undertook an 18-month study on "model hacking", spearheaded by McAfee's Steve Povolny and Shivangee Trivedi, to enhance the reader's comprehension of this increasing threat.

They demonstrated that a Tesla Model X and a Tesla Model S, both from 2016 and both with Tesla Hardware Pack 1, could be made to autonomously accelerate to 85 miles per hour when reading a 35-mile-per-hour sign.

McAfee Advanced Threat Research (ATR) has a specific goal: identify and illuminate a broad spectrum of threats in today's complex landscape. With model hacking, the study of how adversaries could target and evade artificial intelligence, it saw the opportunity to influence the awareness, understanding and development of more secure technologies before they are deployed in ways that have real value to the adversary.
With this in mind, the McAfee team decided to focus on the broadly deployed MobilEye camera system, today utilised across over 40 million vehicles, including Tesla models that implement Hardware Pack 1. 

18 Months of Research
In their detailed report on the 18-month project, 'Ahead of the Curve: Model Hacking Advanced Driver Assistance Systems to Pave Safer Roads for Autonomous Vehicles', Steve Povolny, Head of Advanced Threat Research, and Shivangee Trivedi of the Advanced Threat Research team describe how they tricked the system.

McAfee Advanced Threat Research has been studying "model hacking", also known in the industry as adversarial machine learning: the concept of exploiting weaknesses universally present in machine learning algorithms to achieve adverse results. The team is using it to identify upcoming problems in an industry that is evolving its technology at a pace security has not kept up with.
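
The article does not reproduce McAfee's attack code, but the basic mechanics of an adversarial perturbation can be sketched in a few lines. Below is a minimal, illustrative example using the well-known Fast Gradient Sign Method (FGSM) in PyTorch; the model, image tensor and label here are placeholders for any image classifier, not McAfee's actual pipeline.

```python
import torch
import torch.nn.functional as F

def fgsm_untargeted(model, image, true_label, epsilon=0.03):
    """Nudge every pixel slightly in the direction that increases the
    classifier's loss, so it stops predicting the correct class.
    `image` is an (N, C, H, W) tensor, `true_label` an (N,) long tensor."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()  # step *up* the loss
    return adversarial.clamp(0, 1).detach()
```

The perturbation is bounded by epsilon, which is why adversarial images can look essentially unchanged to a human while flipping the classifier's answer.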

MobilEye is one of the leading vendors of Advanced Driver Assist Systems (ADAS), catering to some of the world's most advanced automotive companies. Tesla, on the other hand, is a name synonymous with ground-breaking innovation, providing the world with innovative and eco-friendly smart cars.

Model hacking by replicating industry papers
The McAfee team started their journey into the world of model hacking by replicating industry papers on methods of attacking machine learning image-classifier systems used in autonomous vehicles, with a focus on causing misclassifications of traffic signs. They were able to replicate and significantly expand upon previous research focused on stop signs, including both targeted attacks, which aim for a specific misclassification, and untargeted attacks, which don't prescribe what an image is misclassified as, just that it is misclassified.
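
The targeted case can be illustrated by inverting the earlier sketch: instead of climbing the loss away from the true class, descend it toward an attacker-chosen one (say, making a 35 read as an 85). This reuses the imports from the snippet above, and target_label is a hypothetical stand-in for the desired wrong class.

```python
def fgsm_targeted(model, image, target_label, epsilon=0.03):
    """Push the prediction *toward* a chosen wrong class, rather than
    merely away from the right one (the untargeted case above)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), target_label)
    loss.backward()
    adversarial = image - epsilon * image.grad.sign()  # step *down* toward the target
    return adversarial.clamp(0, 1).detach()
```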

Ultimately, they succeeded in creating extremely efficient digital attacks that could cause misclassifications by a highly robust classifier, one built to determine with high precision and accuracy what it is looking at, with confidence approaching 100%.

They further expanded their efforts to create physical stickers that model the same type of perturbations, or digital changes to the original photo, which trigger weaknesses in the classifier and cause it to misclassify the target image.
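
A physical sticker is, in effect, a perturbation confined to a small region of the sign. One rough way to sketch this, in the spirit of the published adversarial-patch literature rather than McAfee's own unpublished method, is to optimise only the pixels under a sticker-shaped mask:

```python
def sticker_attack(model, image, target_label, mask, steps=200, lr=0.05):
    """Optimise a perturbation confined to `mask` (1 where the sticker
    sits, 0 elsewhere) until the classifier reports `target_label`."""
    patch = torch.zeros_like(image, requires_grad=True)
    optimiser = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        # Only the masked pixels differ from the original sign.
        candidate = (image * (1 - mask) + patch * mask).clamp(0, 1)
        loss = F.cross_entropy(model(candidate), target_label)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
    return (image * (1 - mask) + patch * mask).clamp(0, 1).detach()
```

Unlike pixel-level digital attacks, a sticker must also survive printing, distance and viewing angle, which is what makes the physical step harder.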

These adversarial stickers caused the MobilEye camera on a Tesla Model X to interpret the 35-mph speed limit sign as an 85-mph speed limit sign.

This set of stickers has been specifically created with the right combination of color, size and location on the target sign to cause a robust webcam-based image classifier to think it is looking at an “Added Lane” sign instead of a stop sign. In reality, modern vehicles don’t yet rely on stop signs to enable any kind of autonomous features such as applying the brakes, so the McAfee team decided to alter their approach and shift (pun intended) over to speed limit signs. They knew, for example, that the MobilEye camera is used by some vehicles to determine the speed limit, display it on the heads-up display (HUD), and potentially even feed that speed limit to certain features of the car related to autonomous driving. We’ll come back to that! 

They then repeated the stop sign experiments on speed limit signs, using a highly robust classifier and the trusty high-resolution webcam. And just to show how robust the classifier is, they made many changes to the sign, blocking it partially and placing the stickers in random locations, and the classifier did an outstanding job of correctly predicting the true sign. While there were many obstacles to achieving the same success, they were ultimately able to prove both targeted and untargeted attacks, digitally and physically, against speed limit signs.
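
The robustness checks described above boil down to repeatedly corrupting the sign and measuring how often the classifier still gets it right. A simple harness along those lines might look like this; the square occlusion and its 16-pixel size are arbitrary choices for illustration.

```python
import random

def occlusion_robustness(model, sign_image, true_label, trials=100, size=16):
    """Black out a random square on the sign each trial and report the
    fraction of trials in which the classifier still predicts the true
    class. `true_label` is the integer class index."""
    correct = 0
    _, _, h, w = sign_image.shape
    for _ in range(trials):
        occluded = sign_image.clone()
        y, x = random.randrange(h - size), random.randrange(w - size)
        occluded[..., y:y + size, x:x + size] = 0.0
        with torch.no_grad():
            if model(occluded).argmax(dim=1).item() == true_label:
                correct += 1
    return correct / trials
```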

The next pitstop  
At this point, you might be wondering “what’s so special about tricking a webcam into misclassifying a speed limit sign, outside of just the cool factor?” It was time to test the “black box theory.” 

What this means, in its simplest form, is that attacks leveraging model hacking which are trained and executed against white-box systems (open, fully visible ones) will successfully transfer to black-box systems (fully closed and proprietary ones), so long as the features and properties of the attack are similar enough. For example, if one system relies on the specific numeric values of the pixels of an image to classify it, the attack should replicate on another camera system that relies on pixel-based features as well.
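
In code, testing that hypothesis reduces to crafting the attack against the open (surrogate) model and counting how often it also fools the closed one. This sketch reuses fgsm_untargeted from earlier and assumes both models accept the same input format; it illustrates transferability in general, not McAfee's test harness.

```python
def transfer_rate(surrogate, black_box, images, labels):
    """Craft adversarial images against the white-box `surrogate`, then
    measure how often they also fool the unseen `black_box` classifier."""
    fooled = 0
    for image, label in zip(images, labels):
        adv = fgsm_untargeted(surrogate, image.unsqueeze(0), label.unsqueeze(0))
        with torch.no_grad():
            if black_box(adv).argmax(dim=1).item() != label.item():
                fooled += 1
    return fooled / len(images)
```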

The last part of the lab-based testing involved simplifying this attack and applying it to a real-world target. The team wondered whether the MobilEye camera would require several highly specific, and easily noticeable, stickers to cause a misclassification. They ran repeated tests on a 2016 Model S and a 2016 Model X Tesla using the MobilEye camera (Tesla's Hardware Pack 1 with the MobilEye EyeQ3 chip). The first test involved simply attempting to recreate the physical sticker test, and it worked, almost immediately and with a high rate of reproducibility.

In the lab tests, they had developed attacks that were resistant to changes in angle, lighting and even reflectivity, knowing this would emulate real-world conditions. While these weren't perfect, their results were relatively consistent in getting the MobilEye camera to think it was looking at a different speed limit sign than it was. The next step in testing was to reduce the number of stickers to determine the point at which they failed to cause a misclassification. As the team pared them back, the HUD continued to misclassify the speed limit sign. They went from four adversarial stickers, in the only locations possible to confuse the webcam, all the way down to a single piece of black electrical tape, approximately two inches long, extending the middle stroke of the '3' on the traffic sign.
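
The reduction the team describes is essentially a greedy ablation: remove one sticker at a time and keep the smaller set whenever the misclassification survives. A hypothetical sketch, with each sticker represented as a (mask, patch) pair as in the earlier snippet:

```python
def minimal_sticker_set(model, image, target_label, stickers):
    """Greedily drop stickers while the targeted misclassification still
    holds, mirroring the four-stickers-to-one-strip-of-tape reduction."""
    def apply(active):
        out = image.clone()
        for mask, patch in active:
            out = out * (1 - mask) + patch * mask
        return out.clamp(0, 1)

    active = list(stickers)
    for sticker in list(active):
        trial = [s for s in active if s is not sticker]
        with torch.no_grad():
            if model(apply(trial)).argmax(dim=1).item() == target_label:
                active = trial  # the attack survives without this sticker
    return active
```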

Even to a trained eye, this hardly looks suspicious or malicious, and many who saw it didn't realise the sign had been altered at all. This tiny piece of tape was all it took to make the MobilEye camera's top prediction for the sign 85 mph.

The finish line was close
Finally, they began to investigate whether any of the features of the camera sensor might directly affect any of the mechanical, and even more relevant, autonomous features of the car. After extensive study, they came across a forum referencing the fact that a feature known as Traffic-Aware Cruise Control (TACC) could use speed limit signs as input to set the vehicle speed.

There was broad consensus among owners that this might be a supported feature, but also clear confusion among forum members as to whether the capability was possible, so the next step was to verify it by consulting Tesla software updates and new-feature releases.

A software release for TACC contained just enough information to point towards Speed Assist, with the following statement under the TACC feature description:
“You can now immediately adjust your set speed to the speed determined by Speed Assist.”

This took them down the final documentation-searching rabbit hole: Speed Assist, a feature quietly rolled out by Tesla in 2014.

Adding all of this up, they could surmise that it might be possible, for Tesla models with Speed Assist (SA) and Traffic-Aware Cruise Control (TACC) enabled, to use a simple modification to a traffic sign to cause the car to increase speed on its own!

Despite being confident this was theoretically possible, they decided to simply run some tests to see for themselves. McAfee ATR's lead researcher on the project, Shivangee Trivedi, partnered with another of McAfee's vulnerability researchers, Mark Bereza, who happened to own a Tesla that exhibited all of these features.

For an exhaustive look at the number of tests, conditions and equipment used to replicate and verify misclassification on this target, McAfee has also published its test matrix.
The ultimate finding is that they achieved the original goal. By making a tiny sticker-based modification to the speed limit sign, they were able to cause a targeted misclassification of the MobilEye camera on a Tesla and use it to make the vehicle autonomously speed up to 85mph when reading a 35mph sign. For safety reasons, the video demonstration shows the speed starting to spike and TACC accelerating on its way to 85mph, but given the test conditions, the team applied the brakes well before the car reached the target speed. It is worth noting that this is seemingly only possible when TACC is first engaged, with the driver double-tapping the lever; if the misclassification is successful, the autopilot engages 100% of the time. A quick demo video shows all of these concepts coming together.

Of note is that all of these findings were tested against earlier versions of the MobilEye camera platform (Tesla Hardware Pack 1, MobilEye EyeQ3). They did get access to a 2020 vehicle implementing the latest version of the MobilEye camera and were pleased to see that it did not appear to be susceptible to this attack vector or misclassification, though testing was very limited. They are thrilled to see that MobilEye appears to have embraced the community of researchers working to solve this issue and is working to improve the resilience of its product. Still, it will be quite some time before the latest MobilEye camera platform is widely deployed, and the vulnerable version of the camera continues to account for a sizeable installed base among Tesla vehicles. The newest Tesla models no longer implement MobilEye technology, and do not currently appear to support traffic sign recognition at all.

Looking forward
Is there a feasible scenario where an adversary could leverage this type of attack to cause harm? Yes, but in reality this work is highly academic at this time. Still, it represents some of the most important work the industry can focus on to get ahead of the problem. If vendors and researchers can work together to identify and solve these problems in advance, it would truly be an incredible win for us all.

Autocar Professional reached out to both Tesla and MobilEye before publishing this study by McAfee. While Tesla has not yet responded to our query, MobilEye sent the following statement: “The modifications to the traffic signs introduced in this research can confuse a human eye and therefore we do not consider this an adversarial attack. Traffic sign fonts are determined by regulators, and so advanced driver assistance systems (ADAS) are primarily focused on other more challenging use cases, and this system in particular was designed to support human drivers – not autonomous driving. Autonomous vehicle technology will not rely on sensing alone, but will also be supported by various other technologies and data, such as crowdsourced mapping, to ensure the reliability of the information received from the camera sensor and offer more robust redundancies and safety.”
