Securing Deep Learning against Adversarial Attacks for Connected and Automated Vehicles


Published Oct 28, 2022
Chunheng Zhao, Pierluigi Pisu

Abstract

Recent developments in connected and automated vehicles (CAVs) show that many companies, such as Tesla, Lyft, and Waymo, are investing substantially in perception modules based on deep learning algorithms. However, deep learning algorithms are susceptible to adversarial attacks, which perturb the input of a neural network to induce a misclassification and may thereby compromise vehicle decision-making and, ultimately, functional safety. The overall vision of this research is to develop defense techniques that make CAVs more resilient to adversarial attacks and thus able to satisfy more stringent system safety and performance requirements.
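The attack mechanism the abstract refers to can be illustrated with the Fast Gradient Sign Method (FGSM), a standard adversarial attack from the literature (not necessarily the specific attack studied by the authors). The sketch below, using a hypothetical toy logistic classifier with hand-picked weights, shows how a small input perturbation in the direction of the loss gradient's sign flips a correct prediction:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM on a binary logistic classifier.

    For cross-entropy loss, the input gradient is (sigmoid(w.x + b) - y) * w,
    so the adversarial example is x + eps * sign(dL/dx).
    """
    p = sigmoid(w @ x + b)
    grad = (p - y) * w              # dL/dx for cross-entropy loss
    return x + eps * np.sign(grad)

# Toy classifier and an input it classifies correctly (class 1)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])            # w.x + b = 1.5  ->  p > 0.5, class 1
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.9)
# x_adv = [0.1, 1.4]; w.x_adv + b = -1.2 -> p < 0.5, now misclassified as 0
```

Even though each input component moves by at most `eps`, the perturbation is aligned with the loss gradient, so the decision flips; on image classifiers the same idea produces perturbations that are nearly imperceptible to humans, which is precisely the threat to CAV perception modules.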

How to Cite

Zhao, C., & Pisu, P. (2022). Securing Deep Learning against Adversarial Attacks for Connected and Automated Vehicles. Annual Conference of the PHM Society, 14(1). Retrieved from https://papers.phmsociety.org/index.php/phmconf/article/view/3399


Keywords

connected and automated vehicles, robust deep learning, vehicle safety and security

Section
Doctoral Symposium Summaries
