
This article explores the challenges and solutions for enhancing machine learning (ML) safety in autonomous vehicles. It examines the gaps in current automotive safety standards when applied to ML systems and proposes practical solutions to improve reliability and safety. The discussion is organized around two key strategies: implementing robust error detection mechanisms for safe failure modes, and improving algorithm robustness to enhance safety margins across various operational conditions. The article presents concrete implementations of these strategies, including a student model for predicting failures in steering control, an out-of-distribution sample detector, and a cross-domain object detection model for UAVs. Additionally, it outlines future research directions in security against adversarial attacks, procedural safeguards for user experience, and the need for interdisciplinary collaboration to address the complex challenges of ML safety in autonomous vehicles.
Keywords: Machine Learning Safety, Autonomous Vehicles, Error Detection, Algorithm Robustness, Interdisciplinary Collaboration
