
The SAFEXPLAIN Auto Use Case implements an autonomous navigation pipeline on top of the CARLA simulator, focusing on explainability and safety. The system integrates a CARLA bridge for simulator interaction, a Perception node for lane and object detection, and a Controller for safe vehicle actuation. A Decision node performs rule-based fusion, braking logic, and collision warnings, while a Supervision node monitors perception quality and anomalies. The Overlay node generates real-time visualizations of detections, warnings, and system status, supporting debugging and explainability. The architecture is managed by the SafeXplain middleware (State, Lifecycle, and Health Managers), which ensures consistent initialization, monitoring, and fault detection. Together, these components enable robust autonomous driving scenarios with transparent safety monitoring. This work is part of the SAFEXPLAIN Community (zenodo.org/communities/safexplain).
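The node pipeline described above can be illustrated with a minimal sketch. All class names, thresholds, and method signatures below are hypothetical placeholders chosen for illustration; they are not the actual SAFEXPLAIN APIs.

```python
from dataclasses import dataclass

# Hypothetical sketch of the Perception -> Decision/Supervision -> Controller
# flow; names and thresholds are illustrative, not the project's real code.

@dataclass
class Detection:
    label: str
    confidence: float

class Perception:
    def detect(self, frame):
        # Placeholder for lane and object detection on a camera frame.
        return [Detection("vehicle", 0.92), Detection("lane", 0.88)]

class Decision:
    BRAKE_THRESHOLD = 0.9  # assumed rule-based braking threshold

    def fuse(self, detections):
        # Rule-based fusion: brake and warn if a confident obstacle is seen.
        brake = any(d.label == "vehicle" and d.confidence >= self.BRAKE_THRESHOLD
                    for d in detections)
        return {"brake": brake, "warning": brake}

class Supervision:
    MIN_CONFIDENCE = 0.5  # assumed perception-quality floor

    def check(self, detections):
        # Flag degraded perception quality as an anomaly.
        return all(d.confidence >= self.MIN_CONFIDENCE for d in detections)

class Controller:
    def actuate(self, command, perception_ok):
        # Fail safe: brake on a decision command or a supervision anomaly.
        return "BRAKE" if command["brake"] or not perception_ok else "DRIVE"

def step(frame):
    detections = Perception().detect(frame)
    command = Decision().fuse(detections)
    perception_ok = Supervision().check(detections)
    return Controller().actuate(command, perception_ok)

print(step(frame=None))  # the placeholder detections trigger "BRAKE"
```

The key design point the sketch tries to capture is that Supervision sits beside, not inside, the Decision path: the Controller can fall back to a safe action even when the Decision node itself raises no warning.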
