
Visually impaired individuals face serious challenges when navigating roads because they cannot visually identify surface hazards such as potholes and uneven pavement. Existing assistive tools such as white canes provide only close-range detection and cannot deliver advance warning of road surface defects ahead. This paper proposes a Transformer-based real-time pothole detection system that supports safe, independent navigation for visually impaired individuals using live camera input and deep learning. The system captures live road scenes with a camera and applies YOLOv8, augmented with Transformer-based attention mechanisms, to detect potholes accurately in real time. When a pothole is identified, an audio alert is generated immediately so the user is warned before reaching the hazard. The training dataset is sourced from Mendeley Data and Roboflow and consists of annotated road images exported in YOLO format. The proposed system enhances safety, mobility, and independent movement for visually impaired users in real-world urban and rural road environments.
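The abstract's detect-then-alert pipeline can be sketched in miniature. The paper does not publish its alert logic, so the function below is a hypothetical illustration only: it assumes the detector (e.g. a YOLOv8 model) returns labelled boxes with a confidence score and a normalised vertical position in the frame, and it uses that vertical position as a rough proximity proxy (boxes lower in the frame are closer to a forward-facing camera). The name `should_alert` and both thresholds are assumptions, not part of the proposed system.

```python
def should_alert(detections, conf_threshold=0.5, near_y=0.6):
    """Decide whether to trigger an audio warning for the user.

    detections: list of (label, confidence, y_center_norm) tuples, where
    y_center_norm is the detection box centre's vertical position in the
    frame, normalised to [0, 1] (larger values = lower in the frame,
    i.e. closer to the camera on a forward-facing view).

    Alerts only on confident pothole detections that appear close enough
    to matter; the thresholds here are illustrative placeholders that a
    real system would calibrate on its own camera geometry.
    """
    return any(
        label == "pothole" and conf >= conf_threshold and y >= near_y
        for label, conf, y in detections
    )
```

In a full system this predicate would sit between the per-frame detector output and a text-to-speech or tone generator, with debouncing so the same pothole does not trigger repeated alerts on consecutive frames.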
