
This paper proposes a novel object detection method based on a visual saliency model to reliably detect objects such as rocks in single monocular planetary images. The algorithm exploits the relatively homogeneous and distinct albedos of planetary environments such as Mars or the Moon to extract a Digital Terrain Model of a scene using photoclinometry. The Digital Terrain Model is then incorporated into a bottom-up visual saliency algorithm to emphasise objects that protrude from the ground. This Structure Augmented Monocular Saliency (SAMS) algorithm improves the accuracy and reliability of object detection in planetary environments, with no training requirements, greater robustness, and lower computational complexity than 3D saliency models. A comprehensive analysis of the proposed method is performed on three challenging benchmark datasets. The results show that SAMS outperforms commonly used visual saliency models on the same datasets.
Keywords: Visual saliency, Object detection, Planetary rovers
