
Vision-based navigation algorithms estimate the position and attitude of a sensor platform by tracking stationary features in the surrounding environment across multiple image frames captured by an on-board camera. The set of feature matches between two frames is used to compute camera motion with algorithms based on multi-view geometry. Bad feature matches in the data can introduce significant errors in the computed rotation and translation. In this article, we present a robust estimation method that removes falsely matched features. The performance of the proposed method is illustrated with numerical examples and compared with the conventional RANSAC approach. A reduction in computational load is observed for typical datasets without loss of performance, making the scheme attractive for real-time implementation.
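To illustrate the kind of outlier rejection the abstract refers to, the following is a minimal sketch of the conventional RANSAC baseline, not the paper's proposed method. It assumes a simplified motion model (a pure 2D image-plane translation, so a single match suffices as the minimal sample); the function name, parameters, and tolerance are illustrative choices, not taken from the article.

```python
import random

def ransac_translation(matches, n_iters=200, inlier_tol=1.0, seed=0):
    """Estimate a 2D translation between two frames from feature matches,
    rejecting false matches (outliers) with RANSAC.

    matches: list of ((x1, y1), (x2, y2)) point correspondences.
    Returns the estimated (tx, ty) and the list of inlier indices.
    """
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iters):
        # Minimal sample for a translation model: one correspondence.
        (x1, y1), (x2, y2) = rng.choice(matches)
        tx, ty = x2 - x1, y2 - y1
        # Score the hypothesis: count matches consistent with (tx, ty).
        inliers = [i for i, ((a, b), (c, d)) in enumerate(matches)
                   if abs((c - a) - tx) < inlier_tol
                   and abs((d - b) - ty) < inlier_tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refit on the consensus set (least squares = mean offset here).
    n = len(best_inliers)
    tx = sum(matches[i][1][0] - matches[i][0][0] for i in best_inliers) / n
    ty = sum(matches[i][1][1] - matches[i][0][1] for i in best_inliers) / n
    return (tx, ty), best_inliers
```

In a real pipeline the model would be an essential or fundamental matrix estimated from five- or eight-point samples rather than a translation, but the hypothesize-score-refit loop, and the per-iteration cost the article seeks to reduce, is the same.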
