
Appearance-based person re-identification (PRID) is an active and challenging research topic. Recently proposed approaches have mostly dealt with low- and mid-level image processing, and very little research has focused on view information. View variation limits the performance of most approaches because a person's appearance in one view can differ completely from that in another, making re-identification difficult. In this work, we study the influence of view on PRID and propose several fusion strategies that exploit multi-view information. We perform experiments on a re-mapped version of the Market-1501 dataset and on an internal dataset. Our proposed multi-view strategy improves the rank-one recognition rate by a large margin over both random view matching and multi-shot matching.
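One common way to exploit multi-view gallery information is score-level fusion: compute a distance between the probe and each gallery view of an identity, then fuse those per-view distances before ranking. The sketch below illustrates this idea only; the function name `fuse_multiview_scores` and the "min"/"mean" fusion strategies are our own illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def fuse_multiview_scores(query, gallery_views, strategy="min"):
    """Rank gallery identities by fusing per-view distances to the query.

    query: (d,) feature vector of the probe image.
    gallery_views: dict mapping identity -> (n_views, d) array of
        per-view feature vectors for that identity.
    strategy: "min" keeps the best-matching view; "mean" averages
        the distances over all views of an identity.
    Returns a list of identities sorted from best to worst match.
    """
    fused = {}
    for pid, feats in gallery_views.items():
        # Euclidean distance from the query to every view of this identity.
        dists = np.linalg.norm(feats - query, axis=1)
        fused[pid] = dists.min() if strategy == "min" else dists.mean()
    # Smaller fused distance = better match.
    return sorted(fused, key=fused.get)

# Toy usage: identity "A" has one view close to the query.
query = np.array([1.0, 0.0])
gallery = {
    "A": np.array([[1.0, 0.1], [5.0, 5.0]]),
    "B": np.array([[3.0, 3.0], [4.0, 4.0]]),
}
ranking = fuse_multiview_scores(query, gallery, strategy="min")
```

With "min" fusion, an identity is ranked by its best-matching view, so a single well-aligned view can overcome large appearance change in the others; "mean" fusion instead rewards identities that match consistently across views.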
