
Owing to their high storage and computational efficiency, hash-based methods are widely used in image retrieval systems. Unsupervised deep hashing methods can learn effective binary representations of images without any annotations. However, the hash-code constraints used in previous unsupervised methods may not fully exploit the structural information underlying semantic similarity. To address this problem, we propose a new contrastive-learning-based strategy that captures high-level semantic similarity among features and preserves it in the generated hash codes. In addition, we employ a novel framework that handles hash codes of different lengths simultaneously, which makes generating hash codes faster than in existing methods. Extensive experiments on the MIRFlickr, NUS-WIDE, and COCO benchmark datasets show that our method substantially improves the performance of unsupervised image retrieval.
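The two ideas in the abstract — a contrastive objective that aligns relaxed hash codes of augmented views, and a single network with multiple output heads so that codes of several lengths are produced in one pass — can be sketched as follows. This is a minimal illustration under assumed design choices (a tanh relaxation of the binary constraint, an NT-Xent-style contrastive loss, and the class and function names shown), not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLengthHashHead(nn.Module):
    """Hypothetical head: one linear projection per target code length,
    sharing the same backbone feature, so all lengths come from one pass."""
    def __init__(self, feat_dim=128, code_lengths=(16, 32, 64)):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(feat_dim, L) for L in code_lengths)

    def forward(self, feats):
        # tanh relaxes the binary constraint during training;
        # sign() at inference yields the final {-1, +1} codes.
        return [torch.tanh(h(feats)) for h in self.heads]

def contrastive_loss(z1, z2, temperature=0.5):
    """NT-Xent-style loss between relaxed codes of two augmented views
    of the same batch of images. z1, z2: (B, L)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, L)
    sim = z @ z.t() / temperature                        # cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(mask, float("-inf"))           # exclude self-pairs
    # the positive of sample i is its other view at index (i + n) mod 2n
    targets = torch.arange(2 * n).roll(n)
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    torch.manual_seed(0)
    head = MultiLengthHashHead()
    # stand-ins for backbone features of two augmented views of 8 images
    f1, f2 = torch.randn(8, 128), torch.randn(8, 128)
    codes1, codes2 = head(f1), head(f2)
    # one contrastive term per code length, optimized jointly
    loss = sum(contrastive_loss(a, b) for a, b in zip(codes1, codes2))
    binary = [c.sign() for c in codes1]   # final hash codes in {-1, +1}
    print(loss.item(), [tuple(b.shape) for b in binary])
```

Training all code lengths through one shared backbone is what saves time relative to training a separate model per length: a single forward pass yields 16-, 32-, and 64-bit codes together.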
