
Cross-modal hashing can efficiently retrieve data across different modalities and has been successfully applied in various domains. Although many supervised cross-modal hashing methods have been proposed, they generally focus on two modalities only and assume that the labels of the training data are sufficient and complete. This assumption is impractical in real scenarios. In this paper, we propose Weakly-supervised Cross-modal Hashing (WCHash), which accounts for the widely witnessed weakly-supervised nature (incomplete and insufficient labels) of training data. Specifically, WCHash first uses an efficient multi-label weak-label method to enrich the labels of the training data, and measures the semantic similarity between data points based on the enriched labels. Next, it optimizes a latent central modality with respect to the other modalities. It then uses this similarity to guide the maximization of the correlation between each modality and the central one, and thereby obtains the hash functions for cross-modal retrieval. Experimental results on real-world datasets demonstrate that WCHash is more efficient and effective than related state-of-the-art cross-modal hashing methods, and that it can significantly reduce the complexity of cross-modal hashing with three or more modalities.
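The pipeline described in the abstract (enrich weak labels, derive a semantic similarity, learn a central modality, and obtain sign-based hash functions per modality) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual optimization: the label-enrichment rule, the central-modality update, and the least-squares correlation step are all simplified stand-ins chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weakly-labelled data: two modalities, n samples, incomplete labels.
n, dx, dy, c, k = 60, 20, 15, 5, 8    # samples, feature dims, classes, hash bits
X = rng.standard_normal((n, dx))      # modality 1 (e.g. image features)
Y = rng.standard_normal((n, dy))      # modality 2 (e.g. text features)
L = (rng.random((n, c)) < 0.2).astype(float)  # sparse, incomplete 0/1 labels

# Step 1 (hypothetical stand-in for the paper's multi-label weak-label
# method): propagate labels through label co-occurrence, then re-binarize.
co = L.T @ L
co /= np.maximum(co.sum(axis=1, keepdims=True), 1e-8)
L_enriched = ((L + L @ co) > 0.5).astype(float)

# Step 2: semantic similarity from the enriched labels
# (1 if two points share at least one label, 0 otherwise).
S = (L_enriched @ L_enriched.T > 0).astype(float)

# Step 3 (illustrative): a latent "central modality" C that both
# modalities are projected towards; here simply initialised from random
# projections and averaged.
Wx = rng.standard_normal((dx, k))
Wy = rng.standard_normal((dy, k))
C = (X @ Wx + Y @ Wy) / 2.0

# Step 4: similarity-guided correlation step, reduced here to fitting
# each modality's projection to an S-weighted average of C.
target = S @ C / np.maximum(S.sum(axis=1, keepdims=True), 1e-8)
Wx, _, _, _ = np.linalg.lstsq(X, target, rcond=None)
Wy, _, _, _ = np.linalg.lstsq(Y, target, rcond=None)

# Hash functions: sign of the learned linear projections per modality.
Bx = np.sign(X @ Wx)
By = np.sign(Y @ Wy)
print(Bx.shape, By.shape)  # (60, 8) (60, 8)
```

Note how the central modality decouples the modalities from one another: each one is aligned only with `C`, which is why adding a third modality adds a single new alignment step instead of a new pairwise term for every existing modality.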
