
Cross-modal audio-visual correlation learning has been an active research topic; it aims to embed audio and visual feature sequences into a common subspace where their correlation is maximized. The challenge of audio-visual correlation learning lies in two major aspects: 1) audio and visual feature sequences carry different patterns and belong to different feature spaces, and 2) semantic mismatches between audio and visual sequences inevitably arise during cross-modal matching. Most existing methods take only the first aspect into account and therefore struggle to distinguish matched from mismatched semantic correlations between audio and visual sequences. In this work, an adversarial contrastive autoencoder with a shared attention network (ACASA) is proposed for correlation learning in audio-visual retrieval. In particular, the proposed shared attention mechanism is parameterized, so that local salient information is enhanced and contributes to the final feature representation. Simultaneously, adversarial contrastive learning is exploited to maximize semantic feature consistency and improve the ability to distinguish matched from mismatched samples. Both inter-modal and intra-modal semantic information are utilized to supervise the model to learn more discriminative feature representations. Extensive experiments on the VEGAS and AVE datasets demonstrate that the proposed ACASA method outperforms state-of-the-art approaches in cross-modal audio-visual retrieval.
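The contrastive objective described above, which pulls matched audio-visual pairs together in the shared subspace and pushes mismatched pairs apart, can be sketched as an InfoNCE-style loss. This is a minimal illustration of the general idea, not the paper's exact loss: the embedding values, the `temperature` parameter, and the function names are assumptions for the sketch.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(audio_emb, visual_emb, temperature=0.1):
    """InfoNCE-style contrastive loss (illustrative sketch).

    Matched audio-visual pairs share the same batch index and act as
    positives; every other pairing in the batch acts as a negative.
    Lower loss means matched pairs are more similar than mismatched ones.
    """
    n = len(audio_emb)
    loss = 0.0
    for i in range(n):
        # Scaled similarities of audio i against every visual embedding.
        sims = [cosine(audio_emb[i], v) / temperature for v in visual_emb]
        # Numerically stable log-sum-exp over the batch (the denominator).
        m = max(sims)
        log_denom = m + math.log(sum(math.exp(s - m) for s in sims))
        # Negative log-softmax of the matched (positive) pair.
        loss += log_denom - sims[i]
    return loss / n
```

With embeddings already aligned (identical matched pairs) the loss is near zero; with the visual batch permuted so every pair is mismatched, the loss is large, which is exactly the matched-versus-mismatched separation the abstract describes.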
Keywords: Cross-modal retrieval, adversarial learning, shared attention, inter-intra modal loss
