IEEE Access · Article · 2025 · Peer-reviewed · Open Access
License: CC BY
Data sources: Crossref, DOAJ

Adversarial Contrastive Autoencoder With Shared Attention for Audio-Visual Correlation Learning

Authors: Jiwei Zhang; Yi Yu; Suhua Tang; Wei Li

Abstract

Cross-modal audio-visual correlation learning is an active research topic that aims to embed audio and visual feature sequences into a common subspace where their correlation is maximized. The challenge lies in two major aspects: 1) audio and visual feature sequences exhibit different patterns and belong to different feature spaces, and 2) semantic mismatches between audio and visual sequences inevitably occur during cross-modal matching. Most existing methods consider only the first aspect and therefore struggle to distinguish matched from mismatched semantic correlations between audio and visual sequences. In this work, an adversarial contrastive autoencoder with a shared attention network (ACASA) is proposed for correlation learning in audio-visual retrieval. In particular, the proposed shared attention mechanism is parameterized, enhancing local salient information so that it contributes to the final feature representation. Simultaneously, adversarial contrastive learning is exploited to maximize semantic feature consistency and to improve the ability to distinguish matched from mismatched samples. Both inter-modal and intra-modal semantic information are used to supervise the model toward more discriminative feature representations. Extensive experiments on the VEGAS and AVE datasets demonstrate that the proposed ACASA method outperforms state-of-the-art approaches in cross-modal audio-visual retrieval.
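
The full paper is not reproduced on this page, so the exact ACASA architecture is not available here. The sketch below is a minimal, hypothetical PyTorch illustration of the two ideas named in the abstract: an attention block whose parameters are shared between the audio and visual branches, and a contrastive loss over matched and mismatched audio-visual pairs. All module names, dimensions, and the InfoNCE-style loss form are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the ACASA implementation): a parameterized attention
# block SHARED by the audio and visual branches, plus a contrastive loss that
# pulls matched audio-visual pairs together and pushes mismatched pairs apart.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedAttentionEncoder(nn.Module):
    """Embeds audio and visual feature sequences into a common subspace.

    The self-attention block is instantiated once and reused for both
    modalities (shared parameters); only the input projections differ.
    """

    def __init__(self, audio_dim=128, visual_dim=1024, embed_dim=512, n_heads=4):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, embed_dim)
        self.visual_proj = nn.Linear(visual_dim, embed_dim)
        # One attention module, shared across modalities.
        self.shared_attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def encode(self, seq, proj):
        h = proj(seq)                                # (B, T, D)
        attended, _ = self.shared_attn(h, h, h)      # emphasize salient frames
        h = self.norm(h + attended)                  # residual + layer norm
        return F.normalize(h.mean(dim=1), dim=-1)    # pooled, unit-norm embedding

    def forward(self, audio_seq, visual_seq):
        return self.encode(audio_seq, self.audio_proj), \
               self.encode(visual_seq, self.visual_proj)


def contrastive_loss(a, v, temperature=0.07):
    """Symmetric InfoNCE over a batch: the i-th audio and i-th video form the
    matched pair; all other pairings in the batch act as mismatched negatives."""
    logits = a @ v.t() / temperature                 # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    model = SharedAttentionEncoder()
    audio = torch.randn(8, 10, 128)    # batch of 10-step audio feature sequences
    video = torch.randn(8, 10, 1024)   # batch of 10-step visual feature sequences
    a_emb, v_emb = model(audio, video)
    print(contrastive_loss(a_emb, v_emb).item())
```

The adversarial component described in the abstract (a discriminator encouraging modality-invariant embeddings) and the inter-/intra-modal supervision are omitted here for brevity.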

Keywords

Cross-modal retrieval, adversarial learning, shared attention, inter-intra modal loss
