Applied Soft Computing
Article · 2021 · Peer-reviewed
License: Elsevier TDM
Data sources: Crossref
Embedded stacked group sparse autoencoder ensemble with L1 regularization and manifold reduction

Authors: Yongming Li, Yan Lei, Pin Wang, Mingfeng Jiang, Yuchuan Liu


Abstract

Learning useful representations from original features is a key issue in classification tasks. Stacked autoencoders (SAEs) are easy to understand and implement, and they are powerful tools for learning deep features from original features, so they are popular for classification problems. The deep features can be further combined with the original features to construct more representative features for classification. However, existing SAEs do not consider the original features within the network structure or during training, so the deep features have low complementarity with the original features. To solve this problem, this paper proposes an embedded stacked group sparse autoencoder (ESGSAE) for more effective feature learning. Unlike traditional stacked autoencoders, the ESGSAE model accounts for the complementarity between the original features and the hidden outputs by embedding the original features into the hidden layers. To alleviate the impact of the small-sample problem on the generalization of the proposed ESGSAE model, an L1 regularization-based feature selection strategy is designed to further improve the feature quality. After that, an ensemble model combining a support vector machine (SVM) with weighted local discriminant preservation projection (w_LPPD) is designed to further enhance the quality of the final features. Based on the designs above, an embedded stacked group sparse autoencoder ensemble with L1 regularization and manifold reduction is proposed to obtain deep features with high complementarity under the small-sample problem. Finally, several representative public datasets are used to verify the proposed algorithm. The results demonstrate that the ESGSAE ensemble model with L1 regularization and manifold reduction yields superior performance compared to other existing and state-of-the-art feature learning algorithms, including some representative deep stacked autoencoder methods.
Specifically, compared with the original features, representative feature extraction algorithms, and improved autoencoders, the proposed algorithm improves classification accuracy by up to 13.33%, 7.33%, and 9.55%, respectively. The data and code are available at: https://share.weiyun.com/Jt7qeORm
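The core architectural idea in the abstract — embedding the original features into the hidden layers so that deep features stay complementary to them — can be sketched in a few lines. This is only an illustrative reading of that description (feeding the original features into every encoder layer alongside the previous hidden output); the paper's exact wiring, group sparsity terms, and training procedure are not reproduced here, and all names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class EmbeddedEncoderLayer:
    """One hidden layer whose input is [previous hidden output, original features].

    The concatenation is our interpretation of "embedding the original
    features into hidden layers"; weights are random, not trained.
    """
    def __init__(self, in_dim, orig_dim, hidden_dim):
        self.W = rng.standard_normal((in_dim + orig_dim, hidden_dim)) * 0.1
        self.b = np.zeros(hidden_dim)

    def forward(self, h_prev, x_orig):
        z = np.concatenate([h_prev, x_orig], axis=1)
        return sigmoid(z @ self.W + self.b)

# Stack two layers: each layer re-uses the original features x.
x = rng.standard_normal((8, 20))          # 8 samples, 20 original features
layer1 = EmbeddedEncoderLayer(in_dim=20, orig_dim=20, hidden_dim=16)
layer2 = EmbeddedEncoderLayer(in_dim=16, orig_dim=20, hidden_dim=10)

h1 = layer1.forward(x, x)                 # first layer: input is x itself
h2 = layer2.forward(h1, x)                # deeper layer still sees x
deep_features = np.concatenate([h2, x], axis=1)  # final representation
print(deep_features.shape)                # (8, 30)
```

Because every layer sees the raw features, the deepest representation cannot drift arbitrarily far from them, which is the complementarity property the abstract emphasizes.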
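The second ingredient, L1 regularization-based feature selection followed by an SVM, can likewise be illustrated with standard tooling. The sketch below uses scikit-learn's `SelectFromModel` over an L1-penalized linear model on synthetic data as a generic stand-in; it is not the paper's specific strategy, and the w_LPPD manifold-reduction step is omitted.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the learned deep features.
X, y = make_classification(n_samples=200, n_features=30, n_informative=5,
                           random_state=0)

# An L1 penalty drives the weights of uninformative features to exactly
# zero; SelectFromModel keeps only features with non-zero coefficients.
selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1))
X_sel = selector.fit_transform(X, y)
print(X_sel.shape[1], "features kept of", X.shape[1])

# An SVM is then trained on the selected features.
score = cross_val_score(SVC(kernel="rbf"), X_sel, y, cv=5).mean()
```

Shrinking the feature set this way is one standard remedy for the small-sample problem the abstract mentions: fewer input dimensions reduce the variance of the downstream classifier.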

Impact indicators (provided by BIP!):
  • Selected citations: 14 — citations derived from selected sources; an alternative to the "Influence" indicator, which also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
  • Popularity: Top 10% — reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network.
  • Influence: Average — reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
  • Impulse: Top 10% — reflects the initial momentum of an article directly after its publication, based on the underlying citation network.