Neural Computing and Applications
Article · 2016 · Peer-reviewed
License: Springer TDM
Data sources: Crossref, DBLP

Research of stacked denoising sparse autoencoder

Authors: Lingheng Meng, Shifei Ding, Nan Zhang, Jian Zhang

Abstract

Learning results depend on the representation of data, so how to represent data efficiently has been a research hotspot in machine learning and artificial intelligence. As deep learning research has advanced, training deep networks to represent high-dimensional data efficiently has also become a research frontier. In order to represent data more efficiently, and to study how deep networks express data, we propose a novel stacked denoising sparse autoencoder in this paper. First, we construct a denoising sparse autoencoder by introducing both a corrupting operation and a sparsity constraint into the traditional autoencoder. Then, we build stacked denoising sparse autoencoders with multiple hidden layers by stacking denoising sparse autoencoders layer-wise. Experiments are designed to explore the influence of the corrupting operation and the sparsity constraint on different datasets, using networks of various depths and numbers of hidden units. The comparative experiments reveal that the test accuracy of the stacked denoising sparse autoencoder is much higher than that of other stacked models, regardless of the dataset used and the number of layers in the model. We also find that the deeper the network is, the fewer activated neurons each layer has. More importantly, we find that strengthening the sparsity constraint is, to some extent, equivalent to increasing the corruption level.
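
To make the construction described in the abstract concrete, the following is a minimal sketch (in PyTorch) of a single denoising sparse autoencoder and of greedy layer-wise stacking. It assumes the common formulation of these ideas: masking noise as the corrupting operation, a KL-divergence penalty on the mean hidden activation as the sparsity constraint, and mean-squared reconstruction of the clean input. The class and function names, hyperparameter values, and these specific modelling choices are illustrative assumptions, not settings taken from the paper.

# A minimal sketch of a denoising sparse autoencoder and greedy layer-wise
# stacking, assuming PyTorch. Corruption type (masking noise), sparsity target,
# loss, and hyperparameters are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingSparseAutoencoder(nn.Module):
    def __init__(self, n_visible, n_hidden, corruption=0.3,
                 sparsity_target=0.05, sparsity_weight=1.0):
        super().__init__()
        self.encoder = nn.Linear(n_visible, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_visible)
        self.corruption = corruption   # fraction of input units zeroed out
        self.rho = sparsity_target     # desired mean hidden activation
        self.beta = sparsity_weight    # weight of the sparsity penalty

    def encode(self, x):
        return torch.sigmoid(self.encoder(x))

    def forward(self, x):
        # Corrupting operation: masking noise sets a random fraction of inputs to zero.
        mask = (torch.rand_like(x) > self.corruption).float()
        h = self.encode(x * mask)
        x_rec = torch.sigmoid(self.decoder(h))
        return x_rec, h

    def loss(self, x):
        x_rec, h = self.forward(x)
        rec = F.mse_loss(x_rec, x)     # reconstruct the *clean* input
        # Sparsity constraint: KL divergence between the target activation rho
        # and the observed mean activation of each hidden unit.
        rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)
        kl = (self.rho * torch.log(self.rho / rho_hat)
              + (1 - self.rho) * torch.log((1 - self.rho) / (1 - rho_hat))).sum()
        return rec + self.beta * kl

def pretrain_stack(data_loader, layer_sizes, epochs=10, lr=1e-3):
    """Greedy layer-wise pretraining: each layer is trained on the (uncorrupted)
    hidden representation produced by the already-trained layers below it.
    Assumes the loader yields (inputs, labels) pairs, e.g. a torchvision MNIST loader."""
    layers = []
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        ae = DenoisingSparseAutoencoder(n_in, n_out)
        opt = torch.optim.Adam(ae.parameters(), lr=lr)
        for _ in range(epochs):
            for x, _ in data_loader:
                x = x.view(x.size(0), -1)
                with torch.no_grad():
                    for trained in layers:   # feed through frozen lower layers
                        x = trained.encode(x)
                opt.zero_grad()
                ae.loss(x).backward()
                opt.step()
        layers.append(ae)
    return layers

Greedy layer-wise pretraining, as in pretrain_stack above, trains each autoencoder on the hidden representation of the layers below it; this is the usual way such multi-hidden-layer stacks are built before any supervised fine-tuning.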

BIP! impact indicators:
  • Selected citations (derived from selected sources; an alternative to the "Influence" indicator): 19
  • Popularity (the "current" impact/attention of the article in the research community at large, based on the underlying citation network): Top 10%
  • Influence (the overall/total impact of the article in the research community at large, based on the underlying citation network, diachronically): Top 10%
  • Impulse (the initial momentum of the article directly after its publication, based on the underlying citation network): Average