
Abstract We propose a new feature representation algorithm using cross-covariance in the context of deep learning. Existing feature representation algorithms based on the sparse autoencoder and the nonnegativity-constrained autoencoder tend to produce duplicative encoding and decoding receptive fields, which leads to feature redundancy and overfitting. We propose regularizing the feature weight vectors with a cross-covariance term, yielding a new objective function that eliminates feature redundancy and reduces overfitting. Results on the MNIST handwritten digits dataset, the NORB normalized-uniform dataset, and the Yale face dataset indicate that, relative to algorithms based on the conventional sparse autoencoder and the nonnegativity-constrained autoencoder, our method effectively eliminates feature redundancy, extracts more distinctive features, and improves sparsity and reconstruction quality. Furthermore, the method improves image classification performance and reduces the overfitting of conventional networks without additional computational cost.
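The paper's exact objective function is not reproduced in this abstract, so the following is only a minimal sketch of the general idea of a cross-covariance redundancy penalty: off-diagonal entries of the feature covariance matrix measure how correlated (and thus duplicative) pairs of learned features are, and driving them toward zero discourages redundant receptive fields. The function name and the choice of penalizing hidden activations `H` are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def cross_covariance_penalty(H):
    """Sketch of a cross-covariance redundancy penalty (assumed form,
    not the paper's exact objective).

    H: (n_samples, n_features) matrix of hidden-unit activations.
    Returns 0.5 * sum of squared off-diagonal entries of the feature
    covariance matrix; this is zero when features are uncorrelated,
    i.e. non-redundant.
    """
    Hc = H - H.mean(axis=0)             # center each feature column
    C = Hc.T @ Hc / H.shape[0]          # feature covariance matrix
    off_diag = C - np.diag(np.diag(C))  # keep only cross-feature terms
    return 0.5 * np.sum(off_diag ** 2)
```

Adding such a term (scaled by a weighting hyperparameter) to an autoencoder's reconstruction loss penalizes pairs of hidden units that encode the same information, which is the redundancy-elimination effect the abstract describes.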
