
In this paper, we introduce a novel deep model called Deep Sparse Coding Network (DeepSCNet) for image classification. This new model includes four types of layers: the Sparse-coding layer, the Pooling layer, the Normalization layer and the Map-reduction layer. The Sparse-coding layer performs generalized linear coding for each local patch within its receptive field. The Pooling layer and the Normalization layer perform the same operations as their counterparts in a CNN. The Map-reduction layer reduces CPU and memory consumption by reducing the number of feature maps. The paper further discusses a multi-scale, multi-locality extension to the basic DeepSCNet. Compared to a CNN, DeepSCNet is relatively easier to train, even with a training set of moderate size. Experiments show that DeepSCNet can automatically discover highly discriminative features directly from raw image pixels.
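To make the roles of the four layer types concrete, below is a minimal NumPy sketch of one DeepSCNet stage. The specific choices here are assumptions for illustration only: the abstract does not fix the sparse-coding solver (ISTA is used as a stand-in), the 5x5 receptive field, the per-location L2 normalization, or the channel-wise linear projection used for map reduction.

```python
import numpy as np

def extract_patches(x, size, stride):
    # x: (H, W, C) feature map; returns (n_h, n_w, size*size*C) flattened local patches.
    H, W, C = x.shape
    n_h = (H - size) // stride + 1
    n_w = (W - size) // stride + 1
    patches = np.empty((n_h, n_w, size * size * C))
    for i in range(n_h):
        for j in range(n_w):
            patches[i, j] = x[i*stride:i*stride+size, j*stride:j*stride+size, :].ravel()
    return patches

def sparse_coding_layer(x, dictionary, lam=0.1, n_iter=20):
    # Encode each local patch against a dictionary with ISTA (assumed solver;
    # the paper only states that the layer does generalized linear coding).
    # dictionary: (patch_dim, n_atoms); the output has n_atoms feature maps.
    patches = extract_patches(x, size=5, stride=1)            # assumed 5x5 receptive field
    n_h, n_w, d = patches.shape
    P = patches.reshape(-1, d)                                # (n_patches, patch_dim)
    L = np.linalg.norm(dictionary, 2) ** 2                    # Lipschitz constant of the gradient
    Z = np.zeros((P.shape[0], dictionary.shape[1]))
    for _ in range(n_iter):
        step = Z - (Z @ dictionary.T - P) @ dictionary / L    # gradient step
        Z = np.sign(step) * np.maximum(np.abs(step) - lam / L, 0.0)  # soft thresholding
    return Z.reshape(n_h, n_w, -1)

def pooling_layer(x, size=2):
    # Non-overlapping max pooling, as in a standard CNN.
    H, W, C = x.shape
    H, W = H - H % size, W - W % size
    return x[:H, :W].reshape(H // size, size, W // size, size, C).max(axis=(1, 3))

def normalization_layer(x, eps=1e-6):
    # Per-location L2 normalization (one plausible choice; the abstract only
    # says this layer matches the CNN normalization layer).
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def map_reduction_layer(x, reduction):
    # Reduce the number of feature maps via a linear combination across
    # channels, cutting CPU and memory cost. reduction: (C_in, C_out).
    return x @ reduction

# One DeepSCNet stage on a toy input (all shapes and sizes are illustrative).
rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))
D = rng.standard_normal((5 * 5 * 3, 64))                      # 64 dictionary atoms
R = rng.standard_normal((64, 16))                             # reduce 64 -> 16 maps

h = sparse_coding_layer(image, D)
h = pooling_layer(h)
h = normalization_layer(h)
h = map_reduction_layer(h, R)
print(h.shape)                                                # (14, 14, 16)
```

In practice the dictionary and the reduction matrix would be learned rather than drawn at random; the sketch only shows how the four layer types compose into one stage of the network.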
