
This paper presents a strategy for building a deep neural network by stacking autoencoder layers, each consisting of an encoder and a decoder, which are trained locally to denoise corrupted inputs and reconstruct an approximation of the original input. The resulting algorithm is a straightforward variation on stacking ordinary autoencoders. The task is framed as a machine-learning classification problem aimed at minimizing classification error, closing the performance gap with deep belief networks and in most cases surpassing them. Results show that the quality of the reconstructed inputs depends on the training parameters: increasing the number of epochs and the batch size lengthens the training period while improving the accuracy of the denoised reconstruction.
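To make the procedure concrete, below is a minimal sketch of greedy layer-wise pretraining of a stacked denoising autoencoder, written in PyTorch. The class and function names (`DenoisingAutoencoder`, `pretrain_layer`), the Gaussian-noise corruption, the layer dimensions, and all hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """One autoencoder layer: the encoder compresses, the decoder reconstructs."""
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden_dim, input_dim), nn.Sigmoid())

    def forward(self, x, noise_std=0.3):
        # Corrupt the input with Gaussian noise (an assumed corruption scheme);
        # the training loss targets the clean input x.
        corrupted = x + noise_std * torch.randn_like(x)
        return self.decoder(self.encoder(corrupted))

def pretrain_layer(dae, data, epochs=10, batch_size=64, lr=1e-3):
    """Locally train one layer to reconstruct clean inputs from corrupted ones."""
    opt = torch.optim.Adam(dae.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for i in range(0, len(data), batch_size):
            batch = data[i:i + batch_size]
            opt.zero_grad()
            loss = loss_fn(dae(batch), batch)  # reconstruct the uncorrupted batch
            loss.backward()
            opt.step()
    return dae

# Greedy layer-wise stacking: each new layer is trained on the codes
# produced by the previously trained encoders. Dimensions and data are
# placeholders for illustration only.
dims = [784, 256, 64]                 # e.g. flattened 28x28 images -> two hidden layers
codes = torch.rand(1000, dims[0])     # synthetic stand-in for real training data
stack = []
for d_in, d_hidden in zip(dims, dims[1:]):
    layer = pretrain_layer(DenoisingAutoencoder(d_in, d_hidden), codes)
    with torch.no_grad():
        codes = layer.encoder(codes)  # pass clean codes up to the next layer
    stack.append(layer)
```

After pretraining, the trained encoders would typically be stacked, topped with a classifier layer, and fine-tuned end-to-end on labeled data; raising `epochs` or `batch_size` trades longer training for better reconstructions, matching the trend reported above.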
