
handle: 10576/37761
Neural networks employ massive interconnections of simple computing units, called neurons, to solve problems that are highly nonlinear and cannot be hard-coded into a program. These networks are computation-intensive, and training them requires large amounts of training data, with each training example demanding heavy computation. We look at different ways to reduce this computational burden and potentially make neural networks work on mobile devices. In this paper, we survey various techniques that can be matched and combined to improve the training time of neural networks, and we also review additional recommendations for making the process work on mobile devices. Finally, we survey the deep compression technique, which addresses the problem through network pruning, quantization, and encoding of the network weights. Deep compression reduces the time required to train the network by first pruning the irrelevant connections (the pruning stage), then quantizing the network weights by choosing centroids for each layer, and finally, in the third stage, applying the Huffman encoding algorithm to address the storage cost of the remaining weights.
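To make the three stages concrete, below is a minimal NumPy sketch of the pipeline the abstract describes (magnitude pruning, k-means weight quantization into per-layer centroids, then Huffman coding of the centroid indices), applied to a single hypothetical weight matrix. The pruning threshold, centroid count, layer shape, and all variable names are illustrative assumptions, not values or code from the surveyed work.

```python
# A minimal sketch of the deep compression pipeline: prune -> quantize
# -> Huffman-encode, on one hypothetical layer. All constants below are
# assumed for illustration only.
import heapq
from collections import Counter

import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=(64, 64))  # hypothetical layer weights

# Stage 1: prune connections whose magnitude falls below a threshold.
threshold = 0.3  # assumed pruning threshold
mask = np.abs(weights) >= threshold
pruned = weights * mask

# Stage 2: quantize surviving weights by clustering them into k shared
# centroids (a simple Lloyd-style 1-D k-means with linear initialization).
k = 16  # assumed number of centroids for this layer
nonzero = pruned[mask]
centroids = np.linspace(nonzero.min(), nonzero.max(), k)
for _ in range(20):
    # assign each surviving weight to its nearest centroid
    idx = np.abs(nonzero[:, None] - centroids[None, :]).argmin(axis=1)
    # move each centroid to the mean of its assigned weights
    for j in range(k):
        members = nonzero[idx == j]
        if members.size:
            centroids[j] = members.mean()
quantized_indices = idx  # each surviving weight is now a small integer

# Stage 3: Huffman-code the centroid indices, so frequently used
# centroids get shorter codes and storage shrinks further.
def huffman_code_lengths(symbols):
    """Return {symbol: code length in bits} from symbol frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    uid = len(heap)  # tiebreaker so tuples compare without reaching dicts
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (fa + fb, uid, merged))
        uid += 1
    return heap[0][2]

lengths = huffman_code_lengths(quantized_indices.tolist())
bits = sum(lengths[s] for s in quantized_indices.tolist())
print(f"kept {mask.sum()}/{weights.size} weights, "
      f"{k} centroids, ~{bits / 8:.0f} bytes after Huffman coding")
```

In a full pipeline each stage is typically followed by retraining (the pruned mask and the shared centroids are fine-tuned), which this storage-only sketch omits.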
subjects: Artificial neural network, Artificial intelligence, Convolutional neural network, Signal encoding, Deep Learning, Quantization (signal processing), Artificial Intelligence, Huffman coding, Image Compression Techniques and Standards, Encoding (memory), Biology, QA75.5-76.95, Neural Network Fundamentals and Applications, Computer science, Agronomy, Pruning, Process (computing), Algorithm, Operating system, Electronic computers. Computer science, Data compression, Computer Science, Physical Sciences, Computation, Convolutional neural networks, Computer Vision and Pattern Recognition, Image Denoising Techniques and Algorithms, Training example
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "influence" indicator, which reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 4 |
| Popularity | Reflects the current impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Top 10% |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
