
Benchmarking State-of-the-Art Deep Learning Software Tools

Shaohuai Shi, Qiang Wang, Pengfei Xu, Xiaowen Chu
Open Access · English · Published: 25 Aug 2016
Abstract
Deep learning has been shown to be a successful machine learning method for a variety of tasks, and its popularity has resulted in numerous open-source deep learning software tools. Training a deep network is usually a very time-consuming process. To address the computational challenge in deep learning, many tools exploit hardware features such as multi-core CPUs and many-core GPUs to shorten the training time. However, different tools exhibit different features and running performance when training different types of deep networks on different hardware platforms, which makes it difficult for end users to select an appropriate pair of software and hardware. In this paper …
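The abstract describes the paper's core methodology: measuring the wall-clock time each software tool needs to train a given network on a given hardware platform, then comparing across tools and platforms. As a rough, hypothetical illustration of that style of measurement (not code from the paper or from any of the benchmarked tools), the Python sketch below times a fixed number of identical iterations, excludes warm-up runs, and reports the mean time per mini-batch; the NumPy matrix product standing in for one training step, and every name and parameter, are assumptions.

import time
import numpy as np

def run_iteration(x, w):
    # Stand-in workload for one mini-batch of training (forward + backward pass).
    return x @ w

def benchmark(num_warmup=5, num_iters=50, batch=1024, dim=2048):
    rng = np.random.default_rng(0)
    x = rng.standard_normal((batch, dim))
    w = rng.standard_normal((dim, dim))

    # Warm-up runs are excluded so one-time costs (memory allocation,
    # library initialization) do not distort the steady-state timing.
    for _ in range(num_warmup):
        run_iteration(x, w)

    start = time.perf_counter()
    for _ in range(num_iters):
        run_iteration(x, w)
    elapsed = time.perf_counter() - start

    per_batch = elapsed / num_iters
    print(f"mean time per mini-batch: {per_batch * 1e3:.2f} ms "
          f"({batch / per_batch:.0f} samples/s)")

if __name__ == "__main__":
    benchmark()

Reporting time per mini-batch (or samples per second) rather than total run time is what makes results comparable across tools that use different default batch sizes and iteration counts.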
Subjects
Free-text keywords: Computer Science - Distributed, Parallel, and Cluster Computing; Computer Science - Learning; Benchmark (computing); Deep learning; Artificial neural network; Benchmarking; Computer science; Artificial intelligence; Software; Distributed computing; Software engineering; End user