Publication · Conference object · Preprint · 2014

Deep Fried Convnets

Yang, Zichao; Moczulski, Marcin; Denil, Misha; de Freitas, Nando; Smola, Alex; Song, Le; Wang, Ziyu
Open Access
  • Published: 22 Dec 2014
  • Publisher: IEEE
Abstract
Comment: SVD experiments included
Subjects
Free-text keywords: Computer Science - Learning, Computer Science - Neural and Evolutionary Computing, Statistics - Machine Learning
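
Aside: the title puns on the Fastfood transform of Le, Sarlos, and Smola (2013), cited in the reference list below; Deep Fried Convnets replaces the dense fully connected layers of a convnet with an adaptive version of that structured decomposition. As a rough illustration (not the authors' code), here is a minimal NumPy sketch of plain, non-adaptive Fastfood random features; the function name, the sigma and seed defaults, and the chi-based row rescaling are assumed choices for the sketch.

    import numpy as np
    from scipy.linalg import hadamard

    def fastfood_features(x, sigma=1.0, seed=0):
        """Fastfood random features for the Gaussian RBF kernel
        (Le, Sarlos & Smola, 2013). Illustrative sketch only."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x, dtype=float)
        d = 1 << max(len(x) - 1, 0).bit_length()   # pad to a power of two
        xp = np.zeros(d)
        xp[: len(x)] = x

        H = hadamard(d).astype(float)        # Walsh-Hadamard matrix
        B = rng.choice([-1.0, 1.0], size=d)  # diagonal of random signs
        P = rng.permutation(d)               # random permutation Pi
        G = rng.standard_normal(d)           # diagonal Gaussian weights
        # Rescale rows to mimic a dense Gaussian matrix (one common choice)
        S = np.sqrt(rng.chisquare(d, size=d)) / np.linalg.norm(G)

        # V x = S H G Pi H B x: a structured stand-in for a dense
        # Gaussian random projection, applied right to left
        v = S * (H @ (G * (H @ (B * xp))[P])) / (sigma * np.sqrt(d))

        # Random Fourier features approximating the RBF kernel
        return np.concatenate([np.cos(v), np.sin(v)]) / np.sqrt(d)

A production implementation would apply H via a fast Walsh-Hadamard transform in O(d log d) time rather than materializing the dense matrix; in Deep Fried Convnets the diagonal matrices S, G, and B are learned by backpropagation instead of sampled.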
References (24 total; page 1 of 2 shown)

Cho, Youngmin and Saul, Lawrence K. Kernel methods for deep learning. In Advances in Neural Information Processing Systems 22, pp. 342–350, 2009.

Collins, Maxwell D. and Kohli, Pushmeet. Memory bounded deep convolutional networks. Technical report, University of Wisconsin-Madison, 2014.

Dai, B., Xie, B., He, N., Liang, Y., Raj, A., Balcan, M., and Song, L. Scalable kernel methods via doubly stochastic gradients. In Advances in Neural Information Processing Systems 27, 2014.

Denil, M., Bazzani, L., Larochelle, H., and de Freitas, N. Learning where to attend with deep architectures for image tracking. Neural Computation, 24(8):2151–2184, 2012.

Denil, Misha, Shakibi, Babak, Dinh, Laurent, Ranzato, Marc'Aurelio, and de Freitas, Nando. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems 26, pp. 2148–2156, 2013.

Denton, Emily L, Zaremba, Wojciech, Bruna, Joan, LeCun, Yann, and Fergus, Rob. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems 27, pp. 1269–1277. Curran Associates, Inc., 2014.

Farabet, Clement, Martini, Berin, Akselrod, Polina, Talay, Selcuk, LeCun, Yann, and Culurciello, Eugenio. Hardware accelerated convolutional neural networks for synthetic vision systems. In IEEE International Symposium on Circuits and Systems (ISCAS), pp. 257–260. IEEE, 2010.

Huang, Po-Sen, Avron, Haim, Sainath, Tara N, Sindhwani, Vikas, and Ramabhadran, Bhuvana. Kernel methods match deep neural networks on TIMIT. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2014.

Jaderberg, M., Vedaldi, A., and Zisserman, A. Speeding up convolutional neural networks with low rank expansions. In British Machine Vision Conference, 2014.

Jia, Yangqing, Shelhamer, Evan, Donahue, Jeff, Karayev, Sergey, Long, Jonathan, Girshick, Ross, Guadarrama, Sergio, and Darrell, Trevor. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.

Krizhevsky, Alex. One weird trick for parallelizing convolutional neural networks. Technical report, Google, 2014.

Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pp. 1106–1114, 2012.

Le, Quoc, Sarlos, Tamas, and Smola, Alex. Fastfood – approximating kernel expansions in loglinear time. In International Conference on Machine Learning, 2013.

Li, Hongsheng, Zhao, Rui, and Wang, Xiaogang. Highly efficient forward and backward propagation of convolutional neural networks for pixelwise classification. Technical report, Chinese University of Hong Kong, 2014.

Lin, Min, Chen, Qiang, and Yan, Shuicheng. Network in network. In International Conference on Learning Representations, 2014.
