Publication · Conference object · Preprint · 2017

Deep Incremental Boosting

Mosca, Alan; Magoulas, George D.
Open Access
  • Published: 11 Aug 2017
  • Publisher: EasyChair
Abstract
This paper introduces Deep Incremental Boosting, a new technique derived from AdaBoost and specifically adapted to work with Deep Learning methods, which reduces the required training time and improves generalisation. We draw inspiration from Transfer of Learning approaches to reduce the start-up time of training each incremental Ensemble member. We present a set of experiments that outlines some preliminary results on common Deep Learning datasets, and discuss the potential improvements Deep Incremental Boosting brings to traditional Ensemble methods in Deep Learning.
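The record itself contains no code, but since the abstract outlines an algorithm, the following is a minimal sketch of the idea as it reads from the abstract: AdaBoost-style reweighting in which each new Ensemble member starts from a copy of the previous member (the Transfer of Learning element) and is only fine-tuned briefly, rather than trained from scratch. Everything concrete below is an assumption rather than the authors' setup: the toy dataset, the small PyTorch MLP, the SAMME-style member weights, the one-extra-layer growth rule and all hyper-parameters are illustrative placeholders.

```python
import copy
import torch
import torch.nn as nn


def make_toy_data(n=512, d=20, classes=3, seed=0):
    # Synthetic stand-in for the "common Deep Learning datasets" in the abstract.
    g = torch.Generator().manual_seed(seed)
    X = torch.randn(n, d, generator=g)
    y = X[:, :classes].argmax(dim=1)
    return X, y


def train(net, X, y, sample_w, epochs, lr=1e-2):
    # Full-batch training with per-example weights (the boosting distribution).
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss(reduction="none")
    for _ in range(epochs):
        opt.zero_grad()
        per_example = loss_fn(net(X), y)
        (per_example * sample_w).sum().backward()
        opt.step()
    return net


def deep_incremental_boosting(X, y, rounds=3, hidden=64,
                              first_epochs=30, later_epochs=5):
    n, d = X.shape
    classes = int(y.max()) + 1
    w = torch.full((n,), 1.0 / n)          # AdaBoost sample weights
    ensemble, alphas = [], []

    # Round 0: train a small base network from scratch.
    net = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                        nn.Linear(hidden, classes))
    net = train(net, X, y, w, first_epochs)

    for t in range(rounds):
        with torch.no_grad():
            preds = net(X).argmax(dim=1)
        err = float(w[preds != y].sum()) + 1e-12
        # SAMME-style member weight for multi-class boosting.
        alpha = torch.log(torch.tensor((1.0 - err) / err)) \
              + torch.log(torch.tensor(classes - 1.0))
        ensemble.append(copy.deepcopy(net))
        alphas.append(alpha)

        if t == rounds - 1:
            break

        # Reweight: misclassified examples get more mass, as in AdaBoost.
        w = w * torch.exp(alpha * (preds != y).float())
        w = w / w.sum()

        # "Incremental" step: copy the previous member (transfer of learning),
        # grow it by one hidden layer, and fine-tune for only a few epochs
        # instead of retraining from scratch.
        grown = copy.deepcopy(net)
        net = nn.Sequential(*list(grown[:-1]),                    # keep trained layers
                            nn.Linear(hidden, hidden), nn.ReLU(),  # new, untrained layer
                            grown[-1])                             # reuse output layer
        net = train(net, X, y, w, later_epochs)

    return ensemble, alphas


def predict(ensemble, alphas, X):
    # Weighted vote over the members' class probabilities.
    votes = sum(a * torch.softmax(m(X), dim=1)
                for m, a in zip(ensemble, alphas))
    return votes.argmax(dim=1)


if __name__ == "__main__":
    X, y = make_toy_data()
    ens, al = deep_incremental_boosting(X, y)
    acc = (predict(ens, al, X) == y).float().mean()
    print(f"boosted ensemble training accuracy: {acc:.3f}")
```

On real datasets one would of course train with mini-batches and report held-out test accuracy rather than training accuracy; the point of the sketch is only the round structure, where later members are warm-started from earlier ones and fine-tuned briefly instead of being trained from scratch.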
Subjects
free text keywords: Computer science, Artificial intelligence, Boosting (machine learning), Machine learning, Statistics - Machine Learning, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Learning

[1] Lei Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In Advances in Neural Information Processing Systems, pages 2654-2662, 2014.

[2] Yoshua Bengio. Deep learning of representations for unsupervised and transfer learning. Unsupervised and Transfer Learning Challenges in Machine Learning, 7:19, 2012.

[3] Thomas G Dietterich. An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization. Machine learning, 40(2):139-157, 2000.

[4] Benjamin Graham. Fractional max-pooling. CoRR, abs/1412.6071, 2014.

[5] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

[6] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images, 2009.

[8] Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.

[9] R. E. Schapire. The strength of weak learnability. Machine Learning, 5:197-227, 1990.

[10] Y. Freund and R. E. Schapire. Experiments with a new boosting algorithm. In Machine Learning: Proceedings of the Thirteenth International Conference, pages 148-156, 1996.

[11] Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.

[12] Li Wan, Matthew Zeiler, Sixin Zhang, Yann LeCun, and Rob Fergus. Regularization of neural networks using dropconnect. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 1058-1066, 2013.

[13] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pages 3320-3328, 2014.
