Publication · Preprint · 2019

Automated Machine Learning: State-of-The-Art and Open Challenges

Elshawi, Radwa; Maher, Mohamed; Sakr, Sherif
Open Access · English
  • Published: 05 Jun 2019
Abstract
With the continuous and vast increase in the amount of data in our digital world, it has been acknowledged that the number of knowledgeable data scientists cannot scale to address these challenges. Thus, there is a crucial need for automating the process of building good machine learning models. In the last few years, several techniques and frameworks have been introduced to tackle the challenge of automating the process of Combined Algorithm Selection and Hyper-parameter tuning (CASH) in the machine learning domain. The main aim of these techniques is to reduce the role of the human in the loop and fill the gap for non-expert machine learning users by playing...
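
To make the CASH formulation concrete, the following minimal Python sketch (an illustrative example, not taken from the paper; the dataset, candidate algorithms, parameter ranges, and trial budget are arbitrary assumptions) performs a simple random search that jointly selects a learning algorithm and its hyper-parameters using scikit-learn:

    # Illustrative CASH sketch: jointly pick an algorithm and its hyper-parameters.
    # Assumes scikit-learn is installed; choices below are arbitrary examples.
    import random

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)

    # Combined search space: each candidate algorithm carries its own
    # hyper-parameter sampler.
    search_space = {
        "random_forest": lambda: RandomForestClassifier(
            n_estimators=random.randint(10, 200),
            max_depth=random.choice([None, 3, 5, 10]),
        ),
        "svm": lambda: SVC(
            C=10 ** random.uniform(-2, 2),
            gamma=10 ** random.uniform(-4, 0),
        ),
    }

    best_score, best_model = -1.0, None
    for _ in range(20):  # fixed budget of 20 random trials
        name = random.choice(list(search_space))
        model = search_space[name]()
        score = cross_val_score(model, X, y, cv=3).mean()
        if score > best_score:
            best_score, best_model = score, model

    print(f"Best CV accuracy: {best_score:.3f} with {best_model}")

AutoML frameworks of the kind surveyed in the paper replace this naive random sampling with more sample-efficient strategies (e.g., Bayesian optimization or meta-learning) over much larger algorithm and hyper-parameter spaces.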
Subjects
Free-text keywords: Computer Science - Machine Learning; Statistics - Machine Learning