
Deep Learning for Video Game Playing

Niels Justesen, Philip Bontrager, Julian Togelius, and Sebastian Risi
Open Access
  • Published: 25 Aug 2017
  • Publisher: Institute of Electrical and Electronics Engineers (IEEE)
  • Country: Denmark
In this article, we review recent deep learning advances in the context of how they have been applied to play different types of video games, such as first-person shooters, arcade games, and real-time strategy games. We analyze the unique requirements that different game genres pose to a deep learning system and highlight important open challenges in applying these machine learning methods to video games, such as general game playing, dealing with extremely large decision spaces, and learning from sparse rewards.
ACM Computing Classification System: ComputingMilieux_PERSONALCOMPUTING
free text keywords: Computer Science - Artificial Intelligence
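The agents surveyed in this article all share the same underlying reinforcement-learning loop: observe a state, choose an action, and receive a (possibly sparse) reward from the game. As an illustrative sketch only (not taken from the paper), the toy corridor environment and tabular Q-learner below show that loop, including the sparse-reward setting the abstract mentions; the surveyed deep learning methods replace the Q-table with a neural network and the toy state with raw screen pixels. The names `Corridor` and `train` are invented for illustration.

```python
import random

# Toy stand-in for a game environment: a 1-D corridor where the agent starts
# at cell 0 and receives a reward only upon reaching the goal cell. Real
# benchmarks (e.g. the Arcade Learning Environment) expose a similar
# step/reset interface but return screen pixels as observations.
class Corridor:
    def __init__(self, length=5):
        self.length = length
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action: 0 = left, 1 = right
        self.pos = max(0, min(self.length, self.pos + (1 if action == 1 else -1)))
        done = self.pos == self.length
        reward = 1.0 if done else 0.0  # sparse reward: feedback only at the goal
        return self.pos, reward, done

# Tabular Q-learning as a minimal stand-in for the deep value networks used
# in the surveyed work; a DQN replaces this table with a neural network.
def train(env, episodes=200, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(env.length + 1) for a in (0, 1)}
    for _ in range(episodes):
        s = env.reset()
        for _ in range(1000):  # cap episode length in this toy setting
            if rng.random() < epsilon:  # epsilon-greedy exploration
                a = rng.choice((0, 1))
            else:
                best = max(q[(s, 0)], q[(s, 1)])
                a = rng.choice([b for b in (0, 1) if q[(s, b)] == best])
            s2, r, done = env.step(a)
            # One-step temporal-difference update toward the bootstrapped target.
            target = r + (0.0 if done else gamma * max(q[(s2, 0)], q[(s2, 1)]))
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
            if done:
                break
    return q

q = train(Corridor())
greedy = [int(q[(s, 1)] > q[(s, 0)]) for s in range(5)]  # 1 = move right
```

After training, the greedy policy should prefer moving right in every state, since discounted value propagates backward from the goal. The sparse reward is what makes exploration hard: early episodes wander randomly until the goal is first reached, which is exactly the challenge the article discusses at the scale of real games.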
125 references, page 1 of 9

[1] S. Alvernaz and J. Togelius. Autoencoder-augmented neuroevolution for visual Doom playing. In Computational Intelligence and Games (CIG), 2017 IEEE Conference on. IEEE, 2017.

[2] C. Beattie, J. Z. Leibo, D. Teplyashin, T. Ward, M. Wainwright, H. Küttler, A. Lefrancq, S. Green, V. Valdés, A. Sadik, et al. DeepMind Lab. arXiv preprint arXiv:1612.03801, 2016.

[3] M. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. In Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015.

[4] M. Bellemare, S. Srinivasan, G. Ostrovski, T. Schaul, D. Saxton, and R. Munos. Unifying count-based exploration and intrinsic motivation. In Advances in Neural Information Processing Systems, pages 1471-1479, 2016.

[5] M. G. Bellemare, W. Dabney, and R. Munos. A distributional perspective on reinforcement learning. arXiv preprint arXiv:1707.06887, 2017.

[6] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. J. Artif. Intell. Res. (JAIR), 47:253-279, 2013.

[7] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 41-48. ACM, 2009.

[8] S. Bhatti, A. Desmaison, O. Miksik, N. Nardelli, N. Siddharth, and P. H. Torr. Playing Doom with SLAM-augmented deep reinforcement learning. arXiv preprint arXiv:1612.00380, 2016.

[9] N. Bhonker, S. Rozenberg, and I. Hubara. Playing SNES in the Retro Learning Environment. arXiv preprint arXiv:1611.02205, 2016.

[10] M. Bogdanovic, D. Markovikj, M. Denil, and N. De Freitas. Deep apprenticeship learning for playing video games. In Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.

[11] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.

[12] C. B. Browne, E. Powley, D. Whitehouse, S. M. Lucas, P. I. Cowling, P. Rohlfshagen, S. Tavener, D. Perez, S. Samothrakis, and S. Colton. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1-43, 2012.

[13] Y.-H. Chang, T. Ho, and L. P. Kaelbling. All learning is local: Multiagent learning in global reward games. In NIPS, pages 807-814, 2003.

[14] D. S. Chaplot, G. Lample, K. M. Sathyendra, and R. Salakhutdinov. Transfer deep reinforcement learning in 3D environments: An empirical study.

[15] C. Chen, A. Seff, A. Kornhauser, and J. Xiao. DeepDriving: Learning affordance for direct perception in autonomous driving. In Proceedings of the IEEE International Conference on Computer Vision, pages 2722-2730, 2015.
