
Are GANs Created Equal? A Large-Scale Study

Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, Olivier Bousquet
Open Access · English
Published: 28 Nov 2017
Abstract
Comment: NIPS'18: Added a section on the limitations of the study and additional empirical results
Subjects
Free-text keywords: Statistics - Machine Learning, Computer Science - Machine Learning