Variational Autoencoders Pursue PCA Directions (by Accident)

Preprint (2018), English, Open Access
Rolinek, Michal; Zietlow, Dominik; Martius, Georg
  • Subjects: Computer Science - Computer Vision and Pattern Recognition | Statistics - Machine Learning | Computer Science - Machine Learning

The Variational Autoencoder (VAE) is a powerful architecture capable of representation learning and generative modeling. When it comes to learning interpretable (disentangled) representations, VAE and its variants show unparalleled performance. However, the reasons for ...
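As a toy illustration (not taken from the paper) of what "PCA directions" means here: principal component analysis finds the orthogonal axes of maximal variance in the data, which can be read off from a singular value decomposition of the centered data matrix. The sketch below, assuming only NumPy and a synthetic anisotropic Gaussian, recovers those directions; the paper's claim is that the axes a VAE's latent code settles on tend to align with exactly these directions.

```python
import numpy as np

# Synthetic 2-D data with a clearly dominant axis of variation:
# standard deviation 3 along x, 1 along y.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 2)) * np.array([3.0, 1.0])
X -= X.mean(axis=0)  # center the data before PCA

# SVD of the centered data: the rows of Vt are the principal
# directions (PCA axes), ordered by decreasing explained variance.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
leading = Vt[0]

# The leading direction should align with the x-axis (up to sign).
print(np.abs(leading))

# Explained variance per component, from the singular values.
explained_var = S**2 / (len(X) - 1)
print(explained_var)
```

Here the leading principal direction is (up to sign) close to `[1, 0]`, and the explained variances are close to 9 and 1, matching the generating standard deviations of 3 and 1.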
