A topological insight into restricted Boltzmann machines

Article / Preprint · English · Open Access
Mocanu, Decebal Constantin; Mocanu, Elena; Nguyen, Phuong H.; Gibescu, Madeleine; Liotta, Antonio (2016)
  • Publisher: Springer Nature
  • Journal: Machine Learning, volume 104, issue 2-3, pages 243-270 (issn: 0885-6125, eissn: 1573-0565)
  • Related identifiers: doi: 10.1007/s10994-016-5570-z, doi: 10.13039/501100003005
  • Subject: Software | Computer Science - Artificial Intelligence | Computer Science - Social and Information Networks | Computer Science - Neural and Evolutionary Computing | Artificial Intelligence
    arxiv: Computer Science::Neural and Evolutionary Computation

Restricted Boltzmann Machines (RBMs) and models derived from them have been successfully used as basic building blocks in deep artificial neural networks for automatic feature extraction and unsupervised weight initialization, but also as density estimators. Thus, their ...
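As a point of reference for the abstract above, the following is a minimal sketch of a standard dense RBM trained with one-step contrastive divergence (CD-1). All names, hyperparameters, and the toy data here are illustrative assumptions, not taken from the paper, whose contribution concerns the network topology of RBMs rather than this baseline training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Binary restricted Boltzmann machine trained with CD-1."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.01, (n_visible, n_hidden))  # weights
        self.b = np.zeros(n_visible)  # visible biases
        self.c = np.zeros(n_hidden)   # hidden biases
        self.lr = lr

    def cd1(self, v0):
        # Positive phase: hidden activations driven by the data.
        h_prob0 = sigmoid(v0 @ self.W + self.c)
        h0 = (rng.random(h_prob0.shape) < h_prob0).astype(float)
        # Negative phase: one Gibbs step back through the visible layer.
        v_prob1 = sigmoid(h0 @ self.W.T + self.b)
        h_prob1 = sigmoid(v_prob1 @ self.W + self.c)
        # Update parameters by the difference of the two correlations.
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ h_prob0 - v_prob1.T @ h_prob1) / n
        self.b += self.lr * (v0 - v_prob1).mean(axis=0)
        self.c += self.lr * (h_prob0 - h_prob1).mean(axis=0)
        return float(np.mean((v0 - v_prob1) ** 2))  # reconstruction error

# Usage: learn a fixed binary pattern as a sanity check.
data = np.tile([1.0, 0.0, 1.0, 0.0, 1.0, 0.0], (32, 1))
rbm = RBM(n_visible=6, n_hidden=3)
errors = [rbm.cd1(data) for _ in range(200)]
```

The reconstruction error returned by `cd1` should fall over training; sparse, scale-free topologies of the kind studied in the article replace the dense weight matrix `W` with one constrained by a graph structure.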
