Equivalence of Equilibrium Propagation and Recurrent Backpropagation

Preprint (English, Open Access)
Scellier, Benjamin; Bengio, Yoshua
  • Subjects: Computer Science - Learning; Neural and Evolutionary Computation (arXiv)

Recurrent Backpropagation and Equilibrium Propagation are supervised learning algorithms for fixed-point recurrent neural networks which differ in their second phase. In the first phase, both algorithms converge to a fixed point which corresponds to the configuration wh…
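The first phase that the abstract describes, shared by both algorithms, is a relaxation of the network state to a fixed point of its dynamics. A minimal sketch of such a relaxation for a Hopfield-style continuous network is shown below; the function name, the hard-sigmoid activation, and the specific update rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def relax_to_fixed_point(W, b, x, n_units, eps=0.1, tol=1e-6, max_steps=10000):
    """First-phase relaxation: iterate the state s until it stops changing.

    W is a symmetric recurrent weight matrix, b a bias vector, and x a
    clamped external input. The update follows gradient dynamics of a
    Hopfield-style energy, so a fixed point satisfies ds/dt = 0.
    """
    rho = lambda s: np.clip(s, 0.0, 1.0)  # hard-sigmoid activation (an assumption)
    s = np.zeros(n_units)
    for _ in range(max_steps):
        ds = -s + W @ rho(s) + b + x      # leaky dynamics driven by recurrent input
        s = s + eps * ds                  # Euler step of size eps
        if np.max(np.abs(ds)) < tol:      # converged: s is (numerically) a fixed point
            break
    return s
```

With weights small enough for the dynamics to be contracting, the loop settles at a state where the update vanishes; it is at this fixed point that the two algorithms diverge in their second phase.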
  • References (6)

    L. B. Almeida. A learning rule for asynchronous perceptrons with feedback in a combinatorial environment. In Proceedings of the IEEE First International Conference on Neural Networks, volume 2, pages 609-618, San Diego, 1987. IEEE, New York.

    F. Crick. The recent excitement about neural networks. Nature, 337(6203):129-132, 1989.

    G. E. Hinton and J. L. McClelland. Learning representations by recirculation. In D. Z. Anderson, editor, Neural Information Processing Systems, pages 358-366. American Institute of Physics, 1988.

    J. J. Hopfield. Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences, 81(10):3088-3092, 1984.

    F. J. Pineda. Generalization of back-propagation to recurrent neural networks. Physical Review Letters, 59(19):2229-2232, 1987.

    B. Scellier and Y. Bengio. Equilibrium propagation: Bridging the gap between energy-based models and backpropagation. Frontiers in Computational Neuroscience, 11, 2017.
