A single hidden layer feedforward network with only one neuron in the hidden layer can approximate any univariate function

Article, Preprint (English, Open Access)
Guliyev, Namig; Ismailov, Vugar
(2015)
  • Publisher: Massachusetts Institute of Technology Press (MIT Press)
  • Related identifiers: doi: 10.1162/NECO_a_00849
  • Subject: Sigmoidal functions | λ-monotonicity | Continued fractions | Bernstein polynomials | Calkin–Wilf sequence | Smooth transition function | MSC: 41A30, 65D15, 92B20 | arXiv: cs.NE (Neural and Evolutionary Computing); cs.IT (Information Theory); math.NA (Numerical Analysis); math.IT (Information Theory) | ACM: I.2.6.2 Connectionism and neural nets; I.5.1.3 Neural nets; C.1.3.7 Neural nets; F.1.1.4 Self-modifying machines (e.g., neural networks)

The possibility of approximating a continuous function on a compact subset of the real line by a feedforward single hidden layer neural network with a sigmoidal activation function has been studied in many papers. Such networks can approximate an arbitrary continuous function …
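For a concrete picture of the architecture the abstract refers to, the sketch below evaluates the standard single hidden layer form N(x) = Σᵢ cᵢ σ(wᵢ x + θᵢ) on a univariate input. This is only an illustration under assumed names (shallow_net, logistic), using the ordinary logistic sigmoid; the paper's result concerns a specially constructed sigmoidal activation for which a single hidden neuron (r = 1) already suffices.

```python
# Minimal sketch, not the authors' construction: evaluate a single hidden
# layer feedforward network N(x) = sum_i c_i * sigma(w_i * x + theta_i)
# on a univariate input. Function and parameter names are illustrative.
import numpy as np

def logistic(t):
    # A familiar example of a sigmoidal activation; the paper constructs a
    # different, special sigmoidal function with stronger properties.
    return 1.0 / (1.0 + np.exp(-t))

def shallow_net(x, weights, thresholds, coefficients, sigma=logistic):
    # x: 1-D array of inputs; weights, thresholds, coefficients: length-r arrays.
    x = np.asarray(x, dtype=float)
    hidden = sigma(np.outer(x, weights) + thresholds)   # shape (len(x), r)
    return hidden @ np.asarray(coefficients, dtype=float)

# The case studied in the paper has r = 1 (one hidden neuron), shown here
# with the ordinary logistic activation purely for illustration.
xs = np.linspace(-1.0, 1.0, 5)
print(shallow_net(xs, weights=[3.0], thresholds=[0.5], coefficients=[2.0]))
```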
