On the approximation by single hidden layer feedforward neural networks with fixed weights

Article / Preprint, English, Open Access
Guliyev, Namig; Ismailov, Vugar (2018)
  • Publisher: Elsevier
  • Related identifiers: doi: 10.1016/j.neunet.2017.12.007
  • Subject: activation function | sigmoidal function | approximation | weight | hidden layer | feedforward neural network | ACM I.2.6.2 (Connectionism and neural nets) | ACM I.5.1.3 (Neural nets) | ACM C.1.3.7 (Neural nets) | ACM F.1.1.4 (Self-modifying machines, e.g. neural networks) | math.NA (Numerical Analysis) | math.IT / cs.IT (Information Theory) | cs.NE (Neural and Evolutionary Computing) | 2010 MSC: 41A30, 41A63, 65D15, 68T05, 92B20
    arXiv: Quantitative Biology :: Neurons and Cognition

Feedforward neural networks have wide applicability in various disciplines of science due to their universal approximation property. Several authors have shown that single hidden layer feedforward neural networks (SLFNs) with fixed weights still possess the universal approximation property, provided that the approximated functions are univariate. However, these results place no restriction on the number of neurons in the hidden layer: the larger this number, the more accurately the network can be expected to approximate. In this note, we constructively prove that SLFNs with the fixed weight 1 and only two neurons in the hidden layer can approximate any continuous function on a compact subset of the real line. The applicability of this result is demonstrated in various numerical examples. Finally, we show that SLFNs with fixed weights cannot approximate all continuous multivariate functions.
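An SLFN of the kind described in the abstract has the standard form N(x) = c1·σ(x − t1) + c2·σ(x − t2): the inner weights are fixed to 1, so only the outer coefficients c1, c2 and the thresholds t1, t2 remain free. The sketch below (Python, assuming NumPy and SciPy are available) illustrates this network form only: it uses the ordinary logistic sigmoid and a generic least-squares fit, whereas the paper's approximation guarantee relies on a specially constructed sigmoidal activation, so this off-the-shelf stand-in will not reach arbitrary accuracy.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x):
    # Ordinary logistic sigmoid -- an illustrative stand-in for the
    # specially constructed activation used in the paper.
    return 1.0 / (1.0 + np.exp(-x))

def slfn(x, c1, c2, t1, t2):
    # Two hidden neurons with inner weight fixed to 1: each unit
    # receives x - t rather than w*x - t.
    return c1 * sigmoid(x - t1) + c2 * sigmoid(x - t2)

# Target: a continuous function on the compact set [0, 1].
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x)

# Fit only the outer coefficients and thresholds
# (the initial guess p0 is an arbitrary, hypothetical choice).
params, _ = curve_fit(slfn, x, y, p0=[1.0, -1.0, 0.3, 0.7])
print("uniform error:", np.max(np.abs(slfn(x, *params) - y)))
```

The uniform (sup-norm) error printed at the end matches the sense of approximation in the result: closeness is measured over the whole compact set, not on average.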