Powered by OpenAIRE graph
ZENODO
Conference object · 2025 · License: CC BY · Data sources: ZENODO
Article · 2025 · License: CC BY · Data sources: Datacite
(2 versions)

Remarks on the Universal Approximation Property of Feedforward Neural Networks

Authors: Kupka, Jiri; Alijani, Zahra; Števuliaková, Petra

Abstract

This paper presents a structured overview and novel insights into the universal approximation property of feedforward neural networks. We categorize existing results based on the characteristics of activation functions, ranging from strictly monotonic to weakly monotonic and continuous almost everywhere, and examine their implications under architectural constraints such as bounded depth and width. Building on classical results by Cybenko [1], Hornik [2], and Maiorov [3], we introduce new activation functions that enable even simpler neural network architectures to retain universal approximation capabilities. Notably, we demonstrate that single-layer networks with only two neurons and fixed weights can approximate any continuous univariate function, and that two-layer networks can extend this capability to multivariate functions. These findings refine the known lower bounds of neural network complexity and offer constructive approaches that preserve strict monotonicity, improving upon prior work that relied on relaxed monotonicity conditions. Our results contribute to the theoretical foundation of neural networks and open pathways for designing minimal yet expressive architectures.
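The setting the abstract describes can be illustrated with a classical Cybenko-style demonstration: a single hidden layer of sigmoid units with fixed random inner weights, where only the output weights are fitted, approximates a continuous univariate function. Note this is a minimal sketch of the general universal approximation setting, not the paper's two-neuron construction, which relies on specially designed activation functions not reproduced here; the hidden width of 50 and the target function are illustrative choices.

```python
import numpy as np

# Sketch of single-hidden-layer universal approximation (Cybenko-style).
# Inner weights/biases are fixed at random; only output weights are fitted,
# loosely mirroring the "fixed weights" theme of the abstract. This is NOT
# the paper's two-neuron construction.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_hidden = 50
w = rng.normal(scale=10.0, size=n_hidden)       # fixed random inner weights
b = rng.uniform(-10.0, 10.0, size=n_hidden)     # fixed random biases

x = np.linspace(0.0, 1.0, 200)
target = np.sin(2 * np.pi * x)                  # continuous function to approximate

H = sigmoid(np.outer(x, w) + b)                 # hidden-layer features, shape (200, 50)
c, *_ = np.linalg.lstsq(H, target, rcond=None)  # fit only the output weights

approx = H @ c
max_err = np.max(np.abs(approx - target))
print(f"max abs error: {max_err:.4f}")
```

Increasing `n_hidden` drives the uniform error toward zero on the interval, which is exactly the behavior the universal approximation theorems guarantee; the paper's contribution is achieving this with far fewer neurons via tailored activation functions.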

Keywords

Universal Approximation Theorem, Neural Network, Activation Function

  • BIP! impact indicators
    selected citations: 0 (citations derived from selected sources; an alternative to the "Influence" indicator)
    popularity: Average (the "current" impact/attention of the article in the research community, based on the underlying citation network)
    influence: Average (the overall/total impact of the article in the research community, based on the underlying citation network, diachronically)
    impulse: Average (the initial momentum of the article directly after its publication, based on the underlying citation network)
Access route: Green Open Access