
Abstract

This paper presents a structured overview and novel insights into the universal approximation property of feedforward neural networks. We categorize existing results based on the characteristics of activation functions, ranging from strictly monotonic to weakly monotonic and continuous almost everywhere, and examine their implications under architectural constraints such as bounded depth and width. Building on classical results by Cybenko [1], Hornik [2], and Maiorov [3], we introduce new activation functions that enable even simpler neural network architectures to retain universal approximation capabilities. Notably, we demonstrate that single-layer networks with only two neurons and fixed weights can approximate any continuous univariate function, and that two-layer networks can extend this capability to multivariate functions. These findings refine the known lower bounds of neural network complexity and offer constructive approaches that preserve strict monotonicity, improving upon prior work that relied on relaxed monotonicity conditions. Our results contribute to the theoretical foundation of neural networks and open pathways for designing minimal yet expressive architectures.
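As background for the universal approximation property discussed above, the following sketch illustrates the classical constructive idea for univariate functions: a one-hidden-layer ReLU network can represent any piecewise-linear interpolant exactly, and such interpolants approximate any continuous function on a compact interval arbitrarily well. This is a standard textbook construction, not the two-neuron architecture introduced in this paper; the function names and knot placement here are illustrative choices.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def plu_network(knots, values):
    """One-hidden-layer ReLU network that reproduces the piecewise-linear
    interpolant of (knots, values).  Hidden unit i computes relu(x - knots[i]);
    the output weights encode the slope changes at each knot."""
    slopes = np.diff(values) / np.diff(knots)
    # first unit carries the initial slope; later units carry slope changes
    coeffs = np.concatenate(([slopes[0]], np.diff(slopes)))
    biases = knots[:-1]
    def net(x):
        x = np.asarray(x, dtype=float)
        return values[0] + relu(x[..., None] - biases) @ coeffs
    return net

# Approximate sin on [0, pi] with 20 hidden ReLU units.
knots = np.linspace(0.0, np.pi, 21)
net = plu_network(knots, np.sin(knots))
grid = np.linspace(0.0, np.pi, 1001)
err = np.max(np.abs(net(grid) - np.sin(grid)))
```

Refining the knot grid drives `err` toward zero, which is the quantitative content of universal approximation in the univariate case; the results summarized in the abstract concern how far the width, depth, and activation choice in such constructions can be reduced.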
Universal Approximation Theorem, Neural Network, Activation Function
