
It is well-known that any matrix $A$ has an LU decomposition. Less well-known is the fact that it has a 'Toeplitz decomposition' $A = T_1 T_2 \cdots T_r$ where the $T_i$'s are Toeplitz matrices. We will prove that any continuous function $f : \mathbb{R}^n \to \mathbb{R}^m$ may be approximated to arbitrary accuracy by a neural network of the form $L_1 \sigma_1 U_1 \sigma_2 L_2 \sigma_3 U_2 \cdots L_r \sigma_{2r-1} U_r$, i.e., one whose weight matrices alternate between lower and upper triangular matrices, where $\sigma_i(x) := \sigma(x - b_i)$ for some bias vector $b_i$, and the activation $\sigma$ may be chosen to be essentially any uniformly continuous nonpolynomial function. The same result holds with Toeplitz matrices, i.e., $f \approx T_1 \sigma_1 T_2 \sigma_2 \cdots \sigma_{r-1} T_r$ to arbitrary accuracy, and likewise for Hankel matrices. A consequence of our Toeplitz result is a fixed-width universal approximation theorem for convolutional neural networks, which so far have had only arbitrary-width versions. Since our results apply in particular when $f$ is itself a general neural network, we may regard them as LU and Toeplitz decompositions of a neural network. The practical implication is that one may vastly reduce the number of weight parameters in a neural network without sacrificing its power of universal approximation. We present several experiments on real data sets showing that imposing such structures on the weight matrices sharply reduces the number of training parameters with almost no noticeable effect on test accuracy.
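The parameter savings claimed above are easy to see concretely: an $n \times n$ Toeplitz matrix is constant along each diagonal, so it carries only $2n - 1$ free parameters instead of $n^2$. The following is a minimal NumPy sketch (not the authors' implementation) of a network in the factored form $T_r \, \sigma_{r-1} \cdots \sigma_1 \, T_1$ described in the abstract, with ReLU chosen as one admissible uniformly continuous nonpolynomial activation; the function names `toeplitz_layer` and `toeplitz_net` are illustrative.

```python
import numpy as np

def toeplitz_layer(params, n):
    """Build an n x n Toeplitz matrix from 2n - 1 free parameters.

    params[k] is the value on the diagonal with offset k - (n - 1),
    so T[i, j] = params[j - i + n - 1]; each diagonal is constant.
    """
    assert params.shape == (2 * n - 1,)
    idx = np.arange(n)
    return params[idx[None, :] - idx[:, None] + n - 1]

def relu(x):
    # One admissible activation sigma (uniformly continuous, nonpolynomial).
    return np.maximum(x, 0.0)

def toeplitz_net(x, layers, biases):
    """Evaluate f(x) = T_r sigma(... sigma(T_1 x - b_1) ...),
    the alternating Toeplitz/activation form from the abstract."""
    for T, b in zip(layers[:-1], biases):
        x = relu(T @ x - b)     # sigma_i(y) := sigma(y - b_i)
    return layers[-1] @ x       # no activation after the last factor
```

With $r$ Toeplitz factors the network has $r(2n - 1)$ weight parameters rather than $r n^2$, which is the source of the sharp reduction in training parameters reported in the experiments.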
14 pages, 3 figures
Numerical computation of eigenvalues and eigenvectors of matrices, FOS: Computer and information sciences, Computer Science - Machine Learning, Toeplitz, Cauchy, and related matrices, Learning and adaptive systems in artificial intelligence, universal approximation, Numerical Analysis (math.NA), neural networks, 68T07, 41A30, 41A46, 15B05, Factorization of matrices, Approximation by other special function classes, Machine Learning (cs.LG), Hankel matrices, Toeplitz matrices, convolutional neural networks, FOS: Mathematics, triangular matrices, Mathematics - Numerical Analysis
