
We propose the Polynomial Mirror, a theoretical framework for algebraically approximating any trained neural network using only polynomial functions. By replacing activation functions and affine transformations with polynomial approximations, we construct a symbolic representation of the original network. Relying on classical results in approximation theory, this framework invites rigorous exploration of neural networks from a purely algebraic and interpretable perspective. Beyond interpretability, the framework also enables neuron-level customization: since each activation function is replaced by a polynomial, the shape of each neuron's response becomes tunable. This suggests a potential path toward enhancing model performance through fine-grained control of activation behavior. While this remains a predicted consequence, we invite further research into its effectiveness and its relation to prior work on learned activation functions.
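A minimal sketch of the idea, not the paper's exact construction: approximate tanh by a least-squares Chebyshev polynomial on a bounded domain, then "mirror" a tiny one-hidden-layer network by swapping the polynomial in for the activation. The weights below are illustrative placeholders, not a trained model.

```python
import numpy as np

# Fit a degree-11 Chebyshev least-squares polynomial to tanh on [-3, 3].
xs = np.linspace(-3.0, 3.0, 400)
poly_act = np.polynomial.Chebyshev.fit(xs, np.tanh(xs), deg=11)

# Approximation quality of the activation surrogate on its domain.
max_err = float(np.max(np.abs(poly_act(xs) - np.tanh(xs))))

# Tiny fixed "trained" network: y = W2 @ tanh(W1 @ x + b1) + b2
# (hypothetical weights, chosen so pre-activations stay inside [-3, 3]).
W1 = np.array([[0.5, -0.3], [0.2, 0.8], [-0.6, 0.1], [0.4, 0.4]])
b1 = np.array([0.1, -0.2, 0.05, 0.0])
W2 = np.array([[1.0, -0.5, 0.3, 0.7]])
b2 = np.array([0.2])

def net(x, act):
    """Evaluate the network with a pluggable activation."""
    return W2 @ act(W1 @ x + b1) + b2

# The polynomial mirror: the same affine maps with the polynomial activation.
x = np.array([1.0, -2.0])
gap = float(np.abs(net(x, np.tanh) - net(x, poly_act)))
print(max_err, gap)  # both small: the mirror tracks the original network
```

Because the affine layers are already polynomial (degree 1), replacing the activation yields a fully polynomial, hence symbolically manipulable, function of the input on the fitted domain; the output gap is bounded by the activation error times the sum of the outgoing weight magnitudes.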
