
doi: 10.3390/math13132210
This systematic review surveys modern optimization methods for machine learning, distinguishing gradient-based techniques, which exploit derivative information, from population-based approaches, which rely on stochastic search. Key innovations focus on enhanced regularization, adaptive control mechanisms, and biologically inspired strategies that address challenges such as scaling to large models, navigating complex non-convex landscapes, and adapting to dynamic constraints. These methods underpin core ML tasks including model training, hyperparameter tuning, and feature selection. While significant progress is evident, limitations in scalability and theoretical guarantees persist, directing future work toward more robust and adaptive frameworks to advance AI applications in areas such as autonomous systems and scientific discovery.
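The two optimizer families the review contrasts can be illustrated on a toy problem. The sketch below (an illustrative example, not code from the reviewed article) minimizes a simple quadratic two ways: gradient descent, which follows derivative information, and a minimal (1+1) evolution strategy, a population-style stochastic search that needs only objective values. The function, step sizes, and iteration counts are all assumptions chosen for the demonstration.

```python
import random

# Toy objective with its minimum at x = 3 (illustrative assumption).
def f(x):
    return (x - 3.0) ** 2

def grad_f(x):
    return 2.0 * (x - 3.0)

def gradient_descent(x0, lr=0.1, steps=100):
    """Gradient-based: repeatedly step against the derivative."""
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

def one_plus_one_es(x0, sigma=0.5, steps=500, seed=0):
    """Population-style stochastic search: a minimal (1+1) evolution
    strategy that mutates the current best and keeps improvements.
    It uses only f(x), never its gradient."""
    rng = random.Random(seed)
    best, best_val = x0, f(x0)
    for _ in range(steps):
        cand = best + rng.gauss(0.0, sigma)
        cand_val = f(cand)
        if cand_val < best_val:
            best, best_val = cand, cand_val
    return best

print(gradient_descent(0.0))  # converges close to 3.0
print(one_plus_one_es(0.0))   # also approaches 3.0, gradient-free
```

On smooth problems the gradient-based route converges far faster; the stochastic search trades speed for applicability to non-differentiable or noisy objectives, which is the trade-off the review examines at scale.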
machine learning, swarm intelligence, QA1-939, optimization methods, deep learning, gradient-based optimization, Mathematics
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 4 |
| Popularity | The "current" impact/attention (the "hype") of the article in the research community at large, based on the underlying citation network. | Top 10% |
| Influence | The overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | The initial momentum of the article directly after its publication, based on the underlying citation network. | Average |
