We introduce a general framework for nonlinear stochastic gradient descent (SGD) for scenarios in which the gradient noise exhibits heavy tails. The proposed framework subsumes several popular nonlinearity choices, such as clipped, normalized, signed, or quantized gradients, and also accommodates novel nonlinearities. For the considered class of methods, we establish strong convergence guarantees, assuming a strongly convex cost function with Lipschitz continuous gradients, under very general assumptions on the gradient noise. Most notably, we show that, for a nonlinearity with bounded outputs and gradient noise that may not have finite moments of order greater than one, the nonlinear SGD's mean squared error (MSE), or equivalently the expected cost function's optimality gap, converges to zero at rate O(1/t^ζ), ζ ∈ (0, 1). In contrast, in the same noise setting, linear SGD generates a sequence with unbounded variances. Furthermore, for general nonlinearities that can be decoupled component-wise, as well as for a class of joint nonlinearities, we show that the nonlinear SGD asymptotically (locally) achieves an O(1/t) rate in the weak convergence sense and explicitly quantify the corresponding asymptotic variance. Experiments show that, while our framework is more general than existing studies of SGD under heavy-tailed noise, several easy-to-implement nonlinearities from our framework are competitive with state-of-the-art alternatives on real data sets with heavy-tailed noise.
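Informally, the update analyzed in the abstract takes the form x_{t+1} = x_t − α_t Ψ(∇f(x_t) + ν_t), where Ψ is the chosen nonlinearity and ν_t is heavy-tailed noise. The snippet below is a minimal illustrative sketch of this scheme, assuming a quadratic strongly convex cost, Student-t noise with infinite variance, and three common nonlinearity choices (clip, sign, normalize); the function names, step-size schedule, and parameter values are assumptions for illustration, not the paper's reference implementation.

```python
# Minimal sketch of nonlinear SGD under heavy-tailed gradient noise.
# All names and constants here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Strongly convex quadratic cost f(x) = 0.5 * x^T A x, minimizer x* = 0.
d = 10
A = np.diag(np.linspace(1.0, 5.0, d))

def grad(x):
    return A @ x

def heavy_tail_noise(dim):
    # Student-t noise with 1.2 degrees of freedom: finite mean,
    # infinite variance (an illustrative heavy-tailed choice).
    return rng.standard_t(df=1.2, size=dim)

# Component-wise nonlinearities (popular special cases of the framework).
def psi_clip(g, m=1.0):
    return np.clip(g, -m, m)               # clipped gradient

def psi_sign(g):
    return np.sign(g)                        # signed gradient

# A joint (non-component-wise) nonlinearity.
def psi_normalize(g, eps=1e-12):
    return g / (np.linalg.norm(g) + eps)     # normalized gradient

def nonlinear_sgd(psi, x0, T=20000, a=1.0, delta=0.75):
    # x_{t+1} = x_t - alpha_t * psi(grad f(x_t) + noise), alpha_t = a/(t+1)^delta
    x = x0.copy()
    for t in range(T):
        alpha = a / (t + 1) ** delta
        noisy_grad = grad(x) + heavy_tail_noise(d)
        x = x - alpha * psi(noisy_grad)
    return x

x0 = rng.normal(size=d) * 5.0
for name, psi in [("clip", psi_clip), ("sign", psi_sign), ("normalize", psi_normalize)]:
    x_final = nonlinear_sgd(psi, x0)
    print(f"{name:9s} final squared error: {np.sum(x_final**2):.4f}")
```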
FOS: Computer and information sciences, Computer Science - Machine Learning, Computer Science - Information Theory, Information Theory (cs.IT), asymptotic normality, Stochastic optimization, nonlinear mapping, heavy-tail noise, Machine Learning (cs.LG), convergence rate, Optimization and Control (math.OC), stochastic gradient descent, stochastic approximation, FOS: Mathematics, mean square analysis, Mathematics - Optimization and Control
| Indicator | Description | Value |
| --- | --- | --- |
| Citations | Alternative to the "Influence" indicator; also reflects the overall/total impact of the article in the research community, based on the underlying citation network (diachronically). | 1 |
| Popularity | Reflects the "current" impact/attention (the "hype") of the article in the research community, based on the underlying citation network. | Average |
| Influence | Reflects the overall/total impact of the article in the research community, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of the article directly after its publication, based on the underlying citation network. | Average |
| Views | Provided by UsageCounts. | 4 |
| Downloads | Provided by UsageCounts. | 13 |

Views and downloads provided by UsageCounts.