
Summary: This paper considers online classification learning algorithms based on regularization schemes in reproducing kernel Hilbert spaces associated with general convex loss functions. A novel capacity-independent approach is presented. It verifies the strong convergence of the algorithm under a very weak assumption on the step sizes and yields satisfactory convergence rates for polynomially decaying step sizes. Explicit learning rates with respect to the misclassification error are given in terms of the choice of step sizes and the regularization parameter (depending on the sample size). Error bounds associated with the hinge loss, the least squares loss, and the support vector machine q-norm loss are presented to illustrate the method.
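The class of algorithms the summary describes can be illustrated by a minimal sketch: online gradient descent in an RKHS with a convex loss (here the hinge loss), a regularization parameter, and polynomially decaying step sizes. This is not the paper's exact scheme; the Gaussian kernel, the parameter values, and the function names (`online_kernel_hinge`, `gaussian_kernel`) are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    # Illustrative kernel choice; any positive-definite kernel defines an RKHS.
    return np.exp(-np.linalg.norm(x - z) ** 2 / (2 * sigma ** 2))

def online_kernel_hinge(X, y, lam=0.01, theta=0.5, eta0=1.0, sigma=1.0):
    """Sketch of online regularized classification in an RKHS with the hinge loss.

    Step sizes decay polynomially: eta_t = eta0 * t^(-theta).
    The iterate f_t is stored via its kernel-expansion coefficients alpha,
    so f_t(x) = sum_i alpha[i] * K(X[i], x).
    """
    n = len(y)
    alpha = np.zeros(n)
    for t in range(n):
        eta = eta0 * (t + 1) ** (-theta)
        # Evaluate the current iterate at the new sample point.
        f_xt = sum(alpha[i] * gaussian_kernel(X[i], X[t], sigma) for i in range(t))
        # Shrinkage from the regularization term lam * ||f||^2 / 2.
        alpha[:t] *= (1 - eta * lam)
        # Hinge-loss subgradient step: update only on a margin violation.
        if y[t] * f_xt < 1:
            alpha[t] = eta * y[t]
    def f(x):
        return sum(alpha[i] * gaussian_kernel(X[i], x, sigma) for i in range(n))
    return f
```

For a linearly separable stream (e.g. points near +1 labeled +1 and points near -1 labeled -1), the returned function has the correct sign on each cluster; the convergence analysis in the paper concerns how such iterates approach the regularizing function as the step sizes decay.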
Classification and discrimination; cluster analysis (statistical aspects), Learning and adaptive systems in artificial intelligence
| Indicator | Description | Value |
|---|---|---|
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 70 |
| Popularity | Reflects the current impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Top 10% |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Top 10% |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Top 10% |
