
Abstract

This paper focuses on the problem of sparse learning-to-rank, where the learned ranking models usually have very few non-zero coefficients. An exponential gradient algorithm is proposed to learn sparse models for learning-to-rank, which can be formulated as a convex optimization problem with an ℓ1 constraint. The proposed algorithm has a competitive theoretical worst-case performance guarantee: it obtains an ϵ-accurate solution after O(1/ϵ) iterations. An early stopping criterion based on Fenchel duality is proposed to make the algorithm more efficient in practice. Extensive experiments are conducted on several benchmark datasets. The results demonstrate that a sparse ranking model can significantly improve the accuracy of ranking prediction compared to dense models, and that the proposed algorithm shows stable and competitive performance compared to several state-of-the-art baseline algorithms.
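As a rough illustration of the kind of update the abstract describes, here is a minimal Python sketch of exponentiated gradient descent over an ℓ1 ball, using the standard EG± reparameterisation w = r(p − q) with (p, q) jointly on the probability simplex. The function names, the step size `eta`, the stand-in least-squares loss, and the crude stopping check are all illustrative assumptions; the paper's actual algorithm and its Fenchel-duality early stopping criterion may differ in detail.

```python
import numpy as np

def exponentiated_gradient(grad_fn, dim, radius=1.0, eta=0.1, n_iters=100, tol=1e-6):
    """Sketch of exponentiated gradient over the l1 ball of a given radius.

    Represents w = radius * (p - q) with (p, q) jointly on the probability
    simplex (the classic EG+- trick), so w always satisfies ||w||_1 <= radius.
    grad_fn(w) should return the gradient of the convex ranking loss at w.
    This is an illustrative sketch, not the paper's implementation.
    """
    p = np.full(dim, 0.5 / dim)   # uniform start over the 2*dim coordinates
    q = np.full(dim, 0.5 / dim)
    for _ in range(n_iters):
        w = radius * (p - q)
        g = grad_fn(w)
        # Multiplicative update plus joint renormalisation keeps (p, q) on
        # the simplex; the paper instead stops early via a Fenchel duality
        # gap, which we approximate here with a naive change-in-w check.
        p_new = p * np.exp(-eta * radius * g)
        q_new = q * np.exp(+eta * radius * g)
        z = p_new.sum() + q_new.sum()
        p, q = p_new / z, q_new / z
        if np.max(np.abs(radius * (p - q) - w)) < tol:
            break
    return radius * (p - q)

# Illustrative use: a least-squares loss as a stand-in for a ranking loss.
rng = np.random.default_rng(0)
X, y = rng.standard_normal((50, 20)), rng.standard_normal(50)
w = exponentiated_gradient(lambda w: 2 * X.T @ (X @ w - y) / len(y), dim=20)
```

The multiplicative form of the update tends to drive many coordinates of (p, q) toward zero, which is consistent with the sparse ranking models the abstract reports.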
