
We introduce a method that incorporates robustness into one of the main building blocks of sparse modeling: dictionary learning. In particular, we exploit correntropy to compute the principal components in cases where outliers would be detrimental without proper care. This is then integrated into one of the most widely used dictionary learning tools, K-SVD; the result is Correntropy K-SVD, or CK-SVD, a method based on the Maximum Correntropy Criterion (MCC) instead of the more limited Minimum Squared Error (MSE) approach. The optimization is performed with the well-known Half-Quadratic (HQ) technique, which allows a fast and efficient implementation. The results show the value of this work: CK-SVD not only outperforms K-SVD, but also removes one of the main assumptions made when learning overcomplete representations, namely the availability of untampered, noiseless, and outlier-free training samples.
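The core mechanism the abstract describes, replacing the MSE criterion with the MCC and optimizing it via Half-Quadratic alternation, can be sketched in a simple setting. The snippet below is an illustrative sketch, not the paper's CK-SVD: it applies HQ-style iterative reweighting to a plain least-squares problem, where each residual receives a Gaussian correntropy weight so that outliers are effectively down-weighted. The function name, the kernel bandwidth `sigma`, and the synthetic data are all assumptions for illustration.

```python
import numpy as np

def mcc_hq_solve(A, y, sigma=0.5, n_iter=20):
    """Minimize the correntropy loss sum_i (1 - exp(-r_i^2 / (2 sigma^2))),
    r = A @ x - y, by Half-Quadratic alternation: fix weights
    w_i = exp(-r_i^2 / (2 sigma^2)), then solve the induced weighted
    least-squares problem, and repeat. (Illustrative sketch only.)"""
    n, d = A.shape
    x = np.linalg.lstsq(A, y, rcond=None)[0]       # MSE initialization
    for _ in range(n_iter):
        r = A @ x - y
        w = np.exp(-r**2 / (2 * sigma**2))         # correntropy weights in [0, 1]
        Aw = A * w[:, None]                        # row-weighted design matrix
        x = np.linalg.solve(A.T @ Aw + 1e-10 * np.eye(d), Aw.T @ y)
    return x

# Synthetic demo: 10% of the samples are grossly corrupted.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 3))
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true + 0.01 * rng.normal(size=200)
y[:20] += 10.0                                     # outliers

x_mse = np.linalg.lstsq(A, y, rcond=None)[0]       # biased by the outliers
x_mcc = mcc_hq_solve(A, y)                         # outliers receive near-zero weight
```

Because each outlier's weight decays as `exp(-r^2 / (2 sigma^2))`, gross errors contribute almost nothing to the weighted normal equations, which is the same robustness mechanism CK-SVD brings to the dictionary update.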
