Doctoral thesis, 2017
Data sources: INRIA2
Efficient Sequential Learning in Structured and Constrained Environments

Author: Calandriello, Daniele

Abstract

The main advantage of non-parametric models is that the accuracy of the model (its degrees of freedom) adapts to the number of samples. The main drawback is the so-called "curse of kernelization": to learn the model we must first compute a similarity matrix among all samples, which requires quadratic space and time and is infeasible for large datasets. Nonetheless, the underlying effective dimension (effective d.o.f.) of the dataset is often much smaller than its size, and we can replace the dataset with a subset (dictionary) of highly informative samples. Unfortunately, fast data-oblivious selection methods (e.g., uniform sampling) almost always discard useful information, while data-adaptive methods that provably construct an accurate dictionary, such as ridge leverage score (RLS) sampling, have a quadratic time/space cost.

In this thesis we introduce a new single-pass streaming RLS sampling approach that constructs the dictionary sequentially: each step compares a new sample only with the current intermediate dictionary, not with all past samples. We prove that the size of all intermediate dictionaries scales only with the effective dimension of the dataset, which guarantees a per-step time and space complexity independent of the number of samples. This reduces the overall time required to construct provably accurate dictionaries from quadratic to near-linear, or even logarithmic when parallelized.

Finally, for many non-parametric learning problems (e.g., K-PCA, graph SSL, online kernel learning), we show that the generated dictionaries can be used to compute approximate solutions in near-linear time that are both provably accurate and empirically competitive.
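To make the sequential construction concrete, the following is a minimal Python sketch of the general streaming idea, not the exact algorithm from the thesis: estimate the ridge leverage score of each incoming sample against the current dictionary alone, and keep the sample with probability proportional to that score. The Gaussian kernel, the regularization `lam`, and the oversampling factor `q_bar` are illustrative assumptions; actual streaming RLS samplers also re-weight and drop dictionary atoms to keep the dictionary size bounded, a step omitted here.

```python
import numpy as np

def gauss_kernel(X, Y, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def streaming_rls_dictionary(X, lam=1.0, q_bar=3.0, rng=None):
    """One-pass dictionary construction: each new sample is compared
    only with the current dictionary, never with all past samples."""
    rng = np.random.default_rng(rng)
    dict_idx = [0]  # seed the dictionary with the first sample
    for t in range(1, len(X)):
        D = X[dict_idx]
        x = X[t:t + 1]
        K_DD = gauss_kernel(D, D)
        k_xD = gauss_kernel(x, D)
        k_xx = gauss_kernel(x, x)[0, 0]
        # Approximate ridge leverage score of x w.r.t. the dictionary:
        # tau ~= (k(x,x) - k_xD (K_DD + lam I)^{-1} k_Dx) / lam
        ridge = np.linalg.solve(K_DD + lam * np.eye(len(D)), k_xD.T)
        tau = (k_xx - k_xD @ ridge).item() / lam
        p = min(1.0, q_bar * tau)  # oversampled inclusion probability
        if rng.random() < p:
            dict_idx.append(t)
    return dict_idx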

The main advantage of non-parametric learning methods lies in the fact that the number of degrees of freedom of the learned model adapts automatically to the number of samples. These methods are, however, limited by the "curse of kernelization": learning the model first requires computing a similarity matrix between all samples, whose cost is quadratic in time and space and quickly becomes prohibitive for large datasets. However, the "effective" dimension of a dataset is often much smaller than the number of samples itself. It is then possible to replace the actual dataset with a reduced dataset (called a "dictionary") composed exclusively of informative samples. Unfortunately, methods with theoretical guarantees that construct such dictionaries, such as ridge leverage score (RLS) sampling, also have quadratic complexity.

In this thesis we present a new RLS sampling method that updates the dictionary sequentially, comparing each new sample only with the current dictionary rather than with all past samples. We show that the size of every dictionary built this way is of the order of the effective dimension of the final dataset, thus guaranteeing a per-step time and space complexity independent of the total number of samples. The method has the further advantage of being parallelizable.

Finally, we show that many non-parametric learning problems can be solved approximately using our method.
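The claim that dictionaries yield near-linear-time approximate solutions can be illustrated with Nyström-type kernel ridge regression (the "subset of regressors" approach), one standard use of such a dictionary. This is a minimal sketch under assumed names and parameters (a `dict_idx` such as the sampler above produces, a Gaussian kernel, regularization `lam`); restricting the solution to the span of the m dictionary atoms drops the cost from the O(n^3) of exact kernel ridge regression to O(n m^2).

```python
import numpy as np

def gauss_kernel(X, Y, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def nystrom_krr(X, y, dict_idx, lam=1.0, sigma=1.0):
    """Nystrom kernel ridge regression: fit f(x) = k(x, D) @ alpha
    using only the m dictionary atoms D instead of all n samples."""
    D = X[dict_idx]
    K_nD = gauss_kernel(X, D, sigma)   # (n, m) cross-kernel
    K_DD = gauss_kernel(D, D, sigma)   # (m, m) dictionary kernel
    # Normal equations of the regularized least-squares problem:
    # (K_nD^T K_nD + lam * K_DD) alpha = K_nD^T y
    alpha = np.linalg.solve(K_nD.T @ K_nD + lam * K_DD, K_nD.T @ y)
    def predict(X_test):
        return gauss_kernel(X_test, D, sigma) @ alpha
    return alpha, predict
```

When `dict_idx` comes from an RLS sampler, m scales with the effective dimension of the dataset, which is what makes the overall pipeline near-linear in n.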

Keywords

Distributed learning, Stochastic gradient method, Principal component analysis method, Newton and quasi-Newton methods, Sequential learning, Kernel learning, Dictionary learning, Nyström-type algorithm, [INFO.INFO-LG] Computer Science [cs]/Machine Learning [cs.LG], Low-rank matrix approximation, Learning, [STAT.ML] Statistics [stat]/Machine Learning [stat.ML], Nyström approximation, Online learning, [INFO.INFO-NA] Computer Science [cs]/Numerical Analysis [cs.NA], Semi-supervised learning, Graph Laplacian spectrum, Gaussian process

Impact indicators (BIP!):
  • Citations (derived from selected sources): 0
  • Popularity (the "current" attention an article receives in the research community, based on the underlying citation network): Average
  • Influence (the overall/total impact of an article in the research community, based on the underlying citation network, diachronically): Average
  • Impulse (the initial momentum of an article directly after its publication, based on the underlying citation network): Average
Open Access route: Green