
Self-supervision for reinforcement learning

Author: Anand, Ankesh

Abstract

This thesis aims to build better Reinforcement Learning (RL) agents by leveraging self-supervised learning. It is presented as a thesis by articles comprising three pieces of work. In the first article, we construct a benchmark based on Atari games to systematically evaluate self-supervised learning methods in RL environments. We compare an array of such methods across a suite of probing tasks to identify their strengths and weaknesses. We further show that a novel contrastive method, ST-DIM, excels at capturing most generative factors in the studied environments without needing to rely on labels or rewards. In the second article, we propose Self-Predictive Representations (SPR), which learns a self-supervised latent model of the environment dynamics alongside solving the RL task at hand. We show that SPR dramatically improves the state of the art on the challenging Atari 100k benchmark, where agents are allowed only 2 hours of real-time experience. The third article studies the role of model-based RL and self-supervised learning in the context of generalization in RL. Through careful controls, we show that planning and model-based representation learning both contribute to better generalization for the MuZero agent. We further improve MuZero with auxiliary self-supervised learning objectives, and show that this MuZero++ agent achieves state-of-the-art results on the Procgen and Metaworld benchmarks.
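
ST-DIM learns representations by contrasting temporally adjacent observations. As a rough illustration only, the sketch below shows an InfoNCE-style temporal contrastive loss in that spirit, assuming PyTorch; the function name `temporal_infonce_loss` and the temperature value are illustrative choices, and ST-DIM's additional global-local spatial contrast is omitted here.

```python
# A minimal sketch of a temporal InfoNCE loss in the spirit of ST-DIM,
# assuming PyTorch. Encoder outputs z_t, z_tp1 are batch embeddings of
# consecutive frames; shapes and temperature are illustrative.
import torch
import torch.nn.functional as F

def temporal_infonce_loss(z_t, z_tp1, temperature=0.1):
    """Contrast each embedding z_t[i] with its true successor z_tp1[i];
    the other successors in the batch serve as negatives."""
    z_t = F.normalize(z_t, dim=1)
    z_tp1 = F.normalize(z_tp1, dim=1)
    logits = z_t @ z_tp1.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(z_t.size(0), device=z_t.device)
    return F.cross_entropy(logits, targets)         # positives on the diagonal
```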
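
SPR's self-supervised component trains the agent to predict its own future latent representations under a learned transition model. The following is a minimal sketch of such a latent self-prediction loss, again assuming PyTorch; `spr_loss` and all module names (`encoder`, `transition`, `projector`, `predictor`, and the target branch) are hypothetical stand-ins, not the article's exact implementation.

```python
# A minimal sketch of a latent self-prediction objective in the spirit of
# SPR, assuming PyTorch. All modules are caller-supplied stand-ins; the
# target branch would typically be an EMA copy of the online networks.
import torch
import torch.nn.functional as F

def spr_loss(obs_t, action_seq, obs_targets,
             encoder, transition, projector, predictor,
             target_encoder, target_projector):
    """Roll the latent state forward through the transition model and
    match each predicted latent against the target branch's encoding of
    the true future observation via negative cosine similarity."""
    z = encoder(obs_t)
    loss = 0.0
    for action, obs_k in zip(action_seq, obs_targets):
        z = transition(z, action)                   # predicted next latent
        pred = predictor(projector(z))
        with torch.no_grad():                       # no gradients to target branch
            target = target_projector(target_encoder(obs_k))
        loss = loss - F.cosine_similarity(pred, target, dim=-1).mean()
    return loss / len(obs_targets)
```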

Keywords

Deep Learning, Self-Supervised Learning, Reinforcement Learning

  • BIP! impact indicators (provided by BIP!):
    Selected citations (derived from selected sources; an alternative to "Influence" that also reflects the overall/total impact in the research community, based on the underlying citation network, diachronically): 0
    Popularity (the "current" impact/attention of the article in the research community, based on the underlying citation network): Average
    Influence (the overall/total impact of the article in the research community, based on the underlying citation network, diachronically): Average
    Impulse (the initial momentum of the article directly after publication, based on the underlying citation network): Average