Publication · Conference object · 2021

Rational LAMOL: A Rationale-based Lifelong Learning Framework

Kasidis Kanwatchara, Thanapapas Horsuwan, Piyawat Lertvittayakumjorn, Boonserm Kijsirikul, Peerapon Vateekul
Open Access
Published: 01 Jan 2021
Publisher: Association for Computational Linguistics
Abstract

Lifelong learning (LL) aims to train a neural network on a stream of tasks while retaining knowledge from previous tasks. However, many prior attempts in NLP still suffer from the catastrophic forgetting issue, where the model completely forgets what it learned from previous tasks. In this paper, we introduce Rational LAMOL, a novel end-to-end LL framework for language models. To alleviate catastrophic forgetting, Rational LAMOL enhances LAMOL, a recent LL model, by applying critical freezing guided by human rationales. When human rationales are not available, we propose exploiting rationales generated in an unsupervised manner as substitutes. In our experiments, we tested Rational LAMOL on permutations of three datasets from the ERASER benchmark. The results show that the proposed framework outperformed vanilla LAMOL on most permutations. Furthermore, unsupervised rationale generation consistently improved overall LL performance over the baseline without relying on human-annotated rationales.
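To make "critical freezing guided by rationales" concrete, below is a minimal, hypothetical sketch assuming a GPT-2 backbone (LAMOL is GPT-2 based). The helper names (block_importance, freeze_critical_blocks) and the gradient-norm importance heuristic are illustrative assumptions, not the paper's exact algorithm: the idea shown is scoring each transformer block by how strongly it contributes to predictions on rationale tokens, then freezing the highest-scoring blocks before training on the next task.

    # Hypothetical sketch of rationale-guided critical freezing on GPT-2.
    # The scoring heuristic here is an assumption, not the authors' method.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    def block_importance(model, input_ids, rationale_mask):
        """Score each transformer block by the gradient norm of the LM loss
        restricted to rationale-token positions (a simple proxy for how much
        the block matters for rationale-relevant predictions)."""
        model.zero_grad()
        logits = model(input_ids, labels=input_ids).logits[:, :-1, :]
        targets = input_ids[:, 1:]
        mask = rationale_mask[:, 1:].float()
        loss_per_tok = torch.nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), targets.reshape(-1),
            reduction="none").view(targets.shape)
        loss = (loss_per_tok * mask).sum() / mask.sum().clamp(min=1)
        loss.backward()
        return [sum(p.grad.abs().sum().item()
                    for p in block.parameters() if p.grad is not None)
                for block in model.transformer.h]

    def freeze_critical_blocks(model, scores, k=3):
        """Freeze the k most critical blocks so later tasks cannot overwrite
        the knowledge they encode."""
        critical = sorted(range(len(scores)), key=lambda i: scores[i])[-k:]
        for i in critical:
            for p in model.transformer.h[i].parameters():
                p.requires_grad = False
        return critical

    # Usage (illustrative): score blocks on a rationale-annotated example,
    # then freeze the most critical blocks before training the next task.
    tok = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    enc = tok("the movie was wonderful because the acting felt honest",
              return_tensors="pt")
    rationale_mask = torch.zeros_like(enc["input_ids"])
    rationale_mask[:, 4:9] = 1  # arbitrary example span marked as rationale
    scores = block_importance(model, enc["input_ids"], rationale_mask)
    print(freeze_critical_blocks(model, scores, k=3))

In the paper's setting, the rationale mask would come either from human annotations (as in the ERASER datasets) or, when those are unavailable, from an unsupervised rationale generator.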

Subjects by Vocabulary

Microsoft Academic Graph classification: Artificial neural network; Baseline (configuration management); Language model; Benchmark (computing); Lifelong learning; Computer science; Forgetting; Artificial intelligence; Machine learning

Download from:
https://aclanthology.org/2021....
Conference object
License: cc-by
Providers: UnpayWall