In this work, we explore how to learn task-specific language models aimed at learning rich representations of keyphrases from text documents. We experiment with different masking strategies for pre-training transformer language models (LMs) in discriminative as well as generative settings. In the discriminative setting, we introduce a new pre-training objective, Keyphrase Boundary Infilling with Replacement (KBIR), which shows large gains in performance (up to 9.26 points in F1) over SOTA when an LM pre-trained with KBIR is fine-tuned for the task of keyphrase extraction. In the generative setting, we introduce a new pre-training setup for BART, KeyBART, which reproduces the keyphrases related to the input text in the CatSeq format instead of the denoised original input. This also leads to gains in performance (up to 4.33 points in F1@M) over SOTA for keyphrase generation. Additionally, we fine-tune the pre-trained language models on named entity recognition (NER), question answering (QA), relation extraction (RE), and abstractive summarization, and achieve performance comparable to SOTA, showing that learning rich representations of keyphrases is indeed beneficial for many other fundamental NLP tasks. As part of this zip file, we release the KBIR model, which is continually pre-trained from RoBERTa-Large, and the KeyBART model, which is continually pre-trained from BART-Large. Both models can be used in place of a RoBERTa-Large or BART-Large model in PyTorch codebases and with HuggingFace Transformers, as sketched below.
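The following is a minimal sketch of the drop-in usage described above, using only standard HuggingFace Transformers APIs (`AutoTokenizer`, `AutoModel`, `AutoModelForSeq2SeqLM`, `generate`). The local directory paths are hypothetical placeholders for wherever the contents of the released zip file were extracted; they are not part of the release itself.

```python
# A minimal sketch, assuming the zip file was extracted to ./KBIR and
# ./KeyBART (hypothetical paths), and that the checkpoints are in the
# standard HuggingFace format.
from transformers import AutoModel, AutoModelForSeq2SeqLM, AutoTokenizer

# KBIR is continually pre-trained from RoBERTa-Large, so it loads like any
# RoBERTa encoder and can be fine-tuned downstream, e.g. for keyphrase
# extraction framed as token classification.
kbir_tokenizer = AutoTokenizer.from_pretrained("./KBIR")  # hypothetical path
kbir_model = AutoModel.from_pretrained("./KBIR")

# KeyBART is continually pre-trained from BART-Large and is trained to
# produce the keyphrases of the input document in CatSeq format
# (keyphrases concatenated with a separator) rather than the denoised input.
keybart_tokenizer = AutoTokenizer.from_pretrained("./KeyBART")  # hypothetical path
keybart_model = AutoModelForSeq2SeqLM.from_pretrained("./KeyBART")

document = (
    "In this work, we explore how to learn task-specific language models "
    "aimed at learning rich representations of keyphrases from text documents."
)
inputs = keybart_tokenizer(document, return_tensors="pt", truncation=True)
outputs = keybart_model.generate(**inputs, max_length=64)
print(keybart_tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because both checkpoints keep the architecture of their base models, any existing code that constructs a RoBERTa-Large or BART-Large model via `from_pretrained` should work unchanged by pointing the path at the extracted KBIR or KeyBART directory instead.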
Keyphrase Extraction, Pre-training Objectives, Summarization, Keyphrases, Pre-trained language model, Named Entity Recognition, Keyphrase Generation, Question Answering, Relation Extraction, Natural Language Processing