Despite advances in deep learning and knowledge graphs (KGs), using language models for natural language understanding and question answering remains a challenging task. Pre-trained language models (PLMs) have shown the ability to leverage contextual information to complete cloze prompts and to perform next-sentence prediction and question answering in various domains. Unlike structured data querying in KGs, mapping an input question to data that may or may not be stored by the language model is not a simple task. Recent studies have highlighted the improvements that can be made to the quality of information retrieved from PLMs by adding auxiliary data to otherwise naive prompts. In this paper, we explore the effects of enriching prompts with additional contextual information from the Wikidata KG on language model performance. Specifically, we compare the performance of naive vs. KG-engineered cloze prompts for entity genre classification in the movie domain. Selecting a broad range of commonly available Wikidata properties, we show that enriching cloze-style prompts with Wikidata information can result in significantly higher recall for the investigated BERT-large and RoBERTa-large PLMs. However, it is also apparent that the optimal level of data enrichment differs between the two models.
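The comparison described in the abstract can be illustrated with a minimal sketch: a naive cloze prompt containing only the entity label versus a prompt enriched with facts drawn from Wikidata properties. The snippet below assumes the Hugging Face `transformers` fill-mask pipeline and `bert-large-uncased`; the prompt wording, the example entity, and the chosen Wikidata properties are illustrative assumptions, not the paper's exact templates.

```python
# Minimal sketch (not the paper's exact templates): naive vs. Wikidata-enriched
# cloze prompts for movie genre classification with a masked language model.
from transformers import pipeline

# Assumed model; the paper investigates BERT-large and RoBERTa-large.
fill_mask = pipeline("fill-mask", model="bert-large-uncased")

# Naive cloze prompt: only the entity label is given.
naive_prompt = "Inception is a [MASK] film."

# KG-enriched prompt: hypothetical facts taken from Wikidata properties such as
# director (P57), cast member (P161), and publication date (P577).
enriched_prompt = (
    "Inception, directed by Christopher Nolan, starring Leonardo DiCaprio, "
    "and released in 2010, is a [MASK] film."
)

for name, prompt in [("naive", naive_prompt), ("enriched", enriched_prompt)]:
    predictions = fill_mask(prompt, top_k=5)
    print(name, [(p["token_str"], round(p["score"], 3)) for p in predictions])
```

Comparing the top-k predictions for the masked genre token across the two prompts mirrors the recall comparison the paper reports, with the enriched prompt supplying the extra Wikidata context.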
Keywords: knowledge graph, pre-trained language model, prompt learning, Economics (DDC 330)
| Indicator | Description | Value |
|---|---|---|
| Citations | Alternative to the "Influence" indicator; reflects the overall/total impact of the article in the research community, based on the underlying citation network (diachronically). | 0 |
| Popularity | Reflects the "current" impact/attention (the "hype") of the article in the research community, based on the underlying citation network. | Average |
| Influence | Reflects the overall/total impact of the article in the research community, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of the article directly after its publication, based on the underlying citation network. | Average |
| Views | | 3 |
| Downloads | | 13 |

Views and downloads provided by UsageCounts.