
annotations4all is a modular framework for performing text annotation tasks with Large Language Models (LLMs) in a reproducible workflow. While recent studies have shown that contextual prompting allows LLMs to achieve high-quality Named Entity Recognition (NER) without fine-tuning, most implementations remain ad hoc and difficult to reproduce. annotations4all provides a configurable pipeline that separates prompt templates, model interaction, response parsing, and evaluation. It supports both local and API-based models and produces structured span annotations that can be exported in standard formats. The framework was developed in the context of the NER4All project, where contextual prompting outperformed established NER tools such as spaCy and flair on historical texts. annotations4all generalizes these methods into reusable components and enables researchers to apply LLM-based annotation workflows to new corpora and domains.
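The four-stage separation described in the abstract (prompt templating, model interaction, response parsing, evaluation) can be sketched as independent, swappable functions. The sketch below is illustrative only: all names are hypothetical and do not reflect the actual annotations4all API.

```python
# Hypothetical sketch of the modular pipeline: prompt templating, model
# interaction, response parsing, and evaluation as separate components.
# Names are illustrative, NOT the actual annotations4all API.
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class Span:
    start: int
    end: int
    label: str

def build_prompt(template: str, text: str) -> str:
    # Stage 1: prompt templating, kept separate from model calls.
    return template.format(text=text)

def mock_model(prompt: str) -> str:
    # Stage 2: model interaction. A real backend (local or API-based)
    # would be swapped in here; this stub returns a fixed response.
    return "[PER: Ada Lovelace] wrote about [LOC: London]."

def parse_spans(response: str, text: str) -> list[Span]:
    # Stage 3: parse bracketed mentions back into character spans
    # relative to the source text.
    spans = []
    for m in re.finditer(r"\[(\w+): ([^\]]+)\]", response):
        label, surface = m.group(1), m.group(2)
        idx = text.find(surface)
        if idx != -1:
            spans.append(Span(idx, idx + len(surface), label))
    return spans

def span_f1(pred: list[Span], gold: list[Span]) -> float:
    # Stage 4: strict span-level F1 evaluation.
    if not pred or not gold:
        return 0.0
    tp = len(set(pred) & set(gold))
    p, r = tp / len(pred), tp / len(gold)
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

text = "Ada Lovelace wrote about London."
template = "Mark all named entities in: {text}"
pred = parse_spans(mock_model(build_prompt(template, text)), text)
gold = [Span(0, 12, "PER"), Span(25, 31, "LOC")]
print(span_f1(pred, gold))
```

Because each stage takes plain inputs and returns plain data, a different model backend, parsing convention, or scoring scheme can be substituted without touching the other stages, which is the reproducibility property the abstract emphasizes.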
LLM, Prompt Engineering, Text Annotation, Research Software, Named Entity Recognition, Digital humanities
