ZENODO record (2 versions: Other literature type; Presentation), 2025
License: CC BY
Data sources: ZENODO, Datacite

Automated Archaeological Image Annotation

AI-Assisted Object Recognition and Metadata Enrichment
Authors: Pajdla, Petr; Harasim, Ronald; Novák, David; Lečbychová, Olga


Abstract

The ongoing digitisation of archaeological image archives presents significant opportunities for knowledge discovery, yet it also poses considerable challenges, as processing vast amounts of visual data remains a time-intensive task that is, however, well suited to automation. The application of artificial intelligence (AI) and distant viewing methods offers a scalable solution to enhance the usability, accessibility, and interoperability of large archaeological image archives. Without such automation, achieving comparable results would require years of manual processing. This paper presents a workflow for automatic image annotation, developed to improve (meta)data quality in the Archaeological Map of the Czech Republic (AMCR) repository and discovery services. We outline the training process and pilot implementation of a deep learning model fine-tuned for archaeological datasets, employing a ResNet architecture. The workflow enables segmentation and annotation of archaeological images using domain-specific controlled vocabulary terms, facilitating the identification of artefact types and other relevant visual elements. To address the diversity of archaeological photography, we train the model on two distinct image categories: single artefact/find images, typically photographed on standardised backgrounds with scales, and excavation and fieldwork photographs, which capture a wide range of archaeological contexts, from entire excavations and sites to individual trenches and burials. The planned outcomes of this research are: a documented workflow adaptable to similar applications; a ground-truth dataset for training and benchmarking archaeological image recognition models; the integration of automated annotation into the metadata creation process, particularly for non-professional data providers and for bulk processing of (legacy) data; and enhanced metadata quality in the AMCR repository and discovery services, improving the searchability and accessibility of archaeological images.

Presentation at the 31st EAA 2025 Annual Meeting, session #268 "Archaeology, artificial intelligence, and image analysis".
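The abstract describes fine-tuning a ResNet-based classifier on labelled archaeological images with controlled-vocabulary labels. The sketch below is an illustration only, not the authors' published code: it shows how such a fine-tuning step could look with PyTorch/torchvision (0.13 or newer assumed), under the hypothetical assumption of an ImageFolder-style dataset whose sub-folder names are the vocabulary terms; the paths, hyperparameters, and ResNet variant are placeholders, and the actual AMCR training setup may differ.

```python
# Minimal sketch (not the authors' code): fine-tuning a torchvision ResNet for
# archaeological image classification, assuming a hypothetical directory layout
# data/<vocab_term>/<image>.jpg where sub-folder names are vocabulary terms.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

DATA_DIR = "data/artefact_images"   # placeholder path, not the real AMCR dataset
NUM_EPOCHS = 5                      # placeholder hyperparameters
BATCH_SIZE = 32

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder(DATA_DIR, transform=preprocess)
loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=4)

# Start from ImageNet weights and replace the classification head with one
# output per controlled-vocabulary term found in the dataset.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

model.train()
for epoch in range(NUM_EPOCHS):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Inference: map the predicted class index back to a vocabulary term that could
# then be written into the image's metadata record.
model.eval()
with torch.no_grad():
    image, _ = dataset[0]
    logits = model(image.unsqueeze(0).to(device))
    term = dataset.classes[logits.argmax(dim=1).item()]
    print(f"suggested annotation: {term}")
```

Starting from ImageNet weights and replacing only the classification head is a common transfer-learning choice for comparatively small, domain-specific datasets; the paper's own workflow, training data, segmentation step, and label set are not reproduced here.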

Keywords

Artificial intelligence, Archaeology, Computer vision

  • BIP! impact indicators (provided by BIP!)
    • Selected citations: 0. These citations are derived from selected sources; this is an alternative to the "Influence" indicator, which also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
    • Popularity: Average. Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network.
    • Influence: Average. Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
    • Impulse: Average. Reflects the initial momentum of an article directly after its publication, based on the underlying citation network.