ZENODO
Other literature type . 2026
License: CC BY
Data sources: ZENODO

When Models Agree but Institutions Disagree: Cross-Model Convergence on Genre-Boundary Blindness in AI-Assisted Journal Selection

Authors: Ma, Sincere Ann


Abstract

This working paper examines a recurrent failure mode in AI-assisted journal selection: the tendency of large language models (LLMs) to conflate topical relevance and article-like format with institutional genre admissibility. While AI systems are increasingly used by researchers to identify suitable publication venues, the underlying judgment they perform remains poorly understood and insufficiently governed. Using a trace-based comparative case study combined with a small controlled benchmark, the paper analyses journal-fit recommendations generated independently by two widely used LLM-based systems. Despite architectural and product-level differences, both systems converge on the same judgment pattern: confident venue recommendations driven by scope alignment and surface scholarly features, accompanied by a persistent absence of explicit reasoning about whether a journal typically accepts the proposed type of contribution. The paper characterises this shared failure mode as Genre-Boundary Blindness (GBB), a judgment-infrastructure gap in which tacit editorial genre gates are bypassed through proxy signals. Importantly, the analysis reframes the issue from model accuracy to judgment delegation: the systems are not merely "wrong," but are governing the wrong object. Cross-model agreement, rather than mitigating risk, may amplify automation bias by creating an illusion of corroborated institutional judgment. To address this gap, the paper proposes a Journal-Fit Safety Protocol (JFSP) that treats journal selection as a governed institutional judgment rather than a pattern-matching task. The protocol requires explicit contribution-type classification, venue-specific disqualifiers, and conditional uncertainty disclosure, offering a practical governance intervention for AI-assisted research workflows.
This paper contributes to the literature on AI governance, human–AI judgment delegation, and academic publishing by making editorial judgment flows observable and auditable. It is intended as part of the broader Judgment Infrastructure research programme and serves as a foundational case for understanding how AI systems misrepresent institutional decision processes in knowledge production.
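The three JFSP requirements named in the abstract (contribution-type classification, venue-specific disqualifiers, conditional uncertainty disclosure) can be sketched as a simple gating check. This is an illustrative assumption of how such a protocol might be structured, not the paper's implementation; all names and fields below are hypothetical.

```python
# Hypothetical sketch of a Journal-Fit Safety Protocol (JFSP) check.
# All class and field names are illustrative, not the authors' design.
from dataclasses import dataclass, field


@dataclass
class VenueProfile:
    name: str
    accepted_types: set                                 # contribution genres the venue publishes
    disqualifiers: list = field(default_factory=list)   # venue-specific hard blocks


@dataclass
class FitJudgment:
    admissible: bool
    reasons: list      # explicit, auditable grounds for the verdict
    uncertain: bool    # conditional uncertainty disclosure


def jfsp_check(contribution_type: str, venue: VenueProfile,
               evidence_complete: bool = True) -> FitJudgment:
    """Gate a venue recommendation on genre admissibility,
    not merely topical relevance."""
    reasons = []
    # 1. Explicit contribution-type classification against the venue's genre gate
    if contribution_type not in venue.accepted_types:
        reasons.append(f"{venue.name} does not typically accept "
                       f"'{contribution_type}' contributions")
    # 2. Venue-specific disqualifiers
    reasons.extend(venue.disqualifiers)
    # 3. Disclose uncertainty when the venue evidence is incomplete
    return FitJudgment(admissible=not reasons, reasons=reasons,
                       uncertain=not evidence_complete)


profile = VenueProfile("Example Journal", {"research article", "review"})
verdict = jfsp_check("working paper", profile)
```

The point of the sketch is that the genre gate is checked explicitly and the grounds are recorded, making the judgment observable and auditable rather than implicit in a scope-matching score.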

Keywords

AI governance; judgment infrastructure; academic publishing; automation bias; large language models; journal selection; genre governance

Impact indicators (provided by BIP!):
  • selected citations: 0 (derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of an article based on the underlying citation network)
  • popularity: Average (the "current" impact/attention of the article in the research community at large, based on the underlying citation network)
  • influence: Average (the overall/total impact of the article, diachronically, based on the underlying citation network)
  • impulse: Average (the initial momentum of the article directly after publication, based on the underlying citation network)