
This working paper examines a recurrent failure mode in AI-assisted journal selection: the tendency of large language models (LLMs) to conflate topical relevance and article-like format with institutional genre admissibility. While AI systems are increasingly used by researchers to identify suitable publication venues, the underlying judgment they perform remains poorly understood and insufficiently governed. Using a trace-based comparative case study combined with a small controlled benchmark, the paper analyses journal-fit recommendations generated independently by two widely used LLM-based systems. Despite architectural and product-level differences, both systems converge on the same judgment pattern: confident venue recommendations driven by scope alignment and surface-level scholarly features, accompanied by a persistent absence of explicit reasoning about whether a journal typically accepts the proposed type of contribution. The paper characterises this shared failure mode as Genre-Boundary Blindness (GBB): a judgment-infrastructure gap in which tacit editorial genre gates are bypassed through proxy signals. Importantly, the analysis reframes the issue from model accuracy to judgment delegation: the systems are not merely "wrong", but are governing the wrong object. Cross-model agreement, rather than mitigating risk, may amplify automation bias by creating an illusion of corroborated institutional judgment. To address this gap, the paper proposes a Journal-Fit Safety Protocol (JFSP) that treats journal selection as a governed institutional judgment rather than a pattern-matching task. The protocol requires explicit contribution-type classification, venue-specific disqualifiers, and conditional uncertainty disclosure, offering a practical governance intervention for AI-assisted research workflows. This paper contributes to the literature on AI governance, human–AI judgment delegation, and academic publishing by making editorial judgment flows observable and auditable. It is intended as part of the broader Judgment Infrastructure research programme and serves as a foundational case for understanding how AI systems misrepresent institutional decision processes in knowledge production.
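To make the protocol's three requirements concrete, the sketch below models them as a minimal admissibility check. This is one illustrative reading of the JFSP as summarised above, not the paper's reference implementation; the names `VenueProfile`, `FitAssessment`, and `assess_fit`, and all fields, are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of the three JFSP requirements named in the abstract:
# contribution-type classification, venue-specific disqualifiers, and
# conditional uncertainty disclosure. All names are illustrative assumptions.

@dataclass
class VenueProfile:
    name: str
    accepted_types: set[str]       # contribution genres the venue actually publishes
    disqualifiers: list[str] = field(default_factory=list)  # venue-specific exclusions

@dataclass
class FitAssessment:
    venue: str
    admissible: bool
    reasons: list[str]
    uncertainty_note: Optional[str]  # disclosed only when genre evidence is weak

def assess_fit(contribution_type: str, venue: VenueProfile,
               genre_evidence_verified: bool) -> FitAssessment:
    """Gate the recommendation on genre admissibility, not topical relevance."""
    reasons: list[str] = []
    # Requirement 1: explicit contribution-type classification drives the gate.
    admissible = contribution_type in venue.accepted_types
    if not admissible:
        reasons.append(f"{venue.name} does not list '{contribution_type}' "
                       "among its accepted contribution types")
    # Requirement 2: surface any venue-specific disqualifiers.
    reasons.extend(venue.disqualifiers)
    # Requirement 3: conditional uncertainty disclosure.
    note = None
    if not genre_evidence_verified:
        note = "Genre admissibility unverified; treat this recommendation as unconfirmed."
    return FitAssessment(venue.name, admissible, reasons, note)

# Usage: a topically relevant venue is still rejected if it does not accept the genre.
venue = VenueProfile(
    name="Journal of AI Governance",
    accepted_types={"research article", "review article"},
    disqualifiers=["does not publish working papers or position papers"],
)
print(assess_fit("working paper", venue, genre_evidence_verified=True))
```

The design point mirrors the abstract's argument: `admissible` is gated on the venue's accepted contribution types rather than on scope alignment, and the uncertainty note is emitted only when genre evidence could not be verified, matching the protocol's conditional-disclosure requirement.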
Keywords: AI governance; judgment infrastructure; academic publishing; automation bias; large language models; journal selection; genre governance
