Powered by OpenAIRE graph
https://doi.org/10.2139/ssrn.5...
Article . 2025 . Peer-reviewed . Data sources: Crossref
ZENODO . Research . 2025 . License: CC BY . Data sources: Datacite
View all 3 versions

Case Study White Paper: Platform-Level Epistemic Gatekeeping and the Suppression of Independent AI Research on Figshare

Authors: Kim, Jace

Abstract

Independent AI research increasingly depends on generalist repositories to achieve basic discoverability, yet these platforms now function as de facto regulators of epistemic legitimacy. This white paper documents a concrete incident in which an independent researcher’s Figshare account was disabled and labeled “AI content,” despite submissions that conformed to academic conventions (abstracts, methods, references, DOIs) and were already archived or mirrored elsewhere (e.g., Zenodo; in review at HAL). Using the full email/ticket trail (Ticket #500066), deposit metadata, and cross-repository status logs as primary evidence, we reconstruct the moderation pipeline and show how platform heuristics—framed around “non-academic,” “for journal indexing,” or “commercial” filters—conflate research about AI with AI-generated or non-scholarly material. We argue this is not an outlier but a structural failure mode of what we call platform-level epistemic gatekeeping: policy- and heuristic-driven classifiers tuned to curb spam and paper-mill output that are systematically overfit to surface cues (terminology novelty, unconventional taxonomies, atypical keyword stacks, independent/non-institutional authorship) rather than research substance.

Methodologically, we triangulate (i) a timeline audit of account actions and moderation messages, (ii) side-by-side comparisons of identical PDFs across repositories (visibility, indexing, and access states), and (iii) a policy-to-practice variance analysis that maps stated reuse/benefit criteria against the artifact’s actual scholarly affordances (citability, reproducibility attachments, and cross-referenced DOIs). Findings indicate a high false-positive risk for frontier, interdisciplinary technical work—especially work that introduces new conceptual frames (e.g., Symbolic Persona Coding; resonance scaffolds) or blends systems analysis with affective-cognitive alignment.
In this ecology, Silent Buried (suppression without engagement) becomes the default failure mode that precedes or facilitates Silent Adoption (downstream appropriation without attribution): when visibility is algorithmically throttled, outsiders’ structural contributions become easy to reuse and hard to credit. We conclude with minimum procedural safeguards for repositories: domain-matched human review on appeal, transparent rationale codex for takedowns, ORCID/DOI attestation checks to separate “about-AI” scholarship from AI-generated text, reproducibility bundles and audit trails as first-class metadata, and mirrored archiving to reduce single-point platform risk. Beyond a personal case, this study offers a testable blueprint—policy diagnostics, evidence ledgering, and replication of cross-repo audits—for communities seeking to defend scholarly openness while resisting genuine abuse. The core claim is normative and practical: moderation that optimizes for cleanliness without protecting novelty degrades the very knowledge commons it is meant to preserve. Disclaimer: This technical report is an independent exploration of structural and systemic dynamics in AI interaction and platform governance. The analysis is conceptual in nature and does not imply insider knowledge or privileged access. The content should be read as an academic inquiry rather than as evidence of internal operations.
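The cross-repository comparison described in the methods (identical PDFs checked for visibility, indexing, and access states) can be sketched as a simple audit over per-repository status records. The `DepositStatus` fields and the `audit` helper below are illustrative assumptions for how such an evidence ledger might be structured, not part of the white paper’s published tooling; the artifact hash stands in for the “identical PDFs” check.

```python
from dataclasses import dataclass

@dataclass
class DepositStatus:
    """One repository's recorded state for a single deposited artifact."""
    repository: str
    visible: bool    # record page publicly accessible
    indexed: bool    # appears in the repository's search index
    sha256: str      # hash of the deposited PDF, to confirm identical artifacts

def audit(statuses: list[DepositStatus]) -> dict:
    """Flag repositories where an otherwise identical artifact is suppressed.

    Divergence (same file, different visibility/indexing) is the signal the
    paper calls 'Silent Buried': suppression on one platform while mirrors
    remain discoverable.
    """
    identical = len({s.sha256 for s in statuses}) == 1
    suppressed = [s.repository for s in statuses
                  if not (s.visible and s.indexed)]
    return {"identical_artifact": identical, "suppressed_on": suppressed}
```

Run against hypothetical records, a Zenodo mirror that is visible and indexed alongside a Figshare deposit that is neither would yield `suppressed_on == ["Figshare"]` with `identical_artifact` true, which is the divergence pattern the case study documents.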

Keywords

AccountDisablement, OpenScience, SilentBuried, Transparency, ResearchDissemination, AcademicFreedom, DOI, InnovationBias, FalsePositives, PlatformModeration, IndependentScholar, EpistemicJustice, FigshareTerms, EpistemicGatekeeping, ORCID, ContentSuppression, ScholarlyCommunication, PlatformRisk, IndependentVoice, SilentAdoption, Reproducibility, AIContentPolicy, CrossRepository, StructuralBias, KnowledgeCommons, AlgorithmicBias, AIResearch, ResearchIntegrity, Figshare, EthicalAI, DigitalRepositories

BIP! impact indicators: selected citations 0; popularity Average; influence Average; impulse Average.