
Abstract

Systematic analysis of academic literature increasingly requires extraction of structured information from theoretical texts—a task that challenges both traditional manual coding and naive computational approaches. Sedlar et al. (2023), in their systematic review of normalization of deviance, concluded that "behavioral research is desperately needed to support the mostly conceptual nature of the academic literature" (p. 303). Yet their own review—constrained by a single coder and 33 papers—exemplifies the scalability limitations they identified. This paper presents a methodology that directly addresses this gap: adapting large language models to extract structured assertions through parameter-efficient fine-tuning, enabling the kind of comprehensive literature analysis that manual methods cannot achieve. We describe schema design principles that preserve theoretical nuance while enabling quantitative aggregation, training data construction procedures that maintain interpretive validity, and fidelity verification protocols that integrate human oversight with computational efficiency. The methodology is demonstrated through application to normalization of deviance literature, yielding 5,678 classified tokens across 27 source documents—including the foundational Vaughan corpus that Sedlar's aerospace-excluding methodology omitted. Our extraction reveals that 67% of Vaughan's core mechanisms (practical drift, social construction of risk) are absent from Sedlar's framework—a "Type II Error Irony" wherein a framework designed to detect missed risks itself misses critical mechanisms. While the demonstration domain is organizational safety research, the approach generalizes to any meta-synthesis requiring structured extraction from theoretical texts.
The human-AI research workflow documented here—wherein researchers establish standard operating procedures with AI agents, deploy fine-tuned extraction models, and close work orders through fidelity verification—provides a replicable template for rigorous computational scholarship.

Keywords: fine-tuning, large language models, meta-synthesis, literature analysis, assertion extraction, systematic review, normalization of deviance, human-AI collaboration, qualitative research methodology
