
Practitioner Notes

What is already known about this topic:
- Transformative Learning Theory demonstrates how adult learners restructure meaning perspectives through critical reflection on disorienting dilemmas, particularly when supported by intersubjective discourse.
- Students often approach AI systems with polarised misconceptions, viewing them as either infallible authorities or prohibited cheating tools.
- The technology's rapid evolution creates pedagogical challenges for educators working with frontier models.
- AI is a textual technology particularly suited to humanities methodologies, offering opportunities for disciplinary knowledge application.
- Productive failure enhances learning when properly contextualised and reflected upon.

What this paper adds:
- A four-pillar framework for teaching AI literacy through disciplinary applications in humanities contexts.
- Practical strategies, including a collaborative "classroom grimoire" for prompt iteration and educator vulnerability as pedagogical method, demonstrating how epistemic humility enables effective AI literacy development.
- Evidence that humanities educators possess foundational analytical competencies directly transferable to AI pedagogy, challenging deficit-based professional development models.

Implications for practice and/or policy:
- Effective AI pedagogy requires sustained investment in psychological safety through structured reflection cycles and formative assessment, challenging traditional mastery-based educational models.
- Institutional support must address the capability gap between advanced and basic AI models, as this differential fundamentally alters pedagogical possibilities and exacerbates educational inequities.
- Universities require new quality frameworks that value process-focused assessment and epistemic uncertainty navigation over demonstrable competency acquisition.
Generative artificial intelligence (AI) has disrupted education systems worldwide. This disruption necessitates pedagogical approaches that embrace uncertainty while developing student agency. We examined how decoupling task success from assessment outcomes created environments in which students developed critical AI literacy through structured risk-taking. Drawing on Transformative Learning Theory and Rumsfeld's epistemological matrix as interpretive frameworks, we analysed an experimental undergraduate AI unit across three disciplinary streams: Ancient History, Philosophy, and Politics and International Relations ($N=23$). Data included student reflections, classroom observations, and AI interaction logs collected over a 13-week semester. Our pedagogical framework operationalised four interconnected pillars: risk-embracing assessment structures, intentional classroom culture development, systematic navigation of technological uncertainty, and facilitation of transformative learning experiences. This paper presents the implications of these pillars for 1) educational theory, where productive failure serves as an effective pedagogical strategy; 2) educator praxis, viewing AI as a textual technology that extends the capabilities of the humanities; and 3) the university teaching context, where AI-enabled teaching should focus on reflection and process rather than demonstrable competencies.
