
Generative AI is creating a credibility shock for higher education assessment. The issue is not only misconduct but a weakened evidentiary link between student work and the learning claims behind grades and credentials. Surveys and guidance published between 2024 and 2025 indicate rapid AI uptake and show that detection cannot provide assurance. This paper argues that traditional assessment systems, including take-home coursework graded mainly on the final product and invigilated exams, were already noisy evidence for higher-order learning. Generative AI amplifies these weaknesses by making plausible outputs cheap and by displacing tool influence into preparation and rehearsal. In response, the paper introduces FARABI (Framework for AI-Resilient Assessment and Balanced Integrity), a risk-based triage method for assessment portfolios. FARABI supports proportionate controls while acting as a diagnostic that shifts assessment toward applied judgment, observable process, and accountable tool use. The paper closes by questioning higher education’s warrant if employability weakens as the organizing metric.
