
Machine unlearning has emerged as a critical capability for modern machine learning systems, driven by growing regulatory, ethical, and security requirements such as the right to be forgotten, data privacy compliance, and trustworthy AI deployment. As models are trained on large-scale, sensitive, and continuously evolving datasets, the inability to selectively remove the influence of specific training data poses serious challenges for accountability, privacy, and long-term usability. This preprint presents a comprehensive survey of machine unlearning methods and introduces a unified framework that integrates unlearning techniques with verification strategies, evaluation metrics, and real-world application requirements. The paper systematically organizes existing approaches into a clear taxonomy covering exact unlearning, approximate and gradient-based methods, structure-aware techniques, and emerging unlearning strategies for large language and foundation models. Beyond reviewing prior work, the study emphasizes verification and evaluation, two aspects often treated inconsistently in the existing literature. Behavioral, parametric, and certified verification methods are analyzed, along with standardized metrics for forgetting effectiveness, utility preservation, privacy leakage, and computational efficiency. By framing machine unlearning as a system-level problem rather than a purely algorithmic task, the proposed framework enables principled reasoning about trade-offs among guarantees, efficiency, and deployability. The paper also discusses practical deployment scenarios, including privacy compliance, federated and continual learning, large-scale production systems, and large language models. Finally, it identifies open challenges and outlines future research directions toward building transparent, verifiable, and scalable machine unlearning systems.
This work is intended to serve as both a reference survey and a conceptual foundation for researchers and practitioners working on trustworthy and deployable machine unlearning.
Keywords: Machine Unlearning, Data Deletion, Right to be Forgotten, Privacy-Preserving Machine Learning, Model Forgetting, Certified Unlearning, Approximate Unlearning, Verification Methods, Evaluation Metrics, Federated Learning, Continual Learning, Large Language Models, Trustworthy and Responsible AI, Data Privacy and Security, Machine Learning
