
Artificial Intelligence (AI) has gained prominence in recent years, and its widespread adoption in academic and industrial contexts has raised challenges related to the auditability of AI-based systems. Explainable Artificial Intelligence (XAI) addresses this issue through post-hoc methods that provide insight into model decisions. However, the integration of XAI mechanisms into software engineering artifacts and architectural representations remains limited. At the same time, the European Union’s AI Act (EU AI Act, Regulation 2024/1689) imposes extensive technical requirements on high-risk AI systems, which in practice often lead to large, fragmented, and costly compliance efforts that are difficult to maintain, verify, and trace back to concrete system implementations. To address this gap, this work proposes a UML-based framework for documenting post-hoc XAI systems in alignment with EU AI Act requirements. The framework introduces a minimal set of Unified Modeling Language (UML) stereotypes, tagged values, and relationships to represent data sources, training orchestration, trained models, and associated explainability mechanisms, relying on architectural information directly derivable from object-oriented (OO) source code. As an additional contribution, this work introduces UMLOOModeler, a tool that automates the generation of UML class diagrams from OO Python implementations, ensuring consistency between code-level artifacts and architectural representations. The framework is illustrated through examples involving heterogeneous data modalities, demonstrating support for architectural traceability and auditability across different XAI pipelines.
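
To make the code-to-diagram idea concrete, the following minimal sketch shows one way class structure can be extracted from OO Python source and rendered as a PlantUML class diagram; it is an illustrative assumption of such a pipeline, not the actual UMLOOModeler implementation, and the helper name `source_to_plantuml` and the example classes are hypothetical.

```python
# Illustrative sketch (not the actual UMLOOModeler implementation): extract
# class structure from Python source with the standard-library `ast` module
# and emit a PlantUML class diagram, one possible way to keep code-level
# artifacts and architectural representations consistent.
import ast


def source_to_plantuml(source: str) -> str:
    """Parse Python source and return a PlantUML class-diagram description."""
    tree = ast.parse(source)
    lines = ["@startuml"]
    for node in ast.walk(tree):
        if not isinstance(node, ast.ClassDef):
            continue
        lines.append(f"class {node.name} {{")
        for item in node.body:
            # Class-level annotated attributes (e.g. `explainer: object`).
            if isinstance(item, ast.AnnAssign) and isinstance(item.target, ast.Name):
                lines.append(f"  +{item.target.id}")
            # Methods, skipping dunders for brevity.
            elif isinstance(item, ast.FunctionDef) and not item.name.startswith("__"):
                args = ", ".join(a.arg for a in item.args.args if a.arg != "self")
                lines.append(f"  +{item.name}({args})")
        lines.append("}")
        # Single-name base classes become UML generalization arrows.
        for base in node.bases:
            if isinstance(base, ast.Name):
                lines.append(f"{base.id} <|-- {node.name}")
    lines.append("@enduml")
    return "\n".join(lines)


if __name__ == "__main__":
    # Hypothetical example input: a model class and an explainable subclass.
    example = """
class Model:
    def predict(self, features): ...

class ExplainableModel(Model):
    explainer: object
    def explain(self, instance): ...
"""
    print(source_to_plantuml(example))
```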
