
Artificial Intelligence (AI) is increasingly being applied in healthcare through Machine Learning (ML) and Deep Learning (DL) models. However, the complexity of modern black-box models creates a need for transparent interpretation methods. Explainable AI (XAI) aims to bridge this gap by making model decisions understandable. This study applies the Local Interpretable Model-agnostic Explanations (LIME) method to visualize the classification results of a DL model based on the ResNet18 architecture on Chest X-ray (CXR) images across three classes: normal, COVID-19, and pneumonia. The model achieved a precision of 97%, recall of 97%, and F1-score of 97%, with an accuracy of 98%. LIME visualizations highlight the image regions that contribute most to each classification and effectively distinguish among the three classes. These results demonstrate that applying XAI, specifically LIME, to a ResNet18-based DL model can provide interpretability in CXR image classification tasks.
Keywords: Explainable AI, LIME, ResNet18, COVID-19, Pneumonia
