
doi: 10.3233/faia231451
Accurate air quality analysis is essential for understanding the causes and consequences of air pollution, a serious environmental concern. Traditional methodologies for air quality analysis frequently lack transparency and interpretability, which makes it challenging to identify the underlying factors contributing to pollution levels. This work examines the integration of explainable AI (XAI) with deep learning to enhance air quality prediction. Explainable AI addresses the transparency problem by illuminating how AI models make decisions, and it underscores the need for clear, understandable models to win stakeholders' trust and adoption. Applying XAI improves the readability and transparency of air quality studies, allowing stakeholders to comprehend and verify the predictions and suggestions made by AI systems. Random Forest, XGBoost, and KNN are used to classify the data; SHAP and LIME are then applied to discover the major features and variables that affect air quality predictions. These findings can support decision-making and the development of effective strategies for air quality management and mitigation.
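The pipeline the abstract describes (train a classifier on pollution data, then attribute predictions to input features) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' code: it uses scikit-learn's gradient boosting in place of XGBoost and model-agnostic permutation importance as a stand-in for SHAP/LIME-style attribution, and the pollutant feature names are hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an air-quality dataset; feature names are hypothetical.
feature_names = ["PM2.5", "PM10", "NO2", "SO2", "CO", "O3"]
X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: train a tree-ensemble classifier (the paper uses XGBoost and KNN;
# scikit-learn's gradient boosting is a comparable tree ensemble).
clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Step 2: model-agnostic attribution (a stand-in for SHAP/LIME): permutation
# importance measures how much shuffling each feature degrades test accuracy.
result = permutation_importance(clf, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda t: -t[1])
for name, importance in ranked:
    print(f"{name}: {importance:.3f}")
```

In a SHAP workflow the second step would instead produce per-prediction attributions (e.g. via a tree explainer), but the ranking idea is the same: quantify each feature's contribution to the model's air quality predictions.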
