EXPLAINABLE AI: TECHNIQUES FOR INTERPRETABLE MACHINE LEARNING MODELS
DOI: https://doi.org/10.7492/yega9k95

Abstract
The increasing complexity of machine learning models has made their decision-making processes difficult to understand, raising concerns over transparency, trust, and ethical accountability. Explainable Artificial Intelligence (XAI) addresses these challenges by developing techniques that make model predictions interpretable to humans. This paper presents a comprehensive review and analysis of explainability methods, categorizing them into intrinsic and post-hoc approaches, as well as model-specific and model-agnostic techniques. Through a mixed-methods approach combining a systematic literature review with an empirical evaluation on benchmark datasets, the study examines the strengths, limitations, and applicability of different XAI techniques across domains such as healthcare, finance, and autonomous systems. The findings highlight that while no single method perfectly balances interpretability and accuracy, a combination of techniques tailored to specific contexts enhances transparency and trustworthiness. The paper also discusses current challenges and future directions in the development of robust, user-centric explainability tools, underscoring the critical role of XAI in responsible AI deployment.
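To make the post-hoc, model-agnostic category concrete, the minimal sketch below computes permutation feature importance with scikit-learn; the dataset and classifier are illustrative choices and are not drawn from the paper's evaluation. The technique treats the model as a black box: it shuffles one input feature at a time and measures the resulting drop in held-out accuracy.

# Minimal sketch of a model-agnostic, post-hoc explanation technique:
# permutation feature importance. Assumes scikit-learn is installed;
# the dataset and model below are illustrative, not the paper's setup.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any fitted estimator works here; the method never inspects model internals,
# which is what makes it model-agnostic.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature column and record the drop in held-out accuracy;
# larger drops mark features the model relies on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")

Because the procedure only queries predictions, the same code applies unchanged to any classifier, which illustrates the trade-off discussed in this paper: broad applicability at the cost of explanations that describe model behavior rather than its internal mechanics.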