BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations



Zhao, X (ORCID: 0000-0002-3474-349X), Huang, W, Huang, X (ORCID: 0000-0001-6267-0366), Robu, V and Flynn, D
(2021) BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations.

Warning: There is a more recent version of this item available.
Text: main_uai21.pdf - Author Accepted Manuscript (3MB)

Abstract

Given the pressing need for assuring algorithmic transparency, Explainable AI (XAI) has emerged as one of the key areas of AI research. In this paper, we develop BayLIME, a novel Bayesian extension to LIME, one of the most widely used XAI frameworks. Compared to LIME, BayLIME exploits prior knowledge and Bayesian reasoning to improve both the consistency of repeated explanations of a single prediction and the robustness to kernel settings. BayLIME also exhibits better explanation fidelity than the state of the art (LIME, SHAP and GradCAM) thanks to its ability to integrate prior knowledge from, e.g., a variety of other XAI techniques, as well as verification and validation (V&V) methods. We demonstrate these desirable properties of BayLIME through both theoretical analysis and extensive experiments.
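
The abstract sketches the core idea: replace LIME's weighted least-squares surrogate with a Bayesian linear regression whose prior encodes knowledge from elsewhere (e.g., another XAI method). The following is a minimal illustrative sketch in Python, not the authors' released implementation; the conjugate-update formulation, the Gaussian perturbation scale, and all function and parameter names (baylime_explain, lam, alpha, kernel_width) are assumptions made for this example.

```python
# Minimal sketch (not the authors' code) of the BayLIME idea: fit the local
# surrogate with Bayesian linear regression so that prior knowledge about
# feature importances (e.g. from GradCAM) is blended with LIME-style
# perturbation data. The toy black-box model below is illustrative.
import numpy as np

def baylime_explain(predict_fn, x, prior_mean, lam=1.0, alpha=10.0,
                    n_samples=1000, kernel_width=0.75, rng=None):
    """Closed-form conjugate Bayesian linear regression around instance x.

    predict_fn : black-box model, maps (n, d) arrays to (n,) outputs
    prior_mean : prior belief about the d surrogate coefficients
    lam        : prior precision (large -> trust the prior more)
    alpha      : noise precision (large -> trust the new samples more)
    """
    rng = np.random.default_rng(rng)
    d = x.shape[0]

    # LIME-style neighbourhood: perturb x and weight samples by proximity.
    X = x + rng.normal(scale=0.5, size=(n_samples, d))
    y = predict_fn(X)
    dist = np.linalg.norm(X - x, axis=1)
    w = np.exp(-dist**2 / kernel_width**2)          # exponential kernel

    # Conjugate posterior of the weighted Bayesian linear model:
    #   S_N^{-1} = lam * I + alpha * X^T W X
    #   m_N      = S_N (lam * m_0 + alpha * X^T W y)
    XtW = X.T * w                                   # scales column j by w[j]
    S_inv = lam * np.eye(d) + alpha * XtW @ X
    m_N = np.linalg.solve(S_inv, lam * prior_mean + alpha * XtW @ y)
    return m_N                                      # explanation coefficients

# Toy usage: a linear "black box" whose true local coefficients are known.
if __name__ == "__main__":
    true_coef = np.array([2.0, -1.0, 0.0])
    black_box = lambda X: X @ true_coef
    x0 = np.array([1.0, 1.0, 1.0])
    prior = np.array([1.5, -0.5, 0.0])              # e.g. hint from GradCAM
    print(baylime_explain(black_box, x0, prior, rng=0))
```

In this sketch, a larger lam pulls the explanation toward the prior, while lam = 0 recovers a plain LIME-style weighted least-squares fit; that trade-off is what gives the consistency and robustness benefits the abstract describes.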

Item Type: Conference or Workshop Item (Unspecified)
Uncontrolled Keywords: cs.AI
Divisions: Faculty of Science and Engineering > School of Electrical Engineering, Electronics and Computer Science
Depositing User: Symplectic Admin
Date Deposited: 14 May 2021 07:51
Last Modified: 19 Jul 2023 10:39
Related URLs:
URI: https://livrepository.liverpool.ac.uk/id/eprint/3122584

Available Versions of this Item