On the Link Between Model Performance and Causal Scoring of Medical Image Explanations
Academic Article in Scopus
abstract
Contemporary Deep Learning (DL) image classifiers typically harness training-set correlations to discern associations between inputs and outputs, often without differentiating causal connections from mere correlations. As a result, Explainable Artificial Intelligence (XAI) techniques may identify key input features yet base their explanations on these correlations, risking confounded interpretations. This issue is particularly critical in medical imaging, where precise model explanations are vital. To tackle it, we build upon previous efforts to estimate causal links between model features and outputs, introducing the Explainable and Causal Feature Analysis (ECFA) method. Applying ECFA in a medical classification case study, we aim to empower medical professionals to distinguish causally relevant model-extracted features from merely correlated ones. Our experiments show that ECFA reliably pinpoints the top 1% of features that are causal or anti-causal with respect to the output labels of a CNN-based classifier, enabling a more informed assessment of whether a model's predictions derive from distinguishable causal links or from mere correlations. This marks a notable stride toward enhancing the reliability and interpretability of DL models in medical diagnostics. © 2024 IEEE.
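The abstract does not spell out how ECFA scores features, so the following is only a loose, hypothetical sketch of the general idea of intervention-based feature scoring: perturb one model feature at a time, measure the change in the classifier's output, and rank features so the top 1% can be inspected. All function names and the scoring rule here are illustrative assumptions, not the authors' actual ECFA procedure.

```python
import numpy as np

def intervention_scores(model, x, n_samples=20, rng=None):
    """Score each feature of input vector `x` by the mean absolute change
    in `model(x)` when that feature alone is set to a random value while
    all other features are held fixed. (Hypothetical stand-in for a
    causal feature-scoring step, not the paper's exact ECFA method.)"""
    rng = rng or np.random.default_rng(0)
    base = model(x)
    scores = np.zeros(x.size)
    for i in range(x.size):
        deltas = []
        for _ in range(n_samples):
            x_int = x.copy()
            x_int[i] = rng.normal()  # crude stand-in for do(feature_i = v)
            deltas.append(abs(model(x_int) - base))
        scores[i] = np.mean(deltas)
    return scores

def top_fraction(scores, frac=0.01):
    """Indices of the top `frac` fraction of features, highest score first."""
    k = max(1, int(round(frac * scores.size)))
    return np.argsort(scores)[::-1][:k]
```

On a toy model where only one feature drives the output, e.g. `model = lambda v: 5.0 * v[0]` over 100 features, `top_fraction(intervention_scores(model, np.zeros(100)))` singles out feature 0, mirroring the kind of "top 1% causal features" ranking the abstract describes.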