
Shapley variable importance cloud for interpretable machine learning.

Publication, Journal Article
Ning, Y; Ong, MEH; Chakraborty, B; Goldstein, BA; Ting, DSW; Vaughan, R; Liu, N
Published in: Patterns (N Y)
April 8, 2022

Interpretable machine learning has focused on explaining final models that optimize performance. The state-of-the-art Shapley additive explanations (SHAP) locally explains the variable impact on individual predictions and has recently been extended to provide global assessments across the dataset. Our work further extends "global" assessments to a set of models that are "good enough" and are practically as relevant to a prediction task as the final model. The resulting Shapley variable importance cloud consists of Shapley-based importance measures from each good model and pools information across models to provide an overall importance measure, with uncertainty explicitly quantified to support formal statistical inference. We developed visualizations to highlight the uncertainty and to illustrate its implications for practical inference. Building on a common theoretical basis, our method seamlessly complements the widely adopted SHAP assessments of a single final model to avoid biased inference, which we demonstrate in two experiments using recidivism prediction data and clinical data.
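The abstract describes a three-step recipe: generate a set of nearly optimal ("good enough") models, compute a Shapley-based global importance for each, and pool the results with an explicit uncertainty estimate. The sketch below illustrates that recipe; it is not the authors' ShapleyVIC implementation. The bootstrap refitting used to produce candidate models, the 5% log-loss tolerance defining "good enough", and mean |SHAP| as the per-model importance are all illustrative assumptions.

```python
# Minimal sketch of a Shapley variable importance cloud, NOT the authors'
# ShapleyVIC implementation. Assumptions (flagged below): candidate models
# come from bootstrap refits of a logistic regression; "good enough" means
# test log-loss within 5% of the best; per-model importance is mean |SHAP|.
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import scale

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X = pd.DataFrame(scale(X), columns=X.columns)  # standardize for stable fits
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
models, losses = [], []
for _ in range(50):  # 50 bootstrap refits (an assumed way to get candidates)
    idx = rng.integers(0, len(X_tr), len(X_tr))
    m = LogisticRegression(max_iter=1000).fit(X_tr.iloc[idx], y_tr.iloc[idx])
    models.append(m)
    losses.append(log_loss(y_te, m.predict_proba(X_te)[:, 1]))

# Keep the "good enough" models: loss within 5% of the best (assumed cutoff).
best = min(losses)
good = [m for m, l in zip(models, losses) if l <= 1.05 * best]

# One global importance vector per good model: mean |SHAP| over the test set.
cloud = []
for m in good:
    sv = shap.LinearExplainer(m, X_tr).shap_values(X_te)
    cloud.append(np.abs(sv).mean(axis=0))
cloud = np.vstack(cloud)  # rows: models, columns: variables

# Pool across the cloud: overall importance plus between-model uncertainty.
for name, mu, sd in zip(X.columns, cloud.mean(axis=0), cloud.std(axis=0)):
    print(f"{name:25s} {mu:7.4f} ± {sd:.4f}")
```

Plotting each row of `cloud` against the pooled mean reproduces, in miniature, the kind of visualization the abstract describes: a variable whose importance varies widely across equally good models warrants more cautious inference than its rank in any single final model suggests.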


Published In

Patterns (N Y)

DOI

10.1016/j.patter.2022.100452

EISSN

2666-3899

Publication Date

April 8, 2022

Volume

3

Issue

4

Start / End Page

100452

Location

United States

Related Subject Headings

  • 4905 Statistics
  • 4611 Machine learning
  • 4603 Computer vision and multimedia computation
 

Citation

APA

Ning, Y., Ong, M. E. H., Chakraborty, B., Goldstein, B. A., Ting, D. S. W., Vaughan, R., & Liu, N. (2022). Shapley variable importance cloud for interpretable machine learning. Patterns (N Y), 3(4), 100452. https://doi.org/10.1016/j.patter.2022.100452

Chicago

Ning, Yilin, Marcus Eng Hock Ong, Bibhas Chakraborty, Benjamin Alan Goldstein, Daniel Shu Wei Ting, Roger Vaughan, and Nan Liu. “Shapley variable importance cloud for interpretable machine learning.” Patterns (N Y) 3, no. 4 (April 8, 2022): 100452. https://doi.org/10.1016/j.patter.2022.100452.

ICMJE

Ning Y, Ong MEH, Chakraborty B, Goldstein BA, Ting DSW, Vaughan R, et al. Shapley variable importance cloud for interpretable machine learning. Patterns (N Y). 2022 Apr 8;3(4):100452.

MLA

Ning, Yilin, et al. “Shapley variable importance cloud for interpretable machine learning.” Patterns (N Y), vol. 3, no. 4, Apr. 2022, p. 100452. Pubmed, doi:10.1016/j.patter.2022.100452.

NLM

Ning Y, Ong MEH, Chakraborty B, Goldstein BA, Ting DSW, Vaughan R, Liu N. Shapley variable importance cloud for interpretable machine learning. Patterns (N Y). 2022 Apr 8;3(4):100452.
