Interpretable machine learning: Fundamental principles and 10 grand challenges

Publication: Journal Article
Rudin, C; Chen, C; Chen, Z; Huang, H; Semenova, L; Zhong, C
Published in: Statistics Surveys
January 1, 2022

Interpretability in machine learning (ML) is crucial for high stakes decisions and troubleshooting. In this work, we provide fundamental principles for interpretable ML, and dispel common misunderstandings that dilute the importance of this crucial topic. We also identify 10 technical challenge areas in interpretable machine learning and provide history and background on each problem. Some of these problems are classically important, and some are recent problems that have arisen in the last few years. These problems are: (1) Optimizing sparse logical models such as decision trees; (2) Optimization of scoring systems; (3) Placing constraints into generalized additive models to encourage sparsity and better interpretability; (4) Modern case-based reasoning, including neural networks and matching for causal inference; (5) Complete supervised disentanglement of neural networks; (6) Complete or even partial unsupervised disentanglement of neural networks; (7) Dimensionality reduction for data visualization; (8) Machine learning models that can incorporate physics and other generative or causal constraints; (9) Characterization of the “Rashomon set” of good models; and (10) Interpretable reinforcement learning. This survey is suitable as a starting point for statisticians and computer scientists interested in working in interpretable machine learning.
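
Of the challenge areas listed above, (2), the optimization of scoring systems, is perhaps the easiest to make concrete. The sketch below is purely illustrative and not taken from the paper: it shows the kind of sparse, integer-point model that challenge concerns, with hypothetical feature names, point values, and intercept.

# Illustrative sketch only (Python): a toy point-based scoring system of the
# kind challenge (2) concerns. The feature names, point values, and intercept
# below are hypothetical and not taken from the survey.
import math

POINTS = {            # integer points per binary risk factor (hypothetical)
    "prior_event": 2,
    "age_over_65": 1,
    "abnormal_lab": 1,
}
INTERCEPT = -3        # hypothetical offset chosen for this example

def risk(features):
    """Sum the points for the active features and map the total score
    to a probability with a logistic link."""
    score = INTERCEPT + sum(p for name, p in POINTS.items() if features.get(name))
    return 1.0 / (1.0 + math.exp(-score))

# A case with a prior event and age over 65 scores 2 + 1 - 3 = 0,
# i.e. a predicted risk of 0.5.
print(round(risk({"prior_event": True, "age_over_65": True}), 2))

Learning small integer point values like these directly from data, rather than post-processing a black-box or dense model, is the kind of optimization problem the survey treats under challenge (2).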

Published In

Statistics Surveys

DOI

10.1214/21-SS133

EISSN

1935-7516

Publication Date

January 1, 2022

Volume

16

Start / End Page

1 / 85

Related Subject Headings

  • 4905 Statistics
  • 0104 Statistics
 

Citation

APA: Rudin, C., Chen, C., Chen, Z., Huang, H., Semenova, L., & Zhong, C. (2022). Interpretable machine learning: Fundamental principles and 10 grand challenges. Statistics Surveys, 16, 1–85. https://doi.org/10.1214/21-SS133
Chicago: Rudin, C., C. Chen, Z. Chen, H. Huang, L. Semenova, and C. Zhong. “Interpretable machine learning: Fundamental principles and 10 grand challenges.” Statistics Surveys 16 (January 1, 2022): 1–85. https://doi.org/10.1214/21-SS133.
ICMJE: Rudin C, Chen C, Chen Z, Huang H, Semenova L, Zhong C. Interpretable machine learning: Fundamental principles and 10 grand challenges. Statistics Surveys. 2022 Jan 1;16:1–85.
MLA: Rudin, C., et al. “Interpretable machine learning: Fundamental principles and 10 grand challenges.” Statistics Surveys, vol. 16, Jan. 2022, pp. 1–85. Scopus, doi:10.1214/21-SS133.
NLM: Rudin C, Chen C, Chen Z, Huang H, Semenova L, Zhong C. Interpretable machine learning: Fundamental principles and 10 grand challenges. Statistics Surveys. 2022 Jan 1;16:1–85.
