Concept whitening for interpretable image recognition

Journal Article

What does a neural network encode about a concept as we traverse through the layers? Interpretability in machine learning is undoubtedly important, but the calculations of neural networks are very challenging to understand. Attempts to see inside their hidden layers can be misleading, unusable, or rely on the latent space possessing properties that it may not have. Here, rather than attempting to analyse a neural network post hoc, we introduce a mechanism, called concept whitening (CW), to alter a given layer of the network so that we can better understand the computation leading up to that layer. When a concept whitening module is added to a convolutional neural network, the latent space is whitened (that is, decorrelated and normalized) and the axes of the latent space are aligned with known concepts of interest. Through experiments, we show that CW can provide us with a much clearer understanding of how the network gradually learns concepts over layers. CW is an alternative to a batch normalization layer in that it normalizes, and also decorrelates (whitens), the latent space. CW can be used in any layer of the network without hurting predictive performance.
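The whitening operation described in the abstract can be sketched with a ZCA transform: subtract the batch mean, then multiply by the inverse square root of the covariance so that the latent dimensions become decorrelated with unit variance. This is a minimal illustrative sketch only; the paper's CW module additionally learns an orthogonal rotation (optimized against concept datasets) to align the whitened axes with concepts, which is omitted here, and the `zca_whiten` helper and toy data are assumptions, not the authors' implementation.

```python
import numpy as np

def zca_whiten(z, eps=1e-5):
    """ZCA-whiten a batch of latent activations (n_samples x n_features).

    After this transform the sample covariance of the output is
    approximately the identity matrix (decorrelated, unit variance).
    """
    mu = z.mean(axis=0)
    zc = z - mu
    cov = zc.T @ zc / (len(z) - 1)
    # Eigendecomposition of the covariance; eps guards tiny eigenvalues.
    vals, vecs = np.linalg.eigh(cov)
    w = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return zc @ w

rng = np.random.default_rng(0)
# Correlated toy "activations": 256 samples, 4 latent dimensions.
z = rng.normal(size=(256, 4)) @ rng.normal(size=(4, 4))
zw = zca_whiten(z)

# Whitened covariance is (approximately) the identity.
print(np.allclose(np.cov(zw, rowvar=False), np.eye(4), atol=1e-2))
```

In the full CW module, a learned orthogonal matrix `Q` would then rotate `zw` so that individual axes respond to predefined concepts, which is what makes the layer interpretable.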

Cited Authors

  • Chen, Z; Bei, Y; Rudin, C

Published Date

  • December 1, 2020

Published In

  • Nature Machine Intelligence

Volume / Issue

  • 2 / 12

Start / End Page

  • 772 - 782

Electronic International Standard Serial Number (EISSN)

  • 2522-5839

Digital Object Identifier (DOI)

  • 10.1038/s42256-020-00265-z

Citation Source

  • Scopus