A Bayesian framework for learning rule sets for interpretable classification
Journal Article
We present a machine learning algorithm for building classifiers that are composed of a small number of short rules. These models are restricted disjunctive normal form (DNF) expressions. An example of a classifier of this form is as follows: If X satisfies (condition A AND condition B) OR (condition C) OR · · · , then Y = 1. Models of this form have the advantage of being interpretable to human experts, since they produce a set of rules that concisely describes a specific class. We present two probabilistic models with prior parameters that the user can set to encourage the model to have a desired size and shape, conforming to a domain-specific definition of interpretability. We provide a scalable MAP inference approach and develop theoretical bounds to reduce computation by iteratively pruning the search space. We apply our method (Bayesian Rule Sets – BRS) to characterize and predict user behavior with respect to in-vehicle context-aware personalized recommender systems. Our method has a major advantage over classical associative classification methods and decision trees in that it does not greedily grow the model.
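To make the rule-set form concrete, here is a minimal sketch (not the authors' implementation) of how a learned rule set classifies an instance: the model predicts Y = 1 when any rule, itself a conjunction of conditions, is satisfied. The attribute names and example rules below are hypothetical illustrations.

```python
def predict(rule_set, x):
    """Return 1 if any rule in the set fires on instance x, else 0.

    rule_set: list of rules; each rule is a list of (attribute, value)
              conditions that must all hold (a conjunction).
    x: dict mapping attribute names to values.
    """
    return int(any(all(x.get(attr) == val for attr, val in rule)
                   for rule in rule_set))

# Hypothetical rule set with two rules:
# (weather = sunny AND passenger = alone) OR (destination = home)
rules = [
    [("weather", "sunny"), ("passenger", "alone")],
    [("destination", "home")],
]

print(predict(rules, {"weather": "sunny", "passenger": "alone"}))   # 1
print(predict(rules, {"weather": "rainy", "destination": "home"}))  # 1
print(predict(rules, {"weather": "rainy", "destination": "work"}))  # 0
```

The BRS priors described in the abstract act on the number of rules and the length of each conjunction, so a user can bias learning toward rule sets small enough to read at a glance.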
Cited Authors
- Wang, T; Rudin, C; Doshi-Velez, F; Liu, Y; Klampfl, E; MacNeille, P
Published Date
- August 1, 2017
Published In
- Journal of Machine Learning Research
Volume / Issue
- 18 /
Start / End Page
- 1 - 37
Electronic International Standard Serial Number (EISSN)
- 1533-7928
International Standard Serial Number (ISSN)
- 1532-4435
Citation Source
- Scopus