With the widespread use of machine learning, there have been serious societal consequences from using black box models for high-stakes decisions, including flawed bail and parole decisions in criminal justice. Explanations for black box models are not reliable and can be misleading. Interpretable machine learning models, by contrast, come with their own explanations, which are faithful to what the model actually computes.
I will present work on a new alternative to CART (classification and regression trees) for decision tree and decision list modeling. This method is called Certifiably Optimal Rule Lists (CORELS). CORELS combines theorems (bounds) that reduce the search space, fast bit-vector computations, and carefully chosen data structures. It is the first practical method that produces certifiably optimal decision trees of any form.
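To make the bit-vector idea concrete, here is a minimal sketch, not the CORELS implementation: it evaluates a rule list using Python integers as bit vectors over the samples, and computes the kind of regularized objective (misclassification rate plus a per-rule penalty) that CORELS-style search minimizes. The feature names, toy dataset, and penalty value are all hypothetical.

```python
# Illustrative sketch only (hypothetical data and rules, not CORELS itself).
# Each feature and the label vector are encoded as one Python int: bit i set
# means sample i satisfies the feature (or has a positive label).

N = 8                     # number of samples in the toy dataset
ALL = (1 << N) - 1        # bit mask covering all samples

features = {              # hypothetical antecedents
    "priors>3": 0b00110011,
    "age<25":   0b00001111,
}
labels = 0b00100111       # hypothetical ground-truth labels

# A rule list: ordered (antecedent, prediction) pairs plus a default class.
rule_list = [("priors>3", 1), ("age<25", 1)]
default = 0

def evaluate(rule_list, default, features, labels, lam=0.01, n=N):
    """Return (objective, error count), where
    objective = errors/n + lam * len(rule_list)."""
    uncaptured = ALL      # samples not yet captured by an earlier rule
    errors = 0
    for antecedent, pred in rule_list:
        captured = features[antecedent] & uncaptured
        # Samples this rule gets wrong: captured samples whose label
        # disagrees with the rule's prediction.
        wrong = captured & (labels if pred == 0 else ~labels & ALL)
        errors += bin(wrong).count("1")
        uncaptured &= ~captured
    # Remaining samples fall through to the default prediction.
    wrong = uncaptured & (labels if default == 0 else ~labels & ALL)
    errors += bin(wrong).count("1")
    return errors / n + lam * len(rule_list), errors
```

Because capture, error counting, and fall-through are all single bitwise operations on machine words, evaluating a candidate rule list is very fast, which is what makes an exhaustive-with-pruning search over rule lists feasible.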
In our criminal justice applications, we have been able to achieve interpretable models with the same accuracy as black box models.