Friday, October 2, 2020
A recording of the seminar is available to watch.
Professor Cynthia Rudin (PI, Prediction Analysis Lab – Duke University): Current Approaches in Interpretable Machine Learning
With the widespread use of machine learning, there have been serious societal consequences from using black box models for high-stakes decisions, including flawed bail and parole decisions in criminal justice, flawed models in healthcare, and black box loan decisions in finance. Transparency and interpretability of machine learning models are critical in high-stakes decisions. In this talk, I will focus on two of the most fundamental and important problems in the field of interpretable machine learning: optimal sparse decision trees and optimal scoring systems. I will also briefly describe work on interpretable neural networks for computer vision.
Optimal sparse decision trees: We want to find trees that maximize accuracy and minimize the number of leaves in the tree (sparsity). This is an NP-hard optimization problem with no polynomial-time approximation. I will present the first practical algorithm for solving this problem, which uses a highly customized dynamic-programming-with-bounds procedure, computational reuse, specialized data structures, analytical bounds, and bit-vector computations.
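To make the objective concrete, the following toy recursion minimizes (misclassification count) + λ × (number of leaves) over small trees on binary features, with one analytical bound to prune the search. This is only an illustrative sketch of the search space, not the algorithm from the talk, which relies on far more aggressive bounds, computational reuse, and bit-vector tricks to scale.

```python
def best_tree(X, y, lam, depth=2):
    """Minimum of (# misclassified) + lam * (# leaves), searching up to `depth` splits.

    X: list of rows of binary features; y: list of 0/1 labels.
    """
    n = len(y)
    ones = sum(y)
    leaf_cost = min(ones, n - ones) + lam   # stop here: one majority-vote leaf
    # Analytical bound: any split creates at least two leaves, costing at
    # least 2*lam, so a sufficiently accurate leaf cannot be beaten.
    if depth == 0 or leaf_cost <= 2 * lam:
        return leaf_cost
    best = leaf_cost
    for j in range(len(X[0])):              # try splitting on each binary feature
        left = [i for i in range(n) if X[i][j] == 0]
        right = [i for i in range(n) if X[i][j] == 1]
        if not left or not right:
            continue                        # split does not separate the data
        split_cost = (
            best_tree([X[i] for i in left], [y[i] for i in left], lam, depth - 1)
            + best_tree([X[i] for i in right], [y[i] for i in right], lam, depth - 1)
        )
        best = min(best, split_cost)
    return best
```

Raising λ trades accuracy for sparsity: with a large enough λ, the bound fires immediately and the search returns a single leaf.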
Optimal scoring systems: Scoring systems are sparse linear models with integer coefficients. Traditionally, scoring systems have been designed using manual feature elimination on logistic regression models, with a post-processing step in which coefficients are rounded. However, this process can fail badly to produce optimal (or even near-optimal) solutions. I will present a novel cutting plane method for producing scoring systems from data. The solutions are globally optimal according to the logistic loss, regularized by the number of terms (sparsity), with coefficients constrained to be integers. Predictive models from our algorithm have been used for many medical and criminal justice applications, including in intensive care units in hospitals.
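The objective can be sketched directly: minimize logistic loss plus a penalty per nonzero term, with integer coefficients. The brute-force grid enumeration below is only for illustration on tiny problems; the method in the talk solves the same kind of objective to global optimality with a cutting plane method rather than enumeration.

```python
import itertools
import math


def logistic_loss(w, X, y):
    """Average logistic loss; y in {-1, +1}; w[0] is the intercept."""
    total = 0.0
    for xi, yi in zip(X, y):
        score = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
        total += math.log(1.0 + math.exp(-yi * score))
    return total / len(y)


def best_score_system(X, y, c0=0.05, max_coef=3):
    """Enumerate integer coefficient vectors in [-max_coef, max_coef] and
    return the one minimizing logistic loss + c0 * (# nonzero terms)."""
    d = len(X[0])
    grid = range(-max_coef, max_coef + 1)
    best_w, best_obj = None, float("inf")
    for w in itertools.product(grid, repeat=d + 1):
        obj = logistic_loss(w, X, y) + c0 * sum(wj != 0 for wj in w[1:])
        if obj < best_obj:
            best_w, best_obj = w, obj
    return best_w, best_obj
```

Because the search is over integer vectors directly, it avoids the failure mode of fitting real-valued logistic regression and rounding afterward, which can land far from the best integer solution.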
Interpretable neural networks for computer vision: We have developed a neural network that performs case-based reasoning. It aims to explain its reasoning process in a way that humans can understand, even for complex classification tasks such as bird identification.
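A simplified analog of case-based reasoning is nearest-prototype classification: an input is labeled by its most similar stored prototype, so every prediction comes with a "this looks like that" explanation. The network in the talk learns its prototypes inside a deep architecture; here they are just fixed feature vectors, purely for illustration.

```python
def classify_by_prototype(x, prototypes):
    """prototypes: list of (label, vector) pairs.
    Returns the label of the most similar prototype plus the prototype
    itself, which serves as the explanation for the prediction."""
    def similarity(a, b):
        # negative squared distance: higher means more alike
        return -sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    label, proto = max(prototypes, key=lambda p: similarity(x, p[1]))
    return label, proto
```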
1) Jimmy Lin, Chudi Zhong, Diane Hu, Cynthia Rudin, and Margo Seltzer. “Generalized and Scalable Optimal Sparse Decision Trees.” ICML, 2020.
2) Berk Ustun and Cynthia Rudin. “Learning Optimized Risk Scores.” JMLR, 2019. Shorter version at KDD, 2017.
Struck et al. “Association of an Electroencephalography-Based Risk Score With Seizure Probability in Hospitalized Patients.” JAMA Neurology, 2017.
3) Chaofan Chen, Oscar Li, Chaofan Tao, Alina Barnett, Jonathan Su, and Cynthia Rudin. “This Looks Like That: Deep Learning for Interpretable Image Recognition.” NeurIPS, 2019.
Cynthia Rudin is a professor of computer science, electrical and computer engineering, and statistical science at Duke University. Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. Her degrees are from the University at Buffalo and Princeton University. She is a three-time winner of the INFORMS Innovative Applications in Analytics Award, was named one of the “Top 40 Under 40” by Poets and Quants in 2015, and was named by Business Insider as one of the 12 most impressive professors at MIT in 2015. She has served on committees for INFORMS, the National Academies, the American Statistical Association, DARPA, the NIJ, and AAAI. She is a fellow of both the American Statistical Association and the Institute of Mathematical Statistics.
MIE’s Distinguished Seminar Series features top international researchers and leading experts across major areas of Mechanical Engineering and Industrial Engineering. The speakers present their latest research and offer their perspectives on the current state of their field. The seminars are part of the program requirements for MIE Master of Applied Science and PhD students. The Distinguished Seminar Series is coordinated for 2020–2021 by Associate Professor Tobin Filleter.