Interpretable Machine Learning for Science

Feb 11, 2022, 12:30 pm - 1:30 pm
26 Prospect Room 103


Event Description

Would Kepler have discovered his laws if machine learning had been around in 1609? Or would he have been satisfied with the accuracy of some black box regression model, leaving Newton without the inspiration to discover the law of gravitation? In this talk I will discuss problems with applying industry-oriented machine learning algorithms in the natural sciences. I will describe recent approaches I have developed with collaborators for building interpretable machine learning algorithms for science, largely based on a mix of symbolic learning and neural networks. I will discuss the inner workings of my open-source symbolic regression library, PySR, which forms a central part of this interpretable learning toolkit. Finally, I will present examples of how these methods have been used in the past two years in scientific discovery, and outline some current efforts.
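The announcement itself contains no code, but the core idea behind symbolic regression, searching over explicit formulas rather than fitting a black box, can be sketched in a few lines. The candidate set and data values below are illustrative only and are not PySR's actual algorithm:

```python
# Toy symbolic regression: score a small set of candidate expressions
# against data and keep the best fit. (Illustrative sketch, not PySR.)
import math

# Approximate orbital period T (years) vs semi-major axis a (AU)
# for Mercury, Venus, Earth, Mars, Jupiter.
data = [(0.39, 0.24), (0.72, 0.62), (1.0, 1.0), (1.52, 1.88), (5.20, 11.86)]

# A (hypothetical) hand-picked expression space; a real symbolic
# regression library evolves such expressions automatically.
candidates = {
    "a": lambda a: a,
    "a^2": lambda a: a * a,
    "a^1.5": lambda a: a ** 1.5,
    "sqrt(a)": lambda a: math.sqrt(a),
}

def mse(f):
    """Mean squared error of candidate f over the data."""
    return sum((f(a) - t) ** 2 for a, t in data) / len(data)

best = min(candidates, key=lambda name: mse(candidates[name]))
print(best)  # -> a^1.5, i.e. Kepler's third law T^2 = a^3
```

The point of the sketch is that the winning model is a human-readable formula, which is exactly the kind of interpretable output the talk contrasts with black-box regression.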

Miles Cranmer is a 4th-year PhD candidate and NSERC fellow in the Department of Astrophysical Sciences, having also recently concluded a PhD research scientist internship in the structured learning group at DeepMind. He develops interpretable and robust machine learning algorithms for scientific discovery. He simultaneously applies these algorithms to understanding multi-scale dynamical systems in astrophysics, particularly planetary dynamics, turbulence, and cosmology.


Center for Statistics and Machine Learning