Managing Uncertainty in Machine Learning


Due to ongoing public safety concerns and health restrictions associated with COVID-19, this workshop has been postponed. Additional details will be provided at a later date.

Machine learning methods are now widely used in scientific and engineering applications. In many application domains, data are affected by significant levels of noise, often of a complex, heteroscedastic nature. Yet little effort has been made to investigate the behavior of ML methods under noise. This situation profoundly limits the applicability of ML methods for accurate parameter inference, e.g. in physical systems, and raises concerns about robustness, repeatability, and explainability.
This workshop seeks to survey the current state of uncertainty quantification, through both foundational approaches and application-specific techniques. Its goal is to establish best practices for estimating and reporting uncertainties of ML-derived parameters, and to identify research avenues for improving the applicability and robustness of machine learning methods on noisy data.
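As a minimal illustration of the kind of problem the workshop addresses (not an example from the workshop itself), the sketch below fits a linear model to data with heteroscedastic noise using weighted least squares, and propagates the per-point noise levels into uncertainties on the inferred parameters. All names and the simulated model are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a linear model y = a*x + b with point-dependent (heteroscedastic)
# noise: the noise level grows with x.
a_true, b_true = 2.0, -1.0
x = np.linspace(0, 10, 200)
sigma = 0.1 + 0.3 * x
y = a_true * x + b_true + rng.normal(0, sigma)

# Weighted least squares: each point is weighted by 1/sigma^2, so noisy
# points contribute less to the fit.
X = np.column_stack([x, np.ones_like(x)])
W = np.diag(1.0 / sigma**2)
cov = np.linalg.inv(X.T @ W @ X)   # parameter covariance matrix
theta = cov @ (X.T @ W @ y)        # [a_hat, b_hat]
err = np.sqrt(np.diag(cov))        # 1-sigma parameter uncertainties

print("estimates:", theta, "uncertainties:", err)
```

Ignoring the heteroscedasticity (i.e., using uniform weights) would still give unbiased parameter estimates here, but the reported uncertainties would be wrong, which is precisely the kind of failure mode that motivates careful uncertainty quantification.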

Workshop Organizer

  • Peter Melchior



We gratefully acknowledge financial support from the Schmidt DataX Fund at Princeton University, made possible through a major gift from the Schmidt Futures Foundation, and our Princeton University partners:

[CSML and DataX logos]