- The Center for Statistics and Machine Learning
- Mert Gürbüzbalaban
Risk-averse optimization plays a major role in the design of safe machine learning applications. In this talk, we present a set of tools for enhancing the robustness of models and algorithms to potentially harmful data shifts. First, we review some general results on the modeling of risk aversion and highlight the appeal of superquantile-based risk measures for enforcing robustness to worst-case events. We then show how such a measure can be minimized in the centralized setting via a smoothing à la Nesterov of the superquantile, and present applications of this framework to federated learning, where it handles statistical heterogeneity across the devices of a network. Second, we revisit the bias-variance trade-off of first-order stochastic algorithms from a robustness perspective. Specifically, we study the convergence properties of accelerated methods on saddle-point problems under diverse robustness metrics, and derive new ways to set the associated hyperparameters to stabilize the algorithm at equilibrium.
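For readers unfamiliar with the superquantile mentioned in the abstract: on a finite sample it is simply the average of the worst (1 − p) fraction of losses, and it also admits the Rockafellar–Uryasev variational form that underlies smoothing-based minimization. A minimal sketch (function names and the grid discretization below are illustrative choices, not taken from the talk):

```python
import numpy as np

def superquantile(losses, p):
    """Empirical superquantile (CVaR) at level p: the mean of the
    worst (1 - p) fraction of the losses."""
    losses = np.sort(np.asarray(losses, dtype=float))[::-1]  # descending
    m = max(1, int(np.ceil((1 - p) * len(losses))))          # tail size
    return losses[:m].mean()

def superquantile_ru(losses, p, n_grid=1001):
    """Same quantity via the Rockafellar-Uryasev formula
    min_eta  eta + E[(L - eta)_+] / (1 - p),
    approximated here by a crude grid search over eta."""
    losses = np.asarray(losses, dtype=float)
    etas = np.linspace(losses.min(), losses.max(), n_grid)
    excess = np.maximum(losses[None, :] - etas[:, None], 0.0)
    vals = etas + excess.mean(axis=1) / (1 - p)
    return vals.min()
```

The variational form is what makes smooth optimization approaches natural: the inner maximization defining the superquantile can be regularized, yielding a differentiable surrogate amenable to first-order methods.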
Yassine Laguel is a postdoctoral researcher at Rutgers University and a departmental guest at the Center for Statistics and Machine Learning at Princeton University. He received his M.Sc. in applied mathematics from the Ecole Nationale Supérieure de l’Informatique et des Mathématiques Appliquées (ENSIMAG) in 2018, then completed his Ph.D. under the supervision of Jérôme Malick at the Université Grenoble Alpes. He now works in the Management Sciences and Information Systems department at Rutgers Business School, in the group of Mert Gürbüzbalaban. Dr. Laguel’s research interests center on optimization under uncertainty and its applications to stochastic programming and machine learning. A common thread in his research is the design and analysis of numerical algorithms to address risk in data-driven applications.
Lunch will be served from 12:15 p.m. RSVP to firstname.lastname@example.org