# Seminars

## Previous Seminars

### Can learning theory resist deep learning?

Fri, Nov 15, 2019, 12:30 pm

Machine learning algorithms are ubiquitous across scientific, industrial, and personal domains, with many successful applications. As a scientific field, machine learning has always been characterized by the constant exchange between theory and practice, with a stream of algorithms that exhibit both good empirical performance on real-world...

Speaker(s):

Francis Bach

INRIA

### Convergence Rates of Stochastic Algorithms in Nonsmooth Nonconvex Optimization

Thu, Nov 14, 2019, 4:30 pm

Stochastic iterative methods lie at the core of large-scale optimization and its modern applications to data science. Though such algorithms are routinely and successfully used in practice on highly irregular problems (e.g. deep neural networks), few performance guarantees are available outside of smooth or convex settings. In this talk, I will...

Speaker(s):

Dmitriy Drusvyatskiy

University of Washington
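The flavor of guarantee the abstract discusses can be seen in miniature on the nonsmooth convex function f(x) = |x|: a stochastic subgradient method with diminishing step sizes still drives the iterate toward the minimizer even though each subgradient is observed with noise. A minimal sketch, not the talk's method; the step-size schedule 1/√t, the Gaussian noise model, and all constants are illustrative assumptions:

```python
import random

def subgradient_abs(x):
    # A subgradient of f(x) = |x|: sign(x), with 0 a valid choice at x = 0.
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

def stochastic_subgradient(x0=5.0, steps=2000, noise_std=0.5, seed=0):
    """Minimize f(x) = |x| from noisy subgradients with step size 1/sqrt(t)."""
    rng = random.Random(seed)
    x = x0
    for t in range(1, steps + 1):
        g = subgradient_abs(x) + rng.gauss(0.0, noise_std)  # noisy oracle
        x -= g / t ** 0.5                                   # diminishing step
    return x

x_final = stochastic_subgradient()
```

Despite the nonsmoothness at the optimum and the noise, the iterate ends up close to the minimizer x = 0; the talk's subject is what rates of this kind survive once convexity is also dropped.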

### Exploration by Optimization in Partial Monitoring

Tue, Nov 12, 2019, 4:30 pm

In many real-world problems, learners cannot directly observe their own rewards but can still infer whether a particular action was successful. How should a learner choose actions to balance its need for information against maximizing its reward in this setting? Partial monitoring is a framework introduced a few decades ago to model learning...

Speaker(s):

Csaba Szepesvari

Professor of Computing Science, University of Alberta

### Randomized Methods for Low-Rank Tensor Decomposition in Unsupervised Learning

Mon, Nov 11, 2019, 4:00 pm

Tensor decomposition discovers latent structure in higher-order data sets and is the higher-order analogue of matrix decomposition.

Speaker(s):

Tamara Kolda

Sandia National Laboratories
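To make the object in the abstract concrete: a low-rank CP (CANDECOMP/PARAFAC) model writes a third-order tensor as a sum of rank-one outer products a∘b∘c, directly generalizing the rank-one terms of a matrix factorization. A minimal pure-Python sketch, with made-up toy factor vectors (real decompositions would be fit to data, e.g. by alternating least squares, which is not shown here):

```python
def outer3(a, b, c):
    # Rank-one third-order tensor: T[i][j][k] = a[i] * b[j] * c[k].
    return [[[ai * bj * ck for ck in c] for bj in b] for ai in a]

def add_tensors(S, T):
    # Entrywise sum of two third-order tensors of matching shape.
    return [[[s + t for s, t in zip(rs, rt)] for rs, rt in zip(ms, mt)]
            for ms, mt in zip(S, T)]

# A rank-2 CP model: T = a1∘b1∘c1 + a2∘b2∘c2 (toy factors).
a1, b1, c1 = [1.0, 0.0], [1.0, 1.0], [1.0, 2.0]
a2, b2, c2 = [0.0, 1.0], [2.0, 0.0], [1.0, 1.0]
T = add_tensors(outer3(a1, b1, c1), outer3(a2, b2, c2))
```

Each rank-one term plays the role a singular vector pair plays in a matrix SVD; the randomized methods of the talk are about fitting such factors to large higher-order data efficiently.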

### Algorithm and Statistical Inference for Recovery of Discrete Structure

Fri, Nov 8, 2019, 12:30 pm

Discrete structure recovery is an important topic in modern high-dimensional inference. Examples of discrete structure include clustering labels, ranks of players, and signs of variables in a regression model.

Speaker(s):

Chao Gao

Assistant Professor of Statistics, University of Chicago

### Optimizing for Fairness in ML

Thu, Nov 7, 2019, 4:30 pm

Recent events have made it evident that algorithms can be discriminatory, reinforce human prejudices, accelerate the spread of misinformation, and are generally not as objective as they are widely thought to be.

Speaker(s):

Elisa Celis

Assistant Professor of Statistics and Data Science, Yale University

### Word Embeddings: What works, what doesn’t, and how to tell the difference for applied research

Fri, Oct 25, 2019, 12:00 pm

We consider the properties and performance of word embedding techniques in the context of political science research. In particular, we explore key parameter choices—including context window length, embedding vector dimensions, and the use of pre-trained vs. locally fitted variants—with respect to efficiency and quality of inferences possible with...

Speaker(s):

Arthur Spirling

New York University
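One parameter choice the abstract highlights, context window length, can be made concrete with count-based vectors: widening the window changes which co-occurrences an embedding method gets to compress into dense vectors. A toy sketch, with an invented corpus and arbitrary window sizes; a real pipeline would feed such counts, or the raw text, into an SVD-, GloVe-, or word2vec-style estimator:

```python
from collections import Counter

def cooccurrence(tokens, window=2):
    """Symmetric co-occurrence counts within a +/- `window` token context."""
    counts = Counter()
    for i, w in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[(w, tokens[j])] += 1
    return counts

# Toy corpus (illustrative only).
toks = "the senate passed the bill the house passed the bill".split()
narrow = cooccurrence(toks, window=1)  # only immediate neighbors
wide = cooccurrence(toks, window=4)    # broader, more topical contexts
```

Narrow windows emphasize syntactic neighbors while wide windows pick up broader topical associations, which is one reason this choice interacts with the quality of downstream inferences.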

### Meisam Razaviyayn

Thu, Oct 17, 2019, 4:30 pm

Recent applications that arise in machine learning have spurred significant interest in solving min-max saddle point games. This problem has been extensively studied in the convex-concave regime, for which a global equilibrium solution can be computed efficiently. In this talk, we study the problem in the non-convex regime and show that an $\epsilon...

Speaker(s):

Meisam Razaviyayn

University of Southern California
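The standard warm-up for the convex-concave regime the abstract mentions is the bilinear game min over x, max over y of f(x, y) = xy: simultaneous gradient descent-ascent spirals away from the equilibrium at (0, 0), while the extragradient method's lookahead step contracts toward it. A hedged sketch of that contrast, not the talk's non-convex setting; the step size and iteration counts are arbitrary:

```python
def gda_step(x, y, eta):
    # Simultaneous gradient descent-ascent on f(x, y) = x * y.
    return x - eta * y, y + eta * x

def extragradient_step(x, y, eta):
    # Extragradient: a lookahead half-step, then an update evaluated there.
    xh, yh = x - eta * y, y + eta * x
    return x - eta * yh, y + eta * xh

x, y = 1.0, 1.0
for _ in range(200):
    x, y = extragradient_step(x, y, 0.5)
# The extragradient iterate contracts toward the saddle point (0, 0); a plain
# gda_step iterate from the same start grows in norm by sqrt(1 + eta^2) per step.
```

The non-convex regime of the talk is precisely where such clean equilibrium-seeking guarantees stop applying and new notions of approximate solution are needed.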

### Deep Neural Networks for Estimation and Inference: Application to Causal Effects and Other Semiparametric Estimands

Mon, Oct 14, 2019, 12:30 pm

We study deep neural networks and their use in semiparametric inference. We prove valid inference after first-step estimation with deep learning, a result new to the literature. We provide new rates of convergence for deep feedforward neural nets and, because our rates are sufficiently fast (in some cases minimax optimal), obtain valid...

Speaker(s):

Max H. Farrell, Associate Professor of Econometrics and Statistics

University of Chicago

### Control with Learning On the Fly: First Toy Problems

Thu, Oct 10, 2019, 4:30 pm

How can we control a system without knowing beforehand what the controls do? In particular, how should we balance the imperatives to "explore" (learn what the controls do) and "exploit" (use what we've learned so far to make the system do what we want)? We won't have enough data to apply deep learning. The talk poses several toy problems and...

Speaker(s):

Charles Fefferman

Math Department, Princeton University