Seminars

Upcoming Seminars

CITP Distinguished Lecture Series: Lorrie Cranor – Designing Usable and Useful Privacy Choice Interfaces
Thu, Mar 30, 2023, 4:30 pm

Co-sponsored by the Department of Computer Science and the Department of Electrical and Computer Engineering

Users who wish to exercise privacy rights or make privacy choices must often rely on website or app user interfaces. However, too often, these user interfaces suffer from usability deficiencies ranging from being…

Previous Seminars

Seminars on security and privacy in machine learning: Kamalika Chaudhuri (UCSD and Meta AI)
Tue, Jun 28, 2022, 1:00 pm

The motivation for the seminar series is to build a platform to discuss and disseminate the progress the community has made on some of the core challenges in security and privacy for machine learning. We intend to host weekly talks from leading researchers in both academia and industry. Each session will consist of a talk (40 minutes) followed by a Q&A and short discussion (20 minutes).

Seminars on security and privacy in machine learning: Ben Y. Zhao (University of Chicago)
Tue, Jun 21, 2022, 1:00 pm

The motivation for the seminar series is to build a platform to discuss and disseminate the progress the community has made on some of the core challenges in security and privacy for machine learning. We intend to host weekly talks from leading researchers in both academia and industry. Each session will consist of a talk (40 minutes) followed by a Q&A and short discussion (20 minutes).

Seminars on security and privacy in machine learning: Bo Li (University of Illinois Urbana-Champaign)
Tue, Jun 14, 2022, 1:00 pm

The motivation for the seminar series is to build a platform to discuss and disseminate the progress the community has made on some of the core challenges in security and privacy for machine learning. We intend to host weekly talks from leading researchers in both academia and industry. Each session will consist of a talk (40 minutes) followed by a Q&A and short discussion (20 minutes).

Some Thoughts on Generalization in Deep Learning: Mehul Motani (National University of Singapore)
Tue, Jun 14, 2022, 10:30 am

A good learning algorithm is characterized primarily by its ability to predict beyond the training data, i.e., its ability to generalize. What gives a learning algorithm the ability to generalize, and can we predict when it will generalize well? We believe that clear answers to these questions remain elusive. In this talk, we will share some perspectives based on our work on understanding generalization in deep learning algorithms.

Seminars on security and privacy in machine learning: Tom Goldstein (University of Maryland)
Tue, Jun 7, 2022, 1:00 pm

The motivation for the seminar series is to build a platform to discuss and disseminate the progress the community has made on some of the core challenges in security and privacy for machine learning. We intend to host weekly talks from leading researchers in both academia and industry. Each session will consist of a talk (40 minutes) followed by a Q&A and short discussion (20 minutes).

Learning by Random Features: Sharp Asymptotics and Universality Laws
Wed, Dec 8, 2021, 4:30 pm

Many new random matrix ensembles arise in learning and modern signal processing. As shown in recent studies, the spectral properties of these matrices help answer crucial questions regarding the training and generalization performance of neural networks, and the fundamental limits of high-dimensional signal recovery. As a result, there has been growing interest in precisely understanding the spectra and other asymptotic properties of these matrices. Unlike their classical counterparts, these new random matrices are often highly structured and are the result of nonlinear transformations. 
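As a small illustration of the kind of structured, nonlinear random matrix the abstract describes (this toy example is our own, not from the talk): the spectrum of the Gram matrix of a random-features map can be computed directly, where the matrix entries are a nonlinear transformation of a product of Gaussian matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, p = 100, 400, 200            # input dimension, samples, random features

X = rng.standard_normal((d, n))    # data matrix
W = rng.standard_normal((p, d))    # random projection defining the features

# Nonlinear random-features matrix: unlike classical ensembles, its entries
# are a nonlinearity applied to a product of independent Gaussian matrices.
Y = np.tanh(W @ X / np.sqrt(d))

# Empirical spectrum of the sample Gram matrix Y Y^T / n.
eigvals = np.linalg.eigvalsh(Y @ Y.T / n)
print(eigvals.min(), eigvals.max())
```

In the proportional regime where d, n, and p grow together, the histogram of these eigenvalues converges to a deterministic limiting law, which is the kind of asymptotic object the talk studies.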

Bayesian Risk Optimization (BRO): A New Approach to Data-driven Stochastic Optimization
Wed, Dec 1, 2021, 4:30 pm

A large class of stochastic optimization problems involves optimizing an expectation taken with respect to an underlying distribution that is unknown in practice. One popular approach to addressing the distributional uncertainty, known as distributionally robust optimization (DRO), is to hedge against the worst case among an ambiguity set of candidate distributions. However, given that the worst case rarely happens, an inappropriately constructed ambiguity set can sometimes result in over-conservative solutions. To explore the middle ground between optimistically ignoring the distributional uncertainty and pessimistically fixating on the worst-case scenario, we propose a Bayesian risk optimization (BRO) framework for parametric underlying distributions, which optimizes a risk functional applied to the posterior distribution of the unknown distribution parameter. Of particular interest are four risk functionals: mean, mean-variance, value-at-risk, and conditional value-at-risk. To reveal the implications of BRO, we establish the consistency of objective functions and optimal solutions, as well as the asymptotic normality of objective functions and optimal values. We also extend the BRO framework to online and multi-stage settings.
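A minimal numerical sketch of the BRO idea with the mean risk functional (the cost model, prior, and all parameter values below are illustrative assumptions, not taken from the talk): form a posterior over the unknown distribution parameter, average the parameter-conditional objective over posterior draws, and minimize the result.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy newsvendor-style problem: choose order quantity x; demand ~ N(theta, 1)
# with theta unknown. We observe data and form a conjugate Gaussian posterior.
data = rng.normal(2.0, 1.0, size=30)

prior_mu, prior_var, noise_var = 0.0, 100.0, 1.0
post_var = 1.0 / (1.0 / prior_var + len(data) / noise_var)
post_mu = post_var * (prior_mu / prior_var + data.sum() / noise_var)

thetas = rng.normal(post_mu, np.sqrt(post_var), size=200)  # posterior draws

def expected_cost(x, theta, n_mc=500):
    """Monte Carlo estimate of E[cost | theta]: unit holding cost,
    backorder cost of 3 per unit of unmet demand."""
    demand = rng.normal(theta, 1.0, size=n_mc)
    return np.mean(np.maximum(x - demand, 0) + 3 * np.maximum(demand - x, 0))

# BRO with the *mean* risk functional: average the theta-conditional
# objective over the posterior, then minimize over a grid of decisions.
grid = np.linspace(0.0, 5.0, 51)
bro_obj = [np.mean([expected_cost(x, t) for t in thetas]) for x in grid]
x_star = grid[int(np.argmin(bro_obj))]
print(x_star)
```

Replacing the outer posterior average with a variance-penalized mean, a value-at-risk, or a conditional value-at-risk of the per-theta objectives gives the other three risk functionals mentioned in the abstract.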

MCMC vs. Variational Inference -- for Credible Learning and Decision-Making at Scale
Tue, Nov 23, 2021, 4:30 pm

I will introduce some recent progress towards understanding the scalability of Markov chain Monte Carlo (MCMC) methods and their comparative advantage with respect to variational inference. I will discuss an optimization perspective on the infinite-dimensional probability space, where MCMC leverages stochastic sample paths while variational inference projects the probabilities onto a finite-dimensional parameter space. Three ingredients will be the focus of this discussion: non-convexity, acceleration, and stochasticity. This line of work is motivated by epidemic prediction, where we need uncertainty quantification for credible predictions and informed decision-making with complex models and evolving data.
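The contrast between the two families can be made concrete on a toy target (this sketch is our own illustration, not material from the talk): random-walk Metropolis follows a stochastic sample path through the full probability space, while a single-Gaussian variational fit is confined to a finite-dimensional parameter space and can miss multimodality.

```python
import numpy as np

rng = np.random.default_rng(2)

# Unnormalized log-density of a 1-D target: a symmetric two-component
# Gaussian mixture with modes at -2 and +2.
def log_p(x):
    return np.logaddexp(-0.5 * (x + 2) ** 2, -0.5 * (x - 2) ** 2)

# Random-walk Metropolis: asymptotically exact samples via a stochastic path.
def metropolis(n_steps=20000, step=1.5):
    xs = np.empty(n_steps)
    x = 0.0
    for i in range(n_steps):
        prop = x + step * rng.standard_normal()
        if np.log(rng.random()) < log_p(prop) - log_p(x):
            x = prop
        xs[i] = x
    return xs

samples = metropolis()

# A single-Gaussian "variational" fit (moment matching here, for simplicity)
# lives in the two-dimensional parameter space (mu, sigma): it captures the
# overall mean and spread but collapses the two modes into one broad bump.
vi_mu, vi_sigma = samples.mean(), samples.std()
print(vi_mu, vi_sigma)
```

The MCMC samples recover the bimodal shape at the cost of a long sample path; the parametric fit is cheap but structurally limited, which is the trade-off the talk examines at scale.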

Optimal No-Regret Learning in Repeated First-Price Auctions
Wed, Nov 17, 2021, 4:30 pm

First-price auctions have very recently swept the online advertising industry, replacing second-price auctions as the predominant auction mechanism on many platforms for display ads bidding. This shift has brought forth important challenges for a bidder: how should one bid in a first-price auction, where, unlike in second-price auctions, it is no longer optimal to bid one's private value truthfully, and it is hard to know the other bidders' behavior? In this paper, we take an online learning angle and address the fundamental problem of learning to bid in repeated first-price auctions. We discuss our recent work on leveraging the special structure of first-price auctions to design minimax optimal no-regret bidding algorithms.
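To make the learning problem concrete, here is a generic bandit baseline over a discretized bid grid (a plain UCB index, our own illustration; it ignores the first-price structure that the talk's minimax-optimal algorithms exploit, and the uniform competing-bid model is an assumption):

```python
import numpy as np

rng = np.random.default_rng(3)

value = 0.8                           # bidder's private value (assumed known)
bids = np.linspace(0.0, value, 21)    # discretized bid grid
counts = np.zeros_like(bids)
rewards = np.zeros_like(bids)
T = 20000

for t in range(T):
    # UCB index over bids: empirical mean utility plus exploration bonus.
    ucb = rewards / np.maximum(counts, 1) + np.sqrt(
        2 * np.log(t + 1) / np.maximum(counts, 1))
    b = int(np.argmax(ucb))
    m = rng.uniform(0, 1)             # highest competing bid, i.i.d. here
    # First-price payoff: win iff your bid is highest, then pay your own bid.
    utility = (value - bids[b]) if bids[b] >= m else 0.0
    counts[b] += 1
    rewards[b] += utility

best = bids[int(np.argmax(rewards / np.maximum(counts, 1)))]
print(best)
```

Under the uniform competing-bid model the expected utility of bid b is (value - b) * b, maximized at value / 2 = 0.4, and the bandit's empirical best bid settles near that point. Treating the bid grid as unstructured arms wastes information, since losing at a bid also reveals that every lower bid would have lost; exploiting such structure is what yields the improved regret rates discussed in the talk.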

CITP Seminar: Matt Weinberg – A Crash Course on Algorithmic Mechanism Design
Tue, Nov 16, 2021, 12:30 pm

Matt is an assistant professor at Princeton University in the Department of Computer Science. His primary research interest is in Algorithmic Mechanism Design: algorithm design in settings where users have their own incentives. He is also interested more broadly in Algorithmic Game Theory, Algorithms Under Uncertainty, and Theoretical Computer Science in general.
