Upcoming Events

Machine Learning in Physics
Wed, Sep 27, 2023, 4:30 pm

A new seminar hosted jointly by Physics and ORFE, focusing on interdisciplinary work at the intersection of physics and machine learning.

What? A seminar series highlighting research on both physics-inspired approaches to understanding ML and the use of ML for physics applications

Location
Jadwin Hall A10
Logical reasoning and Transformers
Mon, Oct 2, 2023, 4:30 pm

Abstract: Transformers have become the dominant neural network architecture in deep learning, most prominently in the GPT family of language models. While they dominate language and vision tasks, their performance is less convincing on so-called “reasoning” tasks.

In this talk, we introduce the “generalization on the…

Location
214 Fine Hall
Machine Learning in Physics
Wed, Oct 4, 2023, 4:30 pm

A new seminar hosted jointly by Physics and ORFE, focusing on interdisciplinary work at the intersection of physics and machine learning.

What? A seminar series highlighting research on both physics-inspired approaches to understanding ML and the use of ML for physics applications

Location
Jadwin Hall A10
Princeton Symposium on Biological & Artificial Intelligence
Thu, Oct 19, 2023

The Symposium will bring together neuroscientists and computer scientists at Princeton who work on problems cutting across the boundaries of biological and artificial intelligence systems.

Thursday, October 19, 2023, 4–8 PM

Friday,…

Location
Princeton Neuroscience Institute
Machine Learning in Physics
Wed, Nov 1, 2023, 4:30 pm

A new seminar hosted jointly by Physics and ORFE, focusing on interdisciplinary work at the intersection of physics and machine learning.

What? A seminar series highlighting research on both physics-inspired approaches to understanding ML and the use of ML for physics applications

Location
Jadwin Hall A10
Machine Learning in Physics
Wed, Nov 15, 2023, 4:30 pm

A new seminar hosted jointly by Physics and ORFE, focusing on interdisciplinary work at the intersection of physics and machine learning.

What? A seminar series highlighting research on both physics-inspired approaches to understanding ML and the use of ML for physics applications

Location
Jadwin Hall A10
Machine Learning in Physics
Wed, Nov 29, 2023, 4:30 pm

A new seminar hosted jointly by Physics and ORFE, focusing on interdisciplinary work at the intersection of physics and machine learning.

What? A seminar series highlighting research on both physics-inspired approaches to understanding ML and the use of ML for physics applications

Location
Jadwin Hall A10

Events Archive

Bayesian Risk Optimization (BRO): A New Approach to Data-driven Stochastic Optimization

A large class of stochastic optimization problems involves optimizing an expectation taken with respect to an underlying distribution that is unknown in practice. One popular approach to addressing this distributional uncertainty, known as distributionally robust optimization (DRO), is to hedge against the worst case among an ambiguity set of candidate distributions. However, given that the worst case rarely occurs, an inappropriately constructed ambiguity set can result in over-conservative solutions. To explore the middle ground between optimistically ignoring the distributional uncertainty and pessimistically fixating on the worst-case scenario, we propose a Bayesian risk optimization (BRO) framework for parametric underlying distributions, which optimizes a risk functional applied to the posterior distribution of the unknown distribution parameter. Of particular interest to us are four risk functionals: mean, mean-variance, value-at-risk, and conditional value-at-risk. To reveal the implications of BRO, we establish the consistency of objective functions and optimal solutions, as well as the asymptotic normality of objective functions and optimal values. We also extend the BRO framework to online and multi-stage settings.
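The BRO recipe can be sketched numerically. The following is a toy illustration, not the speaker's implementation: a newsvendor-style cost with exponential demand whose mean parameter is unknown, posterior samples of that parameter assumed already drawn, and two of the four risk functionals (mean and conditional value-at-risk) applied to the posterior distribution of the expected cost. All costs, priors, and grid values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy newsvendor-style cost: order x units; demand ~ Exponential(mean theta).
# Overage costs 1 per unit, underage (lost sales) costs 3 per unit.
def expected_cost(x, theta, n_mc=1000):
    demand = rng.exponential(theta, size=n_mc)
    return np.mean(np.maximum(x - demand, 0.0) + 3.0 * np.maximum(demand - x, 0.0))

# Posterior samples of the unknown demand parameter theta (assumed given;
# in practice they would come from Bayesian updating on observed demand).
posterior_thetas = rng.gamma(shape=20.0, scale=0.25, size=100)  # posterior mean ~ 5

def bro_objective(x, risk="mean", alpha=0.9):
    # Distribution of the expected cost induced by posterior uncertainty in theta.
    vals = np.array([expected_cost(x, th) for th in posterior_thetas])
    if risk == "mean":
        return vals.mean()
    if risk == "cvar":  # mean of the worst (1 - alpha) tail over the posterior
        return vals[vals >= np.quantile(vals, alpha)].mean()
    raise ValueError(risk)

grid = np.linspace(0.5, 15.0, 40)
best_mean = grid[np.argmin([bro_objective(x, "mean") for x in grid])]
best_cvar = grid[np.argmin([bro_objective(x, "cvar") for x in grid])]
```

The choice of risk functional interpolates between risk-neutral averaging over the posterior (mean) and conservatism toward unfavorable parameter values (CVaR), without fixating on a worst case.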

Location
Virtual Seminar
MCMC vs. Variational Inference -- for Credible Learning and Decision-Making at Scale

I will introduce some recent progress towards understanding the scalability of Markov chain Monte Carlo (MCMC) methods and their comparative advantages with respect to variational inference. I will discuss an optimization perspective on the infinite-dimensional probability space, where MCMC leverages stochastic sample paths while variational inference projects the probabilities onto a finite-dimensional parameter space. Three ingredients will be the focus of this discussion: non-convexity, acceleration, and stochasticity. This line of work is motivated by epidemic prediction, where we need uncertainty quantification for credible predictions and informed decision-making with complex models and evolving data.
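As a minimal illustration of the two paradigms contrasted here (not the speaker's methods or models), the sketch below approximates the same one-dimensional Gaussian target in both ways: random-walk Metropolis, which follows a stochastic sample path, and a Gaussian variational approximation fit by stochastic gradient ascent on the ELBO, which projects onto a finite-dimensional (mu, sigma) family. The target and all step sizes are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_p(x):
    return -0.5 * ((x - 3.0) / 2.0) ** 2   # unnormalized target: N(3, 2^2)

# MCMC: random-walk Metropolis follows a stochastic sample path.
def metropolis(n_steps=20000, step=2.0):
    x, chain = 0.0, []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal()
        if np.log(rng.uniform()) < log_p(prop) - log_p(x):
            x = prop
        chain.append(x)
    return np.array(chain[n_steps // 2:])  # discard burn-in

# VI: project onto the Gaussian family N(mu, sigma^2) by stochastic gradient
# ascent on the ELBO, using the reparameterization x = mu + sigma * eps.
def gaussian_vi(n_iters=3000, lr=0.01, n_mc=32):
    mu, log_sigma = 0.0, 0.0
    for _ in range(n_iters):
        sigma = np.exp(log_sigma)
        eps = rng.standard_normal(n_mc)
        score = -(mu + sigma * eps - 3.0) / 4.0   # d/dx log_p at x = mu + sigma*eps
        mu += lr * score.mean()                    # pathwise gradient w.r.t. mu
        log_sigma += lr * ((score * eps * sigma).mean() + 1.0)  # + entropy term
    return mu, np.exp(log_sigma)
```

Here the Gaussian family contains the target, so VI is exact in the limit; with a non-Gaussian target the projection would introduce bias, while MCMC would remain asymptotically exact at a higher sampling cost.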

Location
Zoom
Optimal No-Regret Learning in Repeated First-Price Auctions

First-price auctions have very recently swept the online advertising industry, replacing second-price auctions as the predominant auction mechanism on many platforms for display ads bidding. This shift has brought forth important challenges for a bidder: how should one bid in a first-price auction, where, unlike in second-price auctions, it is no longer optimal to truthfully bid one's private value, and it is hard to know other bidders' behaviors? In this paper, we take an online learning angle and address the fundamental problem of learning to bid in repeated first-price auctions. We discuss our recent work on leveraging the special structure of first-price auctions to design minimax optimal no-regret bidding algorithms.
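The learning-to-bid setup can be made concrete with a generic no-regret sketch (this is not the authors' minimax-optimal algorithm, which exploits additional structure): exponential weights over a discretized bid grid, assuming the bidder observes the highest competing bid after each round so that every bid's counterfactual utility is computable.

```python
import numpy as np

rng = np.random.default_rng(2)
value = 1.0                                # bidder's private value
bids = np.linspace(0.0, 1.0, 21)           # discretized bid grid
cum_utility = np.zeros_like(bids)          # hindsight utility of each fixed bid
realized = 0.0
T = 5000

for t in range(T):
    # Exponential weights (Hedge) over the bid grid, anytime learning rate.
    eta = np.sqrt(np.log(len(bids)) / (t + 1))
    w = np.exp(eta * (cum_utility - cum_utility.max()))
    b = rng.choice(bids, p=w / w.sum())
    m = rng.uniform(0.0, 1.0)              # highest competing bid this round
    realized += (value - b) * (b > m)      # first-price: pay your own bid if you win
    cum_utility += (value - bids) * (bids > m)  # counterfactual utility of every bid

regret = cum_utility.max() - realized      # vs. best fixed bid in hindsight
```

With uniform competing bids, the best fixed bid for a value of 1 is near 0.5, and the realized utility tracks it to within the usual O(sqrt(T log K)) Hedge regret; the harder bandit-feedback setting, where only the auction outcome is observed, is where the structure of first-price auctions becomes essential.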

Machine Learning + Humanities Working Group

Join the new Machine Learning + Humanities Working Group for our second meeting of the semester on Wednesday, November 17. We’ll discuss the latest trends in research at the intersections of ML+Hum, and will look specifically at several projects including Machines Reading Maps from the Turing Institute and Newspaper Navigator from the Library of Congress.

Scientific Visualization

Visualization enables insight, allows verification, and enhances presentations and publications. This workshop introduces the VisIt visualization software package, which has a graphical user interface for exploring and displaying data. It can also produce animations to represent the complex behavior of variables over time. The software is freely…

CITP Seminar: Matt Weinberg – A Crash Course on Algorithmic Mechanism Design

Matt is an assistant professor at Princeton University in the Department of Computer Science. His primary research interest is in Algorithmic Mechanism Design: algorithm design in settings where users have their own incentives. He is also interested more broadly in Algorithmic Game Theory, Algorithms Under Uncertainty, and Theoretical Computer Science in general.

From Shallow to Deep Representation Learning: Global Nonconvex Theory and Algorithms

In this talk, we consider two fundamental problems in signal processing and machine learning: (convolutional) dictionary learning and deep network training. For both problems, we provide the first global nonconvex landscape analysis of the learned representations, which in turn provides new guiding principles for better model/architecture design, optimization, and robustness, in both supervised and unsupervised scenarios. More specifically, the first part of the talk focuses on (convolutional) dictionary learning (aka shallow representation learning) in the unsupervised setting, where we show that a nonconvex L_4 loss over the sphere has no spurious local minimizers. This further inspires us to design efficient optimization methods for convolutional dictionary learning, with applications in imaging sciences. Second, we study the last-layer representation in deep learning, where recent seminal work by Donoho et al. identified a prevalent empirical phenomenon during the terminal phase of network training: neural collapse. By studying the optimization landscape of the training loss under an unconstrained feature model, we provide a theoretical justification for this phenomenon, which could have broad implications for network training, design, and beyond.
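The L_4-over-the-sphere objective from the first part of the talk can be illustrated with a small experiment (a generic sketch, not the speaker's algorithm): generate data from an orthonormal dictionary with sparse Bernoulli-Gaussian codes, then recover one atom by maximizing ||Y^T q||_4^4 on the unit sphere (equivalently, minimizing the negated L_4 loss) with a fourth-order power iteration. All dimensions and sparsity levels are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, theta = 10, 5000, 0.1

# Synthetic data: orthonormal dictionary D, sparse Bernoulli-Gaussian codes X.
D = np.linalg.qr(rng.standard_normal((n, n)))[0]
X = rng.standard_normal((n, p)) * (rng.uniform(size=(n, p)) < theta)
Y = D @ X

# Recover one atom: maximize f(q) = ||Y^T q||_4^4 over the unit sphere with a
# fourth-order power iteration, q <- grad f(q) / ||grad f(q)||.
q = rng.standard_normal(n)
q /= np.linalg.norm(q)
for _ in range(100):
    g = Y @ (Y.T @ q) ** 3                 # gradient of f, up to a factor of 4
    q = g / np.linalg.norm(g)

alignment = np.max(np.abs(D.T @ q))        # |cosine| with the closest atom
```

A benign landscape is what makes such a simple local method viable: with no spurious local optimizers, the iteration lands on (a sign flip of) a true dictionary atom from a random start.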

Location
Zoom - Register via Link in Body
CITP Seminar: Alex Hanna – Beyond Bias: Algorithmic Unfairness, Infrastructure, and Genealogies of Data

Problems of algorithmic bias are often framed in terms of a lack of representative data or of formal fairness optimization constraints to be applied to automated decision-making systems. However, these discussions sidestep deeper issues with the data used in AI, including problematic categorizations and the extractive logics of crowdwork and data mining. This talk will examine two interventions: first, reframing data as a form of infrastructure, thereby implicating politics and power in the construction of datasets; and second, developing a research program around the genealogy of datasets used in machine learning and AI systems. These genealogies should be attentive to the constellation of organizations and stakeholders involved in their creation; the intent, values, and assumptions of their authors and curators; and the adoption of datasets by subsequent researchers.

Location
Virtual Seminar
Learn about CSML’s graduate certificate program at info session

Graduate students from all departments are invited to attend an informal information session on the Center for Statistics and Machine Learning’s (CSML) graduate certificate program.

The event will be held at 4:30 pm on Wednesday, October 27, 2021 at the Louis A. Simpson Lawn. Staff and…

Location
Louis A. Simpson Lawn
Machine Predictions and Synthetic Text: A Roundtable

Since it was published in March 2021, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" has sparked impassioned conversations on the unintended consequences and potential harms of prominent natural language processing (NLP) projects. While this groundbreaking paper has been influential in computer and data science—prompting reflection on the dangers of relying on poorly conceptualized and curated data—it is only beginning to be discussed by humanities scholars who use NLP methods in their research.

Location
Zoom Webinar