Upcoming Seminars

Capitalizing on Generative AI: Guided Diffusion Models Towards Generative Optimization
Tue, Apr 23, 2024, 12:15 pm

Lunch available beginning at 12:15 PM
Speaker to begin at 12:30 PM

Abstract: Diffusion models represent a significant breakthrough in generative AI, operating by progressively transforming random noise distributions into structured outputs, with adaptability for specific tasks through guidance or fine…

CSML Machine Learning Lunchtime Seminar Series
Tue, May 7, 2024, 12:15 pm

Lunch available beginning at 12:15 PM
Speaker to begin at 12:30 PM

Contributions to and/or sponsorship of any event do not constitute departmental or institutional endorsement of the specific program, speakers, or views presented.


Previous Seminars

The One World Seminar on the Mathematics of Machine Learning
Wed, Apr 7, 2021, 12:00 pm

In this talk we study the problem of signal recovery for group models. More precisely, given a set of groups, each containing a small subset of indices, and given linear sketches of a true signal vector that is group-sparse, in the sense that its support is contained in the union of a small number of these groups, we study algorithms that recover the true signal from knowledge of its linear sketches alone. We derive model projection complexity results and algorithms for more general group models than the state of the art. We consider two versions of the classical Iterative Hard Thresholding (IHT) algorithm: the classical version iteratively computes the exact projection of a vector onto the group model, while the approximate version (AM-IHT) iteratively applies a head- and a tail-approximation. We apply both variants to group models and analyze the two cases where the sensing matrix is a Gaussian matrix or a model expander matrix.
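The classical IHT iteration described above can be sketched in a few lines: a gradient step on the least-squares loss followed by an exact projection onto the group model. This is a minimal illustration for the simplest case of disjoint groups, not the speaker's implementation; the names `group_project` and `group_iht` and the step-size choice are assumptions.

```python
import numpy as np

def group_project(x, groups, k):
    """Exact model projection: keep the k groups of largest l2 energy.
    Assumes `groups` is a list of disjoint index arrays."""
    energies = [np.linalg.norm(x[g]) for g in groups]
    keep = np.argsort(energies)[-k:]          # indices of the k strongest groups
    out = np.zeros_like(x)
    for i in keep:
        out[groups[i]] = x[groups[i]]
    return out

def group_iht(A, y, groups, k, iters=200):
    """Classical IHT: gradient step on ||y - Ax||^2, then exact projection."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2      # step size below 1/L keeps the loss nonincreasing
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = group_project(x + mu * A.T @ (y - A @ x), groups, k)
    return x

# Tiny demo: a signal supported on 2 of 5 disjoint groups, Gaussian sensing matrix
rng = np.random.default_rng(0)
n, m = 20, 40
groups = [np.arange(i, i + 4) for i in range(0, n, 4)]
x_true = np.zeros(n)
x_true[groups[1]] = rng.normal(size=4)
x_true[groups[3]] = rng.normal(size=4)
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_hat = group_iht(A, A @ x_true, groups, k=2)
```

The approximate variant (AM-IHT) would replace the exact projection with cheaper head- and tail-approximation oracles, which matters when the group structure makes exact projection expensive.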

Leveraging Dataset Symmetries in Neural Network Prediction
Mon, Mar 22, 2021, 12:30 pm

Scientists and engineers are increasingly applying deep neural networks (DNNs) to modelling and design of complex systems. While the flexibility of DNNs makes them an attractive tool, it also makes their solutions difficult to interpret and their predictive capability difficult to quantify. In contrast, scientific models directly expose the…

Function Approximation via Sparse Random Fourier Features
Wed, Mar 17, 2021, 12:00 pm

Random feature methods have been successful in various machine learning tasks, are easy to compute, and come with theoretical accuracy bounds. They serve as an alternative approach to standard neural networks since they can represent similar function spaces without a costly training phase. However, for accuracy, random feature methods require more measurements than trainable parameters, limiting their use for data-scarce applications or problems in scientific machine learning.
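The standard (dense) random Fourier feature construction that the sparse variant builds on can be sketched as follows; this is the classical Rahimi–Recht map for the Gaussian kernel, not the sparse method of the talk, and the names `rff_features`, `W`, and `b` are illustrative.

```python
import numpy as np

def rff_features(X, W, b):
    """Random Fourier feature map z(x) = sqrt(2/D) * cos(Wx + b),
    so that z(x) . z(y) approximates the Gaussian kernel k(x, y)."""
    D = W.shape[0]
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

rng = np.random.default_rng(0)
d, D, sigma = 3, 5000, 1.0
W = rng.normal(scale=1.0 / sigma, size=(D, d))    # frequencies sampled from N(0, sigma^-2 I)
b = rng.uniform(0.0, 2.0 * np.pi, size=D)         # random phases

X = rng.normal(size=(5, d))
Z = rff_features(X, W, b)                         # (5, D) feature matrix
K_approx = Z @ Z.T                                # kernel estimate from inner products
diffs = X[:, None, :] - X[None, :, :]
K_exact = np.exp(-0.5 * (diffs ** 2).sum(-1) / sigma ** 2)
```

The estimate is unbiased, with per-entry error shrinking like 1/sqrt(D), which is why accuracy demands many features; a sparsity-promoting variant aims to retain only the informative ones.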

AI Meets Large-scale Sensing: preserving and exploiting structure of the real world to enhance machine perception
Thu, Mar 11, 2021, 3:00 pm

Machine capability has reached an inflection point, achieving human-level performance in tasks traditionally associated with cognition (vision, speech, strategic gameplay). However, efforts to move such capability pervasively into the real world have in many cases fallen far short of the relatively constrained and isolated demonstrations of success. A major emerging insight is that structure in data can be substantially exploited to enhance machine learning. This talk explores how the statistically complex processes of the real world can be addressed by performing sensing in ways that preserve their rich structure.

Finite Width, Large Depth Neural Networks as Perturbatively Solvable Models
Wed, Mar 10, 2021, 12:00 pm

Abstract: Deep neural networks are often considered to be complicated "black boxes," for which a systematic analysis is not only out of reach but potentially impossible. In this talk, which is based on ongoing joint work with Dan Roberts and Sho Yaida, I will make the opposite claim. Namely, that deep neural networks at initialization are…

Computational Optics for Control and Readout of Neural Activity
Wed, Feb 17, 2021, 4:30 pm

Nearly all aspects of cognition and behavior require the coordinated action of multiple brain regions that are spread out over a large 3D volume. To understand the long-distance communication between these brain regions, we need optical techniques that can simultaneously monitor and control tens of thousands of individual neurons at cellular resolution and kilohertz speed.

Optimization Inspired Deep Architectures for Multiview 3D
Thu, Feb 11, 2021, 3:00 pm

Multiview 3D has traditionally been approached as continuous optimization: the solution is produced by an algorithm that solves an optimization problem over continuous variables (camera pose, 3D points, motion) to maximize the satisfaction of known constraints from multiview geometry. In contrast, deep learning offers an alternative strategy where the solution is produced by a general-purpose network with learned weights. In this talk, I will present some recent work using a hybrid approach that takes the best of both worlds. In particular, I will present several new deep architectures inspired by classical optimization-based algorithms.

Deep Networks from First Principles
Fri, Jan 15, 2021, 4:30 pm

In this talk, we offer an entirely "white box" interpretation of deep (convolutional) networks from the perspective of data compression. In particular, we show how the components of modern deep architectures (linear convolution operators, nonlinear activations, and the parameters of each layer) can be derived from the principle of rate reduction and invariance.

Breaking the Sample Size Barrier in Statistical Inference and Reinforcement Learning
Tue, Dec 8, 2020, 11:00 am

A proliferation of emerging data science applications requires efficient extraction of information from complex data. The unprecedented scale of relevant features, however, often overwhelms the volume of available samples, which dramatically complicates statistical inference and decision making. In this talk, we present…