Upcoming Seminars

CITP Distinguished Lecture Series: Lorrie Cranor – Designing Usable and Useful Privacy Choice Interfaces
Thu, Mar 30, 2023, 4:30 pm

Co-sponsored by the Department of Computer Science and the Department of Electrical and Computer Engineering

Users who wish to exercise privacy rights or make privacy choices must often rely on website or app user interfaces. However, too often, these user interfaces suffer from usability deficiencies ranging from being…


Previous Seminars

AI Meets Large-scale Sensing: preserving and exploiting structure of the real world to enhance machine perception
Thu, Mar 11, 2021, 3:00 pm

Machine capability has reached an inflection point, achieving human-level performance in tasks traditionally associated with cognition (vision, speech, strategic gameplay). However, efforts to move such capability pervasively into the real world have in many cases fallen far short of the relatively constrained and isolated demonstrations of success. A major emerging insight is that structure in data can be substantially exploited to enhance machine learning. This talk explores how the statistically complex processes of the real world can be addressed by performing sensing in ways that preserve the rich structure of the real world.


Finite Width, Large Depth Neural Networks as Perturbatively Solvable Models
Wed, Mar 10, 2021, 12:00 pm

Abstract: Deep neural networks are often considered to be complicated "black boxes," for which a systematic analysis is not only out of reach but potentially impossible. In this talk, which is based on ongoing joint work with Dan Roberts and Sho Yaida, I will make the opposite claim: namely, that deep neural networks at initialization are…


Computational Optics for Control and Readout of Neural Activity
Wed, Feb 17, 2021, 4:30 pm

Nearly all aspects of cognition and behavior require the coordinated action of multiple brain regions that are spread out over a large 3D volume. To understand the long-distance communication between these brain regions, we need optical techniques that can simultaneously monitor and control tens of thousands of individual neurons at cellular resolution and kilohertz speed.


Optimization Inspired Deep Architectures for Multiview 3D
Thu, Feb 11, 2021, 3:00 pm

Multiview 3D has traditionally been approached as continuous optimization: the solution is produced by an algorithm that solves an optimization problem over continuous variables (camera pose, 3D points, motion) to maximize the satisfaction of known constraints from multiview geometry. In contrast, deep learning offers an alternative strategy where the solution is produced by a general-purpose network with learned weights. In this talk, I will present some recent work using a hybrid approach that takes the best of both worlds. In particular, I will present several new deep architectures inspired by classical optimization-based algorithms.


Deep Networks from First Principles
Fri, Jan 15, 2021, 4:30 pm

In this talk, we offer an entirely “white box” interpretation of deep (convolutional) networks from the perspective of data compression. In particular, we show how modern deep architectures, linear (convolution) operators and nonlinear activations, and parameters of each layer can be derived from the principle of rate reduction (and invariance).


Breaking the Sample Size Barrier in Statistical Inference and Reinforcement Learning
Tue, Dec 8, 2020, 11:00 am

A proliferation of emerging data science applications requires efficient extraction of information from complex data. The unprecedented scale of relevant features, however, often overwhelms the volume of available samples, which dramatically complicates statistical inference and decision making. In this talk, we present…


HEE Seminar: Taylor Faucett (UCI Physics) – Learning from Machines Learning
Tue, Dec 1, 2020, 2:00 pm

Machine Learning methods are extremely powerful but often function as black-box problem solvers, providing improved performance at the expense of clarity. Our work describes a new machine learning approach which translates the strategy of a deep neural network into simple functions that are meaningful and intelligible to the physicist, without…


Deep Learning: It’s Not All About Recognizing Cats and Dogs
Thu, Nov 12, 2020, 12:30 pm

In this seminar, we will examine personalization and recommendation systems, an area of deep learning that remains underinvested in the overall research community. Training state-of-the-art industry-scale personalization and recommendation models consumes the most compute cycles among all deep learning use cases. For AI inference, personalization and recommendation account for an even higher share, roughly 80% of compute cycles. What do state-of-the-art industry-scale neural personalization and recommendation models look like?


Analysis of Stochastic Gradient Descent in Continuous Time
Wed, Nov 4, 2020, 12:00 pm

Stochastic gradient descent is an optimisation method that combines classical gradient descent with random subsampling within the target functional. In this work, we introduce the stochastic gradient process as a continuous-time representation of stochastic gradient descent. The stochastic gradient process is a dynamical system that is coupled…
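The combination the abstract describes, classical gradient descent plus random subsampling of the target functional, can be illustrated with a minimal sketch. The objective, step size, and helper names below are illustrative assumptions, not from the talk:

```python
import random

def sgd(grads, w, lr=0.05, steps=2000, seed=0):
    """Minimise f(w) = (1/n) * sum_i f_i(w) by single-sample SGD.

    `grads` is a list of per-sample gradient functions; each step draws
    one uniformly at random -- the random subsampling that distinguishes
    SGD from classical (full-gradient) descent.
    """
    rng = random.Random(seed)
    for _ in range(steps):
        g = rng.choice(grads)  # random subsample (batch size 1)
        w = w - lr * g(w)      # gradient step on that sample alone
    return w

# Toy objective: f(w) = (1/2) * average of (w - a_i)^2 over a_i in
# {1, 2, 3}; the minimiser is the mean, 2.0.
targets = [1.0, 2.0, 3.0]
grads = [lambda w, a=a: (w - a) for a in targets]
w_star = sgd(grads, w=0.0)
print(w_star)  # hovers near 2.0, with noise on the order of the step size
```

The iterate never settles exactly at the minimiser; it fluctuates around it, which is precisely the behaviour the continuous-time stochastic gradient process is built to capture.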


Consistency of Cheeger cuts: Total Variation, Isoperimetry, and Clustering
Wed, Oct 21, 2020, 12:00 pm

Clustering unlabeled point clouds is a fundamental problem in machine learning. One classical method for constructing clusters on graph-based data is to solve for Cheeger cuts, which balance between finding clusters that require cutting few graph edges and finding clusters which are similar in size. Although solving for Cheeger cuts…