Featured Event

  • Optimization Inspired Deep Architectures for Multiview 3D

    Thu, Feb 11, 2021, 3:00 pm
    Multiview 3D has traditionally been approached as continuous optimization: the solution is produced by an algorithm that solves an optimization problem over continuous variables (camera pose, 3D points, motion) to maximize the satisfaction of known constraints from multiview geometry. In contrast, deep learning offers an alternative strategy where the solution is produced by a general-purpose network with learned weights. In this talk, I will present some recent work using a hybrid approach that combines the best of both worlds. In particular, I will present several new deep architectures inspired by classical optimization-based algorithms. (A toy sketch of one such optimization-inspired architecture appears after the event list.)
  • Computational Optics for Control and Readout of Neural Activity

    Wed, Feb 17, 2021, 4:30 pm
    Nearly all aspects of cognition and behavior require the coordinated action of multiple brain regions that are spread out over a large 3D volume. To understand the long-distance communication between these brain regions, we need optical techniques that can simultaneously monitor and control tens of thousands of individual neurons at cellular resolution and kilohertz speed.
  • Preconditioning Helps: Faster Convergence in Statistical and Reinforcement Learning

    Mon, Apr 19, 2021, 4:30 pm
    While exciting progress has been made in understanding the global convergence of vanilla gradient methods for solving challenging nonconvex problems in statistical estimation and machine learning, their computational efficacy is still far from satisfactory for ill-posed or ill-conditioned problems. In this talk, we discuss how the trick of preconditioning further boosts convergence speed with minimal computational overhead through two examples: low-rank matrix estimation in statistical learning and policy optimization in entropy-regularized reinforcement learning. (A minimal sketch of preconditioned gradient descent appears after the event list.)
  • Deep Networks from First Principles

    Fri, Jan 15, 2021, 4:30 pm
    In this talk, we offer an entirely “white box” interpretation of deep (convolutional) networks from the perspective of data compression. In particular, we show how the components of modern deep architectures, including the linear (convolution) operators, the nonlinear activations, and the parameters of each layer, can be derived from the principle of rate reduction (and invariance). (A sketch of the rate-reduction objective appears after the event list.)
  • Breaking the Sample Size Barrier in Statistical Inference and Reinforcement Learning

    Tue, Dec 8, 2020, 11:00 am

    A proliferation of emerging data science applications requires efficient extraction of information from complex data. The unprecedented scale of relevant features, however, often overwhelms the volume of available samples, which dramatically complicates statistical inference and decision making. In this talk, we present two vignettes on how to improve sample efficiency in high-dimensional statistical problems.

  • HEE Seminar: Taylor Faucett (UCI Physics), Learning from Machines Learning

    Tue, Dec 1, 2020, 2:00 pm

    Machine learning methods are extremely powerful but often function as black-box problem solvers, providing improved performance at the expense of clarity. Our work describes a new machine learning approach that translates the strategy of a deep neural network into simple functions that are meaningful and intelligible to the physicist, without sacrificing performance improvements. We apply this approach to the benchmark high-energy physics problems of fat-jet classification and electron identification.

  • CSML Poster Session Event

    Mon, May 3, 2021, 12:00 pm

    The annual CSML Poster Session event will be held in person or virtually. Watch this space for further details.

    The due date for independent work posters and papers is TBA. Please check your email for details.

    Check out the article on the 2020 poster session.

  • Using Code Ocean in the Sciences and Engineering: Bringing computational reproducibility to your research collaborations

    Fri, Nov 20, 2020, 1:00 pm

    Computational analyses are playing an increasingly central role in research. However, many researchers have not received training in best practices and tools for reproducibly managing and sharing their code and data. This is a step-by-step, practical workshop on managing your research code and data for computationally reproducible collaboration. The workshop starts with brief introductory information about computational reproducibility, but the bulk of the workshop is guided work with code and data. (A generic reproducibility sketch appears after the event list.)
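
A toy illustration for "Optimization Inspired Deep Architectures for Multiview 3D": the sketch below unrolls a few gradient steps on a simple least-squares objective into network layers, each with a learned step size. The objective, layer count, and all names are illustrative assumptions, not the speaker's architecture.

    import torch
    import torch.nn as nn

    class UnrolledGD(nn.Module):
        """Toy optimization-inspired network: K unrolled gradient steps on
        f(x) = ||Ax - b||^2, with one learned step size per layer."""

        def __init__(self, num_layers: int = 5):
            super().__init__()
            # One learnable step size per unrolled iteration.
            self.steps = nn.Parameter(torch.full((num_layers,), 0.1))

        def forward(self, A: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
            x = torch.zeros(A.shape[1])         # initial iterate
            for eta in self.steps:
                grad = 2.0 * A.T @ (A @ x - b)  # gradient of ||Ax - b||^2
                x = x - eta * grad              # one unrolled gradient step
            return x

    # The network maps problem data (A, b) to a solution estimate; its step
    # sizes are ordinary weights that can be trained end to end.
    A, b = torch.randn(8, 3), torch.randn(8)
    x_hat = UnrolledGD()(A, b)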
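
A minimal sketch for "Preconditioning Helps": preconditioned (scaled) gradient descent for the low-rank factorization objective f(L, R) = 0.5 * ||L R^T - M||_F^2, where each factor's gradient is right-multiplied by the inverse Gram matrix of the other factor so progress does not degrade on ill-conditioned problems. The initialization, step size, and names are assumptions; this illustrates the general trick, not the talk's exact algorithms.

    import numpy as np

    def scaled_gd(M, rank, eta=0.5, iters=100):
        """Preconditioned gradient descent for M ~= L @ R.T (toy sketch)."""
        # Spectral initialization: top-`rank` SVD of the observed matrix.
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        L = U[:, :rank] * np.sqrt(s[:rank])
        R = Vt[:rank].T * np.sqrt(s[:rank])
        for _ in range(iters):
            E = L @ R.T - M               # residual
            PL = np.linalg.inv(R.T @ R)   # preconditioner for L's step
            PR = np.linalg.inv(L.T @ L)   # preconditioner for R's step
            L, R = L - eta * (E @ R) @ PL, R - eta * (E.T @ L) @ PR
        return L, R

    # Usage: factor a noisy, ill-conditioned rank-2 matrix.
    rng = np.random.default_rng(0)
    ground = (rng.standard_normal((30, 2)) * [10.0, 0.1]) @ rng.standard_normal((2, 20))
    M = ground + 0.01 * rng.standard_normal(ground.shape)
    L, R = scaled_gd(M, rank=2)
    print(np.linalg.norm(L @ R.T - ground) / np.linalg.norm(ground))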
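
For "Deep Networks from First Principles": a sketch of the logdet coding-rate quantity and the rate-reduction objective (rate of the whole feature set minus the class-averaged rate) on which the white-box derivation builds. The epsilon value and helper names are illustrative assumptions.

    import numpy as np

    def coding_rate(Z, eps=0.5):
        """Coding rate of features Z (d x n): roughly the number of bits
        needed to encode the columns of Z up to distortion eps."""
        d, n = Z.shape
        G = np.eye(d) + (d / (n * eps ** 2)) * Z @ Z.T
        return 0.5 * np.linalg.slogdet(G)[1]

    def rate_reduction(Z, labels, eps=0.5):
        """Whole-set rate minus class-averaged rate: maximizing this expands
        the features globally while compressing each class."""
        n = Z.shape[1]
        per_class = sum(
            (np.sum(labels == c) / n) * coding_rate(Z[:, labels == c], eps)
            for c in np.unique(labels)
        )
        return coding_rate(Z, eps) - per_class

    # Two well-separated classes score higher than one undifferentiated cloud.
    rng = np.random.default_rng(0)
    Z = np.concatenate(
        [rng.standard_normal((16, 50)), 5.0 + rng.standard_normal((16, 50))], axis=1
    )
    print(rate_reduction(Z, np.repeat([0, 1], 50)))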
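
For the Code Ocean workshop: a generic, platform-independent sketch of two reproducibility basics in the workshop's spirit, fixing random seeds and snapshotting the environment that produced a result. This is not Code Ocean's API; all names here are illustrative.

    import json
    import platform
    import random
    import sys

    import numpy as np

    def set_seeds(seed=0):
        """Fix random seeds so a rerun produces identical results."""
        random.seed(seed)
        np.random.seed(seed)

    def record_environment(path="environment.json"):
        """Snapshot interpreter, platform, and key package versions so
        collaborators can reconstruct the environment behind a result."""
        info = {
            "python": sys.version,
            "platform": platform.platform(),
            "numpy": np.__version__,
        }
        with open(path, "w") as f:
            json.dump(info, f, indent=2)

    set_seeds(0)
    record_environment()
    print(np.random.rand(3))  # identical on every run with the same seed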
