- Thu, Feb 11, 2021, 3:00 pm
Multiview 3D has traditionally been approached as continuous optimization: an algorithm solves an optimization problem over continuous variables (camera pose, 3D points, motion) to best satisfy known constraints from multiview geometry. In contrast, deep learning offers an alternative strategy, where the solution is produced by a general-purpose network with learned weights. In this talk, I will present recent work on a hybrid approach that combines the best of both worlds. In particular, I will present several new deep architectures inspired by classical optimization-based algorithms.
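One widely used way to build such hybrids is to "unroll" the iterations of a classical optimizer into network layers whose parameters are learned. As a hedged illustration of the general idea (not the specific architectures in the talk), here is a NumPy sketch where each "layer" is one gradient step on a least-squares objective, with per-layer step sizes standing in for learnable weights:

```python
import numpy as np

def unrolled_gd(A, b, step_sizes):
    """Run len(step_sizes) gradient steps on 0.5 * ||Ax - b||^2.

    In a learned-unrolling architecture, each entry of `step_sizes`
    (and often richer per-layer operators) would be a trainable
    weight; here they are fixed for illustration.
    """
    x = np.zeros(A.shape[1])
    for eta in step_sizes:
        grad = A.T @ (A @ x - b)  # gradient of the least-squares loss
        x = x - eta * grad
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
b = A @ x_true

x_hat = unrolled_gd(A, b, step_sizes=[0.02] * 500)
print(np.linalg.norm(x_hat - x_true))  # distance to the true solution
```

In a real unrolled architecture, the step sizes would be trained end to end by backpropagating through the iterations, which is one concrete way classical algorithms inspire deep architectures.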
- Wed, Feb 17, 2021, 4:30 pm
Nearly all aspects of cognition and behavior require the coordinated action of multiple brain regions that are spread out over a large 3D volume. To understand the long-distance communication between these brain regions, we need optical techniques that can simultaneously monitor and control tens of thousands of individual neurons at cellular resolution and kilohertz speed.
- Mon, Apr 19, 2021, 4:30 pm
While exciting progress has been made in understanding the global convergence of vanilla gradient methods for challenging nonconvex problems in statistical estimation and machine learning, their computational efficiency is still far from satisfactory for ill-posed or ill-conditioned problems. In this talk, we discuss how the trick of preconditioning further boosts convergence speed with minimal computational overhead, through two examples: low-rank matrix estimation in statistical learning and policy optimization in entropy-regularized reinforcement learning.
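As a toy illustration of why preconditioning helps (not the algorithms from the talk), consider gradient descent on an ill-conditioned diagonal quadratic: a plain step size is capped by the largest curvature, so poorly scaled directions crawl, while a diagonal preconditioner rescales every direction at once:

```python
import numpy as np

# Ill-conditioned quadratic f(x) = 0.5 * x^T D x with widely spread scales.
scales = np.array([1.0, 1e-2, 1e-4])
D = np.diag(scales)
x0 = np.ones(3)

def gd(precond, iters=500):
    """Gradient descent with a (matrix-valued) preconditioner."""
    x = x0.copy()
    for _ in range(iters):
        grad = D @ x
        x = x - precond @ grad  # preconditioned step
    return x

eta = 1.0 / scales.max()          # largest stable plain step size
plain = gd(eta * np.eye(3))       # vanilla gradient descent
pre = gd(np.diag(1.0 / scales))   # ideal diagonal preconditioner D^{-1}
print(np.linalg.norm(plain), np.linalg.norm(pre))
```

The preconditioned run reaches the minimizer almost immediately, while the plain run barely moves along the smallest-curvature direction; the talk's examples apply the same idea to far less trivial nonconvex problems.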
- Wed, Jun 2, 2021, 8:00 am to Thu, Jun 10, 2021, 8:00 am
Graphics Processing Units (GPUs) offer high performance and massive parallelization, but learning how to program GPUs for scientific applications can be daunting.
- Fri, Jan 15, 2021, 4:30 pm
In this talk, we offer an entirely “white box” interpretation of deep (convolutional) networks from the perspective of data compression. In particular, we show how the components of modern deep architectures, including the linear (convolutional) operators, nonlinear activations, and parameters of each layer, can be derived from the principle of rate reduction (and invariance).
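A central quantity behind rate reduction is the coding rate of a set of features, roughly the number of bits needed to encode the features up to a distortion ε. A minimal sketch, assuming the standard lossy-coding-rate formula (the value of ε and the random data are illustrative choices, not from the talk):

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Coding rate of the d x n feature matrix Z up to distortion eps.

    R(Z, eps) = 1/2 * logdet(I + d / (n * eps^2) * Z @ Z.T)
    (standard lossy-coding-rate formula; eps is an illustrative choice).
    """
    d, n = Z.shape
    gram = np.eye(d) + (d / (n * eps ** 2)) * (Z @ Z.T)
    return 0.5 * np.linalg.slogdet(gram)[1]

rng = np.random.default_rng(0)
Z = rng.standard_normal((8, 100))

# Rate reduction compares the rate of the whole feature set against the
# weighted rates of its parts (e.g., class-conditional subsets).
Z1, Z2 = Z[:, :50], Z[:, 50:]
delta_R = coding_rate(Z) - 0.5 * (coding_rate(Z1) + coding_rate(Z2))
print(coding_rate(Z), delta_R)
```

Maximizing this gap ΔR encourages the features of different classes to occupy large, mutually incoherent subspaces, which is the principle from which the talk derives the network's operators and parameters.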
- Tue, Jan 19, 2021, 8:00 am to Fri, Jan 29, 2021, 8:08 am
The Princeton Institute for Computational Science & Engineering (PICSciE) and OIT Research Computing, along with the Center for Statistics and Machine Learning, are announcing a two-week Research Computing Bootcamp held virtually during Winter Break, from January 19-29, 2021.
- Tue, Dec 8, 2020, 11:00 am
A proliferation of emerging data science applications require efficient extraction of information from complex data. The unprecedented scale of relevant features, however, often overwhelms the volume of available samples, which dramatically complicates statistical inference and decision making. In this talk, we present two vignettes on how to improve sample efficiency in high-dimensional statistical problems.
- Tue, Dec 1, 2020, 2:00 pm
Machine learning methods are extremely powerful but often function as black-box problem solvers, providing improved performance at the expense of clarity. Our work describes a new machine learning approach that translates the strategy of a deep neural network into simple functions that are meaningful and intelligible to the physicist, without sacrificing performance. We apply this approach to the benchmark high-energy physics problems of fat-jet classification and electron identification.
- Mon, May 3, 2021, 12:00 pm
The annual CSML Poster Session event will be held in person or virtually. Watch this space for further details.
Due date for independent work posters and papers TBA. Please check your email for details.
Check out this article on 2020's poster session.
- Fri, Nov 20, 2020, 1:00 pm
Computational analyses are playing an increasingly central role in research. However, many researchers have not received training in best practices and tools for reproducibly managing and sharing their code and data. This is a step-by-step, practical workshop on managing your research code and data for computationally reproducible collaboration. The workshop starts with some brief introductory information about computational reproducibility, but the bulk of the workshop is guided work with code and data.