Nuclear fusion power is a potential source of safe, non-carbon-emitting and virtually limitless energy. The tokamak is a promising approach to fusion based on magnetic plasma confinement, constituting a complex physical system with many control challenges. However, plasma instabilities pose an existential threat to a reactor, which has not yet...
Real-Time Remote Sensing and Fusion Plasma Control: A Reservoir Computing Approach
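The title names reservoir computing; as background, a minimal echo-state-network sketch (illustrative only, not the speaker's system: the reservoir size, spectral radius, and toy sine-prediction task are assumptions) shows the core idea that only a linear readout is trained while the recurrent reservoir stays fixed:

```python
# Minimal echo-state network (reservoir computing) sketch in NumPy.
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 100, 1

# Random, fixed reservoir weights; only the linear readout is trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 ("echo state" property)

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)
X = run_reservoir(u[:-1])   # reservoir states
y = u[1:]                   # next-step targets

# Ridge-regression readout (the only trained part).
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
print("train MSE:", np.mean((pred - y) ** 2))
```

Because training reduces to a single linear solve, this architecture is attractive for the real-time control settings the talk targets.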
Nonsmooth regularisers are widely used in machine learning for enforcing solution structures (such as the l1 norm for sparsity or the nuclear norm for low rank). State-of-the-art solvers are typically first-order methods or coordinate descent methods, which handle nonsmoothness through careful smooth approximations and support pruning. In this work, we...
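As an illustration of how first-order methods handle the nonsmooth l1 term, a minimal proximal-gradient (ISTA) sketch for the lasso is shown below; the problem sizes and regularisation strength are assumptions for illustration, not from the talk:

```python
# Proximal gradient (ISTA) for the lasso: min_x 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]          # sparse ground truth
b = A @ x_true

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, with L the gradient Lipschitz constant
x = np.zeros(20)
for _ in range(500):
    grad = A.T @ (A @ x - b)            # gradient of the smooth part
    x = soft_threshold(x - step * grad, step * lam)

print("nonzeros:", int(np.sum(np.abs(x) > 1e-3)))  # sparsity is recovered exactly
```

The prox step keeps most coordinates at exactly zero, which is what makes support pruning possible in such solvers.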
Inspired by the proposal of tangent kernels for neural networks (NNs), a recent line of research aims to design kernels with better generalization performance on standard datasets. Indeed, a few recent works have shown that certain kernel machines perform as well as NNs on certain datasets, despite the separations between them in specific cases implied by...
Department of Statistics at UC Berkeley
The efficiency of kernel methods on structured datasets
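For readers unfamiliar with the tangent kernels mentioned above, a small empirical sketch (the two-layer architecture and width are assumptions for illustration) computes K(x, x') as the inner product of parameter gradients of the network output:

```python
# Empirical neural tangent kernel (NTK) sketch for a two-layer ReLU network.
import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 200  # input dimension, hidden width

# NTK parameterization: f(x) = (1/sqrt(m)) * a . relu(W x)
W = rng.normal(size=(m, d))
a = rng.normal(size=m)

def param_grad(x):
    """Gradient of f at x with respect to all parameters (W and a), flattened."""
    h = W @ x
    act = np.maximum(h, 0.0)
    dW = np.outer(a * (h > 0), x) / np.sqrt(m)  # df/dW
    da = act / np.sqrt(m)                        # df/da
    return np.concatenate([dW.ravel(), da])

def ntk(x1, x2):
    """Empirical tangent kernel K(x1, x2) = <grad f(x1), grad f(x2)>."""
    return param_grad(x1) @ param_grad(x2)

x1, x2 = rng.normal(size=d), rng.normal(size=d)
print("K(x1,x1):", ntk(x1, x1))
print("K(x1,x2):", ntk(x1, x2))
```

As the width m grows, this empirical kernel concentrates around its deterministic infinite-width limit, which is the object the kernel-design literature works with.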
The VisualAI lab focuses on bringing together the fields of computer vision, machine learning, and human-machine interaction, as well as fairness, accountability, and transparency. In this talk, we will introduce the general goals of the lab and show how to build an agent that can understand and follow human language to perform tasks.
While exciting progress has been made in understanding the global convergence of vanilla gradient methods for solving challenging nonconvex problems in statistical estimation and machine learning, their computational efficacy is still far from satisfactory for ill-posed or ill-conditioned problems. In this talk, we discuss how the trick of...
Carnegie Mellon University
Preconditioning Helps: Faster Convergence in Statistical and Reinforcement Learning
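To make the motivation concrete, a toy comparison (the quadratic objective, condition number, and the idealised inverse-Hessian preconditioner are assumptions for illustration, not the methods from the talk) shows how badly vanilla gradient descent stalls on an ill-conditioned problem that a preconditioned update solves immediately:

```python
# Sketch: preconditioning accelerates gradient descent on an ill-conditioned quadratic.
import numpy as np

rng = np.random.default_rng(0)
# f(x) = 0.5 * x^T H x, with condition number 1e4.
eigs = np.logspace(0, 4, 20)
Q, _ = np.linalg.qr(rng.normal(size=(20, 20)))
H = Q @ np.diag(eigs) @ Q.T

def gd(P_inv, steps, lr):
    """Preconditioned gradient descent: x <- x - lr * P^{-1} grad f(x)."""
    x = np.ones(20)
    for _ in range(steps):
        x = x - lr * (P_inv @ (H @ x))
    return np.linalg.norm(x)

plain = gd(np.eye(20), 200, 1.0 / eigs.max())  # vanilla GD, step limited by largest eigenvalue
precond = gd(np.linalg.inv(H), 200, 1.0)       # idealised (Newton-like) preconditioner
print("plain GD    ||x||:", plain)
print("precond GD  ||x||:", precond)
```

Vanilla GD barely moves along the flattest directions after 200 steps, while the ideal preconditioner equalises all curvature directions and converges in one step.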
In this talk, we study the problem of signal recovery for group models. More precisely, given a set of groups, each containing a small subset of indices, and given linear sketches of a true signal vector that is known to be group-sparse, in the sense that its support is contained in the union of a small number of these groups, we study...
Research Chair of Data Science, African Institute of Mathematical Sciences and Stellenbosch University, South Africa
Discrete Optimization Methods for Group Model Selection in Compressed Sensing
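A toy numerical illustration of the setting (the group layout, number of sketches, and the brute-force search over group supports are assumptions for illustration, standing in for the discrete optimization machinery the talk develops) shows exact recovery of a group-sparse signal from fewer sketches than dimensions:

```python
# Toy group-sparse recovery from linear sketches via exhaustive search over
# small unions of groups (a brute-force stand-in for discrete optimization).
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 20
groups = [list(range(i, i + 4)) for i in range(0, n, 4)]  # 5 disjoint groups of size 4

# Ground truth supported on the union of 2 groups.
x_true = np.zeros(n)
x_true[groups[1]] = rng.normal(size=4)
x_true[groups[3]] = rng.normal(size=4)

m = 12                         # number of linear sketches, m < n
A = rng.normal(size=(m, n))
y = A @ x_true                 # noiseless sketches

best = None
for i, j in itertools.combinations(range(len(groups)), 2):
    support = sorted(groups[i] + groups[j])
    x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # fit on candidate support
    resid = np.linalg.norm(A[:, support] @ x_s - y)
    if best is None or resid < best[0]:
        x_hat = np.zeros(n)
        x_hat[support] = x_s
        best = (resid, x_hat)

print("recovery error:", np.linalg.norm(best[1] - x_true))
```

The search over candidate unions of groups is combinatorial, which is precisely why structured discrete optimization methods are needed at realistic scales.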
Scientists and engineers are increasingly applying deep neural networks (DNNs) to modelling and design of complex systems. While the flexibility of DNNs makes them an attractive tool, it also makes their solutions difficult to interpret and their predictive capability difficult to quantify.
Department of Mechanical and Aerospace Engineering, Princeton University
Random feature methods have been successful in various machine learning tasks, are easy to compute, and come with theoretical accuracy bounds. They serve as an alternative approach to standard neural networks since they can represent similar function spaces without a costly training phase. However, for accuracy, random feature methods require more...
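A minimal random Fourier features sketch (the Gaussian kernel, bandwidth, and feature count are assumptions for illustration) shows the basic construction behind these methods: an explicit randomized feature map whose inner products approximate a kernel, so no training of the features is needed:

```python
# Random Fourier features approximating the Gaussian (RBF) kernel
# k(x, y) = exp(-gamma * ||x - y||^2).
import numpy as np

rng = np.random.default_rng(0)
d, D, gamma = 5, 2000, 0.5   # input dim, number of random features, bandwidth

# Sample frequencies from the kernel's spectral density (Bochner's theorem).
W = rng.normal(0.0, np.sqrt(2 * gamma), size=(D, d))
b = rng.uniform(0, 2 * np.pi, size=D)

def phi(x):
    """Random feature map with phi(x).phi(y) ~= k(x, y)."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-gamma * np.linalg.norm(x - y) ** 2)
approx = phi(x) @ phi(y)
print(f"exact kernel {exact:.4f}  vs  RFF approximation {approx:.4f}")
```

The approximation error shrinks like 1/sqrt(D), which is the accuracy-versus-feature-count trade-off the abstract alludes to.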
Machine capability has reached an inflection point, achieving human-level performance in tasks traditionally associated with cognition (vision, speech, strategic gameplay). However, efforts to move such capability pervasively into the real world have in many cases fallen far short of the relatively constrained and isolated demonstrations of...
AI Meets Large-Scale Sensing: Preserving and Exploiting Structure of the Real World to Enhance Machine Perception
Deep neural networks are often considered to be complicated "black boxes," for which a systematic analysis is not only out of reach but potentially impossible. In this talk, which is based on ongoing joint work with Dan Roberts and Sho Yaida, I will make the opposite claim. Namely, that deep neural networks at initialization are...
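A small numerical sketch of the starting point for this kind of analysis (the architecture, width, and tanh activation are assumptions for illustration, not the speaker's setup): over random draws of the weights, the output of a wide network at initialization is very nearly Gaussian, which is what makes a systematic effective-theory treatment tractable:

```python
# Sketch: the output of a randomly initialized wide network, over draws of
# the weights, is close to Gaussian, as measured by its excess kurtosis.
import numpy as np

rng = np.random.default_rng(0)
d, width, n_draws = 10, 1000, 5000
x = rng.normal(size=d)   # a fixed input

outs = []
for _ in range(n_draws):
    # Standard 1/sqrt(fan-in) initialization for a two-layer tanh network.
    W1 = rng.normal(0, 1 / np.sqrt(d), size=(width, d))
    w2 = rng.normal(0, 1 / np.sqrt(width), size=width)
    outs.append(w2 @ np.tanh(W1 @ x))
outs = np.array(outs)

# Gaussian-like statistics: mean near 0 and small excess kurtosis.
kurt = np.mean((outs - outs.mean()) ** 4) / outs.var() ** 2 - 3
print(f"mean {outs.mean():.3f}, excess kurtosis {kurt:.3f}")
```

Deviations from Gaussianity scale like 1/width, and tracking those corrections order by order is the essence of the perturbative approach described in the talk.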