Upcoming Seminars

JARROD MCCLEAN - Google Quantum Artificial Intelligence Lab

Thu, Sep 30, 2021, 4:00 pm

Zoom Meeting

Host - Haw Yang

More information and abstract forthcoming.


Previous Seminars

Real-Time Remote Sensing and Fusion Plasma Control: A Reservoir Computing Approach

Thu, Jun 17, 2021, 11:00 am
Nuclear fusion power is a potential source of safe, non-carbon-emitting and virtually limitless energy. The tokamak is a promising approach to fusion based on magnetic plasma confinement, constituting a complex physical system with many control challenges. However, plasma instabilities pose an existential threat to a reactor, which has not yet...

Smooth bilevel programming for sparse regularisation

Wed, May 19, 2021, 12:00 pm
Nonsmooth regularisers are widely used in machine learning for enforcing solution structures (such as the l1 norm for sparsity or the nuclear norm for low rank). State-of-the-art solvers are typically first-order methods or coordinate-descent methods, which handle nonsmoothness through careful smooth approximations and support pruning. In this work, we...

The efficiency of kernel methods on structured datasets

Wed, May 5, 2021, 4:30 pm
Inspired by the proposal of tangent kernels of neural networks (NNs), a recent research line aims to design kernels with a better generalization performance on standard datasets. Indeed, a few recent works showed that certain kernel machines perform as well as NNs on certain datasets, despite their separations in specific cases implied by...

Evolving Graphical Planner: Contextual Global Planning for Vision-and-Language Navigation

Thu, Apr 22, 2021, 3:00 pm
The VisualAI lab focuses on bringing together the fields of computer vision, machine learning, and human-machine interaction, as well as fairness, accountability, and transparency. In this talk, we will introduce the lab's general goals and how to build an agent that can understand and follow human language to perform tasks.

Preconditioning Helps: Faster Convergence in Statistical and Reinforcement Learning

Mon, Apr 19, 2021, 4:30 pm
While exciting progress has been made in understanding the global convergence of vanilla gradient methods for solving challenging nonconvex problems in statistical estimation and machine learning, their computational efficacy is still far from satisfactory for ill-posed or ill-conditioned problems. In this talk, we discuss how the trick of...

The One World Seminar on the Mathematics of Machine Learning

Wed, Apr 7, 2021, 12:00 pm
In this talk we study the problem of signal recovery for group models. More precisely for a given set of groups, each containing a small subset of indices, and for given linear sketches of the true signal vector which is known to be group-sparse in the sense that its support is contained in the union of a small number of these groups, we study...
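The group-sparse structure described above is typically enforced with a block (group) soft-thresholding operator, which shrinks each group of coordinates toward zero as a unit so that the recovered support is a union of whole groups. A minimal sketch of that operator, with illustrative names not taken from the talk:

```python
import numpy as np

def group_soft_threshold(x, groups, t):
    """Prox of t * sum_g ||x_g||_2: shrink each group as a block."""
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > t:                       # group survives, shrunk radially
            out[g] = (1.0 - t / norm) * x[g]
        # else: the whole group is set to zero
    return out

groups = [list(range(i, i + 5)) for i in range(0, 20, 5)]  # 4 groups of 5
x = np.zeros(20)
x[0:5] = 2.0     # one strong group
x[10] = 0.3      # one weak isolated entry
z = group_soft_threshold(x, groups, t=1.0)
```

Note that the weak isolated entry is eliminated entirely because its group norm falls below the threshold, while the strong group is kept and only mildly shrunk.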

Leveraging Dataset Symmetries in Neural Network Prediction

Mon, Mar 22, 2021, 12:30 pm

Scientists and engineers are increasingly applying deep neural networks (DNNs) to the modelling and design of complex systems. While the flexibility of DNNs makes them an attractive tool, it also makes their solutions difficult to interpret and their predictive capability difficult to quantify.


Function Approximation via Sparse Random Fourier Features

Wed, Mar 17, 2021, 12:00 pm
Random feature methods have been successful in various machine learning tasks, are easy to compute, and come with theoretical accuracy bounds. They serve as an alternative approach to standard neural networks since they can represent similar function spaces without a costly training phase. However, for accuracy, random feature methods require more...
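The random-feature construction the abstract refers to can be sketched in a few lines: draw random frequencies from the kernel's spectral density and map inputs through random cosines, so that inner products of features approximate the kernel. A minimal illustration in the spirit of classical random Fourier features for an RBF kernel (a generic sketch, not the method of the talk; all parameter names are illustrative):

```python
import numpy as np

def rff_features(X, n_features=500, gamma=1.0, seed=0):
    """Random Fourier features z(x) with z(x) @ z(y) ~ exp(-gamma*||x-y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the Gaussian spectral density of the RBF kernel.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(50, 3))
Z = rff_features(X)
K_approx = Z @ Z.T                                           # approximate kernel
K_exact = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1))     # exact RBF kernel
```

Because the feature map is explicit, kernel regression reduces to ordinary linear least squares on Z, avoiding both the n-by-n kernel matrix and a costly training phase.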

AI Meets Large-scale Sensing: preserving and exploiting structure of the real world to enhance machine perception

Thu, Mar 11, 2021, 3:00 pm
Machine capability has reached an inflection point, achieving human-level performance in tasks traditionally associated with cognition (vision, speech, strategic gameplay). However, efforts to move such capability pervasively into the real world have in many cases fallen far short of the relatively constrained and isolated demonstrations of...

Finite Width, Large Depth Neural Networks as Perturbatively Solvable Models

Wed, Mar 10, 2021, 12:00 pm

Deep neural networks are often considered to be complicated "black boxes," for which a systematic analysis is not only out of reach but potentially impossible. In this talk, which is based on ongoing joint work with Dan Roberts and Sho Yaida, I will make the opposite claim. Namely, that deep neural networks at initialization are...