# Featured Event

- Thu, Apr 22, 2021, 3:00 pm: The VisualAI lab focuses on bringing together the fields of computer vision, machine learning, and human-machine interaction, as well as fairness, accountability, and transparency. In this talk, we will introduce the general goals of the lab and how to build an agent that can understand and follow human language to perform tasks.
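As a purely illustrative toy (far simpler than the learned agents the talk concerns, and not taken from it), language-following can be sketched as mapping words in an instruction to actions in an environment; the grid world and vocabulary below are invented for illustration:

```python
# Toy "instruction-following agent": map direction words in an English
# instruction to moves in a 2-D grid world. A real agent would learn this
# grounding from data; here it is hand-coded for illustration only.
MOVES = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}

def follow_instruction(instruction, start=(0, 0)):
    """Execute instructions like 'go north then east' from `start`."""
    x, y = start
    for word in instruction.lower().split():
        if word in MOVES:               # ignore words with no grounded action
            dx, dy = MOVES[word]
            x, y = x + dx, y + dy
    return (x, y)

# follow_instruction("go north then east") -> (1, 1)
```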
## Machine Learning and Dynamical Systems meet in Reproducing Kernel Hilbert Spaces

Wed, Apr 21, 2021, 12:00 pm: Since its inception in the 19th century through the efforts of Poincaré and Lyapunov, the theory of dynamical systems addresses the qualitative behaviour of dynamical systems as understood from models. From this perspective, the modeling of dynamical processes in applications requires a detailed understanding of the processes to be analyzed. This deep understanding leads to a model, which is an approximation of the observed reality and is often expressed by a system of Ordinary/Partial, Underdetermined (Control),

## Princeton Research Day 2021

Thu, May 6, 2021 (All day): Princeton’s celebration of early-career research and creative work is back in an all-online format.

## The One World Seminar on the Mathematics of Machine Learning

Wed, Apr 7, 2021, 12:00 pm: In this talk we study the problem of signal recovery for group models. More precisely, given a set of groups, each containing a small subset of indices, and given linear sketches of a true signal vector that is group-sparse, in the sense that its support is contained in the union of a small number of these groups, we study algorithms that recover the true signal from knowledge of its linear sketches alone. We derive model projection complexity results and algorithms for more general group models than the state of the art. We consider two versions of the classical Iterative Hard Thresholding (IHT) algorithm. The classical version iteratively calculates the exact projection of a vector onto the group model, while the approximate version (AM-IHT) iteratively uses a head- and a tail-approximation. We apply both variants to group models and analyze the two cases where the sensing matrix is a Gaussian matrix and a model expander matrix.

## DataX Workshop: Social biases in machine learning and in human nature: What social scientists and computer scientists can learn from each other

Fri, Apr 9, 2021, 8:00 am: *Princeton DataX Workshop: Social Biases in Machine Learning and in Human Nature: What Social Scientists and Computer Scientists Can Learn From Each Other.* This two-day virtual workshop explores social biases in machine learning, bringing together cutting-edge perspectives from sociology, social psychology, cognitive science, and computer science on the interplay between stereotyping and human and artificial intelligence.

Hosts: Xuechunzi Bai and Susan T. Fiske
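To make the notion of "social bias in machine learning" concrete, one widely used fairness metric (a hypothetical illustration, not a topic confirmed for this workshop) is the demographic parity difference: the gap in a classifier's positive-prediction rates across groups.

```python
# Minimal sketch of the demographic parity difference: 0.0 means the
# classifier predicts the positive class at the same rate for every
# group; larger values indicate a bigger disparity.
def demographic_parity_difference(preds, groups):
    """preds: 0/1 predictions; groups: group label for each example."""
    counts = {}                                   # group -> (positives, total)
    for p, g in zip(preds, groups):
        pos, n = counts.get(g, (0, 0))
        counts[g] = (pos + p, n + 1)
    rates = {g: pos / n for g, (pos, n) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Example: group "a" always gets the positive label, group "b" never does,
# so the disparity is maximal (1.0).
gap = demographic_parity_difference([1, 1, 0, 0], ["a", "a", "b", "b"])
```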

## Evnin Lecture Series: Calling BS: The Art of Skepticism in a Data-Driven World

Tue, Mar 30, 2021, 7:00 pm: Join us virtually for the Evnin Lecture with Jevin West and Carl Bergstrom of the University of Washington, co-authors of *Calling Bullshit: The Art of Skepticism in a Digital World*.

## Leveraging Dataset Symmetries in Neural Network Prediction

Mon, Mar 22, 2021, 12:30 pm: Scientists and engineers are increasingly applying deep neural networks (DNNs) to the modelling and design of complex systems. While the flexibility of DNNs makes them an attractive tool, it also makes their solutions difficult to interpret and their predictive capability difficult to quantify. In contrast, scientific models directly expose the equations governing a process, but their applicability is restricted in the presence of unknown effects or when the data are high-dimensional.
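One simple way to leverage a dataset symmetry (a sketch of the general idea, not necessarily the talk's method) is to symmetrize a model by averaging its predictions over a known symmetry group, which makes the wrapped model exactly invariant to those transformations:

```python
import itertools
from statistics import mean

def symmetrize(model, group_actions):
    """Average a model's output over a list of input transformations,
    producing a model that is invariant to those transformations."""
    def invariant_model(x):
        return mean(model(g(x)) for g in group_actions)
    return invariant_model

# Toy example: enforce invariance to permutations of a length-3 input.
model = lambda x: x[0]                      # order-sensitive "model"
perms = [lambda x, p=p: [x[i] for i in p]   # all 6 permutations of 3 indices
         for p in itertools.permutations(range(3))]
f = symmetrize(model, perms)
# f now returns the mean of the inputs (2.0 for [1.0, 2.0, 3.0]),
# regardless of the order in which they are presented.
```

The averaging trick trades extra forward passes (one per group element) for an exact invariance guarantee; for large groups one would instead sample transformations or build the symmetry into the architecture.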

## Function Approximation via Sparse Random Fourier Features

Wed, Mar 17, 2021, 12:00 pm: Random feature methods have been successful in various machine learning tasks, are easy to compute, and come with theoretical accuracy bounds. They serve as an alternative approach to standard neural networks since they can represent similar function spaces without a costly training phase. However, for accuracy, random feature methods require more measurements than trainable parameters, limiting their use for data-scarce applications or problems in scientific machine learning.

## Finite Width, Large Depth Neural Networks as Perturbatively Solvable Models

Wed, Mar 10, 2021, 12:00 pm: **Abstract:** Deep neural networks are often considered to be complicated "black boxes," for which a systematic analysis is not only out of reach but potentially impossible. In this talk, which is based on ongoing joint work with Dan Roberts and Sho Yaida, I will make the opposite claim: deep neural networks at initialization are perturbatively solvable models. The perturbative parameter is the inverse width 1/n of the network, and we can obtain corrections to all orders in 1/n. Our approach applies to networks at finite width n and large depth L.

## AI Meets Large-scale Sensing: preserving and exploiting structure of the real world to enhance machine perception

Thu, Mar 11, 2021, 3:00 pm: Machine capability has reached an inflection point, achieving human-level performance in tasks traditionally associated with cognition (vision, speech, strategic gameplay). However, efforts to move such capability pervasively into the real world have in many cases fallen far short of the relatively constrained and isolated demonstrations of success. A major emerging insight is that structure in data can be substantially exploited to enhance machine learning. This talk explores how the statistically complex processes of the real world can be addressed by performing sensing in ways that preserve the real world's rich structure.