Generalized Lagrangian Networks

Fri, Jan 31, 2020, 12:00 pm
26 Prospect Ave, Classroom 103
Center for Statistics and Machine Learning

Even though neural networks enjoy widespread use, they still struggle to learn the basic laws of physics. How might we endow them with better inductive biases? Recent work (Greydanus et al. 2019) proposed Hamiltonian Neural Networks (HNNs), which learn the Hamiltonian of a physical system from data using a neural network. A key issue with these models is that they require a priori knowledge of the system's conjugate position and momentum coordinates, and thus are difficult to train on arbitrary coordinates such as pixels. In this talk, I will introduce Generalized Lagrangian Networks (GLNs), which learn Lagrangians instead of Hamiltonians. These models do not require conjugate position and momentum coordinates and perform well in situations where the generalized momentum is difficult to compute (e.g., the double pendulum). This is particularly appealing for use with learned latent representations, a case where HNNs struggle. Unlike previous work on learning Lagrangians (Lutter et al. 2019), our approach is fully general and extends to non-holonomic systems such as the 1D wave equation.
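The core idea behind learning a Lagrangian is that once a model parameterizes L(q, q̇), the accelerations follow mechanically from the Euler–Lagrange equations via automatic differentiation. The sketch below illustrates this in JAX; it is a minimal illustration of the general recipe, not the speaker's actual implementation, and the harmonic-oscillator Lagrangian is a stand-in for what would be a learned neural network.

```python
import jax
import jax.numpy as jnp

def lagrangian(q, q_dot):
    # Stand-in for a learned Lagrangian: a 1D harmonic oscillator,
    # L = T - V = 0.5*q_dot^2 - 0.5*q^2. In a GLN this would be a neural net.
    return 0.5 * jnp.dot(q_dot, q_dot) - 0.5 * jnp.dot(q, q)

def accelerations(lagrangian, q, q_dot):
    # Solve the Euler-Lagrange equations for q_ddot:
    #   (d^2L/dq_dot^2) q_ddot = dL/dq - (d^2L/dq dq_dot) q_dot
    grad_q = jax.grad(lagrangian, argnums=0)(q, q_dot)
    hess_qdot = jax.hessian(lagrangian, argnums=1)(q, q_dot)
    mixed = jax.jacfwd(jax.grad(lagrangian, argnums=1), argnums=0)(q, q_dot)
    return jnp.linalg.solve(hess_qdot, grad_q - mixed @ q_dot)

q = jnp.array([1.0])
q_dot = jnp.array([0.0])
print(accelerations(lagrangian, q, q_dot))  # oscillator: q_ddot = -q, i.e. [-1.]
```

Because the accelerations are obtained by solving a linear system in the Hessian of L with respect to q̇, no conjugate momentum coordinates are ever required, which is what lets this approach work in arbitrary coordinate systems.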


Lunch will be provided.

This interdisciplinary meeting focuses on ML approaches that are useful for the sciences and engineering. The style is informal. Join us if you want to learn about new ML approaches for scientific research, and especially if you're already using them.