Scientists and engineers are increasingly applying deep neural networks (DNNs) to the modelling and design of complex systems. While the flexibility of DNNs makes them an attractive tool, it also makes their solutions difficult to interpret and their predictive capability difficult to quantify. In contrast, scientific models directly expose the equations governing a process, but their applicability is restricted in the presence of unknown effects or when the data are high-dimensional. The emerging paradigm of physics-guided artificial intelligence asks: How can we combine the flexibility of DNNs with the interpretability of scientific models to learn relationships from data that are consistent with known scientific theories? In this talk, I will discuss my work on incorporating prior knowledge of problem structure (e.g., physics-based constraints) into neural network design. Specifically, I will demonstrate how prior knowledge of task symmetries can be leveraged for improved learning outcomes in classification with convolutional neural networks, and how embedding priors from dynamical systems theory can lead to physically plausible video prediction with neural networks.
Dr. Christine Allen-Blanchette is a postdoctoral researcher in the Department of Mechanical and Aerospace Engineering at Princeton University, where she pursues research at the intersection of deep learning, geometry, and dynamical systems. She completed her PhD in Computer Science and MSE in Robotics at the University of Pennsylvania, and her BS degrees in Mechanical Engineering and Computer Engineering at San Jose State University. Her awards include the Princeton Presidential Postdoctoral Fellowship, the NSF Integrative Graduate Education and Research Training award, and the GEM Fellowship sponsored by the Adobe Foundation.