As the space between robots and humans shrinks, how do we keep the interactions safe?

Written by
Allison Gasparini
March 11, 2024

Over the last couple of years, artificial intelligence has become ingrained in our everyday lives. It’s been a remarkable trajectory that is expected to continue upward – but not without risks.

For a long time, robots have operated in contained industrial spaces. Robotic arms building cars on an assembly line, for example, are physically walled off to prevent mishaps with human workers. “In industrial robotics, we don’t have to worry about the safety of human-robot interactions,” said Jaime Fernández Fisac, an assistant professor of electrical and computer engineering at Princeton University and participating faculty at the Center for Statistics and Machine Learning (CSML).

But the last decade has seen a notable shift that brings robotics increasingly into human-populated spaces. Now, we don’t just have robots that are making cars. We have robots that are cars. It’s a transition that hasn’t always been smooth. In June of last year, The Washington Post reported that Tesla’s automated vehicle system Autopilot was involved in over 700 crashes and 17 fatalities in the U.S. from 2019 to the time of the reporting.

“If these things perpetually happen, the public will reject the technology,” said Fernández Fisac. “We need to do our homework and ensure we provide the public the guarantees and assurances that are necessary to deploy the technology.”

In a seminar on March 5 hosted by CSML, Fernández Fisac shared some of the ways he and his colleagues at the Safe Robotics Laboratory are using machine learning to work toward ensuring that humans are protected from harm when operating robotic systems and coexisting with them.

The question of a safety guarantee

The AI boom brings with it opportunities for great benefits — but risks for individual and societal harm are inevitable as well. Providing a promise of absolute safety is an impossible task with anything constructed for human use. Take bridges for example. They’re a necessary part of infrastructure. But bridge collapses are extraordinarily dangerous, cause mass fatalities and, unfortunately, do happen. “That doesn’t mean we shouldn’t build bridges,” said Fernández Fisac.


At a lunchtime seminar hosted by the Center for Statistics and Machine Learning, Assistant Professor Jaime Fernández Fisac gave a talk on safety in human-AI interactions. 

A bridge is built by licensed contractors and professional engineers. And even then, a bridge may be closed if weather conditions make it unsafe to cross. Fernández Fisac argues a similar approach can be taken with AI. It may be impossible to promise safety under every condition imaginable (there will always be extreme scenarios a robot cannot plan for). But AI systems can and should guarantee safety in as many scenarios as possible, and that takes a great deal of advance planning.

It’s not just automated vehicles that pose safety concerns researchers need to plan for. In early 2023, Vice reported that an AI chatbot encouraged a man suffering from depression to take his own life. The incident raised questions about how AI researchers should consider the risk chatbots can pose to users struggling with mental illness.

Fernández Fisac said the question he and his laboratory are asking is, “Can I make decisions not right before something bad is about to happen, but ahead of time?” 

In the Safe Robotics Laboratory at Princeton, Fernández Fisac and his colleagues are using reinforcement learning to solve this problem. Their quest to reconcile safety and performance has included the design of effective safety filters. These filters use a reinforcement learning algorithm to monitor the operations of autonomous systems and intervene to prevent failure and hazardous behavior. 

The algorithm supervises the robot’s operations and intervenes in uncertain environments with a “safety fallback strategy” that protects the robot from worst-case scenarios. In a video he played for the seminar, Fernández Fisac demonstrated how his lab’s safety filter aided a robot in crossing bumpy terrain. He said these safety filters could also be used with autonomous cars driving on busy roads or in preventing chatbots from saying harmful things to those struggling with mental health issues.
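Conceptually, a safety filter of this kind can be thought of as a thin wrapper around whatever policy is driving the robot: it checks each proposed action against a learned estimate of safety and only overrides it when that estimate says the action could lead to failure. The Python sketch below is a hypothetical illustration of that idea, not code from the Safe Robotics Laboratory; the `safety_critic`, `fallback_policy`, and toy one-dimensional robot are assumptions made purely for the example.

```python
def is_action_safe(state, action, safety_critic, threshold=0.0):
    """Use a learned safety critic (e.g. a value function trained with
    reinforcement learning) to estimate whether taking `action` in
    `state` keeps the system inside the safe set."""
    return safety_critic(state, action) > threshold


def safety_filter(state, task_action, fallback_policy, safety_critic):
    """Pass the task policy's action through when it is predicted to be
    safe; otherwise override it with the safety fallback strategy."""
    if is_action_safe(state, task_action, safety_critic):
        return task_action
    return fallback_policy(state)


# Toy usage: a 1-D robot that must not move past position 1.0.
if __name__ == "__main__":
    safety_critic = lambda s, a: 1.0 - (s + a)  # positive while the next position stays safe
    fallback_policy = lambda s: -0.5            # back away from the boundary
    state = 0.9
    task_action = 0.3                           # the task policy wants to push forward
    print(safety_filter(state, task_action, fallback_policy, safety_critic))  # prints -0.5
```

In a real system the safety critic and fallback strategy would themselves be learned and would operate over the robot’s full state and action spaces, but the control flow stays the same: let the task policy act freely whenever its action can be certified safe, and hand control to the fallback otherwise.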

AI mishaps often make headlines, so it’s important to assure the public that AI systems deployed at scale are safe to use. Safety filters like those being created by Fernández Fisac aim to do just that.