Robotic arms are already common in factories, where they paint or weld parts, pick items from conveyor belts, and stack products. But how can we move robotic arms out of controlled settings and into our everyday lives, where they can perform common tasks like dishwashing or grocery shopping?
Gan Luyang ’26 is spending her summer tackling this question and gaining research experience in the Princeton Vision & Learning Lab. Luyang, a mathematics major pursuing a minor in statistics and machine learning, is one of three students funded this year by the Center for Statistics and Machine Learning to complete research in a laboratory on campus during the summer months.
Luyang’s project objective is to develop a robotic arm and evaluate the methods by which it could grasp an item from any location and under any condition. “A factory is a constrained environment and it’s easy to just program your robotic arm to say, first move here, then move here, then move here,” said Luyang. A household, by contrast, is an entirely different environment, where objects are frequently misplaced, replaced, and moved around. “You kind of have to adjust dynamically.”
The Vision & Learning Lab is overseen by Associate Professor of Computer Science Jia Deng. In the lab are a robotic arm and a display set up with items one might find strewn around a person’s home – Lego blocks, a Pringles can, a mustard bottle. This setting is used for the lab’s robot experiments, during which the researchers test machine learning models trained for the task of picking up items in the display. The models help the robotic arm plan the motion it must take in order to grasp one of the objects.
Embodying AI
Luyang is in charge of fine-tuning the neural networks used in the lab’s research. She reads through other researchers’ code, trains the models, then deploys them and monitors their performance. “Luyang is brilliant and hard-working,” said Deng, who advises Luyang’s work in the lab. He said her work training the baseline models has been an “essential” part of the project.
In a world where language and image generators like ChatGPT are taking off, Luyang finds herself drawn instead to the idea of embodied intelligence – giving AI a “body,” so to speak, by merging it with robotics. “What’s important to me is for artificial intelligence to have real, physical contact with the world,” said Luyang. “This whole field of robotic manipulation is still not quite developed yet – just the simple task of grasping things is actually very difficult.”
Luyang described the research experience she has gained this summer as “invaluable.” “I’ve definitely learned a lot,” said Luyang. “Getting into the fine-tuning details has given me a more tangible understanding of the machine learning concepts I’ve learned in class, and my proficiency in 3D vision has also deepened through working with thousands of lines of code.”
Luyang’s goal is to one day earn a PhD before going into industry, where she hopes to make an impact in real-world settings and help people. “Ultimately, artificial intelligence should not just interact with us through computers, but it should actually help us with our lives,” said Luyang.