Princeton University is leading the way in fostering community and collaboration among researchers in robotics and artificial intelligence. On Nov. 4, the University hosted the Princeton Symposium on Safe Deployment of Foundation Models in Robotics. Faculty, students and post-doctoral researchers from both Princeton and outside institutions gathered to present their research, exchange ideas, and engage with a series of talks from leaders in robotics and AI research.
“These are really exciting times in robotics,” said Anirudha Majumdar, Associate Professor of Mechanical and Aerospace Engineering at Princeton and symposium organizer, during the event’s opening remarks.
“This event is a prime example of the trends the university has identified,” said Sanjeev Arora, Charles C. Fitzmorris Professor of Computer Science and Director of Princeton Language and Intelligence (PLI), who organized the symposium along with Majumdar. “There are unifying ideas in AI that apply to all kinds of other disciplines.”
Robotics at Princeton
Over the past decade, the School of Engineering and Applied Science has worked to expand robotics research at the University. Majumdar said that at the same time he had been helping to grow robotics at Princeton, the University had also begun to put its weight behind launching and expanding a number of artificial intelligence initiatives.
This fall semester, the University expanded its AI research landscape with the launch of two new initiatives: AI for Accelerating Invention (AI^2) and Natural and Artificial Minds (NAM). Alongside PLI, which launched in 2023, these initiatives form the foundation of the Princeton Laboratory for Artificial Intelligence (AI Lab).
Given the ample opportunity for collaboration between AI and robotics, Majumdar and Arora had been discussing the idea of a joint event. “Since there’s a lot of excitement and activity in both areas, we wanted something that would bring not just robotics but also the AI community together,” said Majumdar.
Majumdar and Arora chose the topic of safety for the symposium because it is something that both robotics researchers and AI researchers have begun to consider more and more as the fields rapidly evolve and converge. At the dawn of robotics research, the concept of safety revolved mostly around preventing collisions. Over the ensuing decades, that has changed, especially as AI has come to be used in conjunction with robotic systems.
“The advent of foundation models provides an opportunity to think about safety much more broadly,” said Majumdar.
Thinking about safety
Among the industry leaders who spoke at the symposium were Vincent Vanhoucke, Distinguished Engineer at Waymo and former head of robotics at Google DeepMind; Masha Itkina, research scientist at the Toyota Research Institute; and Yann LeCun, Chief AI Scientist at Meta.
The answer to building controllable and safe robotic systems, LeCun posited in his talk at the symposium, is creating smart systems.
LeCun further discussed the routes researchers might take in order to push AI to the levels of human intelligence. “If [you] want to build robots, and not just if you want to build robots, if you want to build AI systems that have human-level intelligence…they need to have some level of common sense,” said LeCun. “We really don’t have any technology that’s capable of doing this at the moment.”
In addition to the talks, student and post-doctoral participants showcased their own research and engaged in one-on-one discussions at the symposium’s poster session.
Among the presenters was Alex Robey, a post-doctoral researcher from Carnegie Mellon University. Robey’s research revolves around anticipating potential malicious attacks on robotics systems with the eventual goal of figuring out how to defend against them. “When you’re using foundation models and language models in robotics, it’s a relatively unexplored area,” said Robey.
Robots are already used by police departments and deployed during war, so if a malicious user were able to co-opt one such robot, there is the potential for a lot of damage to be done. The question is, how do researchers create safety-minded machine learning systems to prevent such attacks? “I don’t know how to solve the problems,” said Robey. “That’s why I wanted to come and present this.”
Majumdar said he and his collaborators plan to continue hosting events similar to the robotics symposium. Of the many workshops and conferences he has attended, Robey said, he felt the smaller scale of the Princeton symposium was beneficial for fostering community and collaboration.
“You can just go up to somebody and immediately connect,” said Robey. “It’s been much faster for getting on the same page about what’s important.”
Link for Symposium Videos and Poster Session