
Photo by Sameer Khan.
As artificial intelligence becomes increasingly complex, new language is needed to help improve communication between humans and machines, Google DeepMind senior staff research scientist Been Kim told a standing-room-only crowd in Robertson Hall’s Arthur Lewis Auditorium on March 28.
In a lecture that spanned nearly an hour, Kim argued for the adoption of “neologisms”: new, more precise words that represent human concepts we could teach machines, or that machines could teach us. Kim recently published a position paper on the topic alongside colleagues at Google.
Humans frequently invent new words to meet specific communication needs, she said, such as to explain complex topics like mathematics or to signal tribal affiliation. Gen Alpha has invented many words, like “skibidi,” that mystify their elders, she said. But we also create new words to make it easier to communicate with one another.
“Neologisms are really about doing the same thing between machines and humans,” she said.
The event was the second of four Distinguished Lectures organized by the Princeton Laboratory for Artificial Intelligence this semester, with the goal of bringing speakers to campus whose research demonstrates the transformative impact AI could have across disciplines. The next Distinguished Lecture, featuring Kathleen R. McKeown, a computer science professor at Columbia University, is scheduled for 2 p.m. Friday, April 4. Hosted by Princeton Language and Intelligence, the event is titled “Hallucination in Text Summarization: From News to Narrative.” On April 11, PLI will host Yejin Choi of Stanford University for the semester’s final Distinguished Lecture Series event.

Google's Been Kim lectures in Robertson Hall’s Arthur Lewis Auditorium. Photo by Sameer Khan.
Kim’s Distinguished Lecture was hosted by Natural and Artificial Minds, one of the AI Lab’s three research initiatives. Kim, who received her Ph.D. from the Massachusetts Institute of Technology, researches how to help humans better communicate with complex large language models, and how machines’ nature compares to that of humans.
“Her work is a perfect example of the kind of research we try to foster through the initiative Natural and Artificial Minds,” said Tania Lombrozo, Arthur W. Marks ’19 Professor of Psychology and co-director of NAM.
Improving communication with machines is important, Kim said, because we have already seen evidence that people can learn from large language models. She described in detail research at Google DeepMind in which world-class chess players were able to improve their skills in a short period of time with the use of AI.
“The natural question is, how do you do this more generally? How do you enable learning from machines, not just for experts, but for everyday users of machines?” she asked.
Andrew Nam, a postdoctoral researcher for NAM who attended the talk, said he appreciated hearing about research that is more focused on the end user. He compared the current moment, in which large language models like ChatGPT are used widely by the general public, to earlier adoptions of technology, like the rise of personal computers.
“It’s a different perspective from someone coming in from industry,” Nam said. “These are the kinds of questions we don’t think about as much as theorists with the AI Lab.”
While researchers would like to understand “every single atom” of large language models, Kim cautioned that, ultimately, there will always be limits to our understanding of machines.
“At the end of the day, we build these models so they’re useful to us,” she said. “Solving the communication problem narrowly focuses on making interpretability useful, which is ever more so important as LLMs change the way we live this life. And we want a good life.”