Natural and Artificial Minds (NAM) reenvisions theory-driven cognitive science research in the age of artificial intelligence.
The fields of cognitive science and artificial intelligence (AI) originally developed hand in hand: human cognition inspired the first AI systems, and those systems in turn inspired new theories of human cognition. AI has made rapid progress in the last decade, creating new opportunities for synthesis with cognitive science. Modern AI systems offer potential insights into how human minds may – and may not – work, thus fueling new theoretical advances in cognitive science. Conversely, modern AI systems have significant limitations relative to humans, and advances in cognitive science can provide insights into the processes underlying human capabilities, leading to new innovations in AI.
The goal of the Natural and Artificial Minds (NAM) initiative is to support opportunities for mutual interaction across the cognitive sciences and AI, accelerating discoveries about natural and artificial minds and creating a unique community for theory-driven research in cognitive science.
A key insight driving NAM is that minds and mental capacities can be fruitfully investigated beyond their paradigmatic manifestation in humans. By focusing on the questions – and sometimes answers – that are shared across natural and artificial minds, we can advance a more general science of minds. At the same time, appreciating the diversity of minds can create new opportunities for understanding and improving intelligent systems. To accomplish this vision, NAM brings together different branches of cognitive science, including psychology, computer science, philosophy, and neuroscience.
In its first year (2024-2025), NAM will focus on launching two research efforts. The first, led by NAM co-director Sarah-Jane Leslie, focuses on developing and testing AI models of human cognitive function. The second, led by NAM co-director Tania Lombrozo, focuses on explanation and intelligibility in humans and machines. For more information, see Core Projects.
Get Involved
There are many ways to get involved in NAM:
- Join the mailing list
- Attend the NAM Launch Event
- Apply for seed funding for faculty pursuing theory-driven cognitive research projects that bridge natural and artificial minds
- Apply for seed funding for postdoctoral researchers and graduate students pursuing theory-driven cognitive research projects advised by faculty members spanning more than one NAM subdiscipline (call for proposals will be available in fall 2024)
- Apply for funding to support reading groups on NAM topics based in the AI Lab space, covering snacks or meals and visits from external guests (proposals accepted on a rolling basis; applicants should send a two-page proposal with sample readings, target audience, and a budget to the NAM co-directors)
- Attend biweekly working group meetings on Natural and Artificial Minds
- Apply for funding to offer a Wintersession activity related to NAM (applicants should send a two-page proposal and budget to NAM co-directors Sarah-Jane Leslie and Tania Lombrozo)
- Apply for a postdoctoral fellowship in NAM
Core Projects
Developing and Testing AI Models of Human Cognitive Function
One of the largest remaining gaps between natural and artificial minds lies in how efficiently natural minds (or at least human ones) learn, and how flexibly they generalize what they have learned to novel circumstances. These capabilities reflect humans' ability to efficiently learn low-dimensional, abstract representations of task-relevant structure, and to apply and recombine those representations in new settings that share similar elements of structure. Artificial systems have yet to exhibit these capabilities: they require massive amounts of data to train (several orders of magnitude more than humans), and they achieve proficiency in focused domains of function (e.g., language versus motor skills) that does not generalize to others. This project will directly address this gap, under the assumption that natural minds are imbued with inductive biases toward the efficient learning of task-relevant abstract representations. It will draw on insights from psychologists about the functional components of human cognition, and from neuroscientists about principles of computation in neural network architectures gleaned from the architecture of the brain.
Explanation and Intelligibility in Humans and Machines
Deep learning systems and other advances in AI have raised questions about “explainability”: How can an engineer or end user understand the basis for an algorithmic judgment or decision that emerges from a largely opaque process? While research on explainability within computer science has made important advances, it has proceeded largely independently of existing work on the nature of explanation and understanding in educational, cognitive, and social psychology, as well as in philosophy of science and epistemology. This disconnect is unfortunate, as these fields have a great deal to learn from each other. This project will bring together an interdisciplinary team to tackle these new questions and to generate a taxonomy of forms of understanding that can help human minds better understand artificial systems, and help artificial systems better approximate human explanation and understanding. Postdoctoral fellows dedicated to this project will play a crucial role in theoretical development, empirical testing, and dissemination, with an eye toward serving the needs of psychologists, philosophers, and computer scientists interested in explanation and understanding.
Leadership
Sarah-Jane Leslie
Class of 1943 Professor
Co-Director of Natural and Artificial Minds
Philosophy & Statistics and Machine Learning
Tania Lombrozo
Co-Director of Natural and Artificial Minds
Arthur W. Marks ’19 Professor of Psychology