Date
Oct 25, 2024, 10:30 am – 12:00 pm
Location
41 William Street, Room 274

Speaker
Andrew Nam

Event Description

Abstract: Symbolic systems are powerful frameworks for modeling cognitive processes because they encapsulate the rules and relationships fundamental to many aspects of human reasoning and behavior. Central to these models are systematicity, compositionality, and productivity, which make them invaluable in both cognitive science and artificial intelligence. However, certain limitations remain. For instance, the integration of structured symbolic processes with latent sub-symbolic processes has typically been implemented at the computational level through fiat methods such as quantization or softmax sampling, which assume, rather than derive, the operations underpinning discretization and symbolization. In this work, we introduce a novel neural stochastic dynamical systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT). Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, learned in an unsupervised manner rather than from pre-defined primitives. Moreover, like PLoT, our model learns to sample a diverse distribution of attractor states that reflects the mutual information between the input data and the symbolic encodings. This approach establishes a unified framework that integrates symbolic and sub-symbolic processing through neural dynamics, a neurally plausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
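
For intuition, the core notion of stochastic attractor dynamics can be illustrated with a minimal sketch: a classic noisy Hopfield-style network, not the speaker's model. Stored binary patterns act as discrete "symbolic" attractor states, a continuous state settles into one of their basins, and injected noise lets repeated runs sample a distribution over attractors. All sizes, patterns, and parameters below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (illustrative, not the speaker's model): stochastic
# Hopfield-style attractor dynamics. Binary patterns serve as discrete
# "symbolic" attractor states; noise makes settling probabilistic.

rng = np.random.default_rng(0)

# Hypothetical setup: 3 stored patterns in a 16-dimensional state space.
patterns = rng.choice([-1.0, 1.0], size=(3, 16))
W = patterns.T @ patterns / patterns.shape[1]  # Hebbian weight matrix
np.fill_diagonal(W, 0.0)                       # no self-connections

def settle(x, steps=200, noise=0.1, dt=0.1):
    """Integrate noisy dynamics dx/dt = -x + W tanh(x) + noise."""
    for _ in range(steps):
        drift = -x + W @ np.tanh(x)
        x = x + dt * drift + noise * np.sqrt(dt) * rng.standard_normal(x.shape)
    return np.sign(x)  # discretize: report the basin the state landed in

# From one fixed initial state, noise yields a distribution over attractors.
x0 = 0.1 * rng.standard_normal(16)
for _ in range(5):
    s = settle(x0.copy())
    overlaps = patterns @ s / patterns.shape[1]  # match against stored patterns
    print("settled into pattern", int(np.argmax(np.abs(overlaps))))
```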

Bio: Andrew Nam is a postdoctoral researcher at the AI Lab, Natural and Artificial Minds (NAM), where he explores the mechanisms of intelligent systems in both humans and machines. A graduate of UC Berkeley with a dual degree in Computer Science and Economics, Andrew initially worked as a software engineer before pursuing a PhD in Psychology at Stanford University under the guidance of Jay McClelland. His doctoral research focused on abstract and rule-based reasoning, rapid learning, and out-of-distribution generalization. Currently, he is interested in understanding abstract reasoning using both tractable, cognitively inspired neural architectures and large-scale foundation models trained on naturalistic data.

A light breakfast will be served.

Sponsor
Event organized by NAM/PDP