
Autonomous Agents to Build Cognitive Models

Principal Investigator
Daniel Yamins, Assistant Professor of Psychology

Co-Principal Investigators
Michael Frank, Associate Professor of Psychology and, by courtesy, of Linguistics
Fei-Fei Li, Associate Professor of Computer Science
Dorsa Sadigh, Assistant Professor of Computer Science and of Electrical Engineering
Dennis Wall, Associate Professor of Pediatrics (Systems Medicine), of Biomedical Data Science and, by courtesy, of Psychiatry and Behavioral Sciences

Abstract
How do we build intelligent robots that flexibly handle new environments and interact naturally in social circumstances? How do children develop visual and motor skills, and learn to understand and interact with other people? How do we improve the definition, measurement, and treatment of developmental disorders such as Autism Spectrum Disorder (ASD)? Though these questions might at first seem disparate, they are deeply linked, both scientifically and technologically.

In fact, they all rest on the conception of an intrinsically-motivated, physically-embodied, self-aware agent. Such an agent spontaneously poses and solves interesting tasks for itself – just because they’re fun – a capability beyond today’s most advanced robots. In contrast, human children are the ultimate intrinsically-motivated curious agents: they are “scientists in the crib” who learn about their world and other people as they explore and restructure the environment around them. And, as recent clinical studies make clear, developmental disorders like ASD involve complex interactions among deficits in motivation, visuomotor control, attention, social prediction, and play behavior – observables directly relevant to the functioning (or failure) of autonomy.

Our team brings together cutting-edge tools from artificial intelligence (AI) and the cognitive and clinical sciences to make a substantial leap in our ability to build playful artificial agents. From an AI perspective, our goal is to combine insights from recent advances in deep neural networks, reinforcement learning, and robotics to formalize, implement, train, and deploy such curious, embodied, self-aware, socially interactive algorithmic agents.
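To give a concrete flavor of what "intrinsic motivation" means algorithmically, here is a minimal sketch of one standard formalization from the curiosity-driven reinforcement learning literature: a tabular agent whose only reward is the prediction error of a learned world model. The toy environment, the tabular representation, and all names below are illustrative assumptions, not the project's actual agents.

    # A minimal sketch (not the project's architecture) of a curiosity-driven agent:
    # intrinsic reward is the prediction error of a learned world model, so the
    # agent is drawn toward transitions it cannot yet predict.
    import numpy as np

    rng = np.random.default_rng(0)
    N_STATES, N_ACTIONS = 8, 4

    # Toy environment: a transition table unknown to the agent.
    transition = rng.integers(0, N_STATES, size=(N_STATES, N_ACTIONS))

    # World model: learned next-state distribution for each (state, action).
    model = np.full((N_STATES, N_ACTIONS, N_STATES), 1.0 / N_STATES)
    # Action values driven purely by intrinsic (curiosity) reward.
    q_values = np.zeros((N_STATES, N_ACTIONS))

    state, lr, epsilon = 0, 0.1, 0.1
    for step in range(5000):
        # Epsilon-greedy on curiosity value: prefer actions with uncertain outcomes.
        if rng.random() < epsilon:
            action = int(rng.integers(N_ACTIONS))
        else:
            action = int(np.argmax(q_values[state]))

        next_state = int(transition[state, action])

        # Intrinsic reward = world-model prediction error ("surprise").
        predicted = model[state, action]
        intrinsic_reward = 1.0 - predicted[next_state]

        # Update the world model toward the observed outcome...
        target = np.zeros(N_STATES)
        target[next_state] = 1.0
        model[state, action] += lr * (target - predicted)

        # ...and the curiosity-driven action values toward the intrinsic reward.
        q_values[state, action] += lr * (intrinsic_reward - q_values[state, action])

        state = next_state

    # Residual uncertainty per (state, action); low values mean "mastered".
    print("mean residual surprise:", float(np.mean(1.0 - model.max(axis=2))))

Because the intrinsic reward decays as the world model improves, such an agent loses interest in transitions it has mastered and moves on to new ones, a crude analogue of playful exploration.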

Critically, if we can get the AI's learning behavior sufficiently right, it should give us insight into the underlying causes of the emergence and timing of empirically-observed developmental milestones in children. In other words, the AI agents will serve as a starting point for a quantitative model of visuomotor-social development. A tight link between our AI and behavioral-measurement efforts is critical: as we iterate through the inevitable failures of our AI agents to match human behavior, we will progressively improve both the agents' performance capabilities and their ability to explain human experimental data.

We also hypothesize that if the properly-functioning AI agent predicts typical child behavior, then rendering particular components dysfunctional should predict distinct classes of atypical behavior associated with developmental disorders. We seek to characterize the failure modes of our AI agents and match their outputs to the patterns and onset times of atypical visual and attentional behaviors observed in recent studies of ASD. Combining these insights with recent augmented-reality treatment devices for ASD, we also seek to develop improved device-based therapies that are both effective and low-cost.
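To make the lesioning hypothesis concrete, the sketch below reuses the toy curious agent above (under the same illustrative assumptions), disables a single component, the intrinsic-reward signal, and compares a simple behavioral signature, exploratory coverage, between intact and lesioned runs. Neither the lesion nor the metric is drawn from the project itself; they stand in for the component-to-behavior mapping the hypothesis describes.

    # A hedged sketch of "computational lesioning": run the same toy curious agent
    # with one component silenced and compare behavioral signatures.
    import numpy as np

    def run_agent(lesion_curiosity: bool, steps: int = 5000, seed: int = 0) -> float:
        rng = np.random.default_rng(seed)
        n_states, n_actions, lr, epsilon = 8, 4, 0.1, 0.1
        transition = rng.integers(0, n_states, size=(n_states, n_actions))
        model = np.full((n_states, n_actions, n_states), 1.0 / n_states)
        q = np.zeros((n_states, n_actions))
        visits = np.zeros(n_states)

        state = 0
        for _ in range(steps):
            if rng.random() < epsilon:
                a = int(rng.integers(n_actions))
            else:
                a = int(np.argmax(q[state]))
            nxt = int(transition[state, a])
            surprise = 1.0 - model[state, a, nxt]
            # Lesion: the motivational signal is silenced; everything else is intact.
            reward = 0.0 if lesion_curiosity else surprise
            target = np.zeros(n_states)
            target[nxt] = 1.0
            model[state, a] += lr * (target - model[state, a])
            q[state, a] += lr * (reward - q[state, a])
            visits[nxt] += 1
            state = nxt

        # Behavioral signature: how evenly the agent explores (normalized entropy).
        p = visits / visits.sum()
        return float(-np.sum(p[p > 0] * np.log(p[p > 0])) / np.log(n_states))

    print("intact exploration coverage:  ", run_agent(lesion_curiosity=False))
    print("lesioned exploration coverage:", run_agent(lesion_curiosity=True))

The lesioned agent, deprived of its drive to seek surprise, settles into a narrow behavioral loop, yielding a measurably different signature from the intact agent; the project's analogous goal is to match such model-derived signatures against clinically observed patterns.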

If successful, our work is a step towards the grand challenge of using artificial intelligence to understand, mimic, diagnose, and treat the human mind. Our proposal has evident risks: the AI goals may be unattainable, the models may not match typical development, and the agents' failure modes may not correspond to real deficits in the disorders. However, by keeping a tight collaborative loop between AI and cognitive development, we are confident that we can make partial progress on each front and spur future research efforts.