Safe Generalization
in Minds and Machines

Investigating how similarity in information shapes representations, and how latent cause inference can lead to both generalization and memory failures in humans and AI.

Successful Generalization

Understanding how context, sequences, curricula, and existing representations shape new representations.

Read More

Failures in Generalization

Investigating false memories in humans and hallucinations in AI that arise from representational geometry and latent cause inference.

Read More

AI Safety & Reliability

Developing methods to predict and prevent agent derailment from task goals using information-theoretic signatures.

Read More

Control & Performance

Examining how control allocation impacts performance and learning.

Read More

Design Spaces

Creating metascience tools for the next frontier of scientific discovery.

Read More