  •  Children’s imitation of causal action sequences is influenced by statistical and pedagogical evidence
    with Daphna Buchsbaum, Alison Gopnik, and Patrick Shafto
    Cognition 120 (3): 331-340. 2011.
  •  Formalizing Neurath’s ship: Approximate algorithms for online causal learning
    with Neil R. Bramley, Peter Dayan, and David A. Lagnado
    Psychological Review 124 (3): 301-338. 2017.
  •  Seeking Confirmation Is Rational for Deterministic Hypotheses
    with Joseph L. Austerweil
    Cognitive Science 35 (3): 499-526. 2011.
    The tendency to test outcomes that are predicted by our current theory (the confirmation bias) is one of the best-known biases of human decision making. We prove that the confirmation bias is an optimal strategy for testing hypotheses when those hypotheses are deterministic, each making a single prediction about the next event in a sequence. Our proof applies for two normative standards commonly used for evaluating hypothesis testing: maximizing expected information gain and maximizing the proba…
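    A minimal sketch of the expected-information-gain standard named in the abstract, using a toy set of deterministic hypotheses rather than the paper's sequence-prediction task (all names and numbers here are illustrative):

        import numpy as np

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        # Toy setup: each row is a deterministic hypothesis, giving the outcome
        # it predicts for each possible query.
        predictions = np.array([
            [0, 1, 1],   # hypothesis A
            [0, 1, 0],   # hypothesis B
            [1, 0, 1],   # hypothesis C
        ])
        prior = np.array([0.6, 0.3, 0.1])  # learner's current degree of belief in each hypothesis

        def expected_information_gain(query):
            """Expected reduction in uncertainty from observing the outcome of one query."""
            eig = 0.0
            for outcome in np.unique(predictions[:, query]):
                consistent = predictions[:, query] == outcome
                p_outcome = prior[consistent].sum()
                posterior = np.where(consistent, prior, 0.0) / p_outcome
                eig += p_outcome * (entropy(prior) - entropy(posterior))
            return eig

        for q in range(predictions.shape[1]):
            print(f"query {q}: EIG = {expected_information_gain(q):.3f} bits")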
  •  A nonparametric Bayesian framework for constructing flexible feature representations
    with Joseph L. Austerweil
    Psychological Review 120 (4): 817-851. 2013.
  •  Random walks on semantic networks can resemble optimal foraging
    with Joshua T. Abbott and Joseph L. Austerweil
    Psychological Review 122 (3): 558-569. 2015.
  •  Rational approximations to rational models: Alternative algorithms for category learning
    with Adam N. Sanborn and Daniel J. Navarro
    Psychological Review 117 (4): 1144-1167. 2010.
  •  Learning to Learn Functions
    with Michael Y. Li, Fred Callaway, William D. Thompson, and Ryan P. Adams
    Cognitive Science 47 (4). 2023.
    Humans can learn complex functional relationships between variables from small amounts of data. In doing so, they draw on prior expectations about the form of these relationships. In three experiments, we show that people learn to adjust these expectations through experience, learning about the likely forms of the functions they will encounter. Previous work has used Gaussian processes—a statistical framework that extends Bayesian nonparametric approaches to regression—to model human function le…
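    For context, a minimal Gaussian process regression sketch in plain numpy; it uses an RBF kernel with fixed hyperparameters and synthetic data, whereas the models in the paper additionally learn expectations about the form of the function from experience:

        import numpy as np

        def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
            d = a[:, None] - b[None, :]
            return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

        def gp_posterior(x_train, y_train, x_test, noise=0.1):
            """Posterior mean and variance of a GP regression with an RBF kernel."""
            K = rbf_kernel(x_train, x_train) + noise ** 2 * np.eye(len(x_train))
            K_s = rbf_kernel(x_train, x_test)
            K_ss = rbf_kernel(x_test, x_test)
            alpha = np.linalg.solve(K, y_train)
            mean = K_s.T @ alpha
            cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
            return mean, np.diag(cov)

        x = np.array([0.0, 1.0, 2.0, 3.0])
        y = np.sin(x)                                  # synthetic observations
        mean, var = gp_posterior(x, y, np.linspace(0.0, 3.0, 7))
        print(np.round(mean, 2))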
  •  Iterated learning reveals stereotypes of facial trustworthiness that propagate in the absence of evidence
    with Stefan Uddenberg, Bill D. Thompson, Madalina Vlasceanu, and Alexander Todorov
    Cognition 237 (C): 105452. 2023.
  •  Extracting Low‐Dimensional Psychological Representations from Convolutional Neural Networks
    with Aditi Jha and Joshua C. Peterson
    Cognitive Science 47 (1). 2023.
    Convolutional neural networks (CNNs) are increasingly widely used in psychology and neuroscience to predict how human minds and brains respond to visual images. Typically, CNNs represent these images using thousands of features that are learned through extensive training on image datasets. This raises a question: How many of these features are really needed to model human behavior? Here, we attempt to estimate the number of dimensions in CNN representations that are required to capture human psy…
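    A sketch of the kind of dimensionality analysis the abstract describes, run here on synthetic arrays standing in for CNN activations and human judgments (in-sample fit only, no cross-validation):

        import numpy as np

        rng = np.random.default_rng(0)
        n_images, n_features = 200, 512
        cnn_features = rng.normal(size=(n_images, n_features))       # stand-in for CNN activations
        human_judgments = cnn_features[:, :5] @ rng.normal(size=5)    # behavior that depends on few dimensions
        human_judgments += 0.1 * rng.normal(size=n_images)

        def r_squared_with_k_dims(k):
            # Project features onto their top-k principal components, then fit a linear readout.
            centered = cnn_features - cnn_features.mean(axis=0)
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            projected = centered @ vt[:k].T
            coef, *_ = np.linalg.lstsq(projected, human_judgments, rcond=None)
            pred = projected @ coef
            ss_res = np.sum((human_judgments - pred) ** 2)
            ss_tot = np.sum((human_judgments - human_judgments.mean()) ** 2)
            return 1 - ss_res / ss_tot

        for k in (1, 5, 20, 100):
            print(f"{k:3d} dimensions: R^2 = {r_squared_with_k_dims(k):.3f}")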
  •  Show or tell? Exploring when (and why) teaching with language outperforms demonstration
    with Theodore R. Sumers, Mark K. Ho, and Robert D. Hawkins
    Cognition 232 (C): 105326. 2023.
  •  Overrepresentation of extreme events in decision making reflects rational use of cognitive resources
    with Falk Lieder and Ming Hsu
    Psychological Review 125 (1): 1-32. 2018.
  •  Overcoming Individual Limitations Through Distributed Computation: Rational Information Accumulation in Multigenerational Populations
    with Mathew D. Hardy, Peaks M. Krafft, and Bill Thompson
    Topics in Cognitive Science 14 (3): 550-573. 2022.
  •  Optimal policies for free recall
    with Qiong Zhang and Kenneth A. Norman
    Psychological Review 130 (4): 1104-1124. 2023.
  •  From partners to populations: A hierarchical Bayesian account of coordination and convention
    with Robert D. Hawkins, Michael Franke, Michael C. Frank, Adele E. Goldberg, Kenny Smith, and Noah D. Goodman
    Psychological Review 130 (4): 977-1016. 2023.
  •  A rational reinterpretation of dual-process theories
    with Smitha Milli and Falk Lieder
    Cognition 217 (C): 104881. 2021.
  •  A rational model of people’s inferences about others’ preferences based on response times
    with Vael Gates, Frederick Callaway, and Mark K. Ho
    Cognition 217 (C): 104885. 2021.
  •  Language research has come to rely heavily on large‐scale, web‐based datasets. These datasets can present significant methodological challenges, requiring researchers to make a number of decisions about how they are collected, represented, and analyzed. These decisions often concern long‐standing challenges in corpus‐based language research, including determining what counts as a word, deciding which words should be analyzed, and matching sets of words across languages. We illustrate these chall…
  •  Intuitions about magic track the development of intuitive physics
    with Casey Lewry, Kaley Curtis, Nadya Vasilyeva, and Fei Xu
    Cognition 214 (C): 104762. 2021.
  •  Bayesian collective learning emerges from heuristic social learning
    with P. M. Krafft, Erez Shmueli, Joshua B. Tenenbaum, and Alex “Sandy” Pentland
    Cognition 212 (C): 104469. 2021.
  •  Evaluating models of robust word recognition with serial reproduction
    with Stephan C. Meylan and Sathvik Nair
    Cognition 210 (C): 104553. 2021.
    Spoken communication occurs in a “noisy channel” characterized by high levels of environmental noise, variability within and between speakers, and lexical and syntactic ambiguity. Given these properties of the received linguistic input, robust spoken word recognition—and language processing more generally—relies heavily on listeners' prior knowledge to evaluate whether candidate interpretations of that input are more or less likely. Here we compare several broad-coverage probabilistic generative…
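    A minimal noisy-channel sketch of the prior-times-likelihood computation the abstract refers to, with a hypothetical three-word lexicon and a crude edit-distance noise model rather than the broad-coverage generative models compared in the paper:

        import numpy as np

        lexicon = {"cat": 0.5, "cap": 0.3, "cut": 0.2}   # toy priors, e.g. from word frequencies

        def edit_distance(a, b):
            # Standard Levenshtein distance via dynamic programming.
            dp = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
            dp[:, 0] = np.arange(len(a) + 1)
            dp[0, :] = np.arange(len(b) + 1)
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    cost = 0 if a[i - 1] == b[j - 1] else 1
                    dp[i, j] = min(dp[i - 1, j] + 1, dp[i, j - 1] + 1, dp[i - 1, j - 1] + cost)
            return dp[len(a), len(b)]

        def posterior_over_words(percept, noise_rate=0.3):
            """P(word | percept) proportional to P(word) * P(percept | word)."""
            scores = {w: p * noise_rate ** edit_distance(percept, w) for w, p in lexicon.items()}
            total = sum(scores.values())
            return {w: round(s / total, 3) for w, s in scores.items()}

        print(posterior_over_words("cap"))   # the prior pulls interpretation toward frequent words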
  •  Assessing Mathematics Misunderstandings via Bayesian Inverse Planning
    with Anna N. Rafferty and Rachel A. Jansen
    Cognitive Science 44 (10). 2020.
    Online educational technologies offer opportunities for providing individualized feedback and detailed profiles of students' skills. Yet many technologies for mathematics education assess students based only on the correctness of either their final answers or responses to individual steps. In contrast, examining the choices students make for how to solve the equation and the ways in which they might answer incorrectly offers the opportunity to obtain a more nuanced perspective of their algebra s…
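    A heavily simplified sketch of inferring which solution procedure a student is using from their answers; the "sign-error bug", slip model, and problems below are hypothetical stand-ins for the inverse planning model in the paper:

        # Each problem: (equation, answer under the correct procedure,
        # answer under a hypothetical "adds instead of subtracts" bug).
        problems = [("x + 3 = 7", 4, 10), ("x + 5 = 9", 4, 14), ("x + 2 = 6", 4, 8)]
        student_answers = [10, 14, 8]

        def likelihood(observed, predicted, slip=0.1, n_alternatives=20):
            # With probability 1 - slip the procedure yields its predicted answer;
            # otherwise the student slips to one of several other responses.
            return (1 - slip) if observed == predicted else slip / n_alternatives

        posterior = {"correct procedure": 0.5, "sign-error bug": 0.5}   # prior over procedures
        for (_, correct_pred, bug_pred), observed in zip(problems, student_answers):
            posterior["correct procedure"] *= likelihood(observed, correct_pred)
            posterior["sign-error bug"] *= likelihood(observed, bug_pred)
        total = sum(posterior.values())
        print({h: round(p / total, 4) for h, p in posterior.items()})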
  •  Infant-directed speech is consistent with teaching
    with Baxter S. Eaves, Naomi H. Feldman, and Patrick Shafto
    Psychological Review 123 (6): 758-771. 2016.
  •  Parallelograms revisited: Exploring the limitations of vector space models for simple analogies
    with Joshua C. Peterson and Dawn Chen
    Cognition 205 (C): 104440. 2020.
  •  Reconciling novelty and complexity through a rational analysis of curiosity
    with Rachit Dubey
    Psychological Review 127 (3): 455-476. 2020.
  •  Learning How to Generalize
    with Joseph L. Austerweil and Sophia Sanborn
    Cognitive Science 43 (8). 2019.
    Generalization is a fundamental problem solved by every cognitive system in essentially every domain. Although it is known that how people generalize varies in complex ways depending on the context or domain, it is an open question how people learn the appropriate way to generalize for a new context. To understand this capability, we cast the problem of learning how to generalize as a problem of learning the appropriate hypothesis space for generalization. We propose a normative mathematical fra…
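    One way to make "learning the appropriate hypothesis space" concrete is to compare candidate hypothesis spaces by their marginal likelihood; a toy sketch with made-up stimuli and two candidate spaces (the framework in the paper is more general):

        stimuli = list(range(1, 11))

        # Two candidate hypothesis spaces for which stimuli share a novel property:
        intervals = [set(range(a, b + 1)) for a in stimuli for b in stimuli if a <= b]
        parity = [{x for x in stimuli if x % 2 == 0}, {x for x in stimuli if x % 2 == 1}]
        spaces = {"intervals": intervals, "parity": parity}

        def marginal_likelihood(examples, hypotheses):
            # Size principle: each consistent hypothesis generates examples uniformly at random.
            total = 0.0
            for h in hypotheses:
                if examples <= h:
                    total += (1 / len(hypotheses)) * (1 / len(h)) ** len(examples)
            return total

        examples = {2, 4}
        scores = {name: marginal_likelihood(examples, hs) for name, hs in spaces.items()}
        z = sum(scores.values())
        print({name: round(s / z, 3) for name, s in scores.items()})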
  •  Using Category Structures to Test Iterated Learning as a Method for Identifying Inductive Biases
    with Brian R. Christian and Michael L. Kalish
    Cognitive Science 32 (1): 68-107. 2008.
    Many of the problems studied in cognitive science are inductive problems, requiring people to evaluate hypotheses in the light of data. The key to solving these problems successfully is having the right inductive biases—assumptions about the world that make it possible to choose between hypotheses that are equally consistent with the observed data. This article explores a novel experimental method for identifying the biases that guide human inductive inferences. The idea behind this method is si…
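    The method relies on the result that a chain of Bayesian learners, each learning from data generated by the previous one, converges to the shared prior; a minimal simulation of that convergence with a toy hypothesis space (not the category structures used in the article):

        import numpy as np

        rng = np.random.default_rng(1)

        # Three hypotheses about a binary feature, each a probability that the feature occurs.
        theta = np.array([0.1, 0.5, 0.9])
        prior = np.array([0.2, 0.5, 0.3])   # the inductive bias we hope to recover

        def transmit(h, n_obs=5):
            """One generation: generate data from hypothesis h, then sample the next learner's hypothesis."""
            data = rng.random(n_obs) < theta[h]
            k = data.sum()
            likelihood = theta ** k * (1 - theta) ** (n_obs - k)
            posterior = prior * likelihood
            posterior /= posterior.sum()
            return rng.choice(len(theta), p=posterior)

        h = 0
        visits = np.zeros(len(theta))
        for _ in range(5000):
            h = transmit(h)
            visits[h] += 1
        print("chain occupancy:", np.round(visits / visits.sum(), 3))
        print("prior:          ", prior)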
  •  Evaluating (and Improving) the Correspondence Between Deep Neural Networks and Human Representations
    with Joshua C. Peterson and Joshua T. Abbott
    Cognitive Science 42 (8): 2648-2669. 2018.
    Decades of psychological research have been aimed at modeling how people learn features and categories. The empirical validation of these theories is often based on artificial stimuli with simple representations. Recently, deep neural networks have reached or surpassed human accuracy on tasks such as identifying objects in natural images. These networks learn representations of real‐world stimuli that can potentially be leveraged to capture psychological representations. We find that state‐of‐th…
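    A sketch of the comparison-and-reweighting idea in the abstract, on synthetic arrays: learn one weight per feature dimension so that inner-product similarities better match judged similarities (in-sample only; real CNN features and human similarity judgments would replace the random data):

        import numpy as np

        rng = np.random.default_rng(2)
        n_items, n_features = 30, 50
        features = rng.normal(size=(n_items, n_features))    # stand-in for CNN representations
        human_sim = rng.normal(size=(n_items, n_items))       # stand-in for judged similarities
        human_sim = (human_sim + human_sim.T) / 2

        pairs = np.triu_indices(n_items, k=1)                 # unique item pairs

        # Baseline: similarity as the inner product of raw features.
        raw_sim = features @ features.T
        baseline_r = np.corrcoef(raw_sim[pairs], human_sim[pairs])[0, 1]

        # Reweighted: sim(i, j) = sum_k w_k * f_ik * f_jk, with w fit by least squares.
        pairwise_products = features[pairs[0]] * features[pairs[1]]
        weights, *_ = np.linalg.lstsq(pairwise_products, human_sim[pairs], rcond=None)
        fitted_r = np.corrcoef(pairwise_products @ weights, human_sim[pairs])[0, 1]

        print(f"raw feature correlation:   {baseline_r:.3f}")
        print(f"reweighted (in-sample):    {fitted_r:.3f}")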
  •  Inferring mass in complex scenes by mental simulation
    with Jessica B. Hamrick, Peter W. Battaglia, and Joshua B. Tenenbaum
    Cognition 157 (C): 61-76. 2016.
  •  Strategy selection as rational metareasoning
    with Falk Lieder
    Psychological Review 124 (6): 762-794. 2017.