•  203
    Seeking Confirmation Is Rational for Deterministic Hypotheses
    with Joseph L. Austerweil
    Cognitive Science 35 (3): 499-526. 2011.
    The tendency to test outcomes that are predicted by our current theory (the confirmation bias) is one of the best-known biases of human decision making. We prove that the confirmation bias is an optimal strategy for testing hypotheses when those hypotheses are deterministic, each making a single prediction about the next event in a sequence. Our proof applies for two normative standards commonly used for evaluating hypothesis testing: maximizing expected information gain and maximizing the proba…
  •  126
    People are adept at inferring novel causal relations, even from only a few observations. Prior knowledge about the probability of encountering causal relations of various types and the nature of the mechanisms relating causes and effects plays a crucial role in these inferences. We test a formal account of how this knowledge can be used and acquired, based on analyzing causal induction as Bayesian inference. Five studies explored the predictions of this account with adults and 4-year-olds, using…
  •  115
    The Effects of Cultural Transmission Are Modulated by the Amount of Information Transmitted
    with Stephan Lewandowsky and Michael L. Kalish
    Cognitive Science 37 (5): 953-967. 2013.
    Information changes as it is passed from person to person, with this process of cultural transmission allowing the minds of individuals to shape the information that they transmit. We present mathematical models of cultural transmission which predict that the amount of information passed from person to person should affect the rate at which that information changes. We tested this prediction using a function-learning task, in which people learn a functional relationship between two variables by …
  •  114
    Generalization, similarity, and Bayesian inference
    Behavioral and Brain Sciences 24 (4): 629-640. 2001.
    Shepard has argued that a universal law should govern generalization across different domains of perception and cognition, as well as across organisms from different species or even different planets. Starting with some basic assumptions about natural kinds, he derived an exponential decay function as the form of the universal generalization gradient, which accords strikingly well with a wide range of empirical data. However, his original formulation applied only to the ideal case of generalizat…
  •  97
    Rational Use of Cognitive Resources: Levels of Analysis Between the Computational and the Algorithmic
    with Falk Lieder and Noah D. Goodman
    Topics in Cognitive Science 7 (2): 217-229. 2015.
    Marr's levels of analysis—computational, algorithmic, and implementation—have served cognitive science well over the last 30 years. But the recent increase in the popularity of the computational level raises a new challenge: How do we begin to relate models at different levels of analysis? We propose that it is possible to define levels of analysis that lie between the computational and the algorithmic, providing a way to build a bridge between computational- and algorithmic-level models. The ke…
  •  93
    Language Evolution by Iterated Learning With Bayesian Agents
    with Michael L. Kalish
    Cognitive Science 31 (3): 441-480. 2007.
    Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute a posterior distribution over languages by combining a prior (representing their inductive biases) with the evidence provided by linguistic data. We sh…
  •  92
    One and Done? Optimal Decisions From Very Few Samples
    with Edward Vul, Noah Goodman, and Joshua B. Tenenbaum
    Cognitive Science 38 (4): 599-637. 2014.
    In many learning or inference tasks human behavior approximates that of a Bayesian ideal observer, suggesting that, at some level, cognition can be described as Bayesian inference. However, a number of findings have highlighted an intriguing mismatch between human behavior and standard assumptions about optimality: People often appear to make decisions based on just one or a few samples from the appropriate posterior probability distribution, rather than using the full distribution. Although sam…
  •  89
    Modeling human performance in statistical word segmentation
    with Michael C. Frank, Sharon Goldwater, and Joshua B. Tenenbaum
    Cognition 117 (2): 107-125. 2010.
  •  88
    A Bayesian framework for word segmentation: Exploring the effects of context
    with Sharon Goldwater and Mark Johnson
    Cognition 112 (1): 21-54. 2009.
  •  81
    The imaginary fundamentalists: The unshocking truth about Bayesian cognitive science
    with Nick Chater, Noah Goodman, Charles Kemp, Mike Oaksford, and Joshua B. Tenenbaum
    Behavioral and Brain Sciences 34 (4): 194-196. 2011.
    If Bayesian Fundamentalism existed, Jones & Love's (J&L's) arguments would provide a necessary corrective. But it does not. Bayesian cognitive science is deeply concerned with characterizing algorithms and representations, and, ultimately, implementations in neural circuits; it pays close attention to environmental structure and the constraints of behavioral data, when available; and it rigorously compares multiple models, both within and across papers. J&L's recommendation of Bayesian Enlighten…
  •  78
    Evaluating (and Improving) the Correspondence Between Deep Neural Networks and Human Representations
    with Joshua C. Peterson and Joshua T. Abbott
    Cognitive Science 42 (8): 2648-2669. 2018.
    Decades of psychological research have been aimed at modeling how people learn features and categories. The empirical validation of these theories is often based on artificial stimuli with simple representations. Recently, deep neural networks have reached or surpassed human accuracy on tasks such as identifying objects in natural images. These networks learn representations of real-world stimuli that can potentially be leveraged to capture psychological representations. We find that state-of-th…
  •  65
    Optimal metacognitive control of memory recall
    with Frederick Callaway, Kenneth A. Norman, and Qiong Zhang
    Psychological Review 131 (3): 781-811. 2024.
  •  63
    Testing the Efficiency of Markov Chain Monte Carlo With People Using Facial Affect Categories
    with Jay B. Martin and Adam N. Sanborn
    Cognitive Science 36 (1): 150-162. 2012.
    Exploring how people represent natural categories is a key step toward developing a better understanding of how people learn, form memories, and make decisions. Much research on categorization has focused on artificial categories that are created in the laboratory, since studying natural categories defined on high-dimensional stimuli such as images is methodologically challenging. Recent work has produced methods for identifying these representations from observed behavior, such as reverse corre…
  •  58
    Word-level information influences phonetic learning in adults and infants
    with Naomi H. Feldman, Emily B. Myers, Katherine S. White, and James L. Morgan
    Cognition 127 (3): 427-438. 2013.
  •  58
    Rational variability in children’s causal inferences: The Sampling Hypothesis
    with Stephanie Denison, Elizabeth Bonawitz, and Alison Gopnik
    Cognition 126 (2): 285-300. 2013.
  •  54
    Children’s imitation of causal action sequences is influenced by statistical and pedagogical evidence
    with Daphna Buchsbaum, Alison Gopnik, and Patrick Shafto
    Cognition 120 (3): 331-340. 2011.
  •  54
    Rational approximations to rational models: Alternative algorithms for category learning
    with Adam N. Sanborn and Daniel J. Navarro
    Psychological Review 117 (4): 1144-1167. 2010.
  •  53
    Determining the knowledge that guides human judgments is fundamental to understanding how people reason, make decisions, and form predictions. We use an experimental procedure called "iterated learning," in which the responses that people give on one trial are used to generate the data they see on the next, to pinpoint the knowledge that informs people's predictions about everyday events (e.g., predicting the total box office gross of a movie from its current take). In particular, we use this …
  •  52
    Intuitive theories as grammars for causal inference
    with Joshua B. Tenenbaum and Sourabh Niyogi
    In Alison Gopnik & Laura Schulz (eds.), Causal learning: psychology, philosophy, and computation, Oxford University Press. pp. 301-322. 2007.
  •  52
    When Absence of Evidence Is Evidence of Absence: Rational Inferences From Absent Data
    with Anne S. Hsu, Andy Horng, and Nick Chater
    Cognitive Science 41 (S5): 1155-1167. 2017.
    Identifying patterns in the world requires noticing not only unusual occurrences, but also unusual absences. We examined how people learn from absences, manipulating the extent to which an absence is expected. People can make two types of inferences from the absence of an event: either the event is possible but has not yet occurred, or the event never occurs. A rational analysis using Bayesian inference predicts that inferences from absent data should depend on how much the absence is expected t…
  •  46
    From mere coincidences to meaningful discoveries
    Cognition 103 (2): 180-226. 2007.
  •  46
    Two proposals for causal grammars
    In Alison Gopnik & Laura Schulz (eds.), Causal learning: psychology, philosophy, and computation, Oxford University Press. pp. 323-345. 2007.
  •  45
    The influence of categories on perception: Explaining the perceptual magnet effect as optimal statistical inference
    with Naomi H. Feldman and James L. Morgan
    Psychological Review 116 (4): 752-782. 2009.