•
    Reinforcement learning and artificial agency
    Mind and Language 39 (1): 22-38. 2024.
    There is an apparent connection between reinforcement learning and agency. Artificial entities controlled by reinforcement learning algorithms are standardly referred to as agents, and the mainstream view in the psychology and neuroscience of agency is that humans and other animals are reinforcement learners. This article examines this connection, focusing on artificial reinforcement learning systems and assuming that there are various forms of agency. Artificial reinforcement learning systems s…
  •
    Sharing Our Concepts with Machines
    Erkenntnis 88 (7): 3079-3095. 2021.
    As AI systems become increasingly competent language users, it is an apt moment to consider what it would take for machines to understand human languages. This paper considers whether either language models such as GPT-3 or chatbots might be able to understand language, focusing on the question of whether they could possess the relevant concepts. A significant obstacle is that systems of both kinds interact with the world only through text, and thus seem ill-suited to understanding utterances co…
  •
    Modern AI systems have shown the capacity to produce remarkably fluent language, prompting debates both about their semantic understanding and, less prominently, about whether they can perform speech acts. This paper addresses the latter question, focusing on assertion. We argue that to be capable of assertion, an entity must meet two requirements: it must produce outputs with descriptive functions, and it must be capable of being sanctioned by agents with which it interacts. The second requirem…
  •
    Machine Learning, Functions and Goals
    Croatian Journal of Philosophy 22 (66): 351-370. 2022.
    Machine learning researchers distinguish between reinforcement learning and supervised learning and refer to reinforcement learning systems as “agents”. This paper vindicates the claim that systems trained by reinforcement learning are agents while those trained by supervised learning are not. Systems of both kinds satisfy Dretske’s criteria for agency, because they both learn to produce outputs selectively in response to inputs. However, reinforcement learning is sensitive to the instrumental v…
  •
    Affective Experience and Evidence for Animal Consciousness
    Philosophical Topics 48 (1): 109-127. 2020.
    Affective experience in nonhuman animals is of great interest for both theoretical and practical reasons. This paper highlights research by the psychologists Anthony Dickinson and Bernard Balleine which provides particularly good evidence of conscious affective experience in rats. This evidence is compelling because it implicates a sophisticated system for goal-directed action selection, and demonstrates a contrast between apparently conscious and unconscious evaluative representations with simi…
  •
    Cognitive Models Are Distinguished by Content, Not Format
    Philosophy of Science 88 (1): 83-102. 2021.
    Cognitive scientists often describe the mind as constructing and using models of aspects of the environment, but it is not obvious what makes something a model as opposed to a mere representation…
  •
    Directive Content
    Pacific Philosophical Quarterly 102 (1): 2-26. 2020.
    Representations may have descriptive content, directive content, or both, but little explicit attention has been given to the problem of distinguishing representations of these three kinds. We do not know, for instance, what determines whether a given representation is a directive instructing its consumer to perform some action or has descriptive content to the effect that the action in question has a certain value. This paper considers what it takes for a representation to have directive conten…
  •
    Representation and the active consumer
    Synthese 197 (10): 4533-4550. 2020.
    One of the central tasks for naturalistic theories of representation is to say what it takes for something to be a representation, and some leading theories have been criticised for being too liberal. Prominent discussions of this problem have proposed a producer-oriented solution: it is argued that representations must be produced by systems employing perceptual constancy mechanisms. However, representations may be produced by simple transducers if they are consumed in the right way. It is char…
  •
    Why Hunger is not a Desire
    Review of Philosophy and Psychology 8 (3): 617-635. 2017.
    This paper presents an account of the nature of desire, informed by psychology and neuroscience, which entails that hunger is not a desire. The account is contrasted with Schroeder’s well-known empirically-informed theory of desire. It is argued that one significant virtue of the present account, in comparison with Schroeder’s theory, is that it draws a sharp distinction between desires and basic drives, such as the drive for food. One reason to draw this distinction is that experiments on incen…