  •
    Polyadic dynamic logics for HPSG parsing
    with Martin Lange
    Journal of Logic, Language and Information 18 (2): 159-198. 2009.
    Head-driven phrase structure grammar (HPSG) is one of the most prominent theories employed in deep parsing of natural language. Many linguistic theories are arguably best formalized in extensions of modal or dynamic logic (Keller, Feature logics, infinitary descriptions and grammar, 1993; Kracht, Linguistics Philos 18:401–458, 1995; Moss and Tiede, In: Blackburn, van Benthem, and Wolter (eds.) Handbook of modal logic, 2006), and HPSG seems to be no exception. Adequate extensions of dynamic logi…
  •
    Understanding models understanding language
    Synthese 200 (6): 1-16. 2022.
    Landgrebe and Smith (2021: 2061–2081) present an unflattering diagnosis of recent advances in what they call language-centric artificial intelligence—perhaps more widely known as natural language processing: the models that are currently employed do not have sufficient expressivity, will not generalize, and are fundamentally unable to induce linguistic semantics, they say. The diagnosis is mainly derived from an analysis of the widely used Transformer architecture. Here I address a number of mis…
  •
    Most, if not all, philosophers agree that computers cannot learn what words refer to from raw text alone. While many attacked Searle’s Chinese Room thought experiment, no one seemed to question this most basic assumption. For how can computers learn something that is not in the data? Emily Bender and Alexander Koller (2020) recently presented a related thought experiment—the so-called Octopus thought experiment, which replaces the rule-based interlocutor of Searle’s thought experiment with a …
  •
    On Hedden's proof that machine learning fairness metrics are flawed
    Inquiry: An Interdisciplinary Journal of Philosophy. forthcoming.
    1. Fairness is about the just distribution of society's resources, and in ML, the main resource being distributed is model performance, e.g. the translation quality produced by machine translation...
  •
    On the Opacity of Deep Neural Networks
    Canadian Journal of Philosophy 1-16. forthcoming.
    Deep neural networks are said to be opaque, impeding the development of safe and trustworthy artificial intelligence, but where this opacity stems from is less clear. What are the sufficient properties for neural network opacity? Here, I discuss five common properties of deep neural networks and two different kinds of opacity. Which of these properties are sufficient for what type of opacity? I show how each kind of opacity stems from only one of these five properties, and then discuss to what e…
  •
    Compound constructions: A reply to Bundgaard et al.
    Semiotica 2008 (169): 163-169. 2008.
  •
    Identity Theory and Falsifiability
    Acta Analytica 1-12. forthcoming.
    I identify a class of arguments against multiple realization (MR): Book of Sand arguments. The arguments are in their general form successful under reasonably uncontroversial assumptions, but this, in turn, turns the tables on identity theory: if arguments from MR can always be refuted by Book of Sand arguments, is identity theory falsifiable? In the absence of operational demarcation criteria, it is not. I suggest a parameterized formal demarcation principle for brain state/process types a…