- On the Opacity of Deep Neural Networks. Canadian Journal of Philosophy, 1-16, forthcoming.
  Deep neural networks are said to be opaque, impeding the development of safe and trustworthy artificial intelligence, but where this opacity stems from is less clear. What are the sufficient properties for neural network opacity? Here, I discuss five common properties of deep neural networks and two different kinds of opacity. Which of these properties are sufficient for what type of opacity? I show how each kind of opacity stems from only one of these five properties, and then discuss to what e…
- Identity Theory and Falsifiability. Acta Analytica, 1-12, forthcoming.
  I identify a class of arguments against multiple realization (MR): Book-of-Sand arguments. The arguments are, in their general form, successful under reasonably uncontroversial assumptions, but this, on the other hand, turns the tables on identity theory: if arguments from MR can always be refuted by Book-of-Sand arguments, is identity theory falsifiable? In the absence of operational demarcation criteria, it is not. I suggest a parameterized formal demarcation principle for brain state/process types a…
- On Hedden's proof that machine learning fairness metrics are flawed. Inquiry: An Interdisciplinary Journal of Philosophy, forthcoming.
  Fairness is about the just distribution of society's resources, and in ML, the main resource being distributed is model performance, e.g. the translation quality produced by machine translation…
- Grounding the Vector Space of an Octopus: Word Meaning from Raw Text. Minds and Machines 33 (1): 33-54, 2023.
  Most, if not all, philosophers agree that computers cannot learn what words refer to from raw text alone. While many attacked Searle's Chinese Room thought experiment, no one seemed to question this most basic assumption. For how can computers learn something that is not in the data? Emily Bender and Alexander Koller (2020) recently presented a related thought experiment, the so-called Octopus thought experiment, which replaces the rule-based interlocutor of Searle's thought experiment with a …
- Understanding models understanding language. Synthese 200 (6): 1-16, 2022.
  Landgrebe and Smith (2021: 2061–2081) present an unflattering diagnosis of recent advances in what they call language-centric artificial intelligence, perhaps more widely known as natural language processing: the models that are currently employed do not have sufficient expressivity, will not generalize, and are fundamentally unable to induce linguistic semantics, they say. The diagnosis is mainly derived from an analysis of the widely used Transformer architecture. Here I address a number of mis…
- Keith Stenning and Michiel van Lambalgen, Human Reasoning and Cognitive Science. Studia Logica 97 (2): 317-318, 2011.
- Polyadic Dynamic Logics for HPSG Parsing. Journal of Logic, Language and Information 18 (2): 159-198, 2009.
  Head-driven phrase structure grammar (HPSG) is one of the most prominent theories employed in deep parsing of natural language. Many linguistic theories are arguably best formalized in extensions of modal or dynamic logic (Keller, Feature Logics, Infinitary Descriptions and Grammar, 1993; Kracht, Linguistics and Philosophy 18:401–458, 1995; Moss and Tiede, in: Blackburn, van Benthem, and Wolter (eds.), Handbook of Modal Logic, 2006), and HPSG seems to be no exception. Adequate extensions of dynamic logi…
- Dov M. Gabbay, Sergei S. Goncharov and Michael Zakharyaschev (eds.), Mathematical Problems from Applied Logic I. Studia Logica 87 (2-3): 363-367, 2007.
- Patrick Blackburn and Johan Bos, Representation and Inference for Natural Language. Studia Logica 85 (3): 413-418, 2007.
-
University of Copenhagen, Department of Computer Science
Department of Media, Cognition and Communication, Professor
Areas of Specialization
General Philosophy of Science
Philosophy of Cognitive Science
Philosophy of Computing and Information
Areas of Interest