•
    Almost all words are the names of categories. We can learn most of our words (and hence our categories) from dictionary definitions, but not all of them. Some have to be learned from direct experience. To understand a word from its definition we need to already understand the words used in the definition. This is the “Symbol Grounding Problem” [1]. How many words (and which ones) do we need to ground directly in sensorimotor experience in order to be able to learn all other words via definition …
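The question above (which words must be grounded directly so that all others can be learned from definitions) can be illustrated as a reachability problem on a dictionary graph. The sketch below is my own illustration, not from the paper; the toy dictionary and the helper `learnable` are assumptions for demonstration. A word is learnable if it is grounded directly, or if every word in its definition is already learnable; circular definitions can never be escaped by definition alone.

```python
# Sketch (illustrative, not from the paper): a dictionary maps each word to
# the words used in its definition. Starting from a directly grounded set,
# repeatedly mark as learnable any word whose whole definition is learnable.

def learnable(dictionary, grounded):
    """Return the set of words reachable from a directly grounded set."""
    known = set(grounded)
    changed = True
    while changed:
        changed = False
        for word, defn in dictionary.items():
            if word not in known and all(w in known for w in defn):
                known.add(word)
                changed = True
    return known

# toy dictionary: "zebra" is defined via "horse" and "stripes", etc.
toy = {
    "zebra": ["horse", "stripes"],
    "horse": ["animal"],
    "stripes": ["pattern"],
    "animal": ["animal"],    # circular: can only be grounded directly
    "pattern": ["pattern"],  # likewise
}
print(learnable(toy, {"animal", "pattern"}))
```

With {"animal", "pattern"} grounded, every word in the toy dictionary becomes learnable; with "pattern" omitted, "stripes" and "zebra" remain out of reach, which is the point of the grounding problem in miniature.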
•
    This is a paperback reissue of a 1988 special issue of Cognition, dated but still of interest. The book consists of three chapters, each making one major negative point about connectionism. Fodor & Pylyshyn (F&P) argue that connectionist networks (henceforth 'nets') are not good models for cognition because they lack 'systematicity', Pinker & Prince (P&P) argue that nets are not good substitutes for rule-based models of linguistic ability, and Lachter & Bever (L&B) argue that nets can only model…
•
    Neoconstructivism: A unifying constraint for the cognitive sciences
    In Thomas W. Simon & Robert J. Scholes (eds.), [Book Chapter], Lawrence Erlbaum. pp. 1-11. 1982.
    Behavioral scientists studied behavior; cognitive scientists study what generates behavior. Cognitive science is hence theoretical behaviorism (or behaviorism is experimental cognitivism). Behavior is data for a cognitive theorist. What counts as a theory of behavior? In this paper, a methodological constraint on theory construction -- "neoconstructivism" -- will be proposed (by analogy with constructivism in mathematics): Cognitive theory must be computable; given an encoding of the input to a …
•
    Metaphor and Mental Duality
    In Language, Mind, And Brain, Hillsdale: Erlbaum. pp. 189-211. 1982.
    I am going to argue that, given certain premises, there are reasons, not only empirical but also logical, for expecting a certain division of labor in the processing of information by the human brain. This division of labor consists specifically of a functional bifurcation into what may be called, to a first approximation, "verbal" and "nonverbal" modes of information-processing. That this dichotomy is not quite satisfactory, however, will be one of the principal conclusions of this chap…
•
    "Symbol Grounding" is beginning to mean too many things to too many people. My own construal has always been simple: Cognition cannot be just computation, because computation is just the systematically interpretable manipulation of meaningless symbols, whereas the meanings of my thoughts don't depend on their interpretability or interpretation by someone else. On pain of infinite regress, then, symbol meanings must be grounded in something other than just their interpretability if they are to be…
•
    Rational Disagreement in Peer Review (review)
    Science, Technology and Human Values 10 (3): 55-62. 1985.
•
    Explaining the mind by building machines with minds runs into the other-minds problem: How can we tell whether any body other than our own has a mind when the only way to know is by being the other body? In practice we all use some form of Turing Test: If it can do everything a body with a mind can do such that we can't tell them apart, we have no basis for doubting it has a mind. But what is "everything" a body with a mind can do? Turing's original "pen-pal" version (the TT) only tested linguis…
•
    Turing set the agenda for (what would eventually be called) the cognitive sciences. He said, essentially, that cognition is as cognition does (or, more accurately, as cognition is capable of doing): Explain the causal basis of cognitive capacity and you’ve explained cognition. Test your explanation by designing a machine that can do everything a normal human cognizer can do – and do it so veridically that human cognizers cannot tell its performance apart from a real human cognizer’s – and you re…
•
    AI is about a "robot" boy who is "programmed" to love his adoptive human mother but is discriminated against because he is just a robot. I put both "robot" and "programmed" in scare quotes, because these are the two things that should have been given more thought before making the movie. (Most of this critique also applies to the short story by Brian Aldiss that inspired the movie, but the buck stops with the film as made, and its maker.)
•
    The ethical case for Open Access (OA) (free online access) to research findings is especially salient when it is public health that is being compromised by needless access restrictions. But the ethical imperative for OA is far more general: It applies to all scientific and scholarly research findings published in peer-reviewed journals. And peer-to-peer access is far more important than direct public access. Most research is funded so as to be conducted and published, by researchers, in order to…
•
    Jerry Fodor argues that Darwin was wrong about "natural selection" because (1) it is only a tautology rather than a scientific law that can support counterfactuals ("If X had happened, Y would have happened") and because (2) only minds can select. Hence Darwin's analogy with "artificial selection" by animal breeders was misleading and evolutionary explanation is nothing but post-hoc historical narrative. I argue that Darwin was right on all counts. Until Darwin's "tautology," it had been believe…
•
    Minds, machines and Turing: The indistinguishability of indistinguishables
    Journal of Logic, Language and Information 9 (4): 425-445. 2000.
    Turing's celebrated 1950 paper proposes a very general methodological criterion for modelling mental function: total functional equivalence and indistinguishability. His criterion gives rise to a hierarchy of Turing Tests, from subtotal ("toy") fragments of our functions (t1), to total symbolic (pen-pal) function (T2 -- the standard Turing Test), to total external sensorimotor (robotic) function (T3), to total internal microfunction (T4), to total indistinguishability in every empirically discer…
•
    What language allows us to do is to "steal" categories quickly and effortlessly through hearsay instead of having to earn them the hard way, through risky and time-consuming sensorimotor "toil" (trial-and-error learning, guided by corrective feedback from the consequences of miscategorisation). To make such linguistic "theft" possible, however, some, at least, of the denoting symbols of language must first be grounded in categories that have been earned through sensorimotor toil (or else in cate…
•
    Suppose Boeing 747s grew on trees. They would first sprout as embryonic planes, the size of an acorn. Then they would grow until they reached full size, when they would plop off the trees, ready to fly. Suppose also that we knew how to feed and care for them, how to make minor repairs, and of course how to fly them. But let us suppose that all of this transpired at a very early stage in our scientific history, when we did not yet understand the physics or the engineering of flight: Hence the phe…
•
    1.1 The predominant approach to cognitive modeling is still what has come to be called "computationalism" (Dietrich 1990, Harnad 1990b), the hypothesis that cognition is computation. The more recent rival approach is "connectionism" (Hanson & Burr 1990, McClelland & Rumelhart 1986), the hypothesis that cognition is a dynamic pattern of connections and activations in a "neural net." Are computationalism and connectionism really deeply different from one another, and if so, should they compete for…
•
    Exorcizing the ghost of mental imagery
    Computational Intelligence 9 (4): 337-339. 1993.
    The problem seems apparent even in Glasgow's term “depict”, which is used by way of contrast with “describe”. Now “describe” refers relatively unproblematically to strings of symbols, such as those in this written sentence, that are systematically interpretable as propositions describing objects, events, or states of affairs. But what does “depict” mean? In the case of a picture -- whether a photo or a diagram -- it is clear what depict means. A picture is an object (I will argue below t…
•
    The grounding model proposed here is simple to recapitulate. Analog sensory projections are the inputs to neural networks that must learn to connect some of those projections with certain symbols (the names of their categories) and other projections with other symbols (the names of other categories that could be confused with one another), by finding and using the invariants that represent them in a way that supports successful categorization…
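The grounding scheme recapitulated in the entry above can be sketched, very loosely, as a toy supervised learner. This is my own illustration, not the model from the paper: the category names "edible"/"inedible", the single invariant feature (the sign of the first coordinate), and the perceptron-style rule are all illustrative assumptions standing in for "neural nets finding invariants under corrective feedback".

```python
# Toy sketch (not the paper's model): analog "sensory projections" are 2-D
# feature vectors; a perceptron-style learner is shaped by corrective
# feedback until it connects each projection with a category symbol.

def train(samples, labels, epochs=20, lr=0.1):
    """Learn weights connecting projections to a binary category."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x0, x1), y in zip(samples, labels):
            pred = 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0
            err = y - pred                      # corrective feedback
            w[0] += lr * err * x0
            w[1] += lr * err * x1
            b += lr * err
    return w, b

def name(x, w, b):
    """Ground a symbol: map an analog projection onto its category name."""
    return "edible" if w[0] * x[0] + w[1] * x[1] + b > 0 else "inedible"

# two clusters whose invariant is the sign of the first coordinate
data = [(-1.5, y / 2) for y in range(-2, 3)] + [(1.5, y / 2) for y in range(-2, 3)]
labels = [0] * 5 + [1] * 5
w, b = train(data, labels)
print(name((1.5, 0.0), w, b), name((-1.5, 0.0), w, b))
```

The learner never sees the names' meanings; it only finds the invariant that separates the projections, which is the sense in which the symbols get "connected" to the analog world.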
•
    According to "computationalism" (Newell, 1980; Pylyshyn 1984; Dietrich 1990), mental states are computational states, so if one wishes to build a mind, one is actually looking for the right program to run on a digital computer. A computer program is a semantically interpretable formal symbol system consisting of rules for manipulating symbols on the basis of their shapes, which are arbitrary in relation to what they can be systematically interpreted as meaning. According to computationalism, eve…
•
    Brian Rotman argues that (one) “mind” and (one) “god” are only conceivable, literally, because of (alphabetic) literacy, which allowed us to designate each of these ghosts as an incorporeal, speaker-independent “I” (or, in the case of infinity, a notional agent that goes on counting forever). I argue that to have a mind is to have the capacity to feel. No one can be sure which organisms feel, hence have minds, but it seems likely that one-celled organisms and plants do not, whereas animals do. S…
•
    Lost in the hermeneutic hall of mirrors
    Journal of Experimental and Theoretical Artificial Intelligence 2 321-27. 1990.
    Critique of Computationalism as merely projecting hermeneutics (i.e., meaning originating from the mind of an external interpreter) onto otherwise intrinsically meaningless symbols. Projecting an interpretation onto a symbol system results in its being reflected back, in a spuriously self-confirming way.
  • How/why the mind-body problem is hard
    Journal of Consciousness Studies 7 (4): 54-61. 2000.
    [opening paragraph]: [B]rain-imaging studies … demonstrate in ever more detail how specific kinds of mental activity are precisely correlated with specific patterns of brain activity. Mind/Brain correlations: We've known about them for decades, probably centuries. And that's still all we've got with brain imaging; and that's all we'll have even when we get the correspondence fine-tuned right down to the last mental ‘just noticeable difference’ and its corresponding molecule.
•
    Grounding symbols in the analog world with neural nets
    Think (misc) 2 (1): 12-78. 1993.
    Harnad's main argument can be roughly summarised as follows: due to Searle's Chinese Room argument, symbol systems by themselves are insufficient to exhibit cognition, because the symbols are not grounded in the real world, hence without meaning. However, a symbol system that is connected to the real world through transducers receiving sensory data, with neural nets translating these data into sensory categories, would not be subject to the Chinese Room argument. Harnad's article is not only the…
•
    What lies on the two sides of the linguistic divide is fairly clear: On one side, you have organisms buffeted about to varying degrees, depending on their degree of autonomy and plasticity, by the states of affairs in the world they live in. On the other side, you have organisms capable of describing and explaining the states of affairs in the world they live in. Language is what distinguishes one side from the other. How did we get here from there? In principle, one can tell a seamless story ab…
•
    Why, oh why do we keep conflating this question, which is about the uncertainty of sensory information, with the much more profound and pertinent one, which is about the functional explicability and causal role of feeling? "Kant: How is it possible for something even to be a thought? What are the conditions for the possibility of experience at all?" That's not the right question either. The right question is not even an epistemic one, about "thought" or "knowledge", but an "aesthesiogenic" …
•
    Hebb, D. O.: Father of Cognitive Psychobiology, 1904-1985
    Behavioral and Brain Sciences 8 (4): 765. 1985.
•
    Language and the game of life
    Behavioral and Brain Sciences 28 (4): 497-498. 2005.
    Steels & Belpaeme's (S&B's) simulations contain all the right components, but they are put together wrongly. Color categories are unrepresentative of categories in general and language is not merely naming. Language evolved because it provided a powerful new way to acquire categories (through instruction, rather than just the old way of other species, through trial-and-error experience). It did not evolve so that multiple agents looking at the same objects could let one another know which of the…
•
    Why and how we are not zombies
    Journal of Consciousness Studies 1 (2): 164-67. 1994.
    A robot that is functionally indistinguishable from us may or may not be a mindless Zombie. There will never be any way to know, yet its functional principles will be as close as we can ever get to explaining the mind.