•  682
    Human cognition is not an island unto itself. As a species, we are not Leibnizian Monads independently engaging in clear, Cartesian thinking. Our minds interact. That's surely why our species has language. And that interactivity probably constrains both what and how we think.
  •  619
    Libet, Gleason, Wright, & Pearl (1983) asked participants to report the moment at which they freely decided to initiate a pre-specified movement, based on the position of a red marker on a clock. Using event-related potentials (ERPs), Libet found that the subjective feeling of deciding to perform a voluntary action came after the onset of the motor “readiness potential” (RP). This counterintuitive conclusion poses a challenge for the philosophical notion of free will. Faced with these findings, …
  •  590
    Distributed processes, distributed cognizers and collaborative cognition
    [Journal (Paginated)] (in Press) 13 (3): 501-514. 2005.
    Cognition is thinking; it feels like something to think, and only those who can feel can think. There are also things that thinkers can do. We know neither how thinkers can think nor how they are able to do what they can do. We are waiting for cognitive science to discover how. Cognitive science does this by testing hypotheses about what processes can generate what doing (“know-how”). This is called the Turing Test. It cannot test whether a process can generate feeling, hence thinking -- only whether…
  •  491
    The symbol grounding problem
    Physica D 42: 335-346. 1990.
    There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the symbol grounding problem: How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their shapes, be grounded in anything…
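    The problem can be made concrete with a toy example. Below is a minimal sketch (hypothetical rules and token names, not from the paper): a formal system whose rules operate only on token shapes. The system runs identically whether or not anyone supplies an interpretation; any meaning is parasitic on the mapping in the interpreter's head.

    ```python
    # Toy formal symbol system: rules consult only the "shapes" (spellings)
    # of tokens, never any meaning. Illustrative sketch, not the paper's code.

    RULES = {
        ("FOO", "BAR"): "BAZ",  # shape pair in -> shape out
        ("BAZ", "FOO"): "QUX",
    }

    def rewrite(a: str, b: str) -> str:
        """Apply a purely shape-based rule; no semantics is consulted."""
        return RULES.get((a, b), "UNDEFINED")

    print(rewrite("FOO", "BAR"))  # -> BAZ

    # An external reader may interpret FOO=2, BAR=3, BAZ=5 ("addition"), or
    # FOO=rain, BAR=cold, BAZ=snow -- the system behaves the same either way.
    # That interpretation-independence is exactly what "ungrounded" means here.
    ```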
  •  466
    Can a machine be conscious? How?
    Journal of Consciousness Studies 10 (4-5): 67-75. 2003.
    A "machine" is any causal physical system, hence we are machines, hence machines can be conscious. The question is: which kinds of machines can be conscious? Chances are that robots that can pass the Turing Test -- completely indistinguishable from us in their behavioral capacities -- can be conscious (i.e. feel), but we can never be sure (because of the "other-minds" problem). And we can never know HOW they have minds, because of the "mind/body" problem. We can only know how they pass the Turing…
  •  347
    Virtual symposium on virtual mind
    with Patrick Hayes, Donald Perlis, and Ned Block
    Minds and Machines 2 (3): 217-238. 1992.
    When certain formal symbol systems (e.g., computer programs) are implemented as dynamic physical symbol systems (e.g., when they are run on a computer) their activity can be interpreted at higher levels (e.g., binary code can be interpreted as LISP, LISP code can be interpreted as English, and English can be interpreted as a meaningful conversation). These higher levels of interpretability are called "virtual" systems. If such a virtual system is interpretable as if it had a mind, is such a "virtual…
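    A minimal sketch of such level-stacking (a hypothetical illustration, not the authors' example): one and the same underlying state can be read at a low level as mere numbers and at a higher, "virtual" level as an English message.

    ```python
    # One underlying state, several levels of systematic interpretation --
    # a toy illustration (not from the paper) of a "virtual" system.

    raw = [72, 73, 32, 66, 79, 66]    # level 0: just integers

    chars = [chr(b) for b in raw]     # level 1: the same state read as ASCII
    message = "".join(chars)          # level 2: read as an English string

    print(raw)      # [72, 73, 32, 66, 79, 66]
    print(message)  # "HI BOB" -- interpretable as (a fragment of) a conversation

    # Nothing about the underlying state changed between levels; only the
    # interpretation imposed on it did. The question in the abstract is
    # whether mentality could be such an interpretation-relative level.
    ```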
  •  346
    Why and how we are not zombies
    Journal of Consciousness Studies 1 (2): 164-167. 1994.
    A robot that is functionally indistinguishable from us may or may not be a mindless Zombie. There will never be any way to know, yet its functional principles will be as close as we can ever get to explaining the mind.
  •  343
    What's wrong and right about Searle's Chinese Room Argument?
    In Michael A. Bishop & John M. Preston (eds.), [Book Chapter] (in Press), Oxford University Press. 2001.
    Searle's Chinese Room Argument showed a fatal flaw in computationalism (the idea that mental states are just computational states) and helped usher in the era of situated robotics and symbol grounding (although Searle himself thought neuroscience was the only correct way to understand the mind).
  •  317
    Minds, machines and Turing: The indistinguishability of indistinguishables
    Journal of Logic, Language and Information 9 (4): 425-445. 2000.
    Turing's celebrated 1950 paper proposes a very general methodological criterion for modelling mental function: total functional equivalence and indistinguishability. His criterion gives rise to a hierarchy of Turing Tests, from subtotal ("toy") fragments of our functions (t1), to total symbolic (pen-pal) function (T2 -- the standard Turing Test), to total external sensorimotor (robotic) function (T3), to total internal microfunction (T4), to total indistinguishability in every empirically discernible…
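    The hierarchy is a strict ordering, summarized schematically below (the T5 label for the final, truncated level is my assumption; the level descriptions paraphrase the abstract).

    ```python
    from enum import IntEnum

    class TuringLevel(IntEnum):
        """Schematic summary of the abstract's hierarchy of Turing Tests."""
        t1 = 1  # subtotal "toy" fragments of our functional capacities
        T2 = 2  # total symbolic ("pen-pal") capacity: the standard Turing Test
        T3 = 3  # total external sensorimotor (robotic) capacity
        T4 = 4  # total internal microfunctional capacity
        T5 = 5  # indistinguishability in every empirically discernible respect

    # Each level subsumes the capacities of the one below it:
    assert TuringLevel.t1 < TuringLevel.T2 < TuringLevel.T3 < TuringLevel.T4
    ```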
  •  260
    Explaining the mind by building machines with minds runs into the other-minds problem: How can we tell whether any body other than our own has a mind when the only way to know is by being the other body? In practice we all use some form of Turing Test: If it can do everything a body with a mind can do such that we can't tell them apart, we have no basis for doubting it has a mind. But what is "everything" a body with a mind can do? Turing's original "pen-pal" version (the TT) only tested linguistic…
  •  248
    Distributed cognition: Cognizing, autonomy and the Turing test
    with Itiel E. Dror
    Pragmatics and Cognition 14 (2): 14. 2006.
    Some of the papers in this special issue distribute cognition between what is going on inside individual cognizers' heads and their outside worlds; others distribute cognition among different individual cognizers. Turing's criterion for cognition was individual, autonomous input/output capacity. It is not clear that distributed cognition could pass the Turing Test.
  •  245
    Minds, machines and Searle
    Journal of Experimental and Theoretical Artificial Intelligence 1 (4): 5-25. 1989.
    Searle's celebrated Chinese Room Argument has shaken the foundations of Artificial Intelligence. Many refutations have been attempted, but none seem convincing. This paper is an attempt to sort out explicitly the assumptions and the logical, methodological and empirical points of disagreement. Searle is shown to have underestimated some features of computer modeling, but the heart of the issue turns out to be an empirical question about the scope and limits of the purely symbolic (computational) model of the mind…
  •  242
    When in 1979 Zenon Pylyshyn, associate editor of Behavioral and Brain Sciences (BBS, a peer commentary journal which I edit) informed me that he had secured a paper by John Searle with the unprepossessing title of [XXXX], I cannot say that I was especially impressed; nor did a quick reading of the brief manuscript -- which seemed to be yet another tedious "Granny Objection"[1] about why/how we are not computers -- do anything to upgrade that impression.
  •  228
    Correlation vs. causality: How/why the mind-body problem is hard
    Journal of Consciousness Studies 7 (4): 54-61. 2000.
    The Mind/Body Problem is about causation not correlation. And its solution will require a mechanism in which the mental component somehow manages to play a causal role of its own, rather than just supervening superfluously on other, nonmental components that look, for all the world, as if they can do the full causal job perfectly well without it. Correlations confirm that M does indeed "supervene" on B, but causality is needed to show how/why M is not supererogatory; and that's the hard part.
  •  222
    SUMMARY: Universities (the universal research-providers) as well as research funders (public and private) are beginning to make it part of their mandates to ensure not only that researchers conduct and publish peer-reviewed research (“publish or perish”), but that they also make it available online, free for all. This is called Open Access (OA), and it maximizes the uptake, impact and progress of research by making it accessible to all potential users worldwide, not just those whose universities…
  •  201
    Apart from what (little) OpenAI may be concealing from us, we all know (roughly) how ChatGPT works (its huge text database, its statistics, its vector representations, and their huge number of parameters, its next-word training, and so on). But none of us can say (hand on heart) that we are not surprised by what ChatGPT has proved to be able to do with these resources. This has even driven some of us to conclude that ChatGPT actually understands. It is not true that it understands. But it is also…
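    The "next-word training" at issue is easy to state even in its humblest form. Below is a minimal sketch (a toy bigram counter with a made-up corpus, emphatically not OpenAI's architecture):

    ```python
    from collections import Counter, defaultdict

    # Toy "next-word" model: a bigram frequency table. A deliberately minimal
    # stand-in for the training objective (predict the next token), nothing more.

    corpus = "the cat sat on the mat and the cat slept".split()

    following = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        following[word][nxt] += 1     # count which word follows which

    def predict(word: str) -> str:
        """Return the continuation most often seen in training."""
        return following[word].most_common(1)[0][0]

    print(predict("the"))  # -> "cat" ("cat" followed "the" twice, "mat" once)

    # Scale this up -- vectors instead of counts, billions of parameters
    # instead of a dict -- and the abstract's question stands: does any
    # amount of next-word statistics amount to understanding?
    ```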
  •  189
    What language allows us to do is to "steal" categories quickly and effortlessly through hearsay instead of having to earn them the hard way, through risky and time-consuming sensorimotor "toil" (trial-and-error learning, guided by corrective feedback from the consequences of miscategorisation). To make such linguistic "theft" possible, however, some, at least, of the denoting symbols of language must first be grounded in categories that have been earned through sensorimotor toil (or else in categories…
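    A minimal sketch of the toil/theft contrast (hypothetical features and categories, not the paper's model): one category is "earned" from corrective feedback on instances, and a new category is then "stolen" by composing already-grounded symbols, with no further trials.

    ```python
    # Toil vs. theft: an illustrative toy, not the paper's simulations.

    # --- Sensorimotor "toil": earn categories from corrective feedback. ---
    # Instances are (has_stripes, is_big) feature pairs; each trial's label
    # is the corrective feedback.
    trials = [((0, 1), "horse"), ((1, 1), "tiger"),
              ((0, 1), "horse"), ((1, 0), "cat")]

    learned = {}
    for features, label in trials:      # trial-and-error with feedback
        learned[features] = label

    # --- Linguistic "theft": acquire a new category by hearsay alone. ---
    # "A zebra is a horse with stripes": composed from grounded symbols.
    def is_zebra(features) -> bool:
        has_stripes, is_big = features
        return bool(has_stripes) and learned.get((0, is_big)) == "horse"

    print(is_zebra((1, 1)))  # True -- category acquired without direct toil
    ```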
  •  184
    Computation is interpretable symbol manipulation. Symbols are objects that are manipulated on the basis of rules operating only on their shapes, which are arbitrary in relation to what they can be interpreted as meaning. Even if one accepts the Church/Turing Thesis that computation is unique, universal and very near omnipotent, not everything is a computer, because not everything can be given a systematic interpretation; and certainly everything can't be given every systematic interpretation. B…
  •  176
    The annotation game: On Turing (1950) on computing, machinery, and intelligence
    In Robert Epstein & Grace Peters (eds.), [Book Chapter] (in Press), Kluwer Academic Publishers. 2006.
    This quote/commented critique of Turing's classical paper suggests that Turing meant -- or should have meant -- the robotic version of the Turing Test (and not just the email version). Moreover, any dynamic system (that we design and understand) can be a candidate, not just a computational one. Turing also dismisses the other-minds problem and the mind/body problem too quickly. They are at the heart of both the problem he is addressing and the solution he is proposing.
  •  157
    Explaining the mind: Problems, problems
    The Sciences 41 (2): 36-42. 2001.
    The mind/body problem is the feeling/function problem: How and why do feeling systems feel? The problem is not just "hard" but insoluble. Fortunately, the "easy" problems of cognitive science are not insoluble. Five books are reviewed in this context.
  •  153
    Symbol grounding and the symbolic theft hypothesis
    with Angelo Cangelosi and Alberto Greco
    In A. Cangelosi & D. Parisi (eds.), Simulating the Evolution of Language, Springer Verlag. pp. 191--210. 2002.
    Scholars studying the origins and evolution of language are also interested in the general issue of the evolution of cognition. Language is not an isolated capability of the individual, but has intrinsic relationships with many other behavioral, cognitive, and social abilities. By understanding the mechanisms underlying the evolution of linguistic abilities, it is possible to understand the evolution of cognitive abilities. Cognitivism, one of the current approaches in psychology and cognitive science…
  •  150
    "in an academic generation a little overaddicted to "politesse," it may be worth saying that violent destruction is not necessarily worthless and futile. Even though it leaves doubt about the right road for London, it helps if someone rips up, however violently, a
  •  150
    The causal structure of cognition can be simulated but not implemented computationally, just as the causal structure of a furnace can be simulated but not implemented computationally. Heating is a dynamical property, not a computational one. A computational simulation of a furnace cannot heat a real house (only a simulated house). It lacks the essential causal property of a furnace. This is obvious with computational furnaces. The only thing that allows us even to imagine that it is otherwise in…
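    The furnace point is easy to make concrete. A minimal sketch (hypothetical class and numbers): a computational furnace updates a temperature variable, and the only thing that gets "warmer" is a value in memory.

    ```python
    # A "computational furnace": it simulates heating but implements none.
    # Illustrative toy; the physics is deliberately crude.

    class SimulatedFurnace:
        def __init__(self, room_temp_c: float = 10.0):
            self.room_temp_c = room_temp_c          # a number, not a room

        def burn(self, hours: float, degrees_per_hour: float = 2.0) -> None:
            """Advance the simulated temperature variable."""
            self.room_temp_c += hours * degrees_per_hour

    furnace = SimulatedFurnace()
    furnace.burn(hours=5)
    print(furnace.room_temp_c)  # 20.0 -- only the *simulated* house is warm

    # No joules were delivered anywhere: the simulation has the formal
    # structure of heating without its causal powers. The abstract's claim
    # is that the same gap holds between simulating and implementing cognition.
    ```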
  •  136
    "Symbol Grounding" is beginning to mean too many things to too many people. My own construal has always been simple: Cognition cannot be just computation, because computation is just the systematically interpretable manipulation of meaningless symbols, whereas the meanings of my thoughts don't depend on their interpretability or interpretation by someone else. On pain of infinite regress, then, symbol meanings must be grounded in something other than just their interpretability if they are to be…Read more
  •  130
    Connecting object to symbol in modeling cognition
    In A. Clark & Ronald Lutz (eds.), Connectionism in Context, Springer Verlag. pp. 75--90. 1992.
    Connectionism and computationalism are currently vying for hegemony in cognitive modeling. At first glance the opposition seems incoherent, because connectionism is itself computational, but the form of computationalism that has been the prime candidate for encoding the "language of thought" has been symbolic computationalism (Dietrich 1990; Fodor 1975; Harnad 1990c; Newell 1980; Pylyshyn 1984), whereas connectionism is nonsymbolic (Fodor & Pylyshyn 1988), or, as some have hopefully dubbed it, "s…
  •  120
    Jerry Fodor argues that Darwin was wrong about "natural selection" because (1) it is only a tautology rather than a scientific law that can support counterfactuals ("If X had happened, Y would have happened") and because (2) only minds can select. Hence Darwin's analogy with "artificial selection" by animal breeders was misleading and evolutionary explanation is nothing but post-hoc historical narrative. I argue that Darwin was right on all counts. Until Darwin's "tautology," it had been believed…
  •  115
    Categorical perception
    In L. Nadel (ed.), Encyclopedia of Cognitive Science, Nature Publishing Group. pp. 67--4. 2003.
  •  112
    Category induction and representation
    In Categorical Perception, Cambridge University Press. 1987.
    A provisional model is presented in which categorical perception (CP) provides our basic or elementary categories. In acquiring a category we learn to label or identify positive and negative instances from a sample of confusable alternatives. Two kinds of internal representation are built up in this learning by "acquaintance": (1) an iconic representation that subserves our similarity judgments and (2) an analog/digital feature-filter that picks out the invariant information allowing us to categorise…
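    A minimal sketch of the two representations (hypothetical one-dimensional stimuli and threshold, not the paper's model): the iconic, analog representation supports graded similarity judgments, while the learned feature-filter imposes an all-or-none category boundary.

    ```python
    # Two toy internal representations for categorical perception (CP).
    # Hypothetical 1-D stimuli and boundary; illustrative only.

    def iconic_similarity(x: float, y: float) -> float:
        """Iconic (analog) representation: graded similarity judgments."""
        return 1.0 / (1.0 + abs(x - y))

    BOUNDARY = 0.5  # the invariant picked out by the learned feature-filter

    def categorize(x: float) -> str:
        """Feature-filter: all-or-none assignment to a category label."""
        return "A" if x < BOUNDARY else "B"

    # Equal physical differences, but only one pair crosses the boundary:
    print(iconic_similarity(0.40, 0.45))        # graded: ~0.952
    print(categorize(0.45), categorize(0.55))   # categorical: 'A' 'B'
    ```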
  •  109
    In our century a Frege/Brentano wedge has gradually been driven into the mind/body problem so deeply that it appears to have split it into two: The problem of "qualia" and the problem of "intentionality." Both problems use similar intuition pumps: For qualia, we imagine a robot that is indistinguishable from us in every objective respect, but it lacks subjective experiences; it is mindless. For intentionality, we again imagine a robot that is indistinguishable from us in every objective respect…