•
    Suppose Boeing 747s grew on trees. They would first sprout as embryonic planes, the size of an acorn. Then they would grow until they reached full size, when they would plop off the trees, ready to fly. Suppose also that we knew how to feed and care for them, how to make minor repairs, and of course how to fly them. But let us suppose that all of this transpired at a very early stage in our scientific history, when we did not yet understand the physics or the engineering of flight: Hence the phe…
  •
    1.1 The predominant approach to cognitive modeling is still what has come to be called "computationalism" (Dietrich 1990, Harnad 1990b), the hypothesis that cognition is computation. The more recent rival approach is "connectionism" (Hanson & Burr 1990, McClelland & Rumelhart 1986), the hypothesis that cognition is a dynamic pattern of connections and activations in a "neural net." Are computationalism and connectionism really deeply different from one another, and if so, should they compete for…
  •
    Exorcizing the ghost of mental imagery
    Computational Intelligence 9 (4): 337-339. 1993.
    The problem seems apparent even in Glasgow's term "depict", which is used by way of contrast with "describe". Now "describe" refers relatively unproblematically to strings of symbols, such as those in this written sentence, that are systematically interpretable as propositions describing objects, events, or states of affairs. But what does "depict" mean? In the case of a picture -- whether a photo or a diagram -- it is clear what depict means. A picture is an object (I will argue below t…
  •
    The grounding model proposed here is simple to recapitulate. Analog sensory projections are the inputs to neural nets, which must learn to connect some of those projections with certain symbols (the names of their categories) and other projections with other symbols (the names of other categories that could be confused with one another), by finding and using the invariants that represent them in a way that promotes successful categorization…
  •
    According to "computationalism" (Newell, 1980; Pylyshyn 1984; Dietrich 1990), mental states are computational states, so if one wishes to build a mind, one is actually looking for the right program to run on a digital computer. A computer program is a semantically interpretable formal symbol system consisting of rules for manipulating symbols on the basis of their shapes, which are arbitrary in relation to what they can be systematically interpreted as meaning. According to computationalism, eve…
  •
    Brian Rotman argues that (one) “mind” and (one) “god” are only conceivable, literally, because of (alphabetic) literacy, which allowed us to designate each of these ghosts as an incorporeal, speaker-independent “I” (or, in the case of infinity, a notional agent that goes on counting forever). I argue that to have a mind is to have the capacity to feel. No one can be sure which organisms feel, hence have minds, but it seems likely that one-celled organisms and plants do not, whereas animals do. S…
  •
    Lost in the hermeneutic hall of mirrors
    Journal of Experimental and Theoretical Artificial Intelligence 2: 321-327. 1990.
    Critique of Computationalism as merely projecting hermeneutics (i.e., meaning originating from the mind of an external interpreter) onto otherwise intrinsically meaningless symbols. Projecting an interpretation onto a symbol system results in its being reflected back, in a spuriously self-confirming way.
  • How/why the mind-body problem is hard
    Journal of Consciousness Studies 7 (4): 54-61. 2000.
    [opening paragraph]: [B]rain-imaging studies... demonstrate in ever more detail how specific kinds of mental activity are precisely correlated with specific patterns of brain activity. Mind/Brain correlations: We've known about them for decades, probably centuries. And that's still all we've got with brain imaging; and that's all we'll have even when we get the correspondence fine-tuned right down to the last mental ‘just noticeable difference’ and its corresponding molecule.
  •
    Do scientists agree? It is not only unrealistic to suppose that they do, but probably just as unrealistic to think that they ought to. Agreement is for what is already established scientific history. The current and vital ongoing aspect of science consists of an active and often heated interaction of data, ideas and minds, in a process one might call "creative disagreement." The "scientific method" is largely derived from a reconstruction based on selective hindsight. What actually goes on has m…
  •
    Distributed processes, distributed cognizers, and collaborative cognition
    Pragmatics and Cognition 13 (3): 501-514. 2005.
    Cognition is thinking; it feels like something to think, and only those who can feel can think. There are also things that thinkers can do. We know neither how thinkers can think nor how they are able to do what they can do. We are waiting for cognitive science to discover how. Cognitive science does this by testing hypotheses about what processes can generate what doing. This is called the Turing Test. It cannot test whether a process can generate feeling, hence thinking — only whether it can ge…
  •
    Categorical perception
    In L. Nadel (ed.), Encyclopedia of Cognitive Science, Nature Publishing Group. pp. 67--4. 2003.
  •
    Harnad accepts the picture of computation as formalism, so that any implementation of a program - that's any implementation - is as good as any other; in fact, in considering claims about the properties of computations, the nature of the implementing system - the interpreter - is invisible. Let me refer to this idea as 'Computationalism'. Almost all the criticism, claimed refutation by Searle's argument, and sharp contrasting of this idea with others, rests on the absoluteness of this separation …
  •
    Darwin, Skinner, Turing and the mind
    Magyar Pszichologiai Szemle 57 (4): 521-528. 2002.
    Darwin differs from Newton and Einstein in that his ideas do not require a complicated or deep mind to understand them, and perhaps did not even require such a mind in order to generate them in the first place. It can be explained to any school-child (as Newtonian mechanics and Einsteinian relativity cannot) that living creatures are just Darwinian survival/reproduction machines. They have whatever structure they have through a combination of chance and its consequences: Chance causes changes in…
  •
    Correlation vs. causality: How/why the mind-body problem is hard
    Journal of Consciousness Studies 7 (4): 54-61. 2000.
    The Mind/Body Problem is about causation, not correlation. And its solution will require a mechanism in which the mental component somehow manages to play a causal role of its own, rather than just supervening superfluously on other, nonmental components that look, for all the world, as if they can do the full causal job perfectly well without it. Correlations confirm that M does indeed "supervene" on B, but causality is needed to show how/why M is not supererogatory; and that's the hard part.
  •
    We must distinguish between what can be described or interpreted as X and what really is X. Otherwise we are just doing hermeneutics. It won't do simply to declare that the thermostat turns on the furnace because it feels cold or that the chess-playing computer program makes a move because it thinks it should get its queen out early. In what does real feeling and thinking consist?
  •
    Distributed cognition: Cognizing, autonomy and the Turing test
    with Itiel E. Dror
    Pragmatics and Cognition 14 (2): 209-213. 2006.
    Some of the papers in this special issue distribute cognition between what is going on inside individual cognizers' heads and their outside worlds; others distribute cognition among different individual cognizers. Turing's criterion for cognition was individual, autonomous input/output capacity. It is not clear that distributed cognition could pass the Turing Test.
  •
    Some of the features of animal and human categorical perception (CP) for color, pitch and speech are exhibited by neural net simulations of CP with one-dimensional inputs: When a backprop net is trained to discriminate and then categorize a set of stimuli, the second task is accomplished by "warping" the similarity space (compressing within-category distances and expanding between-category distances). This natural side-effect also occurs in humans and animals. Such CP categories, consisting of n…
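    The warping effect this abstract describes can be illustrated with a minimal sketch: a tiny NumPy backprop net on made-up one-dimensional stimuli. The category boundary at 0.5, the network size, and the training setup are all assumptions of this illustration, not the original simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-dimensional stimuli with a (hypothetical) category boundary at 0.5.
x = np.linspace(0.05, 0.95, 20).reshape(-1, 1)
y = (x > 0.5).astype(float)

# Tiny 1-8-1 sigmoid net trained by plain backprop on the logistic loss.
W1 = rng.normal(0.0, 1.0, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(8000):
    h = sigmoid(x @ W1 + b1)      # hidden layer = learned similarity space
    out = sigmoid(h @ W2 + b2)
    g_out = (out - y) / len(x)    # logistic-loss gradient w.r.t. output pre-activation
    g_h = (g_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ g_out; b2 -= lr * g_out.sum(0)
    W1 -= lr * x.T @ g_h;   b1 -= lr * g_h.sum(0)

h = sigmoid(x @ W1 + b1)
out = sigmoid(h @ W2 + b2)
acc = float(((out > 0.5) == y.astype(bool)).mean())

def within_between_ratio(reps):
    # Mean pairwise distance within categories / mean distance between categories.
    d = np.linalg.norm(reps[:, None, :] - reps[None, :, :], axis=-1)
    same = y.flatten()[:, None] == y.flatten()[None, :]
    off_diag = ~np.eye(len(reps), dtype=bool)
    return d[same & off_diag].mean() / d[~same].mean()

ratio_input = within_between_ratio(x)   # raw stimulus space
ratio_hidden = within_between_ratio(h)  # learned representation
print(acc, ratio_input, ratio_hidden)
```

    A smaller within/between distance ratio in the hidden layer than in the input space means the net has compressed within-category distances and expanded between-category ones relative to the raw stimuli, which is the "warping" the abstract describes.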
  •
    Computation is interpretable symbol manipulation. Symbols are objects that are manipulated on the basis of rules operating only on their shapes, which are arbitrary in relation to what they can be interpreted as meaning. Even if one accepts the Church/Turing Thesis that computation is unique, universal and very near omnipotent, not everything is a computer, because not everything can be given a systematic interpretation; and certainly everything can't be given every systematic interpretation. B…
  •
    Deceiving ourselves about self-deception
    Behavioral and Brain Sciences 34 (1): 25-26. 2011.
    Were we just the Darwinian adaptive survival/reproduction machines von Hippel & Trivers invoke to explain us, the self-deception problem would not only be simpler, but also nonexistent. Why would unconscious robots bother to misinform themselves so as to misinform others more effectively? But as we are indeed conscious rather than unconscious robots, the problem is explaining the causal role of consciousness itself, not just its supererogatory tendency to misinform itself so as to misinform (or …
  •
    Creativity: method or magic?
    In Henri Cohen & Brigitte Stemmer (eds.), Consciousness and Cognition: Fragments of Mind and Brain, Academic Press. 2007.
    Creativity may be a trait, a state or just a process defined by its products. It can be contrasted with certain cognitive activities that are not ordinarily creative, such as problem solving, deduction, induction, learning, imitation, trial and error, heuristics and "abduction"; however, all of these can be done creatively too. There are four kinds of theories, attributing creativity respectively to (1) method, (2) "memory" (innate structure), (3) magic or (4) mutation. These theories variously …
  •
    Kravchenko suggests replacing Turing’s suggestion for explaining cognizers’ cognitive capacity through autonomous robotic modelling by ‘autopoiesis’, Maturana’s extremely vague metaphor for the relations and interactions among organisms, environments, and various subordinate and superordinate systems therein. I suggest that this would be an exercise in hermeneutics rather than causal explanation.
  •
    Does mind piggyback on robotic and symbolic capacity?
    In Harold J. Morowitz & Jerome L. Singer (eds.), The Mind, the Brain, and Complex Adaptive Systems, Addison-wesley. 1995.
    Cognitive science is a form of "reverse engineering" (as Dennett has dubbed it). We are trying to explain the mind by building (or explaining the functional principles of) systems that have minds. A "Turing" hierarchy of empirical constraints can be applied to this task, from t1, toy models that capture only an arbitrary fragment of our performance capacity, to T2, the standard "pen-pal" Turing Test (total symbolic capacity), to T3, the Total Turing Test (total symbolic plus robotic capacity), t…
  •
    The notion of an immaterial, immortal "soul" is just a vague telekinetic theory to fill an unfillable explanatory gap in our understanding of the causal role of feelings.
  •
    A provisional model is presented in which categorical perception (CP) provides our basic or elementary categories. In acquiring a category we learn to label or identify positive and negative instances from a sample of confusable alternatives. Two kinds of internal representation are built up in this learning by "acquaintance": (1) an iconic representation that subserves our similarity judgments and (2) an analog/digital feature-filter that picks out the invariant information allowing us to categ…
  •
    The experimental analysis of naming behavior can tell us exactly the kinds of things Horne & Lowe (H & L) report here: (1) the conditions under which people and animals succeed or fail in naming things and (2) the conditions under which bidirectional associations are formed between inputs (objects, pictures of objects, seen or heard names of objects) and outputs (spoken names of objects, multimodal operations on objects). The "stimulus equivalence" that H & L single out is really just the reflex…