•  55
    The digital computer as red herring
    Psycoloquy 12 (54). 2001.
    Stevan Harnad correctly perceives a deep problem in computationalism, the hypothesis that cognition is computation, namely, that the symbols manipulated by a computational entity do not automatically mean anything. Perhaps, he proposes, transducers and neural nets will not have this problem. His analysis goes wrong from the start, because computationalism is not as rigid a set of theories as he thinks. Transducers and neural nets are just two kinds of computational system, among many, and any so…
  •  22
    Little “me”
    Behavioral and Brain Sciences 15 (2): 217-218. 1992.
  •  29
    A Temporal Logic for Reasoning about Processes and Plans
    Cognitive Science 6 (2): 101-155. 1982.
    Much previous work in artificial intelligence has neglected representing time in all its complexity. In particular, it has neglected continuous change and the indeterminacy of the future. To rectify this, I have developed a first‐order temporal logic, in which it is possible to name and prove things about facts, events, plans, and world histories. In particular, the logic provides analyses of causality, continuous change in quantities, the persistence of facts (the frame problem), and the relati…
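    As a purely illustrative sketch (the interval-temporal notation below is a generic assumption for this listing, not necessarily the paper's own), the kind of assertion such a logic supports looks like:
        T(s1, s2, on(A, B))           -- the fact on(A,B) holds throughout the interval from state s1 to state s2
        Occ(s2, s3, move(A, Table))   -- the event move(A,Table) occurs over the interval from s2 to s3
        T(s1, s3, p) -> (forall s)(s1 <= s <= s3 -> T(s, s, p))   -- a fact true over an interval holds at every state within it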
  •  989
    Zombies are hypothetical creatures identical to us in behavior and internal functionality, but lacking experience. When the concept of zombie is examined in careful detail, it is found that the attempt to keep experience out does not work. So the concept of zombie is the same as the concept of person. Because they are only trivially conceivable, zombies are in a sense inconceivable.
  •  43
    What matters to a machine
    In M. Anderson & S. Anderson (eds.), Machine Ethics, Cambridge University Press. pp. 88-114. 2011.
  •  25
    Dodging the explanatory gap–or bridging it
    Behavioral and Brain Sciences 30 (5-6): 518-518. 2007.
    Assuming our understanding of the brain continues to advance, we will at some point have a computational theory of how access consciousness works. Block's supposed additional kind of consciousness will not appear in this theory, and continued belief in it will be difficult to sustain. Appeals to what it is like to experience such-and-such will carry little weight when we cannot locate a subject for whom it might be like something.
  •  265
    Artificial intelligence and consciousness
    In Philip David Zelazo, Morris Moscovitch & Evan Thompson (eds.), Cambridge Handbook of Consciousness, Cambridge University Press. pp. 117-150. 2007.
  •  538
    Logic is useful as a neutral formalism for expressing the contents of mental representations. It can be used to extract crisp conclusions regarding the higher-order theory of phenomenal consciousness developed in (McDermott 2001, 2007). A key aspect of conscious perceptions is their connection to the distinction between appearance and reality. Perceptions must often be corrected. To do so requires that the logic of perception be able to represent the logical structure of judgment events, that i…
  •  19
    Minds, brains, programs, and persons
    Behavioral and Brain Sciences 5 (2): 339-341. 1982.
  •  23
    A vehicle with no wheels
    Behavioral and Brain Sciences 22 (1): 161-161. 1999.
    O'Brien & Opie's theory fails to address the issue of consciousness and introspection. They take for granted that once something is experienced, it can be commented on. But introspection requires neural structures that, according to their theory, have nothing to do with experience as such. That makes the tight coupling between the two in humans a mystery.
  •  40
    Erratum: "What does a Sloman want?"
    International Journal of Machine Consciousness 2 (2): 385-385. 2010.
  •  22