  • Non-monotonic logic I
    with Jon Doyle
    Artificial Intelligence 13 (1-2): 41-72. 1980.
  • Planning routes through uncertain territory
    with Ernest Davis
    Artificial Intelligence 22 (2): 107-156. 1984.
  • Temporal data base management
    with Thomas L. Dean
    Artificial Intelligence 32 (1): 1-55. 1987.
  • Modeling a dynamic and uncertain world I
    with Steve Hanks
    Artificial Intelligence 66 (1): 1-55. 1994.
  • Level-headed
    Artificial Intelligence 171 (18): 1183-1186. 2007.
  • Kurzweil's argument for the success of AI
    Artificial Intelligence 170 (18): 1227-1233. 2006.
  • Reply to Carruthers and Akman
    Artificial Intelligence 151 (1-2): 241-245. 2003.
  • Nonmonotonic logic and temporal projection
    with Steve Hanks
    Artificial Intelligence 33 (3): 379-412. 1987.
  • Problems in formal temporal reasoning
    with Yoav Shoham
    Artificial Intelligence 36 (1): 49-61. 1988.
  • A general framework for reason maintenance
    Artificial Intelligence 50 (3): 289-329. 1991.
  • On the Claim that a Table-Lookup Program Could Pass the Turing Test
    Minds and Machines 24 (2): 143-188. 2014.
    The claim has often been made that passing the Turing Test would not be sufficient to prove that a computer program was intelligent because a trivial program could do it, namely, the “Humongous-Table (HT) Program”, which simply looks up in a table what to say next. This claim is examined in detail. Three ground rules are argued for: (1) That the HT program must be exhaustive, and not be based on some vaguely imagined set of tricks. (2) That the HT program must not be created by some set of senti…
  • Planning and Acting
    Cognitive Science 2 (2): 71-100. 1978.
  • Computationally Constrained Beliefs
    Journal of Consciousness Studies 20 (5-6): 124-150. 2013.
    People and intelligent computers, if there ever are any, will both have to believe certain things in order to be intelligent agents at all, or to be a particular sort of intelligent agent. I distinguish implicit beliefs that are inherent in the architecture of a natural or artificial agent, in the way it is 'wired', from explicit beliefs that are encoded in a way that makes them easier to learn and to erase if proven mistaken. I introduce the term IFI, which stands for irresistible framework int…
  • The digital computer as red herring
    Psycoloquy 12 (54). 2001.
    Stevan Harnad correctly perceives a deep problem in computationalism, the hypothesis that cognition is computation, namely, that the symbols manipulated by a computational entity do not automatically mean anything. Perhaps, he proposes, transducers and neural nets will not have this problem. His analysis goes wrong from the start, because computationalism is not as rigid a set of theories as he thinks. Transducers and neural nets are just two kinds of computational system, among many, and any so…
  • Little “me”
    Behavioral and Brain Sciences 15 (2): 217-218. 1992.
  • A Temporal Logic for Reasoning about Processes and Plans
    Cognitive Science 6 (2): 101-155. 1982.
  • Zombies are hypothetical creatures identical to us in behavior and internal functionality, but lacking experience. When the concept of zombie is examined in careful detail, it is found that the attempt to keep experience out does not work. So the concept of zombie is the same as the concept of person. Because they are only trivially conceivable, zombies are in a sense inconceivable.
  • What matters to a machine
    In M. Anderson S. Anderson (ed.), Machine Ethics, Cambridge Univ. Press. pp. 88--114. 2011.
  • Dodging the explanatory gap–or bridging it
    Behavioral and Brain Sciences 30 (5-6): 518-518. 2007.
    Assuming our understanding of the brain continues to advance, we will at some point have a computational theory of how access consciousness works. Block's supposed additional kind of consciousness will not appear in this theory, and continued belief in it will be difficult to sustain. Appeals to what it is like to experience such-and-such will carry little weight when we cannot locate a subject for whom it might be like something.
  • Artificial intelligence and consciousness
    In Philip David Zelazo, Morris Moscovitch & Evan Thompson (eds.), Cambridge Handbook of Consciousness, Cambridge University Press. pp. 117-150. 2007.
  • Logic is useful as a neutral formalism for expressing the contents of mental representations. It can be used to extract crisp conclusions regarding the higher-order theory of phenomenal consciousness developed in (McDermott 2001, 2007). A key aspect of conscious perceptions is their connection to the distinction between appearance and reality. Perceptions must often be corrected. To do so requires that the logic of perception be able to represent the logical structure of judgment events, that i…
  • Minds, brains, programs, and persons
    Behavioral and Brain Sciences 5 (2): 339-341. 1982.
  • A vehicle with no wheels
    Behavioral and Brain Sciences 22 (1): 161-161. 1999.
    O'Brien & Opie's theory fails to address the issue of consciousness and introspection. They take for granted that once something is experienced, it can be commented on. But introspection requires neural structures that, according to their theory, have nothing to do with experience as such. That makes the tight coupling between the two in humans a mystery.