• Non-monotonic logic I
    with Jon Doyle
    Artificial Intelligence 13 (1-2): 41-72. 1980.
  • Planning routes through uncertain territory
    with Ernest Davis
    Artificial Intelligence 22 (2): 107-156. 1984.
  • Temporal data base management
    with Thomas L. Dean
    Artificial Intelligence 32 (1): 1-55. 1987.
  • Modeling a dynamic and uncertain world I
    with Steve Hanks
    Artificial Intelligence 66 (1): 1-55. 1994.
  • Level-headed
    Artificial Intelligence 171 (18): 1183-1186. 2007.
  • Kurzweil's argument for the success of AI
    Artificial Intelligence 170 (18): 1227-1233. 2006.
  • Reply to Carruthers and Akman
    Artificial Intelligence 151 (1-2): 241-245. 2003.
  • Nonmonotonic logic and temporal projection
    with Steve Hanks
    Artificial Intelligence 33 (3): 379-412. 1987.
  • Problems in formal temporal reasoning
    with Yoav Shoham
    Artificial Intelligence 36 (1): 49-61. 1988.
  • A general framework for reason maintenance
    Artificial Intelligence 50 (3): 289-329. 1991.
  • Planning and Acting
    Cognitive Science 2 (2): 71-100. 1978.
    A new theory of problem solving is presented, which embeds problem solving in the theory of action; in this theory, a problem is just a difficult action. Making this work requires a sophisticated language for talking about plans and their execution. This language allows a broad range of types of action, and can also be used to express rules for choosing and scheduling plans. To ensure flexibility, the problem solver consists of an interpreter driven by a theorem prover which actually manipulates…
  • Computationally Constrained Beliefs
    Journal of Consciousness Studies 20 (5-6): 124-150. 2013.
    People and intelligent computers, if there ever are any, will both have to believe certain things in order to be intelligent agents at all, or to be a particular sort of intelligent agent. I distinguish implicit beliefs that are inherent in the architecture of a natural or artificial agent, in the way it is 'wired', from explicit beliefs that are encoded in a way that makes them easier to learn and to erase if proven mistaken. I introduce the term IFI, which stands for irresistible framework int…
  • The digital computer as red herring
    Psycoloquy 12 (54). 2001.
    Stevan Harnad correctly perceives a deep problem in computationalism, the hypothesis that cognition is computation, namely, that the symbols manipulated by a computational entity do not automatically mean anything. Perhaps, he proposes, transducers and neural nets will not have this problem. His analysis goes wrong from the start, because computationalism is not as rigid a set of theories as he thinks. Transducers and neural nets are just two kinds of computational system, among many, and any so…
  • Little “me”
    Behavioral and Brain Sciences 15 (2): 217-218. 1992.
  • A Temporal Logic for Reasoning about Processes and Plans
    Cognitive Science 6 (2): 101-155. 1982.
    Much previous work in artificial intelligence has neglected representing time in all its complexity. In particular, it has neglected continuous change and the indeterminacy of the future. To rectify this, I have developed a first‐order temporal logic, in which it is possible to name and prove things about facts, events, plans, and world histories. In particular, the logic provides analyses of causality, continuous change in quantities, the persistence of facts (the frame problem), and the relati…
  •
    Zombies are hypothetical creatures identical to us in behavior and internal functionality, but lacking experience. When the concept of zombie is examined in careful detail, it is found that the attempt to keep experience out does not work. So the concept of zombie is the same as the concept of person. Because they are only trivially conceivable, zombies are in a sense inconceivable.
  • What matters to a machine
    In M. Anderson & S. Anderson (eds.), Machine Ethics, Cambridge Univ. Press. pp. 88-114. 2011.
  • Dodging the explanatory gap – or bridging it
    Behavioral and Brain Sciences 30 (5-6): 518-518. 2007.
    Assuming our understanding of the brain continues to advance, we will at some point have a computational theory of how access consciousness works. Block's supposed additional kind of consciousness will not appear in this theory, and continued belief in it will be difficult to sustain. Appeals to experience such-and-such will carry little weight when we cannot locate a subject for whom it might be like something.
  • Artificial intelligence and consciousness
    In Philip David Zelazo, Morris Moscovitch & Evan Thompson (eds.), Cambridge Handbook of Consciousness, Cambridge University Press. pp. 117--150. 2007.
  •
    Logic is useful as a neutral formalism for expressing the contents of mental representations. It can be used to extract crisp conclusions regarding the higher-order theory of phenomenal consciousness developed in (McDermott 2001, 2007). A key aspect of conscious perceptions is their connection to the distinction between appearance and reality. Perceptions must often be corrected. To do so requires that the logic of perception be able to represent the logical structure of judgment events, that i…
  • Minds, brains, programs, and persons
    Behavioral and Brain Sciences 5 (2): 339-341. 1982.
  • A vehicle with no wheels
    Behavioral and Brain Sciences 22 (1): 161-161. 1999.
    O'Brien & Opie's theory fails to address the issue of consciousness and introspection. They take for granted that once something is experienced, it can be commented on. But introspection requires neural structures that, according to their theory, have nothing to do with experience as such. That makes the tight coupling between the two in humans a mystery.
  • Erratum: "What does a Sloman want?"
    International Journal of Machine Consciousness 2 (2): 385-385. 2010.