•  What does a Sloman want?
    International Journal of Machine Consciousness 2 (1): 51-53. 2010.
  •  Penrose is wrong
    PSYCHE: An Interdisciplinary Journal of Research On Consciousness 2: 66-82. 1995.
  •  On the Claim that a Table-Lookup Program Could Pass the Turing Test
    Minds and Machines 24 (2): 143-188. 2014.
    The claim has often been made that passing the Turing Test would not be sufficient to prove that a computer program was intelligent because a trivial program could do it, namely, the “Humongous-Table (HT) Program”, which simply looks up in a table what to say next. This claim is examined in detail. Three ground rules are argued for: (1) That the HT program must be exhaustive, and not be based on some vaguely imagined set of tricks. (2) That the HT program must not be created by some set of senti…
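    The device at the heart of this abstract is easy to make concrete. Below is a minimal sketch, in Python, of the kind of lookup the Humongous-Table Program performs; the table contents, the history-keyed encoding, and the fallback reply are illustrative assumptions, not the paper's construction (which, per ground rule 1, would have to be exhaustive rather than a toy).

    ```python
    # Sketch of a Humongous-Table (HT) conversational program: every
    # possible conversation-so-far is a key, and the stored value is the
    # next reply. A genuine HT table would be astronomically large and
    # exhaustive; this toy table is an assumption for illustration only.
    HT_TABLE = {
        (): "Hello.",
        ("Hello.",): "How are you?",
        ("Hello.", "Fine, thanks."): "What shall we talk about?",
    }

    def ht_reply(history):
        """Look up the next utterance for the exact conversation so far.

        The program does no reasoning at all: whatever intelligence the
        replies exhibit resides in whoever filled in the table.
        """
        return HT_TABLE.get(tuple(history), "I have nothing to say.")

    print(ht_reply(()))           # -> Hello.
    print(ht_reply(("Hello.",)))  # -> How are you?
    ```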
  •  Planning and Acting
    Cognitive Science 2 (2): 71-100. 1978.
    A new theory of problem solving is presented, which embeds problem solving in the theory of action; in this theory, a problem is just a difficult action. Making this work requires a sophisticated language for talking about plans and their execution. This language allows a broad range of types of action, and can also be used to express rules for choosing and scheduling plans. To ensure flexibility, the problem solver consists of an interpreter driven by a theorem prover which actually manipulates…
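    The slogan that a problem is just a difficult action can be gestured at in code. A rough sketch follows; the task names and the rule format are invented for illustration, and the original system's plan language and theorem-prover-driven interpreter are far richer.

    ```python
    # Toy interpreter that acts directly when a task is primitive and
    # falls back to problem solving (plan selection and execution) when
    # the task is "difficult". Rule and task formats are assumptions.
    PRIMITIVES = {
        "move":    lambda: print("moving"),
        "pick-up": lambda: print("picking up"),
    }

    # Choice rules reduce a difficult action to a plan (a list of tasks).
    CHOICE_RULES = {
        "fetch": ["move", "pick-up", "move"],
    }

    def execute(task):
        """Execute a task, treating problem solving as plan execution."""
        if task in PRIMITIVES:
            PRIMITIVES[task]()               # acting
        elif task in CHOICE_RULES:
            for step in CHOICE_RULES[task]:  # solving by doing subtasks
                execute(step)
        else:
            raise ValueError(f"no way to perform {task!r}")

    execute("fetch")  # -> moving, picking up, moving
    ```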
  • Computationally Constrained Beliefs
    Journal of Consciousness Studies 20 (5-6): 124-150. 2013.
    People and intelligent computers, if there ever are any, will both have to believe certain things in order to be intelligent agents at all, or to be a particular sort of intelligent agent. I distinguish implicit beliefs that are inherent in the architecture of a natural or artificial agent, in the way it is 'wired', from explicit beliefs that are encoded in a way that makes them easier to learn and to erase if proven mistaken. I introduce the term IFI, which stands for irresistible framework int…
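    The implicit/explicit contrast the abstract draws has a natural reading in program terms. A toy sketch under assumed names (the agent and its beliefs are invented for illustration):

    ```python
    # Contrast between a belief wired into an agent's architecture and
    # one encoded as revisable data. The agent design is an assumption.
    class Agent:
        def __init__(self):
            # Explicit belief: stored as data, easy to learn and erase.
            self.beliefs = {"door_is_open": False}

        def plan_route(self, here, there):
            # Implicit belief: this code assumes travel between any two
            # named places is possible. Nothing in the data store says
            # so, and the agent cannot revise it without being rewired.
            return [here, there]

    agent = Agent()
    agent.beliefs["door_is_open"] = True  # revising an explicit belief
    del agent.beliefs["door_is_open"]     # erasing it if proven mistaken
    ```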
  •  The digital computer as red herring
    Psycoloquy 12 (54). 2001.
    Stevan Harnad correctly perceives a deep problem in computationalism, the hypothesis that cognition is computation, namely, that the symbols manipulated by a computational entity do not automatically mean anything. Perhaps, he proposes, transducers and neural nets will not have this problem. His analysis goes wrong from the start, because computationalism is not as rigid a set of theories as he thinks. Transducers and neural nets are just two kinds of computational system, among many, and any so…
  •  Little “me”
    Behavioral and Brain Sciences 15 (2): 217-218. 1992.
  •  A Temporal Logic for Reasoning about Processes and Plans
    Cognitive Science 6 (2): 101-155. 1982.
    Much previous work in artificial intelligence has neglected representing time in all its complexity. In particular, it has neglected continuous change and the indeterminacy of the future. To rectify this, I have developed a first‐order temporal logic, in which it is possible to name and prove things about facts, events, plans, and world histories. In particular, the logic provides analyses of causality, continuous change in quantities, the persistence of facts (the frame problem), and the relati…
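    A flavor of such a representation can be conveyed in a few lines. The sketch below uses a "true throughout" assertion over intervals; the predicate name, the integer time points, and the fact strings are assumptions, not the paper's actual syntax.

    ```python
    # Minimal sketch of temporal assertions: a fact is true throughout
    # an interval of time points. Encodings here are assumptions.
    from typing import NamedTuple

    class TT(NamedTuple):  # "true throughout" an interval
        start: int
        end: int
        fact: str

    history = [
        TT(0, 10, "on(blockA, table)"),
        TT(10, 20, "on(blockA, blockB)"),
    ]

    def holds(fact, t, assertions):
        """True iff `fact` holds at time t. Persistence is explicit:
        a fact holds only over intervals asserted for it, one way of
        confronting the frame problem."""
        return any(a.fact == fact and a.start <= t < a.end
                   for a in assertions)

    print(holds("on(blockA, table)", 5, history))   # True
    print(holds("on(blockA, table)", 15, history))  # False
    ```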
  •
    Zombies are hypothetical creatures identical to us in behavior and internal functionality, but lacking experience. When the concept of zombie is examined in careful detail, it is found that the attempt to keep experience out does not work. So the concept of zombie is the same as the concept of person. Because they are only trivially conceivable, zombies are in a sense inconceivable.