PhilPeople
Drew McDermott
Yale University
  • Yale University
    Regular Faculty
New Haven, Connecticut, United States of America
Areas of Interest
Philosophy of Mind
Philosophy of Cognitive Science
  • All publications (43)
  • Free at last! Free at last! Thank evolution, free at last!
    Artificial Intelligence 169 (2): 165-173. 2005.
    Science, Logic, and Mathematics
  • Non-monotonic logic I
    with Jon Doyle
    Artificial Intelligence 13 (1-2): 41-72. 1980.
    Science, Logic, and Mathematics
  • Planning routes through uncertain territory
    with Ernest Davis
    Artificial Intelligence 22 (2): 107-156. 1984.
    Science, Logic, and Mathematics
  • Temporal data base management
    with Thomas L. Dean
    Artificial Intelligence 32 (1): 1-55. 1987.
    Science, Logic, and Mathematics
  • Modeling a dynamic and uncertain world I
    with Steve Hanks
    Artificial Intelligence 66 (1): 1-55. 1994.
    Science, Logic, and Mathematics
  • Using regression-match graphs to control search in planning
    Artificial Intelligence 109 (1-2): 111-159. 1999.
    Science, Logic, and Mathematics
  • Planning: What it is, what it could be, an introduction to the special issue on planning and scheduling
    with James Hendler
    Artificial Intelligence 76 (1-2): 1-16. 1995.
    Science, Logic, and Mathematics
  • Level-headed
    Artificial Intelligence 171 (18): 1183-1186. 2007.
    Science, Logic, and Mathematics
  • Kurzweil's argument for the success of AI
    Artificial Intelligence 170 (18): 1227-1233. 2006.
    Science, Logic, and Mathematics
  • Reply to Carruthers and Akman
    Artificial Intelligence 151 (1-2): 241-245. 2003.
    Science, Logic, and Mathematics
  • Nonmonotonic logic and temporal projection
    with Steve Hanks
    Artificial Intelligence 33 (3): 379-412. 1987.
    Science, Logic, and Mathematics
  • Problems in formal temporal reasoning
    with Yoav Shoham
    Artificial Intelligence 36 (1): 49-61. 1988.
    Science, Logic, and Mathematics
  • A general framework for reason maintenance
    Artificial Intelligence 50 (3): 289-329. 1991.
    Science, Logic, and Mathematics
  • Building large knowledge-based systems: Representation and inference in the cyc project
    Artificial Intelligence 61 (1): 53-63. 1993.
    Science, Logic, and Mathematics
  • Artificial intelligence meets natural stupidity
    In J. Haugeland (ed.), Mind Design, MIT Press. pp. 5-18. 1981.
    Artificial Intelligence Methodology; Machine Mentality, Misc
  • Mind and Mechanism
    MIT Press. 2001.
    An exploration of the mind-body problem from the perspective of artificial intelligence.
    Computationalism in Cognitive Science
  • Tarskian semantics, or no notation without denotation
    Cognitive Science 2 (3): 277-282. 1978.
    Computational Semantics
  • Optimization and connectionism are two different things
    Behavioral and Brain Sciences 12 (3): 483-484. 1989.
    Philosophy of Cognitive Science; Connectionism and Neural Networks
  • Computation and consciousness
    Behavioral and Brain Sciences 13 (4): 676-678. 1990.
    Philosophy of Cognitive Science; Computationalism
  • What does a Sloman want?
    International Journal of Machine Consciousness 2 (1): 51-53. 2010.
    Explaining Consciousness, Misc
  • Penrose is wrong
    PSYCHE: An Interdisciplinary Journal of Research On Consciousness 2 66-82. 1995.
    Gödelian Arguments Against AI
  • Higher-Order Thought Rendered Defenseless: Review of Consciousness and Self-Consciousness: A Defense of the Higher-Order Thought Theory of Consciousness by Rocco Gennaro (review)
    PSYCHE: An Interdisciplinary Journal of Research On Consciousness 4. 1998.
    Higher-Order Thought Theories of Consciousness; Self-Consciousness, Misc
  • A little static for the dynamicists: review of Shanahan
    International Journal of Machine Consciousness 3 (2): 361-365. 2011.
    Philosophy of Consciousness; Dynamical Systems; Machine Consciousness; Aspects of Consciousness
  • On the Claim that a Table-Lookup Program Could Pass the Turing Test
    Minds and Machines 24 (2): 143-188. 2014.
    The claim has often been made that passing the Turing Test would not be sufficient to prove that a computer program was intelligent because a trivial program could do it, namely, the “Humongous-Table (HT) Program”, which simply looks up in a table what to say next. This claim is examined in detail. Three ground rules are argued for: (1) That the HT program must be exhaustive, and not be based on some vaguely imagined set of tricks. (2) That the HT program must not be created by some set of sentient beings enacting responses to all possible inputs. (3) That in the current state of cognitive science it must be an open possibility that a computational model of the human mind will be developed that accounts for at least its nonphenomenological properties. Given ground rule 3, the HT program could simply be an “optimized” version of some computational model of a mind, created via the automatic application of program-transformation rules [thus satisfying ground rule 2]. Therefore, whatever mental states one would be willing to impute to an ordinary computational model of the human psyche one should be willing to grant to the optimized version as well. Hence no one could dismiss out of hand the possibility that the HT program was intelligent. This conclusion is important because the Humongous-Table Program Argument is the only argument ever marshalled against the sufficiency of the Turing Test, if we exclude arguments that cognitive science is simply not possible
    The Turing Test
  • We've been framed: Or, why AI is innocent of the frame problem
    In Zenon W. Pylyshyn (ed.), The Robot's Dilemma, Ablex. 1987.
    The Frame Problem
  • Planning and Acting
    Cognitive Science 2 (2): 71-100. 1978.
  • Computationally Constrained Beliefs
    Journal of Consciousness Studies 20 (5-6): 124-150. 2013.
    People and intelligent computers, if there ever are any, will both have to believe certain things in order to be intelligent agents at all, or to be a particular sort of intelligent agent. I distinguish implicit beliefs that are inherent in the architecture of a natural or artificial agent, in the way it is 'wired', from explicit beliefs that are encoded in a way that makes them easier to learn and to erase if proven mistaken. I introduce the term IFI, which stands for irresistible framework intuition, for an implicit belief that can come into conflict with an explicit one. IFIs are a key element of any theory of consciousness that explains qualia and other aspects of phenomenology as second-order beliefs about perception. Before I can survey the IFI landscape, I review evidence that the brains of humans, and presumably of other intelligent agents, consist of many specialized modules that are capable of sharing a unified workspace on urgent occasions, and jointly model themselves as a single agent. I also review previous work relevant to my subject. Then I explore several IFIs, starting with, 'My future actions are free from the control of physical laws'. Most of them are universal, in the sense that they will be shared by any intelligent agent; the case must be argued for each IFI. When made explicit, IFIs may look dubious or counterproductive, but they really are irresistible, so we find ourselves in the odd position of oscillating between justified beliefs and conflicting but irresistible beliefs. We cannot hope that some process of argumentation will resolve the conflict
    Mental States and Processes; Philosophy of Cognitive Science; Belief; Aspects of Consciousness
  • A critique of pure reason
    Computational Intelligence 3: 151-160. 1987.
    Artificial Intelligence Methodology; Kant: Metaphysics and Epistemology
  • The digital computer as red herring
    Psycoloquy 12 (54). 2001.
    Stevan Harnad correctly perceives a deep problem in computationalism, the hypothesis that cognition is computation, namely, that the symbols manipulated by a computational entity do not automatically mean anything. Perhaps, he proposes, transducers and neural nets will not have this problem. His analysis goes wrong from the start, because computationalism is not as rigid a set of theories as he thinks. Transducers and neural nets are just two kinds of computational system, among many, and any solution to the semantic problem that works for them will work for most other computational systems
    Computationalism in Cognitive Science; Analog and Digital Computation
  • Little “me”
    Behavioral and Brain Sciences 15 (2): 217-218. 1992.
    Philosophy of Cognitive Science
PhilPeople is currently in Beta. Sponsored by the PhilPapers Foundation and the American Philosophical Association.