•
    A Framework for Grounding the Moral Status of Intelligent Machines
    AIES '18, February 2–3, 2018, New Orleans, LA, USA.
    I propose a framework, derived from moral theory, for assessing the moral status of intelligent machines. Using this framework, I claim that some current and foreseeable intelligent machines have approximately as much moral status as plants, trees, and other environmental entities. This claim raises the question: what obligations could a moral agent (e.g., a normal adult human) have toward an intelligent machine? I propose that the threshold for any moral obligation should be the "functional mor…
•
    The hard limit on human nonanthropocentrism
    AI and Society: 1–17, forthcoming.
    There may be a limit on our capacity to suppress anthropocentric tendencies toward non-human others. Normally, we do not reach this limit in our dealings with animals, the environment, and the like; thus, continued striving to overcome anthropocentrism when confronted with these non-human others may be justified. Anticipating super artificial intelligence, however, may force us up against this limit, denying us the ability to free ourselves completely of anthropocentrism. This could be for our own good.
•
    In a recent commentary, Kim and colleagues argued that minimal-risk research should be deregulated so that such studies do not require review by an institutional review board. They claim that regulation of minimal-risk studies provides no adequate counterbalancing good and instead leads to a costly human-subjects oversight system. We argue that the counterbalancing good of regulating minimal-risk studies is that oversight exists to ensure that respect-for-persons and justice requirements are sat…