Foundational Research Institute
  •
    Responses to Catastrophic AGI Risk: A Survey
    with Kaj Sotala and Roman V. Yampolskiy
    Physica Scripta 90. 2015.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the fieldʼs proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that ar…
  •
    Long-Term Trajectories of Human Civilization
    with Seth D. Baum, Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Phil Torres, Alexey Turchin, and Roman V. Yampolskiy
    Foresight 21 (1): 53-83. 2019.
    Purpose: This paper aims to formalize long-term trajectories of human civilization as a scientific and ethical field of study. The long-term trajectory of human civilization can be defined as the path that human civilization takes during the entire future time period in which human civilization could continue to exist. Design/methodology/approach: This paper focuses on four types of trajectories: status quo trajectories, in which human civilization persists in a state broadly similar to its curren…
  •
    What kinds of fundamental limits are there in how capable artificial intelligence (AI) systems might become? Two questions in particular are of interest: (1) How much more capable could AI become relative to humans, and (2) how easily could superhuman capability be acquired? To answer these questions, we will consider the literature on human expertise and intelligence, discuss its relevance for AI, and consider how AI could improve on humans in two major aspects of thought and expertise, namely …
  •
    Superintelligence as a Cause or Cure for Risks of Astronomical Suffering
    with Lukas Gloor
    Informatica: An International Journal of Computing and Informatics 41 (4): 389-400. 2017.
    Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of a severity and probability comparable to risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way …
  •
    Advantages of artificial intelligences, uploads, and digital minds
    International Journal of Machine Consciousness 4 (1): 275-291. 2012.
    I survey four categories of factors that might give a digital mind, such as an upload or an artificial general intelligence, an advantage over humans. Hardware advantages include greater serial speeds and greater parallel speeds. Self-improvement advantages include improvement of algorithms, design of new mental modules, and modification of motivational system. Co-operative advantages include copyability, perfect co-operation, improved communication, and transfer of skills. Human handicaps inclu…
  •
    Coalescing minds: Brain uploading-related group mind scenarios
    with Harri Valpola
    International Journal of Machine Consciousness 4 (1): 293-312. 2012.
    We present a hypothetical process of mind coalescence, where artificial connections are created between two or more brains. This might simply allow for an improved form of communication. At the other extreme, it might merge the minds into one in a process that can be thought of as a reverse split-brain operation. We propose that one way mind coalescence might happen is via an exocortex, a prosthetic extension of the biological brain which integrates with the brain as seamlessly as parts of the bio…