  •
    Abstract. This article presents a model of how the probability of global catastrophic risks changes in a world with exponentially evolving technologies. Increasingly cheap technologies become accessible to a larger number of agents, and the technologies themselves become more capable of causing a global catastrophe. Examples of such dangerous technologies are artificial viruses constructed by means of synthetic biology, non-aligned AI and, to a lesser extent, nanotech and nuclear proliferation…
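    A purely illustrative sketch of such a model (not taken from the article): assume a fixed per-agent yearly probability of causing a catastrophe and a number of capable agents that doubles every few years; the yearly probability that at least one agent causes a catastrophe then grows roughly exponentially until it saturates. All parameter values below are assumptions for illustration only.

        def yearly_catastrophe_probability(t, p_per_agent=1e-6, n0=10, doubling_time=2.0):
            """Toy model: probability that at least one agent causes a global
            catastrophe in year t, when the number of agents with access to the
            technology doubles every `doubling_time` years. All parameter values
            are illustrative assumptions, not the article's."""
            n_agents = n0 * 2 ** (t / doubling_time)
            return 1 - (1 - p_per_agent) ** n_agents

        print(yearly_catastrophe_probability(0))    # ~1e-5
        print(yearly_catastrophe_probability(30))   # ~0.28: risk grows with the number of capable agents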
  •
    Does a single mechanism of aging exist? Most scientists have their own pet theories about what aging is, but the lack of a generally accepted theory is mind-blowing. Here we suggest an explanation: evolution works against a unitary mechanism of aging because it equalizes the ‘warranty periods’ of different resilience systems. Therefore, we need life-extension methods that go beyond fighting specific aging mechanisms, such as using a combination of geroprotectors or repair-fixing bionanorobots controlled by…
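    A purely illustrative weakest-link sketch of the ‘warranty period’ idea (not from the paper): the organism fails when the first of several independent resilience systems fails, so there is little selective payoff in making any one system much more durable than the others, which pushes their warranty periods toward equality. The distributions and numbers below are assumptions for illustration only.

        import random

        def mean_lifespan(warranty_periods, trials=10000):
            """Toy reliability model: the failure time of each resilience system is
            exponential with its own mean ('warranty period'); the organism dies
            at the earliest failure. Numbers are illustrative assumptions."""
            total = 0.0
            for _ in range(trials):
                total += min(random.expovariate(1.0 / w) for w in warranty_periods)
            return total / trials

        print(mean_lifespan([80, 80, 80]))   # ~27: equal warranty periods
        print(mean_lifespan([80, 80, 20]))   # ~13: the weakest system dominates lifespan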
  •
    Humanity may underestimate the rate of natural global catastrophes because of survival bias (the “anthropic shadow”). But the resulting reduction of the Earth’s future habitability is not very large in most plausible cases (1-2 orders of magnitude), so it looks like we still have at least millions of years. However, the anthropic shadow implies anthropic fragility: we are more likely to live in a world where a sterilizing catastrophe is long overdue and could be triggered by unexpected…
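    A minimal Monte-Carlo sketch of the survivorship-bias effect (all numbers are illustrative assumptions, not the paper’s): catastrophes strike at a fixed yearly rate and sterilize the planet with some probability; observers exist only in surviving histories, so the record available to them understates how often catastrophes actually occurred.

        import random

        def shadowed_record(true_rate=1e-3, kill_prob=0.9, years=1000, worlds=10000):
            """Toy anthropic-shadow model. Returns (expected number of catastrophes
            per history, mean number visible in the records of surviving histories).
            All numbers are illustrative assumptions."""
            survivor_events = []
            for _ in range(worlds):
                events, alive = 0, True
                for _ in range(years):
                    if random.random() < true_rate:         # a catastrophe strikes
                        if random.random() < kill_prob:      # ...and sterilizes the world
                            alive = False
                            break
                        events += 1                          # ...or only leaves a trace
                if alive:
                    survivor_events.append(events)
            return true_rate * years, sum(survivor_events) / len(survivor_events)

        print(shadowed_record())   # ~(1.0, 0.1): survivors see far fewer events than occurred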
  •
    In this article, I present a view on the future of nuclear war that takes into account expected technological progress as well as global political changes. There are three main directions in which technological progress in nuclear weapons may happen: a) many gigaton-scale weapons; b) cheaper nuclear bombs, based on reactor-grade plutonium or laser isotope separation, or hypothetical pure fusion weapons. Also, advanced nanotechnology will provide the ability to quickly…
  •
    In AI safety research, the median timing of AGI arrival, which various polls place in the middle of the 21st century, is often taken as a reference point; but for maximum safety we should determine the earliest possible time of Dangerous AI arrival. Such Dangerous AI could be either an AGI capable of acting completely independently in the real world and of winning most real-world conflicts with humans, or an AI helping humans to build weapons of mass destruction, or a nation state coupled…
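    To illustrate only the median-versus-earliest point (the distribution and all parameters are assumptions, not the article’s), a sketch comparing the median of an arrival-time distribution with its early tail:

        import math
        import random

        def arrival_percentile(q, median_year=2050, sigma=0.5, base_year=2020, samples=100000):
            """Illustrative log-normal model of AGI arrival dates: the median sits at
            `median_year`, but safety planning cares about the early tail.
            All parameters are assumptions for illustration only."""
            mu = math.log(median_year - base_year)
            years = sorted(base_year + math.exp(random.gauss(mu, sigma)) for _ in range(samples))
            return years[int(q * samples)]

        print(round(arrival_percentile(0.50)))   # ~2050: the often-quoted median
        print(round(arrival_percentile(0.10)))   # ~2036: the kind of early arrival safety should plan for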
  •
    With the fast pace of AI development, the problem of preventing its global catastrophic risks arises, but no satisfactory solution has been found. Among several possibilities, confining the AI in a box is usually considered a low-quality solution for AI safety. However, some treacherous AIs could be stopped by effective confinement if it is used as an additional measure. Here, we propose an idealized model of the best possible confinement by aggregating all known ideas in the field of…
  •
    Lucid dreaming (LD) is a fun and interesting activity, but most participants have difficulty attaining lucidity, retaining it during the dream, concentrating on the needed task, and remembering the results. This motivates the search for new ways to enhance lucid dreaming via different induction techniques, including chemicals and electric brain stimulation. However, results are still unstable. An alternative approach is to reach lucid-dreaming-like states via altered states of consciousness…
  •
    Forever and Again
    Journal of Ethics and Emerging Technologies 28 (1): 31-56. 2018.
    This article explores the theoretical conditions necessary for “quantum immortality” (QI) as well as its possible practical implications. It is demonstrated that QI is a particular case of “multiverse immortality”, which rests on two main assumptions: the very large size of the universe; and a copy-friendly theory of personal identity. It is shown that a popular objection concerning the lowering of an observer’s world-share in the case of QI does not succeed, as the world-share decline could be compensated…
  •
    The long, unbearable sufferings of the past and the agonies experienced in some future timelines in which a malevolent AI could torture people for idiosyncratic reasons (s-risks) are a significant moral problem. Such events either already happened or will happen in causally disconnected regions of the multiverse, so it seems unlikely that we can do anything about them. However, at least one purely theoretical way to cure past sufferings exists. If we assume that there is no stable substrate of per…
  •
    Abstract. The presumptuous philosopher (PP) thought experiment lends more credence to the hypothesis that postulates a larger number of observers than to other hypotheses. The PP was suggested as a purely speculative endeavor. However, there is a class of real-world observer-selection effects to which it can be applied, and one of them is the possibility of interstellar panspermia (IP). There are two types of anthropic reasoning: SIA and SSA. SIA implies that my existence is an argument…
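    A minimal numerical sketch of the SIA-style update behind the presumptuous philosopher (the 50/50 prior and the observer counts are illustrative assumptions, not figures from the paper):

        def sia_posterior(prior_large=0.5, n_small=1e9, n_large=1e12):
            """SIA-style update: weight each hypothesis by the number of observers it
            predicts, then renormalize. Counts are illustrative assumptions."""
            w_small = (1 - prior_large) * n_small
            w_large = prior_large * n_large
            return w_large / (w_small + w_large)

        print(f"{sia_posterior():.4f}")   # ~0.999: SIA makes the 'many observers' hypothesis
                                          # nearly certain, while SSA with a common reference
                                          # class leaves the 50/50 prior essentially unchanged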
  •
    Abstract: After the 2017 NY Times publication, the stigma attached to scientific discussion of so-called UAP (Unidentified Aerial Phenomena) was lifted. Now the question arises: how will UAP affect the future of humanity and, especially, the probability of global catastrophic risks? To answer this question, we assume that the 2004 Nimitz case was real and suggest a classification of the possible explanations of the phenomena. The first level consists of mundane explanations:…
  •
    The problem of surviving the end of the observable universe may seem very remote, but there are several reasons it may be important now: a) we may soon need to define the final goals of runaway space colonization and of superintelligent AI; b) the possibility of a solution would demonstrate the plausibility of indefinite life extension; and c) understanding the risks of the universe’s end will help us escape dangers like artificial false vacuum decay. A possible solution depends on the type of…
  •
    Abstract: The field of life extension is full of ideas, but they are unstructured. Here we suggest a comprehensive strategy for reaching personal immortality based on the idea of multilevel defense, in which the next life-preserving plan is implemented if the previous one fails, but all plans need to be prepared simultaneously and in advance. The first plan, Plan A, is surviving until the creation of advanced AI by fighting aging and other causes of death and extending one’s life. Plan B is cryonics, which…
  •
    Long-Term Trajectories of Human Civilization
    with Seth D. Baum, Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, and Roman V. Yampolskiy
    Foresight 21 (1): 53-83. 2019.
    Purpose: This paper aims to formalize long-term trajectories of human civilization as a scientific and ethical field of study. The long-term trajectory of human civilization can be defined as the path that human civilization takes during the entire future time period in which human civilization could continue to exist. Design/methodology/approach: This paper focuses on four types of trajectories: status quo trajectories, in which human civilization persists in a state broadly similar to its current…
  •
    One possible solution of the Fermi paradox is that all civilizations go extinct because they hit some Late Great Filter. Such a universal Late Great Filter must be an unpredictable event that all civilizations unexpectedly encounter, even if they try to escape extinction. This is similar to the “Death in Damascus” paradox from decision theory. However, this unpredictable Late Great Filter could be escaped by choosing a random strategy for humanity’s future development. Yet if all civilizations…
  •
    Abstract. Boltzmann brains (BBs) are minds that appear randomly as a result of thermodynamic or quantum fluctuations. In this article, the question of whether we are BBs, and the observational consequences if so, is explored. To address this problem, a typology of BBs is created, and the evidence is compared with the Simulation Argument. Based on this comparison, we conclude that while the existence of a “normal” BB is either unlikely or irrelevant, BBs with some ordering may have observable consequences…
  •
    Abstract: In the last decade, an urban legend about “glitches in the matrix” has become popular. As is typical of urban legends, there is no evidence for most such stories, and the phenomenon can be explained as resulting from hoaxes, creepypasta, coincidence, and various forms of cognitive bias. In addition, the folk understanding of probability bears little resemblance to actual probability distributions, resulting in the illusion of improbable events, as in the “birthday paradox”…
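    The standard birthday-paradox calculation, included only to illustrate how intuition misjudges coincidence probabilities (a textbook result, not specific to the article):

        def birthday_collision_probability(people, days=365):
            """Probability that at least two of `people` share a birthday, assuming
            uniform, independent birthdays (the standard textbook model)."""
            p_all_distinct = 1.0
            for i in range(people):
                p_all_distinct *= (days - i) / days
            return 1 - p_all_distinct

        print(f"{birthday_collision_probability(23):.3f}")   # ~0.507: a coin flip at just 23 people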
  •
    The goal of the article is to explore the most probable type of simulation in which humanity lives (if any) and how this affects simulation termination risks. We first explore, on purely theoretical grounds, in what kind of simulation humanity is most likely located. We suggest a new patch to the classical simulation argument, showing that we are likely simulated not by our own descendants but by alien civilizations. Based on this, we provide a classification of…
  •
    Abstract: The field of artificial general intelligence (AGI) safety is growing quickly. However, the nature of the human values with which future AGI should be aligned is underdefined. Different AGI safety researchers have suggested different theories about the nature of human values, and these theories contradict one another. This article presents an overview of what AGI safety researchers have written about the nature of human values up to the beginning of 2019. Twenty-one authors were reviewed, and some of them…
  •
    Abstract. The main current approach to AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as a constant, ordered set of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a thorough deconstruction, which…
  •
    Abstract: In the future, it will be possible to create advanced simulations of ancestors in computers. A superintelligent AI could make these simulations very similar to the real past by simulating all of humanity. Such a simulation would use all available data about the past, including internet archives, DNA samples, advanced nanotech-based archeology, and human memories, as well as texts, photos and videos. This means that currently living people would be recreated in such a simulation, and…
  •
    Abstract: As there are no visible ways to create safe self-improving superintelligence, yet it is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI that is able to control and monitor all places in the world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here ways to create the safest and simplest form of…
  •
    Approaches to the Prevention of Global Catastrophic Risks
    Human Prospect 7 (2): 52-65. 2018.
    Many global catastrophic and existential risks (X-risks) threaten the existence of humankind. There are also many ideas for their prevention, but the meta-problem is that these ideas are not structured. This lack of structure means it is not easy to choose the right plan(s) or to implement them in the correct order. I suggest using a “Plan A, Plan B” model, which has shown its effectiveness in planning actions in unpredictable environments. In this approach, Plan B is a backup option, implemented…
  •
    Abstract: Global chemical contamination is an underexplored source of global catastrophic risk that is estimated to have a low a priori probability. However, events such as the decline of pollinating insect populations and the lowering of human male sperm counts hint at some accumulating toxic exposure and thus could become a global catastrophic event if not prevented by future medical advances. We identified several potentially dangerous sources of global chemical contamination, which may happen n…
  •
    Abstract: Four main forms of the Doomsday Argument (DA) exist: Gott’s DA, Carter’s DA, Grace’s DA and the Universal DA. All four forms use different probabilistic logic to predict that the end of human civilization will happen unexpectedly soon, based on our early location in human history. There are hundreds of publications about the validity of the Doomsday Argument. Most attempts to disprove the Doomsday Argument have weak points. As a result, we are uncertain about the validity of DA p…
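    For reference, Gott’s version reduces to a simple interval estimate; a minimal sketch using an illustrative 200,000-year figure for the past duration of Homo sapiens (the figure is an assumption for illustration, not taken from the abstract):

        def gott_interval(past_duration, confidence=0.95):
            """Gott's 'delta t' argument: if the present moment is randomly located
            within the total lifetime of a phenomenon, then with the given confidence
            the remaining duration lies between past*(1-c)/(1+c) and past*(1+c)/(1-c)."""
            c = confidence
            return past_duration * (1 - c) / (1 + c), past_duration * (1 + c) / (1 - c)

        low, high = gott_interval(200_000)    # illustrative past duration of Homo sapiens, in years
        print(f"{low:,.0f} to {high:,.0f}")   # ~5,128 to 7,800,000 years of remaining duration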
  •
    In this article, a classification of the global catastrophic risks connected with the possible existence (or non-existence) of extraterrestrial intelligence is presented. If there are no extraterrestrial intelligences (ETIs) in our light cone, it either means that the Great Filter is behind us, and thus some kind of periodic sterilizing natural catastrophe, such as a gamma-ray burst, should be given a higher probability estimate, or that the Great Filter is ahead of us, and thus a future global catastrophe…
  •
    Abstract: Advances in new technologies create new ways to stimulate the pleasure center of the human brain: via new chemicals, direct application of electricity, electromagnetic fields, “reward hacking” in games and social networks, and, in the future, possibly via genetic manipulation, nanorobots and AI systems. This may have two consequences: a) human life may become more interesting; b) humans may stop participating in any external activities, including work, maintenance, reproduction, and even…
  •
    Purpose: Islands have long been discussed as refuges from global catastrophes; this paper evaluates them systematically, discussing both the positives and negatives of islands as refuges. There are examples of isolated human communities surviving for thousands of years in places like Easter Island. Islands could provide protection against many low-level risks, notably including bio-risks. However, they are vulnerable to tsunamis, bird-transmitted diseases, and other risks. This article explores…
  •
    Abstract: As there are no currently obvious ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent, and is not easy to control. We explore here ways to create the safest and simplest form of…
  •
    Classification of Global Catastrophic Risks Connected with Artificial Intelligence
    with David Denkenberger
    AI and Society 35 (1): 147-163. 2020.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, different types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global…