Robustness to Fundamental Uncertainty in AGI Alignment
Journal of Consciousness Studies 27 (1-2): 225-241. 2020.
The AGI alignment problem has a bimodal distribution of outcomes, with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of philosophical and practical uncertainty…
G Gordon Worley III
Phenomenological AI Safety Research Institute
Phenomenological AI Safety Research Institute, Other (Part-time)
Berkeley, CA, United States of America
Areas of Specialization
Artificial Intelligence Safety
Philosophy of Consciousness
Qualia
Pyrrhonian Skepticism
Metaphilosophical Skepticism
Phenomenalism
Meta-Ethics
Areas of Interest