•  AI Deception: A Survey of Examples, Risks, and Potential Solutions
    with Simon Goldstein, Aidan O'Gara, Michael Chen, and Dan Hendrycks
    This paper argues that a range of current AI systems have learned how to deceive humans. We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth. We first survey empirical examples of AI deception, discussing both special-use AI systems (including Meta's CICERO) built for specific competitive situations, and general-purpose AI systems (such as large language models). Next, we detail several risks from AI deception, such as fraud, elec…