In the artificial intelligence literature, “delusions” are characterized as the generation of output that is unfaithful to the source content. There is an extensive literature on computer-generated delusions, ranging from visual hallucinations, such as the production of nonsensical images in computer vision, to nonsensical text generated by natural language models, but this literature is predominantly taxonomic. In a recent research paper, however, a group of scientists from DeepMind presented a formal treatment of an entire class of delusions in generative AI models (i.e., models based on a transformer architecture, both with and without reinforcement learning from human feedback (RLHF), such as BERT, GPT-3, or the more recent GPT-3.5), referred to as auto-suggestive delusions. Auto-suggestive delusions are not mere unfaithful output: they are self-induced by the transformer models themselves. Typically, these delusions have been subsumed under the concept of exposure bias, but exposure bias alone does not elucidate their nature. To address their nature, I will introduce a formal framework that clarifies the probabilistic delusions capable of explaining exposure bias in a broad manner. This will serve as the foundation for exploring auto-suggestive delusions in language models. I will then examine self- or auto-suggestive delusions by drawing an analogy with the rule-following problem from the philosophy of mind and language. Finally, I will argue that this comprehensive approach suggests that transformers, and large language models in particular, may develop in a manner that touches upon solipsism and the emergence of a private language, in a weak sense.
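The mismatch behind exposure bias can be sketched with a deliberately simple, hypothetical bigram model (not a transformer, and not the formalism of the paper discussed here). Trained only on teacher-forced contexts drawn from real data, the model is left without evidence the moment it must condition on one of its own mistakes; the corpus, the `generate` function, and the perturbed token below are all illustrative assumptions.

```python
from collections import Counter, defaultdict

# Toy training corpus. Under teacher forcing, the model's training
# contexts always come from this real data, never from its own output.
corpus = "a cat sat on the mat . ".split() * 20

# A bigram next-word "model": counts of each observed continuation.
model = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev][nxt] += 1

def generate(start, steps):
    """Free-running generation: condition on the model's OWN output."""
    seq = [start]
    for _ in range(steps):
        ctx = seq[-1]
        if not model[ctx]:  # context never seen during training
            seq.append("<undefined>")
            break
        seq.append(model[ctx].most_common(1)[0][0])
    return seq

# In-distribution context: generation tracks the training data.
print(generate("a", 5))    # ['a', 'cat', 'sat', 'on', 'the', 'mat']
# One earlier sampling error ('sat' -> 'sap') strands the model in a
# context it has no evidence for: its predictions are undefined.
print(generate("sap", 5))  # ['sap', '<undefined>']
```

The point of the sketch is only the asymmetry: at training time the conditioning context is guaranteed to come from the data distribution, while at generation time the model treats its own previously emitted tokens as if they were ground-truth evidence, which is the sense in which such delusions are self-induced.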