This article explores the challenges surrounding generative artificial intelligence (GenAI) in public administrations and its impact on human‒machine interactions within the public sector. First, it aims to deconstruct the reasons for distrust in GenAI in public administrations. The risks currently linked to GenAI in the public sector are often similar to those of conventional AI. However, while some risks remain pertinent, others are less so because GenAI has limited explainability, which, in turn, limits its uses in public administrations. Confidentiality, the marking of GenAI outputs, and errors are specific matters for which responses should be technical as well as cultural, as they push the boundaries of our instrumental conceptions of machines. Second, the article proposes some paradigm shifts in the use of GenAI in public administrations, prompted by the radical change brought about by its language-based nature. GenAI represents a profound break from the "numerical" nature of the AI systems implemented in public administrations to date. The transformative impact of GenAI on the intellectual production of the state raises fears of the replacement, or rather the enslavement, of civil servants to machines. The article argues for the development of critical thinking as a specific skill for civil servants, who have become highly specialized and will have to think with a machine that is eclectic by nature. It anticipates a transformation in the political nature of public administrations, which should lead to greater consideration of the strategic stakes related to training corpora and of our conceptualization of the neutrality of AI.