•  International regulation of autonomous weapon systems (AWS) is increasingly conceived as an exercise in risk management, which requires a shared approach for assessing the risks of AWS. This paper presents a structured approach to risk assessment and regulation for AWS, adapting a qualitative framework inspired by the Intergovernmental Panel on Climate Change (IPCC). It examines the interactions among key risk factors—determinants, drivers, and types—to evaluate the risk magnitude of AWS and esta…
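    A minimal sketch of the kind of qualitative aggregation such a framework implies: hypothetical ratings for risk determinants, drivers, and types are combined into an ordinal risk magnitude. The factor names, the three-level scale, and the worst-case aggregation rule are illustrative assumptions for this sketch, not the framework proposed in the paper.

        # Illustrative sketch only: qualitative AWS risk factors combined into an
        # ordinal risk magnitude. Factor names, scales, and the worst-case rule
        # are assumptions made for exposition, not the paper's framework.
        ORDINAL = {"low": 1, "medium": 2, "high": 3}
        LABEL = {v: k for k, v in ORDINAL.items()}

        def risk_magnitude(determinants, drivers, types):
            """Combine 'low'/'medium'/'high' ratings with a worst-case (max) rule."""
            ratings = [*determinants.values(), *drivers.values(), *types.values()]
            return LABEL[max(ORDINAL[r] for r in ratings)]

        # Hypothetical ratings for a single AWS under assessment
        print(risk_magnitude(
            determinants={"autonomy_level": "high", "human_oversight": "medium"},
            drivers={"deployment_context": "medium"},
            types={"escalation": "medium", "civilian_harm": "high"},
        ))  # -> "high"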
•  Artificial Intelligence for the Internal Democracy of Political Parties
    with Giuliano Formisano, Prathm Juneja, Giulia Sandri, and Luciano Floridi
    The article argues that AI can enhance the measurement and implementation of democratic processes within political parties, known as Intra-Party Democracy (IPD). It identifies the limitations of traditional methods for measuring IPD, which often rely on formal parameters, self-reported data, and tools like surveys. Such limitations lead to the collection of partial data, infrequent updates, and significant demands on resources. To address these issues, the article suggests that specific data managemen…
•  The advent of Generative AI, particularly through Large Language Models (LLMs) like ChatGPT and its successors, marks a paradigm shift in the AI landscape. Advanced LLMs exhibit multimodality, handling diverse data formats, thereby broadening their application scope. However, the complexity and emergent autonomy of these models introduce challenges in predictability and legal compliance. This paper analyses the legal and regulatory implications of Generative AI and LLMs in the European Union con…
•  Regulation by design (RBD) is a growing research field that explores, develops, and criticises the regulative function of design. In this article, we provide a qualitative thematic synthesis of the existing literature. The aim is to explore and analyse RBD's core features, practices, limitations, and related governance implications. To fulfil this aim, we examine the extant literature on RBD in the context of digital technologies. We start by identifying and structuring the core features of RBD,…
•  Cancel Culture: an Essentially Contested Concept?
    Athena - Critical Inquiries in Law, Philosophy and Globalization 1 (2). 2023.
    Cancel culture is a form of societal self-defense that becomes particularly prominent during periods of substantial moral upheaval. If indiscriminately demonized, it can lead to the polarization of incompatible viewpoints. In this brief editorial letter, I consider framing cancel culture as an essentially contested concept (ECC), according to the theory of Walter B. Gallie, with the aim of laying the groundwork for a more productive discourse on it. In particular, I propose that interme…
•  AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act
    with Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo, and Luciano Floridi
    Digital Society 3 (13): 1-29. 2024.
    The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for assessing these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this end, we propose to integrate the AIA…
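    To make scenario-based assessment concrete, here is a minimal sketch in which a hypothetical scenario, rated on ordinal severity and likelihood scales, is mapped onto the AIA's four categories. The scales, the lookup matrix, and the example scenario are assumptions made purely for illustration; they are neither the methodology proposed in the paper nor thresholds defined by the AI Act.

        # Illustrative sketch only: an assumed severity x likelihood matrix
        # mapping a concrete risk scenario onto the AIA's four categories.
        from dataclasses import dataclass

        SEVERITY = ["negligible", "moderate", "severe", "critical"]

        # Rows: likelihood; columns: severity (assumed mapping, not the AIA's).
        MATRIX = {
            "rare":     ["minimal", "minimal", "limited", "high"],
            "possible": ["minimal", "limited", "high", "high"],
            "likely":   ["limited", "high", "high", "unacceptable"],
        }

        @dataclass
        class RiskScenario:
            description: str
            severity: str    # one of SEVERITY
            likelihood: str  # a key of MATRIX

        def aia_category(s: RiskScenario) -> str:
            """Look up the risk tier implied by the scenario's two ratings."""
            return MATRIX[s.likelihood][SEVERITY.index(s.severity)]

        scenario = RiskScenario(
            description="Hypothetical: AI triage tool deployed in emergency care",
            severity="severe",
            likelihood="possible",
        )
        print(aia_category(scenario))  # -> "high"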
•  Taking AI Risks Seriously: a New Assessment Model for the AI Act
    with Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo, and Luciano Floridi
    AI and Society 38 (3): 1-5. 2023.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text,…
•  Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining accountability in terms of answerability, identifying three conditions of possibility (authority recognition, interrogation, and limitation of power), and an…
•  In this paper, I shall set out the pros and cons of assigning legal personhood to artificial intelligence systems under civil law. More specifically, I will provide arguments supporting a functionalist justification for conferring personhood on AIs, and I will try to identify what content this legal status might have from a regulatory perspective. Being a person in law implies the entitlement to one or more legal positions. I will mainly focus on liability, as it is one of the main grounds for th…
•  A conceptual framework for legal personality and its application to AI
    with Giorgio Bongiovanni and Giovanni Sartor
    Jurisprudence 13 (2): 194-219. 2022.
    In this paper, we provide an analysis of the concept of legal personality and discuss whether personality may be conferred on artificial intelligence systems (AIs). Legal personality will be presented as a doctrinal category that holds together bundles of rights and obligations; as a result, we first frame it as a node of inferential links between factual preconditions and legal effects. However, this inferentialist reading does not account for the ‘background reasons’ of legal personality, i.e.…