•  17
    This article provides a methodology for the interpretation of AI ethics principles to specify ethical criteria for the development and deployment of AI systems in high-risk domains. The methodology consists of a three-step process deployed by an independent, multi-stakeholder ethics board to: (1) identify the appropriate level of abstraction for modelling the AI lifecycle; (2) interpret prescribed principles to extract specific requirements to be met at each step of the AI lifecycle; and (3) def…
  •  7
    Ethical governance of artificial intelligence for defence: normative tradeoffs for principle to practice guidance
    with Alexander Blanchard and Christopher Thomas
    AI and Society 1-14. forthcoming.
    The rapid diffusion of artificial intelligence (AI) technologies in the defence domain raises challenges for the ethical governance of these systems. A recent shift from the what to the how of AI ethics sees a nascent body of literature published by defence organisations focussed on guidance to implement AI ethics principles. These efforts have neglected a crucial intermediate step between principles and guidance concerning the elicitation of ethical requirements for specifying the guidance. In …
  •  20
    Artificial intelligence in support of the circular economy: ethical considerations and a path forward
    with Huw Roberts, Joyce Zhang, Ben Bariach, Josh Cowls, Ben Gilburt, Prathm Juneja, Andreas Tsamados, Marta Ziosi, and Luciano Floridi
    AI and Society 1-14. forthcoming.
    The world’s current model for economic development is unsustainable. It encourages high levels of resource extraction, consumption, and waste that undermine positive environmental outcomes. Transitioning to a circular economy (CE) model of development has been proposed as a sustainable alternative. Artificial intelligence (AI) is a crucial enabler for CE. It can aid in designing robust and sustainable products, facilitate new circular business models, and support the broader infrastructures need…
  •  10
    Smart Cities: Reviewing the Debate About Their Ethical Implications
    with Marta Ziosi, Benjamin Hewitt, Prathm Juneja, and Luciano Floridi
    In Francesca Mazzi (ed.), The 2022 Yearbook of the Digital Governance Research Group, Springer Nature Switzerland. pp. 11-38. 2023.
    This paper considers a host of definitions and labels attached to the concept of smart cities to identify four dimensions that ground a review of ethical concerns emerging from the current debate. These are: (1) network infrastructure, with the corresponding concerns of control, surveillance, and data privacy and ownership; (2) post-political governance, embodied in the tensions between public and private decision-making and cities as post-political entities; (3) social inclusion, expressed in t…
  •  4
    The SARS-CoV-2 (COVID-19) pandemic has caused social and economic devastation. As the milestone of two years of ‘living with the virus’ approaches, governments and businesses are attempting to develop means of reopening society whilst still protecting public health. However, developing interventions – particularly technological interventions – that find a safe, socially acceptable, and ethically justifiable balance between these two seemingly opposing demands is extremely challenging. There is n…
  •  14
    This chapter provides an overview of six topics related to the governance, ethical, legal, and social implications of artificial intelligence (AI) for sustainable development goals (SDGs) initiatives. We identified six common challenges, and related opportunities to mitigate them, as discussed by the authors of the chapters in the book The Ethics of Artificial Intelligence for the Sustainable Development Goals. They are (1) governance and collaboration, (2) private invest…
  •  127
    A modal type theory for formalizing trusted communications
    Journal of Applied Logic 10 (1): 92-114. 2012.
    This paper introduces a multi-modal polymorphic type theory to model epistemic processes characterized by trust, defined as a second-order relation affecting the communication process between sources and a receiver. In this language, a set of senders is expressed by a modal prioritized context, whereas the receiver is formulated in terms of a contextually derived modal judgement. Introduction and elimination rules for modalities are based on the polymorphism of terms in the language. This leads …
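    To make the architecture described in this abstract more concrete, here is a minimal, purely illustrative pair of rules, not the paper's actual calculus: it assumes a modality indexed by a set of senders S, a senders' context Γ_S, a receiver's context Δ_R, and a second-order trust relation Trust(R, S) that licenses the receiver R to discharge the modality.
```latex
% Illustrative sketch only: hypothetical introduction/elimination rules for a
% trust-indexed modality. The paper's actual rules rely on term polymorphism
% and a prioritized context of senders, which are not reproduced here.
\[
\frac{\Gamma_{S} \vdash \varphi}
     {\Delta_{R} \vdash \Box_{S}\,\varphi}\;(\Box\text{-intro})
\qquad
\frac{\Delta_{R} \vdash \Box_{S}\,\varphi \qquad \mathrm{Trust}(R,S)}
     {\Delta_{R} \vdash \varphi}\;(\Box\text{-elim})
\]
```
    Read informally: content asserted in the senders' context reaches the receiver only under the modality, and the receiver may detach the bare content φ only when the trust relation holds.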
  •  2522
    This article presents the first systematic analysis of the ethical challenges posed by recommender systems through a literature review. The article identifies six areas of concern, and maps them onto a proposed taxonomy of different kinds of ethical impact. The analysis uncovers a gap in the literature: currently, user-centred approaches do not consider the interests of a variety of other stakeholders—as opposed to just the receivers of a recommendation—in assessing the ethical impacts of a reco…
  •  1704
    The ethics of digital well-being: a thematic review
    Science and Engineering Ethics 26 (4). 2020.
    This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term ‘digital well-being’ is used to refer to the impact of digital technologies on what it means to live a life that is good for a human being. The review explores the existing literature on the ethics of digital well-being, with the goal of mapping the current debate and identifying open questions for future research. The review identifies major issues related to several k…
  •  853
    AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act
    with Claudio Novelli, Federico Casolari, Antonino Rotolo, and Luciano Floridi
    Digital Society 3 (13): 1-29. 2024.
    The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this end, we propose to integrate the AIA…
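    As a rough illustration of scenario-based risk scoring in this spirit, and not the methodology proposed in the paper, the sketch below derives a risk magnitude from hypothetical likelihood and severity values attached to risk scenarios and maps it onto the AIA's four named categories; the class names, aggregation rule, and thresholds are assumptions chosen only for the example.
```python
from dataclasses import dataclass


@dataclass
class RiskScenario:
    """A hypothetical real-world risk scenario for an AI system (illustrative only)."""
    name: str
    likelihood: float  # estimated probability of the harmful outcome, in [0, 1]
    severity: float    # normalised severity of the harm, in [0, 1]


def risk_magnitude(scenarios: list[RiskScenario]) -> float:
    """Aggregate scenario risks as the maximum expected harm.

    This aggregation rule is an assumption, not the paper's method.
    """
    return max(s.likelihood * s.severity for s in scenarios)


def aia_category(magnitude: float) -> str:
    """Map a magnitude onto the AIA's four named categories using
    hypothetical thresholds chosen purely for illustration."""
    if magnitude >= 0.8:
        return "unacceptable"
    if magnitude >= 0.5:
        return "high"
    if magnitude >= 0.2:
        return "limited"
    return "minimal"


if __name__ == "__main__":
    scenarios = [
        RiskScenario("biased triage recommendation", likelihood=0.3, severity=0.9),
        RiskScenario("temporary service outage", likelihood=0.6, severity=0.2),
    ]
    magnitude = risk_magnitude(scenarios)
    print(f"magnitude={magnitude:.2f}, category={aia_category(magnitude)}")
```
    Taking the maximum expected harm is a deliberately conservative choice; weighting or combining scenarios differently is exactly the kind of decision a proportional assessment methodology would need to make explicit.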
  •  643
    Taking AI Risks Seriously: a New Assessment Model for the AI Act
    with Claudio Novelli, Federico Casolari, Antonino Rotolo, and Luciano Floridi
    AI and Society 38 (3): 1-5. 2023.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text,…
  •  611
    The debate on the moral responsibilities of online service providers
    Science and Engineering Ethics 22 (6): 1575-1603. 2016.
    Online service providers (OSPs), such as AOL, Facebook, Google, Microsoft, and Twitter, significantly shape the informational environment and influence users’ experiences and interactions within it. There is general agreement on the centrality of OSPs in information societies, but little consensus about what principles should shape their moral responsibilities and practices. In this article, we analyse the main contributions to the debate on the moral responsibilities of OSPs. By endorsing the method …
  •  330
    A praxical solution of the symbol grounding problem
    Minds and Machines 17 (4): 369-389. 2007.
    This article is the second step in our research into the Symbol Grounding Problem (SGP). In a previous work, we defined the main condition that must be satisfied by any strategy in order to provide a valid solution to the SGP, namely the zero semantic commitment condition (Z condition). We then showed that all the main strategies proposed so far fail to satisfy the Z condition, although they provide several important lessons to be followed by any new proposal. Here, we develop a new solution of …
  •  13
    Jus in bello Necessity, The Requirement of Minimal Force, and Autonomous Weapons Systems
    with Alexander Blanchard
    Journal of Military Ethics 21 (3): 286-303. 2022.
    In this article we focus on the jus in bello principle of necessity for guiding the use of autonomous weapons systems (AWS). We begin our analysis with an account of the principle of necessity as entailing the requirement of minimal force found in Just War Theory, before highlighting the absence of this principle in existing work on AWS. Overlooking this principle means discounting the obligations that combatants have towards one another in times of war. We argue that the requirement of minimal …
  •  8
    In this article, we analyse the role that artificial intelligence (AI) could play, and is playing, in combatting global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges alr…
  •  13
    Open source intelligence and AI: a systematic review of the GELSI literature
    with Riccardo Ghioni and Luciano Floridi
    AI and Society 1-16. forthcoming.
    Today, open source intelligence (OSINT), i.e., information derived from publicly available sources, makes up between 80 and 90 percent of all intelligence activities carried out by Law Enforcement Agencies (LEAs) and intelligence services in the West. Developments in data mining, machine learning, visual forensics and, most importantly, the growing computing power available for commercial use, have enabled OSINT practitioners to speed up, and sometimes even automate, intelligence collection and …
  •  76
    The modern abundance and prominence of data have led to the development of “data science” as a new field of enquiry, along with a body of epistemological reflections upon its foundations, methods, and consequences. This article provides a systematic analysis and critical review of significant open problems and debates in the epistemology of data science. We propose a partition of the epistemology of data science into the following five domains: (i) the constitution of data science; (ii) the kind…
  •  21
    Smart cities: reviewing the debate about their ethical implications
    with Marta Ziosi, Benjamin Hewitt, Prathm Juneja, and Luciano Floridi
    AI and Society 1-16. forthcoming.
    This paper considers a host of definitions and labels attached to the concept of smart cities to identify four dimensions that ground a review of ethical concerns emerging from the current debate. These are: network infrastructure, with the corresponding concerns of control, surveillance, and data privacy and ownership; post-political governance, embodied in the tensions between public and private decision-making and cities as post-political entities; social inclusion, expressed in the aspects o…
  •  23
    A Comparative Analysis of the Definitions of Autonomous Weapons Systems
    with Alexander Blanchard
    Science and Engineering Ethics 28 (5): 1-22. 2022.
    In this report we focus on the definition of autonomous weapons systems (AWS). We provide a comparative analysis of existing official definitions of AWS as provided by States and international organisations, such as the ICRC and NATO. The analysis highlights that the definitions focus on different aspects of AWS and hence lead to different approaches to addressing the ethical and legal problems of these weapons systems. This approach is detrimental both in terms of fostering an understanding of AWS a…
  •  1559
    Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining accountability in terms of answerability, identifying three conditions of possibility (authority recognition, interrogation, and limitation of power), and an…
  •  24
    Accepting Moral Responsibility for the Actions of Autonomous Weapons Systems—a Moral Gambit
    with Alexander Blanchard
    Philosophy and Technology 35 (3): 1-24. 2022.
    In this article, we focus on the attribution of moral responsibility for the actions of autonomous weapons systems (AWS). To do so, we suggest that the responsibility gap can be closed if human agents can take meaningful moral responsibility for the actions of AWS. This is a moral responsibility attributed to individuals in a justified and fair way and which is accepted by individuals as an assessment of their own moral character. We argue that, given the unpredictability of AWS, meaningful mora…
  •  3
    The Ethics of Information Technologies
    with Keith Miller
    Routledge. 2016.
    This volume collects key influential papers that have animated the debate about information and computer ethics over the past three decades, covering issues such as privacy, online trust, anonymity, value-sensitive design, machine ethics, professional conduct, and the moral responsibility of software developers. These previously published articles have set the tone of the discussion, and bringing them together here in one volume provides lecturers and students with a one-stop resource with which to navig…
  •  16
    Autonomous weapon systems and jus ad bellum
    with Alexander Blanchard
    AI and Society 1-7. forthcoming.
    In this article, we focus on the scholarly and policy debate on autonomous weapon systems (AWS), and particularly on the objections to the use of these weapons that rest on the jus ad bellum principles of proportionality and last resort. Both objections rest on the idea that AWS may increase the incidence of war by reducing the costs of going to war or by providing propagandistic value. We argue that whilst these objections offer pressing concerns in their own right, they suffer from important limitati…
  •  3
    The World Health Organisation declared COVID-19 a global pandemic on 11th March 2020, recognising that the underlying SARS-CoV-2 has caused the greatest global crisis since World War II. In this chapter, we present a framework to evaluate whether and to what extent the use of digital systems that track and/or trace potentially infected individuals is not only legal but also ethical.
  •  6
    In this chapter, I draw on my previous work on trust and cybersecurity to offer a definition of trust and trustworthiness, to understand to what extent trusting AI for cybersecurity tasks is justified, and to identify the measures that can be put in place to rely on AI in cases where trust is not justified but the use of AI is still beneficial.
  •  9
    Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions
    with Thomas C. King, Nikita Aggarwal, and Luciano Floridi
    In Josh Cowls & Jessica Morley (eds.), The 2020 Yearbook of the Digital Ethics Lab, Springer Verlag. pp. 195-227. 2021.
    Artificial Intelligence research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, termed in this chapter AI-Crime (AIC). AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated mark…
  •  71
    The ethics of algorithms: key problems and solutions
    with Andreas Tsamados, Nikita Aggarwal, Josh Cowls, Jessica Morley, Huw Roberts, and Luciano Floridi
    AI and Society 37 (1): 215-230. 2022.
    Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016 (Mittelstadt et al. 2016). The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated anal…
  •  2518
    The modern abundance and prominence of data has led to the development of “data science” as a new field of enquiry, along with a body of epistemological reflections upon its foundations, methods, and consequences. This article provides a systematic analysis and critical review of significant open problems and debates in the epistemology of data science. We propose a partition of the epistemology of data science into the following five domains: (i) the constitution of data science; (ii) the kind …