Andrea Ferrario

ETH Zurich
  • The European Union’s Digital Services Act requires providers of very large online platforms and search engines to assess and mitigate systemic risks. However, regulators and providers both face a fundamental challenge we refer to as the ‘convergence problem’: the inherent difficulty of achieving methodological and definitional agreement across the diverse approaches used to investigate systemic risks. To address this challenge, we propose a novel dual-track framework that distinguishes between a permissive _R…Read more
  •
    Pluralism isn’t Just Methodological, it’s Political: A Response to Wörsdörfer
    with Michele Loi and Matteo Fabbri
    Philosophy and Technology 38 (2): 1-3. 2025.
  •
    Expanding HCXAI in the Age of AI Agents: Challenges and Recommendations
    ACM CHI 2025 Workshop on Human-Centered Explainable Artificial Intelligence. 2025.
    AI agents (autonomous, multi-tasking systems beyond traditional AI) will reshape human-AI interaction, shifting the focus from a simple human-versus-system autonomy debate to a triadic model: human users, AI agents, and digital resources. Further, the widespread adoption of agentic systems as consumer products will accelerate the large-scale integration of novel human-AI agent hybridization into everyday life, necessitating renewed examinations of agent identity, accountability, and trust in AI age…Read more
  •
    The ongoing debate about reliance and trust in artificial intelligence (AI) systems continues to challenge our understanding and application of these concepts in human-AI interactions. In this work, we argue for a pragmatic approach to defining reliance and trust in AI. Our approach is grounded in three expectations that should guide human-AI interactions: appropriate reliance, efficiency, and motivation by objective reasons. By focusing on these expectations, we show that it is possible to reco…Read more
  •
    Social Misattributions in Conversations with Large Language Models
    with Alberto Termine and Alessandro Facchini
    We investigate a typology of socially and ethically risky phenomena emerging from the interaction between humans and large language model (LLM)-based conversational systems. As they relate to the way in which humans attribute social identity components, such as role and face, to LLM-based conversational systems, we term these phenomena 'social misattributions.' Drawing on classical theories of social identity and recent debates in the philosophy of technology, we argue that these social misattri…Read more
  •
    We address an open problem in the philosophy of artificial intelligence (AI): how to justify the epistemic attitudes we have towards the trustworthiness of AI systems. The problem is important, as providing reasons to believe that AI systems are worthy of trust is key to relying appropriately on these systems in human-AI interactions. In our approach, we consider the trustworthiness of an AI as a time-relative, composite property of the system with two distinct facets. One is the actual trustworthi…Read more
  •
    Twenty-four years of empirical research on trust in AI: a bibliometric review of trends, overlooked issues, and future directions
    with Michaela Benk, Sophie Kerstan, and Florian von Wangenheim
    AI and Society 40 (4): 2083-2106. 2025.
    Trust is widely regarded as a critical component to building artificial intelligence (AI) systems that people will use and safely rely upon. As research in this area continues to evolve, it becomes imperative that the research community synchronizes its empirical efforts and aligns on the path toward effective knowledge creation. To lay the groundwork toward achieving this objective, we performed a comprehensive bibliometric analysis, supplemented with a qualitative content analysis of over two …Read more
  •
    Experts or Authorities? The Strange Case of the Presumed Epistemic Superiority of Artificial Intelligence Systems
    with Alessandro Facchini and Alberto Termine
    Minds and Machines 34 (3): 1-27. 2024.
    The high predictive accuracy of contemporary machine learning-based AI systems has led some scholars to argue that, in certain cases, we should grant them epistemic expertise and authority over humans. This approach suggests that humans would have the epistemic obligation of relying on the predictions of a highly accurate AI system. Contrary to this view, in this work we claim that it is not possible to endow AI systems with a genuine account of epistemic expertise. In fact, relying on accounts …Read more
  •
    The Patient Preference Predictor: A Timely Boost for Personalized Medicine
    with Nikola Biller-Andorno and Armin Biller
    American Journal of Bioethics 24 (7): 35-38. 2024.
    The future of medicine will be predictive, preventive, personalized, and participatory. Recent technological advancements bolster the realization of this vision, particularly through innovations in...
  •
    Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach
    with Alberto Termine and Alessandro Facchini
    Available at https://arxiv.org/abs/2403.17873 (extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024 (HCXAI24)). Forthcoming.
    Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact, LLMs, which are remarkably capabl…Read more
  •
    Large language models in medical ethics: useful but not expert
    with Nikola Biller-Andorno
    Journal of Medical Ethics 50 (9): 653-654. 2024.
    Large language models (LLMs) have now entered the realm of medical ethics. In a recent study, Balas et al. examined GPT-4, a commercially available LLM, assessing its performance in generating responses to diverse medical ethics cases. Their findings reveal that GPT-4 demonstrates an ability to identify and articulate complex medical ethical issues, although its proficiency in encoding the depth of real-world ethical dilemmas remains an avenue for improvement. Investigating the i…Read more
  •
    We address an open problem in the epistemology of artificial intelligence (AI), namely, the justification of the epistemic attitudes we have towards the trustworthiness of AI systems. We start from a key consideration: the trustworthiness of an AI is a time-relative property of the system, with two distinct facets. One is the actual trustworthiness of the AI, and the other is the perceived trustworthiness of the system as assessed by its users while interacting with it. We show that credences, n…Read more
  •
    AI knows best? Avoiding the traps of paternalism and other pitfalls of AI-based patient preference prediction
    with Sophie Gloeckler and Nikola Biller-Andorno
    Journal of Medical Ethics 49 (3): 185-186. 2023.
    In our recent article ‘The Ethics of the Algorithmic Prediction of Goal of Care Preferences: From Theory to Practice’, we aimed to ignite a critical discussion on why and how to design artificial intelligence (AI) systems assisting clinicians and next-of-kin by predicting goal of care preferences for incapacitated patients. Here, we would like to thank the commentators for their valuable responses to our work. We identified three core themes in their commentaries: (1) the risks of AI paternalis…Read more
  •
    Transparency as design publicity: explaining and justifying inscrutable algorithms
    with Michele Loi and Eleonora Viganò
    Ethics and Information Technology 23 (3): 253-263. 2020.
    In this paper we argue that transparency of machine learning algorithms, just as explanation, can be defined at different levels of abstraction. We criticize recent attempts to identify the explanation of black box algorithms with making their decisions (post-hoc) interpretable, focusing our discussion on counterfactual explanations. These approaches to explanation simplify the real nature of the black boxes and risk misleading the public about the normative features of a model. We propose a new…Read more
  •
    Ethics of the algorithmic prediction of goal of care preferences: from theory to practice
    with Sophie Gloeckler and Nikola Biller-Andorno
    Journal of Medical Ethics 49 (3): 165-174. 2023.
    Artificial intelligence (AI) systems are quickly gaining ground in healthcare and clinical decision-making. However, it is still unclear in what way AI can or should support decision-making that is based on incapacitated patients’ values and goals of care, which often requires input from clinicians and loved ones. Although the use of algorithms to predict patients’ most likely preferred treatment has been discussed in the medical ethics literature, no example has been realised in clinical practi…Read more
  •
    Trust and monitoring are traditionally antithetical concepts. Describing trust as a property of a relationship of reliance, we introduce a theory of trust and monitoring, which uses mathematical models based on two classes of functions, including _q_-exponentials, and relates the levels of trust to the costs of monitoring. As opposed to several accounts of trust that attempt to identify the special ingredient of reliance and trust relationships, our theory characterizes trust as a quantitative p…Read more
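    As a pointer for the notation only: a minimal sketch of the _q_-exponential in its standard (Tsallis) form; the paper’s exact parametrization, and how it maps trust levels to monitoring costs, may differ:
    $$e_q(x) = \left[1 + (1-q)\,x\right]_+^{1/(1-q)}, \qquad q \neq 1,$$
    which recovers the ordinary exponential $e^{x}$ in the limit $q \to 1$, giving a one-parameter family of decay (or growth) curves suitable for relating a quantitative trust level to the cost of monitoring.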
  •
    Real engines of the artificial intelligence revolution, machine learning models and algorithms are nowadays embedded in many services and products around us. We argue that, as a society, it is now necessary to transition into a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs, in order to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining …Read more
  •
    In their article ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’, Durán and Jongsma discuss the epistemic and ethical challenges raised by black box algorithms in medical practice. The opacity of black box algorithms is an obstacle to the trustworthiness of their outcomes. Moreover, the use of opaque algorithms is not normatively justified in medical practice. The authors introduce a formalism, called computational reliabilism, which allow…Read more
  •
    Trust does not need to be human: it is possible to trust medical AI
    with Michele Loi and Eleonora Viganò
    Journal of Medical Ethics 47 (6): 437-438. 2021.
    In his recent article ‘Limits of trust in medical AI,’ Hatherley argues that, if we believe that the motivations that are usually recognised as relevant for interpersonal trust have to be applied to interactions between humans and medical artificial intelligence, then these systems do not appear to be the appropriate objects of trust. In this response, we argue that it is possible to discuss trust in medical artificial intelligence (AI), if one refrains from simply assuming that trust describes …Read more
  •
    In Search of a Mission: Artificial Intelligence in Clinical Ethics
    with Nikola Biller-Andorno and Sophie Gloeckler
    American Journal of Bioethics 22 (7): 23-25. 2022.
    Artificial intelligence has found its way into many areas of human life, serving a range of purposes. Sometimes AI tools are designed to help humans eliminate high-volume, tedious, routine tas...
  •
    AI support for ethical decision-making around resuscitation: proceed with care
    with Nikola Biller-Andorno, Susanne Joebges, Tanja Krones, Federico Massini, Phyllis Barth, Georgios Arampatzis, and Michael Krauthammer
    Journal of Medical Ethics 48 (3): 175-183. 2022.
    Artificial intelligence (AI) systems are increasingly being used in healthcare, thanks to the high level of performance that these systems have proven to deliver. So far, clinical applications have focused on diagnosis and on prediction of outcomes. It is less clear in what way AI can or should support complex clinical decisions that crucially depend on patient preferences. In this paper, we focus on the ethical questions arising from the design, development and deployment of AI systems to suppo…Read more