•  Recently, a growing number of experts in artificial intelligence (AI) and medicine have begun to suggest that the use of AI systems, particularly machine learning (ML) systems, is likely to humanise the practice of medicine by substantially improving the quality of clinician-patient relationships. In this thesis, however, I argue that medical ML systems are more likely to negatively impact these relationships than to improve them. In particular, I argue that the use of medical ML systems is lik…
  •  A novel advantage of the use of machine learning (ML) systems in medicine is their potential to continue learning from new data after implementation in clinical practice. To date, considerations of the ethical questions raised by the design and use of adaptive machine learning systems in medicine have, for the most part, been confined to discussion of the so-called “update problem,” which concerns how regulators should approach systems whose performance and parameters continue to change even aft…
  •  Generative AI entails a credit–blame asymmetry
    with Sebastian Porsdam Mann, Brian D. Earp, Sven Nyholm, John Danaher, Nikolaj Møller, Hilary Bowman-Smart, Julian Koplin, Monika Plozza, Daniel Rodger, Peter V. Treit, Gregory Renard, John McMillan, and Julian Savulescu
    Nature Machine Intelligence 5 (5): 472-475. 2023.
    Generative AI programs can produce high-quality written and visual content that may be used for good or ill. We argue that a credit–blame asymmetry arises for assigning responsibility for these outputs and discuss urgent ethical and policy implications focused on large-scale language models.
  •  Diachronic and synchronic variation in the performance of adaptive machine learning systems: the ethical challenges
    Journal of the American Medical Informatics Association 30 (2): 361-366. 2023.
    Objectives: Machine learning (ML) has the potential to facilitate “continual learning” in medicine, in which an ML system continues to evolve in response to exposure to new data over time, even after being deployed in a clinical setting. In this article, we provide a tutorial on the range of ethical issues raised by the use of such “adaptive” ML systems in medicine that have, thus far, been neglected in the literature. Target audience: The target audiences for this tutorial are the developers of…
  •  The virtues of interpretable medical artificial intelligence
    Cambridge Quarterly of Healthcare Ethics 1-10. forthcoming.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudi…
  •  The promise and perils of AI in medicine
    International Journal of Chinese and Comparative Philosophy of Medicine 17 (2): 79-109. 2019.
    What does Artificial Intelligence (AI) have to contribute to health care? And what should we be looking out for if we are worried about its risks? In this paper we offer a survey, and initial evaluation, of hopes and fears about the applications of artificial intelligence in medicine. AI clearly has enormous potential as a research tool, in genomics and public health especially, as well as a diagnostic aid. It’s also highly likely to impact on the or…
  •  Limits of trust in medical AI
    Journal of Medical Ethics 46 (7): 478-481. 2020.
    Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that t…
  •  In Deep Medicine, Eric Topol argues that the development of artificial intelligence (AI) for healthcare will lead to a dramatic shift in the culture and practice of medicine. Topol claims that, rather than replacing physicians, AI could function alongside them in order to allow them to devote more of their time to face-to-face patient care. Unfortunately, these high hopes for AI-enhanced medicine fail to appreciate a number of factors that, we believe, suggest a radically different picture f…
  •  Advocates of physician-assisted suicide (PAS) often argue that, although the provision of PAS is morally permissible for persons with terminal, somatic illnesses, it is impermissible for patients suffering from psychiatric conditions. This claim is justified on the basis that psychiatric illnesses have certain morally relevant characteristics and/or implications that distinguish them from their somatic counterparts. In this paper, I address three arguments of this sort. First, that psychiatric conditi…