This paper motivates institutional epistemic trust as an important ethical consideration
informing the responsible development and implementation of AI technologies (or AI-
Inclusivity) in healthcare. Drawing on recent literature on epistemic trust and public trust in
science, we examine the conditions under which we can have institutional epistemic trust in AI-
inclusive healthcare systems and their members' medical information providers. In particular, we
argue that institutional epistemic trust in AI-inclusive healthcare depends, in part, on the
reliability of AI-inclusive medical practices and programs, on knowledge and understanding of
these practices amongst different stakeholders, on their effect on the epistemic and
communicative duties and burdens of medical professionals, and, finally, on their interaction
with ethical values and interests as well as the background socio-political conditions that
shape AI-inclusive healthcare systems. To assess the
applicability of these conditions, we explore a proposal for AI-inclusivity within the Dutch
Newborn Screening Program, thereby illustrating the importance, scope, and potential challenges
of fostering and maintaining institutional epistemic trust in a context where generating, assessing,
and providing reliable and timely screening results for genetic risk is of high priority. Finally, to
underscore the broader relevance of our discussion and case study, we offer suggestions for
strategies, interventions, and measures supporting AI-inclusivity in healthcare more widely.