The adoption of AI in the health sector brings both benefits and harms to various stakeholder groups and entities. There are critical risks involved in using AI systems in the health domain; risks that can have severe, irreversible, and life-changing impacts on people’s lives. As innovative AI-based applications are developed for the medical and healthcare sectors, new types of risks emerge. To benefit from novel AI applications in this domain, these risks must be managed in order to protect the fundamental interests and rights of those affected. Doing so increases the degree to which such systems are ethically acceptable, legally permissible, and socially sustainable. In this paper, we first discuss the necessity of AI risk management in the health domain from ethical, legal, and societal perspectives. We then present HART, a taxonomy of risks associated with the use of AI systems in the health domain. HART reflects the risks observed in a variety of real-world incidents caused by the use of AI in the health sector. Lastly, we discuss the implications of the taxonomy for different stakeholder groups and for further research.