In this paper we take up the issue of near-term artificial intelligence (AI). "Near-term AI" denotes artificial intelligence algorithms that are already in place in a variety of public and private sectors, guiding decisions that pertain to advertising, credit ratings, and sentencing in the justice system. Our aim here is to contribute to a critical discussion of the ways that AI is already being widely used in decision-making procedures in these areas. We will argue that developers and deployers of AI systems, in senses to be defined, bear a particular kind of responsibility for the moral consequences of near-term AI. We will present a tool to aid developers and deployers in engaging in the moral reflection we argue is required of them, both to help them meet their moral obligations and to help address the material risks posed by what we take to be the status quo of actual near-term AI development. This paper can be understood as a contribution to the field of technology assessment, but instead of suggesting policy revisions, we will propose a framework for ethical analysis that can be used to facilitate more robust ethical reflection in AI development and implementation.