  •
    Kantian Ethics and the Attention Economy: Duty and Distraction
    Palgrave Macmillan. 2024.
    In this open access book, Timothy Aylsworth and Clinton Castro draw on the deep well of Kantian ethics to argue that we have moral duties, both to ourselves and to others, to protect our autonomy from the threat posed by the problematic use of technology. The problematic use of technologies like smartphones threatens our autonomy in a variety of ways, and critics have only begun to appreciate the vast scope of this problem. In the last decade, we have seen a flurry of books making “self-help” ar…
  •
    Does Predictive Sentencing Make Sense?
    Inquiry: An Interdisciplinary Journal of Philosophy. forthcoming.
    This paper examines the practice of using predictive systems to lengthen the prison sentences of convicted persons when the systems forecast a higher likelihood of re-offense or re-arrest. There has been much critical discussion of technologies used for sentencing, including questions of bias and opacity. However, there hasn’t been a discussion of whether this use of predictive systems makes sense in the first place. We argue that it does not by showing that there is no plausible theory of punis…
  •
    The Duty to Promote Digital Minimalism in Group Agents
    In Kantian Ethics and the Attention Economy: Duty and Distraction, Palgrave Macmillan. 2024.
    In this chapter, we turn our attention to the effects of the attention economy on our ability to act autonomously as a group. We begin by clarifying the sorts of groups we are concerned with: structured groups (groups sufficiently organized that it makes sense to attribute agency to the group itself). Drawing on recent work by Purves and Davis (2022), we describe the essential roles of trust (i.e., depending on groups to fulfill their commitments) and trustworthiness (i.e., the prope…
  •
    Holm (2022) argues that a class of algorithmic fairness measures, which he refers to as the ‘performance parity criteria’, can be understood as applications of John Broome’s Fairness Principle. We argue that the performance parity criteria cannot be read this way. This is because in the relevant context, the Fairness Principle requires the equalization of actual individuals’ individual-level chances of obtaining some good (such as an accurate prediction from a predictive system), but the performa…
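    A toy numerical illustration of the distinction drawn above (our construction with hypothetical numbers, not an example from the paper): two groups can satisfy a group-level performance parity criterion, equal expected accuracy, even though the individual-level chances that the Fairness Principle concerns are unequal.

      # Toy illustration with hypothetical numbers: group-level performance
      # parity can hold while individuals' chances of an accurate prediction differ.

      # Each entry is one individual's chance of receiving an accurate prediction.
      group_a = [0.8, 0.8, 0.8, 0.8]   # every individual faces the same 0.8 chance
      group_b = [1.0, 1.0, 0.6, 0.6]   # chances differ across individuals

      def expected_accuracy(chances):
          """Group-level accuracy: the average of individual-level chances."""
          return sum(chances) / len(chances)

      print(expected_accuracy(group_a))  # 0.8
      print(expected_accuracy(group_b))  # 0.8 -> parity holds at the group level

      # Yet a member of group_b facing a 0.6 chance is worse off than anyone in
      # group_a, so equal group-level performance does not by itself equalize
      # individuals' chances of obtaining the good (an accurate prediction).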
  •
    Egalitarian Machine Learning
    Res Publica 29 (2). 2023.
    Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns (2017), we take ‘fairness’ in this context to be a placeholder for a var…
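    As an illustration of what a mathematically precise fairness measure can look like, here is a minimal Python sketch (our example with hypothetical data, not code or a measure endorsed by the paper) of demographic parity, which compares rates of positive decisions across groups.

      # Minimal sketch of one fairness measure, demographic (statistical) parity:
      # the rate of positive decisions should be (roughly) equal across groups.
      # The data below is hypothetical.

      def positive_rate(decisions):
          """Share of individuals receiving the positive decision (1)."""
          return sum(decisions) / len(decisions)

      def demographic_parity(decisions_a, decisions_b, tolerance=0.05):
          """True if positive-decision rates differ by at most `tolerance`."""
          return abs(positive_rate(decisions_a) - positive_rate(decisions_b)) <= tolerance

      # Group A receives positives at a rate of 0.50, group B at 0.25.
      print(demographic_parity([1, 0, 1, 0], [1, 0, 0, 0]))  # False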
  •
    Social Media, Emergent Manipulation, and Political Legitimacy
    In Michael Klenk & Fleur Jongepier (eds.), The Philosophy of Online Manipulation, Routledge. pp. 353-369. 2022.
    Psychometrics firms such as Cambridge Analytica (CA) and troll factories such as the Internet Research Agency (IRA) have had a significant effect on democratic politics, through narrow targeting of political advertising (CA) and concerted disinformation campaigns on social media (IRA) (U.S. Department of Justice 2019; Select Committee on Intelligence, United States Senate 2019; DiResta et al. 2019). It is natural to think that such activities manipulate individuals and, hence, are wrong. Yet, as…
  •
    On the Duty to Be an Attention Ecologist
    Philosophy and Technology 35 (1): 1-22. 2022.
    The attention economy — the market where consumers’ attention is exchanged for goods and services — poses a variety of threats to individuals’ autonomy, which, at minimum, involves the ability to set and pursue ends for oneself. It has been argued that the threat wireless mobile devices pose to autonomy gives rise to a duty to oneself to be a digital minimalist, one whose interactions with digital technologies are intentional such that they do not conflict with one’s ends. In this paper, we argu…
  •
    Just Machines
    Public Affairs Quarterly 36 (2): 163-183. 2022.
    A number of findings in the field of machine learning have given rise to questions about what it means for automated scoring or decision-making systems to be fair. One center of gravity in this discussion is whether such systems ought to satisfy classification parity (which requires parity in accuracy across groups, defined by protected attributes) or calibration (which requires similar predictions to have similar meanings across groups, defined by protected attributes). Central to this d…
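    A minimal Python sketch (our example with hypothetical data, not the authors' code) of the two criteria named in the abstract: classification parity compares accuracy across groups, while calibration asks whether a given score carries the same meaning, i.e., tracks the same outcome rate, in each group.

      # Hypothetical sketch of classification parity vs. calibration.
      # Each record: (predicted_label, true_label, score).

      def accuracy(records):
          """Fraction of records whose predicted label matches the true label."""
          return sum(p == t for p, t, _ in records) / len(records)

      def outcome_rate_given_score(records, score):
          """Observed outcome rate among records assigned the given score."""
          outcomes = [t for _, t, s in records if s == score]
          return sum(outcomes) / len(outcomes) if outcomes else None

      group_a = [(1, 1, 0.7), (0, 0, 0.7), (1, 0, 0.7), (0, 0, 0.3)]
      group_b = [(1, 1, 0.7), (1, 1, 0.7), (0, 1, 0.7), (0, 0, 0.3)]

      # Classification parity: both groups have accuracy 0.75 on this data ...
      print(accuracy(group_a), accuracy(group_b))

      # ... but calibration fails: a 0.7 score tracks an outcome rate of
      # about 0.33 in group_a and 1.0 in group_b.
      print(outcome_rate_given_score(group_a, 0.7),
            outcome_rate_given_score(group_b, 0.7))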
  •
    What Should We Agree on about the Repugnant Conclusion?
    with Stephane Zuber, Nikhil Venkatesh, Torbjörn Tännsjö, Christian Tarsney, H. Orri Stefánsson, Katie Steele, Dean Spears, Jeff Sebo, Marcus Pivato, Toby Ord, Yew-Kwang Ng, Michal Masny, William MacAskill, Nicholas Lawson, Kevin Kuruc, Michelle Hutchinson, Johan E. Gustafsson, Hilary Greaves, Lisa Forsberg, Marc Fleurbaey, Diane Coffey, Susumu Cato, Tim Campbell, Mark Budolfson, John Broome, Alexander Berger, Nick Beckstead, and Geir B. Asheim
    Utilitas 33 (4): 379-383. 2021.
    The Repugnant Conclusion served an important purpose in catalyzing and inspiring the pioneering stage of population ethics research. We believe, however, that the Repugnant Conclusion now receives too much focus. Avoiding the Repugnant Conclusion should no longer be the central goal driving population ethics research, despite its importance to the fundamental accomplishments of the existing literature.
  •
    Is there a Duty to Be a Digital Minimalist?
    Journal of Applied Philosophy 38 (4): 662-673. 2021.
    The harms associated with wireless mobile devices (e.g. smartphones) are well documented. They have been linked to anxiety, depression, diminished attention span, sleep disturbance, and decreased relationship satisfaction. Perhaps what is most worrying from a moral perspective, however, is the effect these devices can have on our autonomy. In this article, we argue that there is an obligation to foster and safeguard autonomy in ourselves, and we suggest that wireless mobile devices pose a seriou…
  •
    Algorithms and Autonomy: The Ethics of Automated Decision Systems
    Cambridge University Press. 2021.
    Algorithms influence every facet of modern life: criminal justice, education, housing, entertainment, elections, social media, news feeds, work… the list goes on. Delegating important decisions to machines, however, gives rise to deep moral concerns about responsibility, transparency, freedom, fairness, and democracy. Algorithms and Autonomy connects these concerns to the core human value of autonomy in the contexts of algorithmic teacher evaluation, risk assessment in criminal sentencing, predi…
  •
    Democratic Obligations and Technological Threats to Legitimacy: PredPol, Cambridge Analytica, and Internet Research Agency
    In Algorithms & Autonomy: The Ethics of Automated Decision Systems, Cambridge University Press. pp. 163-183. 2021.
    So far in this book, we have examined algorithmic decision systems from three autonomy-based perspectives: in terms of what we owe autonomous agents (chapters 3 and 4), in terms of the conditions required for people to act autonomously (chapters 5 and 6), and in terms of the responsibilities of agents (chapter 7). In this chapter we turn to the ways in which autonomy underwrites democratic governance. Political authority, which is to say the ability of a government to exercise…
  •
    What We Informationally Owe Each Other
    In Algorithms & Autonomy: The Ethics of Automated Decision Systems, Cambridge University Press. pp. 21-42. 2021.
    One important criticism of algorithmic systems is that they lack transparency. Such systems can be opaque because they are complex, protected by patent or trade secret, or deliberately obscure. In the EU, there is a debate about whether the General Data Protection Regulation (GDPR) contains a “right to explanation,” and if so what such a right entails. Our task in this chapter is to address this informational component of algorithmic systems. We argue that information access is integra…
  •
    Algorithms, Agency, and Respect for Persons
    Social Theory and Practice 46 (3): 547-572. 2020.
    Algorithmic systems and predictive analytics play an increasingly important role in various aspects of modern life. Scholarship on the moral ramifications of such systems is in its early stages, and much of it focuses on bias and harm. This paper argues that in understanding the moral salience of algorithmic systems it is essential to understand the relation between algorithms, autonomy, and agency. We draw on several recent cases in criminal sentencing and K–12 teacher evaluation to outline fou…
  •
    Is the Attention Economy Noxious?
    with Adam Pham
    Philosophers' Imprint 20 (17): 1-13. 2020.
    A growing amount of media is paid for by its consumers through their very consumption of it. Typically, this new media is web-based and paid for by advertising. It includes the services offered by Facebook, Instagram, Snapchat, and YouTube. We offer an ethical assessment of the attention economy, the market where attention is exchanged for new media. We argue that the assessment has ethical implications for how the attention economy should be regulated. To conduct the assessment, we employ two h…
  •
    Epistemic Paternalism Online
    In Guy Axtell & Amiel Bernal (eds.), Epistemic Paternalism, Rowman & Littlefield. pp. 29-44. 2020.
    New media (highly interactive digital technology for creating, sharing, and consuming information) affords users a great deal of control over their informational diets. As a result, many users of new media unwittingly encapsulate themselves in epistemic bubbles (epistemic structures, such as highly personalized news feeds, that leave relevant sources of information out (Nguyen forthcoming)). Epistemically paternalistic alterations to new media technologies could be made to pop at least some epis…
  •
    Agency Laundering and Information Technologies
    Ethical Theory and Moral Practice 22 (4): 1017-1041. 2019.
    When agents insert technological systems into their decision-making processes, they can obscure moral responsibility for the results. This can give rise to a distinct moral wrong, which we call “agency laundering.” At root, agency laundering involves obfuscating one’s moral responsibility by enlisting a technology or process to take some action and letting it forestall others from demanding an account for bad outcomes that result. We argue that the concept of agency laundering helps in understan…
  •
    What's Wrong with Machine Bias
    Ergo: An Open Access Journal of Philosophy 6. 2019.
    Data-driven decision-making technologies used in the justice system to inform decisions about bail, parole, and prison sentencing are biased against historically marginalized groups (Angwin, Larson, Mattu, & Kirchner 2016). But these technologies’ judgments—which reproduce patterns of wrongful discrimination embedded in the historical datasets that they are trained on—are well-evidenced. This presents a puzzle: how can we account for the wrong these judgments engender without also indicting mor…
  •
    Bias in Information, Algorithms, and Systems
    In Jo Bates, Paul D. Clough, Robert Jäschke & Jahna Otterbacher (eds.), Proceedings of the International Workshop on Bias in Information, Algorithms, and Systems (BIAS). pp. 9-13. 2018.
    We argue that an essential element of understanding the moral salience of algorithmic systems requires an analysis of the relation between algorithms and agency. We outline six key ways in which issues of agency, autonomy, and respect for persons can conflict with algorithmic decision-making.
  •
    Agency Laundering and Algorithmic Decision Systems
    In N. Taylor, C. Christian-Lamb, M. Martin & B. Nardi (eds.), Information in Contemporary Society (Lecture Notes in Computer Science), Springer Nature. pp. 590-598. 2019.
    This paper has two aims. The first is to explain a type of wrong that arises when agents obscure responsibility for their actions. Call it “agency laundering.” The second is to use the concept of agency laundering to understand the underlying moral issues in a number of recent cases involving algorithmic decision systems. From the Proceedings of the 14th International Conference, iConference 2019, Washington D.C., March 31-April 3, 2019.
  •
    The moral limits of the market: the case of consumer scoring data
    with Adam Pham
    Ethics and Information Technology 21 (2): 117-126. 2019.
    We offer an ethical assessment of the market for data used to generate what are sometimes called “consumer scores” (i.e., numerical expressions that are used to describe or predict people’s dispositions and behavior), and we argue that the assessment has ethical implications for how the market for consumer scoring data should be regulated. To conduct the assessment, we employ two heuristics for evaluating markets. One is the “harm” criterion, which relates to whether the market produces serious h…
  •
    The imprecise impermissivist’s dilemma
    with Casey Hart
    Synthese 196 (4): 1623-1640. 2019.
    Impermissivists hold that an agent with a given body of evidence has at most one rationally permitted attitude that she should adopt towards any particular proposition. Permissivists deny this, often motivating permissivism by describing scenarios that pump our intuitions that the agent could reasonably take one of several attitudes toward some proposition. We criticize the following impermissivist response: while it seems like any of that range of attitudes is permissible, what is actually requ…
  •
    One of the main goals of Bayesian epistemology is to justify the rational norms credence functions ought to obey. Accuracy arguments attempt to justify these norms from the assumption that the source of value for credences relevant to their epistemic status is their accuracy. This assumption and some standard decision-theoretic principles are used to argue for norms like Probabilism, the thesis that an agent’s credence function is rational only if it obeys the probability axioms. We introduce an…
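    For reference, a standard textbook statement of the Probabilism norm mentioned in the abstract (our formulation, not the authors' text), rendered in LaTeX: a credence function c over an algebra of propositions is probabilistic when it satisfies the following axioms.

      % Standard (finitely additive) probability axioms for a credence function c.
      \begin{align*}
        &\text{Non-negativity:}    && c(A) \ge 0 \ \text{for every proposition } A,\\
        &\text{Normalization:}     && c(\top) = 1 \ \text{for any tautology } \top,\\
        &\text{Finite additivity:} && c(A \vee B) = c(A) + c(B) \ \text{whenever } A \text{ and } B \text{ are mutually exclusive.}
      \end{align*}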