    Automated Influence and the Challenge of Cognitive Security
    with Sarah Rajtmajer
    HoTSoS: ACM Symposium on Hot Topics in the Science of Security. Forthcoming.
    Advances in AI are powering increasingly precise and widespread computational propaganda, posing serious threats to national security. The military and intelligence communities are starting to discuss ways to engage in this space, but the path forward is still unclear. These developments raise pressing ethical questions about which existing ethics frameworks are silent. Understanding these challenges through the lens of “cognitive security,” we argue, offers a promising approach.
    The American justice system, from police departments to the courts, is increasingly turning to information technology for help identifying potential offenders, determining where, geographically, to allocate enforcement resources, assessing flight risk and the potential for recidivism amongst arrestees, and making other judgments about when, where, and how to manage crime. In particular, there is a focus on machine learning and other data analytics tools, which promise to accurately predict where…
    Strange Loops: Apparent versus Actual Human Involvement in Automated Decision-Making
    with Kiel Brennan-Marquez and Karen Levy
    Berkeley Technology Law Journal. Forthcoming.
    The era of AI-based decision-making fast approaches, and anxiety is mounting about when, and why, we should keep “humans in the loop” (“HITL”). Thus far, commentary has focused primarily on two questions: whether, and when, keeping humans involved will improve the results of decision-making (making them safer or more accurate), and whether, and when, non-accuracy-related values—legitimacy, dignity, and so forth—are vindicated by the inclusion of humans in decision-making. Here, we take up a rela…
    Online Manipulation: Hidden Influences in a Digital World
    Georgetown Law Technology Review 4: 1–45. 2019.
    Privacy and surveillance scholars increasingly worry that data collectors can use the information they gather about our behaviors, preferences, interests, incomes, and so on to manipulate us. Yet what it means, exactly, to manipulate someone, and how we might systematically distinguish cases of manipulation from other forms of influence—such as persuasion and coercion—has not been thoroughly enough explored in light of the unprecedented capacities that information technologies and digital media…
    Technology, autonomy, and manipulation
    with Beate Roessler and Helen Nissenbaum
    Internet Policy Review 8 (2). 2019.
    Since 2016, when the Facebook/Cambridge Analytica scandal began to emerge, public concern has grown around the threat of “online manipulation”. While these worries are familiar to privacy researchers, this paper aims to make them more salient to policymakers — first, by defining “online manipulation”, thus enabling identification of manipulative practices; and second, by drawing attention to the specific harms online manipulation threatens. We argue that online manipulation is the use of informa…
    Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures
    AIES: AAAI/ACM Conference on AI, Ethics, and Society 1. 2019.
    For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically-laden. Which is to say: algorithms incr…
    The dominant legal and regulatory approach to protecting information privacy is a form of mandated disclosure commonly known as “notice-and-consent.” Many have criticized this approach, arguing that privacy decisions are too complicated, and privacy disclosures too convoluted, for individuals to make meaningful consent decisions about privacy choices—decisions that often require us to waive important rights. While I agree with these criticisms, I argue that they only meaningfully call into quest…
    For those who find Dreyfus’s critique of AI compelling, the prospects for producing true artificial human intelligence are bleak. An important question thus becomes, what are the prospects for producing artificial non-human intelligence? Applying Dreyfus’s work to this question is difficult, however, because his work is so thoroughly human-centered. Granting Dreyfus that the body is fundamental to intelligence, how are we to conceive of non-human bodies? In this paper, I argue that bringing Drey…
    Transparent Media and the Development of Digital Habits
    In Yoni Van den Eede, Stacy O'Neal Irwin & Galit Wellner (eds.), Postphenomenology and Media: Essays on Human-Media-World Relations, Lexington Books. pp. 27-44. 2017.
    Our lives are guided by habits. Most of the activities we engage in throughout the day are initiated and carried out not by rational thought and deliberation, but through an ingrained set of dispositions or patterns of action—what Aristotle calls a hexis. We develop these dispositions over time, by acting and gauging how the world responds. I tilt the steering wheel too far and the car’s lurch teaches me how much force is needed to steady it. I come too close to a hot stove and the burn I get in…
    Information Privacy and Social Self-Authorship
    Techné: Research in Philosophy and Technology 20 (3): 216-239. 2016.
    The dominant approach in privacy theory defines information privacy as some form of control over personal information. In this essay, I argue that the control approach is mistaken, but for different reasons than those offered by its other critics. I claim that information privacy involves the drawing of epistemic boundaries—boundaries between what others should and shouldn’t know about us. While controlling what information others have about us is one strategy we use to draw such boundaries, it…
    Ihde’s Missing Sciences: Postphenomenology, Big Data, and the Human Sciences
    Techné: Research in Philosophy and Technology 20 (2): 137-152. 2016.
    In Husserl’s Missing Technologies, Don Ihde urges us to think deeply and critically about the ways in which the technologies utilized in contemporary science structure the way we perceive and understand the natural world. In this paper, I argue that we ought to extend Ihde’s analysis to consider how such technologies are changing the way we perceive and understand ourselves too. For it is not only the natural or “hard” sciences which are turning to advanced technologies for help in carrying out…