Automated Influence is the use of Artificial Intelligence (AI) to collect, integrate, and analyse people's data in order to deliver targeted interventions that shape their behaviour. We consider three central objections against Automated Influence, focusing on privacy, exploitation, and manipulation, showing in each case how a structural version of that objection has more purchase than its interactional counterpart. By rejecting the interactional focus of "AI Ethics" in favour of a more structural, political philosophy of AI, we show that the real problem with Automated Influence is the crisis of legitimacy that it precipitates.
Journal: Canadian Journal of Philosophy
Publication status: Published - 2021