Abstract
Automated Influence is the use of Artificial Intelligence (AI) to collect, integrate, and analyse people’s data in
order to deliver targeted interventions that shape their behaviour. We consider three central objections
against Automated Influence, focusing on privacy, exploitation, and manipulation, showing in each case
how a structural version of that objection has more purchase than its interactional counterpart. By rejecting
the interactional focus of “AI Ethics†in favour of a more structural, political philosophy of AI, we show that
the real problem with Automated Influence is the crisis of legitimacy that it precipitates.
| | |
|---|---|
| Original language | English |
| Pages (from-to) | 125-148 |
| Journal | Canadian Journal of Philosophy |
| Volume | 52 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 2021 |