The Metapolitics of Algorithmic Politics

This is one of the things I’m working through right now – and in many ways pulling together the disparate threads of thinking others have been doing regarding algorithm, politics and media. I’ll be presenting on this topic at the ICA 2016 preconference ‘Algorithms, Automation and Politics’, organised by the folks at Political Bots.


From data-mined voter profiles to Twitter bots retweeting presidential candidates, the ongoing datafication of politics is plain to see. Yet alongside the steady uptake of data-driven analytics for specific political practices, there is a ‘metapolitical’ level where algorithm and automation are themselves political problems, and introduce broad, normative interventions into how ‘politics’ itself is conceptualised. I will examine four specific cases of politics’ datafication, and use them to raise four interconnected metapolitical problems. In doing so, I critique the political vision contained in their promise of neutrality, objectivity, and efficiency, and suggest that a renewed definition of what counts as political ‘preference’ and ‘participation’ will be necessary. I draw on my ongoing research into state surveillance and self-tracking as cases of ‘data-driven’ practice, on critical theories of platforms / algorithms, and Foucault’s works on biopolitics and governmentality.

§1. The default to algorithm. In the run-up to the 2012 US presidential elections, Google produced customised results for ‘Obama’ – but not for ‘Romney’. Google calmly explained that it was an innocent mistake: after all, an algorithm could hardly ‘favour’ either candidate.[1] Algorithmic discrimination is fast emerging as a political problem, even as the practice itself is colonising the field of politics. Its proponents have fielded a powerful, generic response to criticism: an algorithm is neutral, an algorithm is objective. This notion of neutrality undergirds everything from self-tracking devices’ promise to deliver impartial truth about your exercise habits, to social media platforms’ insistence that they cannot be responsible for what people do with them.[2] Algorithmic neutrality thus becomes a strategic way for governments and corporations to evade responsibility. Furthermore, neutrality enables the idea that algorithm and automation can make any activity more efficient, more objective – a technologically utopian ‘solutionism’[3]. What is urgently required is a set of explicitly moral standards for regulating algorithm design – standards which look beyond typical values like popularity and speed, and draw from our ideals of political deliberation and participation.

§2. Politics without political consciousness. In 2007, a study suggested that “rapid, unreflective judgments” of candidates’ faces are predictive of American gubernatorial election results.[4] Neuroscience experiments have added to the theme, showing for example that subjects without declarative memory of candidates’ stances on issues will still vote for candidates that best reflect the subjects’ own positions.[5] Such inquiries buoy the idea that political choice can be identified and manipulated while bypassing conscious, deliberative subjects, reaching their behavioural, neurological, affective underbelly directly.[6] Yet the basic justification for such a strategy lies in the idea that objective truth about what people ‘really’ want lies behind the conscious citizen, the latter being prone to bias and error. Just as there is no necessary default to algorithm (§1), however, there needs to be a normative standard for assessing which political questions should be asked of reflexive subjects, and which should be extracted from their bodies. Politics is not merely the ‘accurate’ representation of preferences, but a practice of active, collective choice.

§3. Government without politic[ian]s.[7] The Internet of Things [IoT] and Snowden-era state surveillance have one important commonality: the belief that algorithmic prediction and ‘pre-emption’[8], powered by indiscriminate data collection, is a superior decision-making mechanism to human judgment. The same logic is now being applied to politics: “Why rely on laws when one has sensors and feedback mechanisms?”[9] Algorithmic governance / regulation is a vision in which the optimal solution for the state is identified through algorithmic modelling, bypassing the human mess of politicians and discursive wrangling. This is another frontier which demands a sharper, renewed definition of what government entails. Is good government governing by the right principles, or optimising end results in areas like crime and food prices?[10] And what does the latter model portend for political discourse, which continues to employ a language of ideas and ideals?

§4. Data cynicism. As I write, robo-calls are assaulting South Carolina voters, insisting, for example, that Marco Rubio supports illegal immigrants.[11] Like Twitter bots or the infamous 50 Cent Party, robo-calls buoyed by new techniques in voter profiling are flooding the political public sphere. And as they do so, there is a growing cynicism on the part of the citizenry – a basic distrust of data-driven techniques when they are perceived to interfere with human deliberation and active political participation. The promise of objective, data-driven politics is likely to arrive hand in hand with heightened cynicism about the political process, precisely because this alleged objectivity will be based on data too complex, too voluminous, and often too legally restricted, for the public to properly access and comprehend. The fixation with data and accuracy risks further privileging the population – a conceptualisation of human subjects as statistically divisible resources[12] – over the public.

[1] Angwin J (2012) On Google, a Political Mystery That’s All Numbers. The Wall Street Journal, Available from: (accessed 22 January 2013).

[2] Gillespie T (2010) The politics of ‘platforms’. New Media & Society, 12(3), 347–364.

[3] Morozov E (2013) To Save Everything, Click Here: The Folly of Technological Solutionism. New York, Public Affairs.

[4] Ballew CC and Todorov A (2007) Predicting political elections from rapid and unreflective face judgments. Proceedings of the National Academy of Sciences of the United States of America, 104(46), 17948–53.

[5] Coronel JC, Duff MC, Warren DE, et al. (2012) Remembering and Voting: Theory and Evidence from Amnesic Patients. Am J Pol Sci, 56(4), 837–848.

[6] This is an aspect of what Mark Hansen calls ‘machinic sensibility’ in the age of new media: Hansen MBN (2015) Feed-Forward: On the Future of Twenty-First-Century Media. Chicago, University of Chicago Press.

[7] Paraphrased from: Psutka D (2015) Improve Government With Algorithms – Without Politicians. The Huffington Post, Available from: (accessed 17 December 2015).

[8] For literature on pre-emption, see: Amoore L (2011) Data Derivatives: On the Emergence of a Security Risk Calculus for Our Times. Theory, Culture & Society, 28(6), 24–43; Massumi B (2007) Potential Politics and the Primacy of Preemption. Theory & Event, 10(2).

[9] Morozov E (2014) The rise of data and the death of politics. The Guardian, Available from: (accessed 21 July 2014).

[10] This, as I will show in the full text, is a derivation of the basic problem of governmentality. Foucault M (2008) The Birth of Biopolitics: Lectures at the Collège de France, 1978–79. Senellart M (ed.), Basingstoke, Palgrave Macmillan.


[12] Foucault M (2004) Security, Territory, Population: Lectures at the Collège de France, 1977–1978. Senellart M (ed.), New York, Palgrave Macmillan.