This piece was presented at the conference Digital Existence: Memory, Meaning, Vulnerability in October 2015, under the title “The Data-Driven Life: Existential Correlates of Knowing in the Surveillance Society”. The boilerplate caveat: most of it is preliminary thoughts in an ongoing project.
After years of rumbling backstage, online surveillance had its breakout moment in 2013. Since then, it’s been enjoying a rather privileged position in the public consciousness – alongside Donald Trump, ISIS and Cecil the Lion. Typically, these new surveillance technologies are considered in terms of privacy, paranoia and power. Today, I want to focus on what surveillance looks and feels like from the point of view of the citizen. Or rather, what the idea of digital surveillance looks and feels like. Certainly, real individuals do get attacked and destroyed very specifically by surveillance. But for millions of others, there is no personal experience of being tailed, accused, interrogated, exposed, directly observed. And yet surveillance has become a matter of concern. This dimension of surveillance says something about the existential aspects of ‘knowing’. What counts as knowing? What are we supposed to be able to do with that knowledge? What is knowing supposed to feel like?
Think about the Snowden Affair. The irony is that we’re supposed to know what is going on; Snowden told us so. We have more information about NSA surveillance than ever. But any concrete experience of surveillance or its consequences is largely absent from our phenomenological environment. What we experience most of the time is the paraphernalia. The talk, the endless talk – the droning about drones more than drones themselves, at least so far.
So we live in an environment populated with chimeras and phantasms. Apocalyptic futures, probabilisms and hypothetical harms materialise in all kinds of ways, calling upon our senses. Although Snowden stamped the seal of fact indelibly upon post-9/11 US state surveillance, the practice of surveillance remains in a world ‘out there’, beyond my space and time. In this condition, we develop certain ways of feeling like we know; ways of establishing meaning and sense, ways of coping with the world.
One key technique for this feeling-knowing is interpassivity: the ability to draw on another who knows for me, who does the knowing in my stead. Žižek’s example is canned laughter – we feel a certain relief or satisfaction, even though we know very well we didn’t do the laughing.
First example. I call it Feinstein’s Apology. The prevalent idea is that if only you knew what we know, you would feel as we do – provided, of course, that you believe us that we even know anything… The only possible way to establish confidence is to simulate the judgment, the affects, of another, and take them unto myself in a transposition.
Second example. This is from the 2014 Senate Intelligence Committee hearing – a public affair that is supposed to grant some transparency. It is good that we have public hearings to tell us which particular things are not public! Again, it’s not just that we are denied access to information; we are also granted access to a particular experience of deferral. When we watch this going on and say “see, there is nothing to fear”, or “see, the system is broken”, or even, “see, we just don’t know enough”… we’re not pulling those judgments out of nothing. We’re relying on our experience of these simulations, these deferrals, themselves.
Closely related to interpassivity is subjunctivity: the habit of turning ‘what ifs’ into ‘as ifs’. If interpassivity is about inhabiting the knowing of other people and other things, subjunctivity does something similar with futures and probabilities.
For instance, consider how both supporters and critics of Edward Snowden constantly fall back on the refrain, ‘you never know’. The irony is that, semantically, the phrase means: you will never know. But we use it in the opposite way: you’ll never know for sure, so you have to pretend you know enough, and do something. Here is an example: comedian John Oliver wants to help Snowden figure out how to make people care about surveillance they can’t see or touch.
This clip is from Last Week Tonight with John Oliver.
Of course, you wouldn’t ever know if someone has seen your dick pic. It simply gets shrouded in a Heisenbergian cloud. Potentiality is easy to ignore, but once you notice it, it keeps popping up everywhere you go. But that’s just the point, isn’t it? It’s all about bringing something into affective relevance, using that shimmer of potentiality to make it appear more present, even though the supposed harms lie beyond the present situation.
It might be objected that I make things out to be far more uncertain than they really are. To be sure, we do have a lot of tangible facts and information on hand. But these, too, are often played out in uncertain ways. Consider the role of numbers in this debate. What we usually encounter is not numbers in their specificity, not numbers that supply calculations and equations. Instead, we get numbers as an idea: belief in the idea of probability, rather than specific probabilistic calculations. We find enormous numbers wherever we go. Snowden apparently touched 1.7 million documents – a Rabelaisian imagery of gigantic, grubby, treacherous hands the size of a small island country. But what does such a number mean? 3 million doesn’t tell us anything that 2 billion couldn’t; it’s like a child saying, “there was a mega-gazillion, daddy!” We do not develop any knowledge here that anchors to the formal, proportional, objective relativity of mathematics. Rather, these numbers become activators of affect, swinging wildly between “oh, it’s nothing” and “dear god, it’s gigantic!”
We try to make things meaningful enough and, more than that, certain enough for our decisions and feelings. We invest our confidence, our anxiety, in what we know to be deferred and speculative. Sometimes, we respond to the idea of having a feeling in order to have a feeling about it.
[Images: the apps Spreadsheets and Nipple.io, respectively.]
Of course, these parameters for the experience of knowing, the feeling of knowing, are historically specific. In fact, we have very similar technologies, also in use in the contemporary moment, that work through very different mechanisms of feeling-knowing. Self-tracking, emblematised by the Quantified Self movement, is an increasingly mainstream practice of tracking physiological signals, habits, moods, and more. Fitbit and Apple Watch are the best known today, but there are many other kinds. The Muse headband uses four electroencephalogram sensors, and tells you when you’re ‘calm’. Affectiva, based on Paul Ekman’s facial action coding system, classifies your expressions into seven types of affect. Pact offers cash incentives for meeting your exercise goals, but if you fail, you’re the one who pays the go-getters. Peeple, which went infamously viral recently, is a Yelp-for-people where everybody publicly scores everybody. You even have sexual activity trackers – the demure, covert seduction of ‘Spreadsheets’ and the lurid allure of ‘Nipple’. After all, tracking your technique, frequency, thrusts per minute, is no more perverse than anything else on this list.
It is telling that, in general, we don’t even call this surveillance – even though many of the same technologies and the same principles of data-mining, automated observation and correlative prediction apply. I’m not trying to say self-tracking is the same as state surveillance; just the opposite. The point is that very similar technologies can produce very different affective attitudes and ways of feeling-knowing. Self-tracking revolves around the promise of intimacy: get closer to the data, data that really goes under your skin and finds out everything about you, and control it yourself.
So there is this idea around self-tracking that the traditional, ‘normal’ modes of intelligibility were actually insufficient. That for centuries, we lived like blind amnesiacs, feeling our way through the maze of our own body and mind. That through self-surveillance, we will finally close the gap between consciousness and behaviour, and finally know ourselves in a direct and pure manner.
Heidegger identified a thrownness or clearing, Geworfenheit and die Lichtung, as the basic condition for human intelligibility. That is, a distance between the experiencing I and the object of my experience is necessary for meaning. A distance which we do not always experience – save in moments of categorial intuition or the phenomenological reduction – but a distance which holds up what we do experience. What self-tracking promises here is to eliminate this clearing, replacing it with a direct injection of raw, objective truth.
This promise of full frontal realness results in the idea that you can’t lie to yourself anymore. The data will not filter out your momentary indiscretions or your corner-cutting, and it certainly will not parley with your pitiful excuses. You find this idea among both trackers and skeptics. Not by yourself shall you know thyself; you must learn to face yourself in another.
So: self-tracking promises intimacy, personal control, knowledge – all the things denied us in the world of state and corporate data-mining. Many speak of finally ‘owning’ your data. Power to the people through a democratisation of knowledge. But in fact, both practices have a certain recessiveness, a deferral and displacement of knowledge, at their very core.
We can understand this by asking how self-tracking differs from earlier practices of self-knowledge. The Quantified Self movement frequently invokes the Delphic maxim, know thyself – and thus bills itself as a futuristic fulfilment of an old and universal effort. The trouble is that the most important thing in Sokratic dialogue was the experience of being tested by your interlocutor. Your friend or mentor was meant to be a basanos, a ‘touchstone’: not someone who tells you what you are, but someone against whom you could see yourself in a truer light.
In Phaedrus, Sokrates criticises the turn to writing for this reason. He says writing appears intelligent, but is only capable of giving one invariant message – an answer which tumbles carelessly amidst both the wise and unwise, open to abuse and closed to debate. The very stability that we typically prize in writing is here a castration of the experience that matters much more than the ‘message’. He insists that the lived process of interlocution is the real deal.
The more techno-utopian variants of self-tracking promise to skip all that fluff and get to the ostensible point. We are starting to short-circuit this process of interlocution in dramatic ways. The human becomes a kind of reaction machine: don’t bother processing anything, we’ll fire stimuli at you in rapid succession. Don’t think, just move along the affective register. Make your privacy tools glow green, not red; make your run averages trend up, not down; make your thrusts per minute go up, not down. A self-improvement that comes with some indications or affects of self-knowledge, but not much in the way of sovereignty or, as we typically call that experiential process of discovery, wisdom.
In both cases, we also see that there is no process of knowing inherent to a particular technology. It is a contingent process by which we end up taking some things for granted, and trembling with anxiety about others. So I want to make a normative suggestion: that we think about the relationship to knowledge we cultivate. By relationship, I mean a more reflexive, critical stance towards how such technologies mediate our conditions of affection. Bringing a device into your home or bed is like taking on a new roommate, romantic partner, or even a pet; you’re bringing in a set of agencies and affordances that change the how of your most intimate experiences. When we choose a roommate, nobody thinks only about the increase in rent, or the efficiency of doing the dishes. We know it affects how we behave, our comfort levels, our sense of home, our social life, our personal hygiene, and more. Our collective attitude to surveillance technologies also needs to take this richer view. Let me give some brief examples of what I mean.
First example. In 2010, ‘Kim’ Flint died. Cycling too fast downhill, he flipped over a car. He was trying to regain the ‘King of the Mountain’ status for downhill speed records – a status he had recently lost on the Strava cycling tracking app. He wasn’t just obsessed with exercise, or with competition. He was obsessed with his data, and with the visual presentation of his data. Initially, there was some debate about whether Strava was at fault. The lawsuit from Flint’s family was eventually dismissed; as Forbes put it, “are we really all so data driven now that we can’t actually look at a really steep hill and recognize it as terrifying on a bike?” But in fact, that’s exactly what we are becoming, when Beddit tells you to go to sleep when you don’t feel tired, or when Fitbit tells you you can run more when you think you can’t. The promise of raw data, of objective truth, in self-tracking depends on our willingness to overrule our own intuition.
Second example. Pplkpr promises to tell you that one friend makes you the most scared and another makes you the most aroused. You learn, in a quantified sense, exactly who is the most significant contributor to your hopes, happiness, anxiety, self-consciousness. “It made me realise the truth”, say beta users. Some explain that the app gave them the excuse to cut people out of their lives, because they could now point to objective proof that they were making a loss on the social contract. This is what happens when we do not question the relationships we have entered into, and think of technologies only as direct informational delivery.
And what about state surveillance? The techniques of subjunctivity and interpassivity I described are also relationships, relationships whose contingent nature has been forgotten. It’s a collective habituation into rituals of pretense and presumption. I don’t mean that it’s all make-believe; I mean that these simulated ways we feel like we know are not a necessary consequence of the real material properties of surveillance.
One last point. This focus on contingent relationships undermines any pretense to an invariant epistemic standard of knowledge. That means it changes the moral yardsticks that we use. We can’t just keep fixating on ‘informed decisions’, ‘the full facts’, and ‘rational deliberation’. Those things are important to aspire to, but we will never reach them in full, and so their effect is limited. This means we can’t just criticise things like subjunctivity and interpassivity and say: stop that foolishness, stop pretending, come back to ‘real knowledge’. Our real knowledge and experience have been very limited since at least the printing press. And in this situation, it is unreasonable to demand that people act ethically towards Samaritans they can’t see and have never met. The ‘tricks’ that make us feel what is outside our lives are crucial for our ethical and political agency over these technologies. The point is not to know everything, or even to know one thing beyond any shadow of doubt; both are impossible dreams. The point is to build a basis for feeling like knowing, a basis for experiencing that which is beyond our experience. And to do it in a way that doesn’t make us believe all our neighbours are terrorists in waiting, or that all Muslims are Mohammed Badguys – an actual term used in NSA software. To do it in a way that moves us towards a moral relationship to technology.