Os Keyes offers a provocative assessment of “Why data science is a profound threat for queer people”.
Keyes starts by defining ‘queer’ primarily in terms of fluidity, autonomy, a distinct lack of definition, and “the freedom to set one’s own path”. They argue that data science is fundamentally set in opposition to these qualities – grounded as it is in norms, discrete categories, precise definitions and predictable futures.
Keyes first draws parallels between data science and what the trans activist Dean Spade has termed “administrative violence”. This describes the ways that administrative systems “create narrow categories of gender and force people into them in order to get their basic needs met”. Administrative violence spans a range of practices: from the binary M/F categories which persist in many official forms of data gathering, to the difficulty that data analysts have with giving equal credence to ‘third categories’ such as bigender or non-binary, through to the bureaucracy and medical certification required to be categorised as ‘officially trans’. Spade argues against working to reform and improve these systems. As Keyes paraphrases, “the rigid maintenance of hierarchy and norms is really what the state is for… We are attempting to negotiate with a system that is fundamentally out to constrain us”.
However, the datafication of contemporary society extends well beyond these forms of state administrative control. Keyes also draws on Anna Lauren Hoffmann’s notion of “data violence” – a much-expanded form of violence perpetrated through the masses of data that are collected through all our digital systems. As illustrated by the interest that Amazon, Google, and Facebook take in their users’ everyday lives, data violence operates at a scale, fluidity and ubiquity that far outstrips the administrative state. In this sense, data science is premised upon an ambition to control and standardize the paths that all aspects of our lives can take – from the films that we watch to the likelihood of getting a home loan and/or being incarcerated.
From this perspective, there is little about the ongoing datafication of society that bodes well for queer and trans individuals. Indeed, Keyes quite reasonably contends that the people who stand to be harmed most by the application of data science to everyday life are those who do not fit neatly into standardised systems, and those whose lives fall between the cracks of dataveillance. In short, data science does not respond well to outliers or those whose lives do not fit neatly into discrete categories.
Take, for instance, a trans school student whose home life is disjointed – meaning that they have a slightly irregular pattern of attendance, erratic parental engagement with school online systems, non-standard medical records, and an awkward ‘Other’ gender category. Any gaps, omissions and blanks in this student’s data profile will invariably lead to impoverished calculations and a limited range of diagnoses and decisions being reached about them. Key issues will either be ignored, or unwarranted additional assumptions will be made. Either way, the chances of this student being misrepresented will be high.
Of course, the standard response from data science is that we need to work harder to ensure that there are no such gaps and omissions. In short, the generally accepted solution to any data-related shortcomings would be that the school and family need to work to increase this student’s data visibility and participation – i.e. to expand the extent to which dataveillance operates. Yet as Keyes responds, “I don’t know about you, but my idea of a solution to being othered by ubiquitous tracking is not ‘track me better’”.
All this leads Keyes to argue against what they contend is an essentially inhumane activity. They contend that datafication is essentially concerned with using inevitably limited sources of information to monitor, track, profile and corral people into preferred courses of action. As they put it: “perhaps a more accurate definition of data science would be: the inhumane reduction of humanity down to what can be counted”. In a little more detail …
“We can use datalogical systems for efficiency gains and for consistency gains; we can remove that fallible, inconsistent ‘human factor’ in how we make decisions, working more consistently and a million times faster. Which is, you know, fine, sort of. But by definition, a removal of humanity makes a system, well, inhumane!”.
So why would we actively work to legitimise inhumane forms of datafied schooling? Indeed, from this perspective, Keyes contends that there is little point fighting to reform and improve data-driven systems that are fundamentally designed to control and harm marginalised populations. For example, training school facial recognition systems on more diverse data sets so that they recognise black faces more accurately simply increases the harm that these systems can then do to black people.
As such, the end-point to Keyes’ argument is one of non-participation, rejection and radical reinvention. They argue that the most logical response for queer and trans communities is to refuse to participate in systems that are fundamentally designed to control and do harm. Instead, we need to develop alternate ways of “being, living and knowing” that work around data science and its artefacts. Ultimately, then, this requires us to somehow work toward an alternate form of ‘radical data science’ – i.e.
“a data science that is not controlling, eliminationist, assimilatory. A data science premised on enabling autonomous control of data, on enabling plural ways of being. A data science that preserves context and does not punish those who do not participate in the system”.