Automating educational inequality through data…? Notes on Virginia Eubanks

In her 2017 book Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, Virginia Eubanks examines how social inequalities are extended and intensified when digital tools are embedded in longstanding systems of power and privilege. The ‘digital poorhouse’, as she calls it, is an extension of the American ‘poorhouses’ of the 19th century, where poor and working-class people were quarantined from society and subjected to ‘invasive surveillance, midnight raids and punitive public policy that increase the stigma and hardship of poverty’ (p.12).

Eubanks argues that the use of digital tools (including algorithmic decision-making, risk profiling and digital tracking) in social welfare and child protection has led to injustices that eclipse those of the past. Not only does the digital poorhouse hide poverty from the middle class, it also creates an ethical distance from the inhumane decisions regarding ‘who gets to eat and who starves, who has housing and who remains homeless, and which families are broken up by the state’ (p.13). The stakes are high. Eubanks draws on first-hand evidence to examine the realities of datafying a complicated system in which there is no ‘normal’.

While Eubanks’ work might appear tangential to our Data Smart Schools project, her analysis provides several useful lines of thought. First, Eubanks demonstrates that automated decision-making serves only to intensify and extend the pre-existing inequalities of the system into which it is introduced. The nuances of everyday life are difficult to capture and categorise. Yet algorithmic decision-making, by its very nature, cannot account for such nuance, as it is based upon historical patterns and categories that signal degrees of deviance from a constructed ‘norm’. Mathematical models – whether they are used for social welfare or schools – are always ‘historical’, resting on the assumption that patterns found in past data-sets will repeat into the future. While this way of thinking about the future might have currency in some highly controlled contexts, it does not readily correspond to the nature of schooling. Schools, for example, are places where students are encouraged to think positively and aspirationally about their futures. As such, mathematical models based on historical data (of the school and/or the student) could be limiting or even detrimental for students wishing to better themselves.

Second, Eubanks argues that the apparent efficiency and objectivity of quantification leave the professional middle classes with the impression that those in need are getting help, while those denied services and resources are ‘fundamentally unmanageable or criminal’ (p.181). This kind of thinking can calcify divisions between different members of society by reinforcing particular stereotypes and associations. Similarly, in the case of schools and schooling, datafication, automated decision-making, risk assessment and league tables can have a carrot-and-stick effect, rewarding particular ‘desirable’ behaviours and practices while punishing those deemed ‘inappropriate’.

When considering schools and students, it might be more relevant to think of datafication and algorithmic decision-making as rendering some students and teachers (typically those deemed deviant or resistant) hypervisible to systems oriented toward discipline, while rendering the same students and teachers invisible to systems oriented toward reward and progression. Recent work by Data & Society’s Mary Madden examines the double-down effect that marginalised groups experience in datafied systems. These groups not only endure intense surveillance and behavioural monitoring, but also remain invisible to systems that might provide them with opportunities to alleviate their situation (e.g. being shortlisted when applying for a job). Are marginalised students in our schools experiencing the same kind of hypervisibility/invisibility? How is this different from the kind of bias implicit in teacher judgments? How might teachers make use of data in their evaluations of particular students and situations? These are important questions, given that the realities of ‘living with data’ are often quite different to the contexts within which data systems were conceived.

Finally, Eubanks reminds us that datafication cannot fix social problems that ultimately require greater resourcing and funding. As she reasons:

Systems engineering can help manage big, complex social problems. But it doesn’t build houses, and it may not prove sufficient to overcome deep-seated prejudice against the poor, especially poor people of colour (p.125).

This argument certainly holds true for schools and schooling systems. Datafying schools and schooling processes may improve efficiency and reduce costs; however, it is unlikely to help children learn to read or to give struggling students the support and resources they require. It is all too easy to use the outcomes of datafication in punitive ways. Take, for instance, the tendency to use NAPLAN data to name and shame some schools while lauding others. This has a direct effect on student enrolments and, consequently, on teacher employment.

Eubanks therefore reminds us that it is essential to challenge policymakers and administrators to be clear about their intentions in introducing digital tools and about the consequences that follow (e.g. league tables, school funding). There also needs to be considerable capacity building within the education sector around digital data and its consequences. Indeed, educational administrators, school leaders and teachers would all benefit from learning how to interpret data ethically and accurately – thus ensuring that they do not accelerate educational inequality through data.

Eubanks, V. (2017). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press.