When do humans make decisions in algorithmic processing?

In Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, Virginia Eubanks draws attention to the moments in which human decision-making is required in order to make an algorithm ‘work’ in a given social context.

As she contends, human input is required to decide how variables will be defined, and human judgment also shapes the processes through which designers arrive at these decisions. As Eubanks reasons, choices about what goes into these variables ‘reflect the priorities and preoccupations of their creators’ (p.143).

Eubanks illustrates this with the example of the Allegheny Family Screening Tool (AFST) – a predictive risk-modelling tool designed to assist with child welfare call-screening decisions in Allegheny County, Pennsylvania. Eubanks’ analysis of the AFST has come under some criticism; however, for the purposes of the Data Smart Schools project, her discussion of the variables involved in algorithmic decision-making is helpful.

For an algorithm to work, software designers must first define outcome variables, proxies, predictive variables and validation data. Most importantly, software designers should have a clear sense of how these data relate to the social problem at hand. Using the AFST example, Eubanks demonstrates the fragility of these algorithmic components:

Outcome variables refer to what is measured to indicate the phenomenon being predicted. However, as Eubanks explains, because the AFST is trying to predict child abuse and potential fatalities, the actual number of previous cases to model is extremely low. For this reason, the AFST uses two related variables (or proxies) as stand-ins for child maltreatment.

  • The first proxy is ‘community re-referral’ – these are instances when a call to the child protection hotline was initially screened out, but there is a second call lodged about the same child within two years.
  • The second proxy is child placement – these are instances when a call to the child protection hotline is screened in, resulting in the child being placed in foster care within two years.

These variables might seem like sensible indicators of child abuse, but their provenance lies in reports from local communities about alleged abuse. As Eubanks reasons, callers to the hotline can remain anonymous, meaning there are many false or misleading calls – particularly when there are acrimonious relations within a family, or if a family is simply seen as parenting differently to their neighbours.
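The two proxies described above amount to a labelling rule applied to each hotline call. The following sketch makes that rule explicit – everything here (the record structure and field names) is hypothetical and invented for illustration; it is not drawn from the AFST itself:

```python
# Hypothetical sketch of how the AFST's two proxy variables might be
# combined into a single training label. Field names are invented.
from dataclasses import dataclass


@dataclass
class CallRecord:
    screened_in: bool               # was the hotline call screened in?
    re_referred_within_2y: bool     # proxy 1: community re-referral
    placed_within_2y: bool          # proxy 2: foster-care placement


def proxy_label(record: CallRecord) -> bool:
    """Label a call as 'maltreatment' if the relevant proxy event occurred.

    Screened-out calls are labelled via re-referral (proxy 1);
    screened-in calls are labelled via foster-care placement (proxy 2).
    """
    if record.screened_in:
        return record.placed_within_2y
    return record.re_referred_within_2y
```

Framing it this way makes Eubanks’ point concrete: the label is driven entirely by who calls the hotline and what the agency does next, not by any direct measure of harm.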

Predictive variables are the data points that are correlated with the outcome variables. Using a statistical procedure called a ‘stepwise probit regression’, Eubanks describes how the team behind the AFST ‘knocks out’ potential variables that are ‘not correlated highly enough with the outcome variables to reach statistical significance’ (p.144). This can be described as a form of ‘data dredging’ or a ‘statistical fishing trip’, leaving the model with the set of factors thought to best ‘fit’ a predictive model of child harm. Yet as Eubanks points out, correlation does not necessarily imply causation – and in the case of child abuse there are often extraneous factors that actually bring the two identified events together. Importantly, the datasets used draw only on information about families that access services, so may well be missing important factors that impact maltreatment, neglect and abuse.
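The ‘knock-out’ step Eubanks describes can be illustrated with a toy example. Note that this is only a sketch: the AFST uses a stepwise probit regression, whereas the code below substitutes a simple correlation screen as a stand-in, with synthetic data and an arbitrary cut-off. It shows the shape of the procedure – variables below the threshold are silently discarded – rather than the actual method:

```python
# Toy illustration of 'knocking out' candidate predictive variables
# that do not correlate strongly enough with the outcome variable.
# All data is synthetic and the threshold is arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# A latent signal and a binary outcome derived (noisily) from it.
signal = rng.normal(size=n)
outcome = ((signal + rng.normal(scale=0.5, size=n)) > 0).astype(float)

# Hypothetical candidate variables: one genuinely related, one pure noise.
candidates = {
    "hypothetical_signal": signal,
    "hypothetical_noise": rng.normal(size=n),
}

THRESHOLD = 0.1  # illustrative cut-off, not the AFST's


def knock_out(candidates, outcome, threshold=THRESHOLD):
    """Keep only variables whose correlation with the outcome clears the bar."""
    kept = {}
    for name, values in candidates.items():
        r = np.corrcoef(values, outcome)[0, 1]
        if abs(r) >= threshold:
            kept[name] = float(r)
    return kept


kept = knock_out(candidates, outcome)
```

The point the sketch underlines is Eubanks’ own: the procedure retains whatever happens to correlate in this particular dataset, with no check on whether the relationship is causal or whether relevant variables were ever in the candidate pool at all.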

The final step that requires human judgment is the use of validation data. Validation data is used to see how well a predictive model performs: the predictive variables are run against actual historical cases to see if they can reliably predict the outcomes represented in that data. Eubanks notes that the AFST was deemed to be 76% accurate. In statistical terms this might sound impressive. Yet, given that there were 15,139 reports of abuse in 2016, this means that the AFST would have produced around 3,633 incorrect predictions. As Eubanks explains, ‘it is guaranteed to produce thousands of false negatives and positives each year’ (p.146). In practical terms, then, the repercussions of these incorrect predictions for families and children are likely to be huge.
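Eubanks’ back-of-envelope arithmetic here is worth making explicit, since it shows how quickly a headline accuracy figure translates into real families:

```python
# Reproducing the arithmetic behind Eubanks' point: a 76%-accurate model
# applied to the 15,139 reports of abuse from 2016 still mislabels
# thousands of cases.
reports_2016 = 15_139
accuracy = 0.76

error_rate = 1 - accuracy                       # 24% of decisions wrong
incorrect = round(reports_2016 * error_rate)    # ≈ 3,633 incorrect predictions
```

Each of those roughly 3,633 errors is either a family wrongly flagged as high-risk or a genuinely at-risk child the model missed – the false positives and false negatives Eubanks warns about.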

In terms of our own interest in data and schools, we need to pay attention to when – and how – predictive modelling is being used. Most importantly, we need to explore the data that these models are based upon, and what actors were involved in this construction of the algorithms being used. Clearly these aspects of data science are fraught with assumptions, hunches, guesswork and improvisation. Detailing these human inputs into the systems – and tracing this to the very human consequences of these outputs – is an important part of making sense of how data is working (or not) for schools and school communities.


Eubanks, V. (2017). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press.