Facial recognition and facial detection in schools – reasons for concern?

As we have noted before, facial detection and facial recognition technologies are beginning to enter school settings in a number of guises. These are forms of data-driven schooling that we are going to have to pay sustained attention to during our fieldwork – particularly in light of the continuing rollout in Melbourne schools of facial recognition attendance systems such as Looplearn.

These are controversial technologies – as reflected in a couple of recent articles by Hartzog & Selinger (2018) and Stark (2019). Hartzog and Selinger’s thesis is evident from their title onwards – “Facial Recognition is the Perfect Tool for Oppression”. This culminates in an argument for an outright ban on facial recognition technologies – describing them as “the most uniquely dangerous surveillance mechanism ever invented”. Stark’s piece concurs, likewise arguing for the shutdown of these technologies in all but the most controlled circumstances. As Stark contends, “facial recognition, simply by being designed and built, is intrinsically socially toxic, regardless of the intentions of its makers; it needs controls so strict that it should be banned for almost all practical purposes”.

The paradox for our own research project is straightforward enough – why are these contentious viewpoints barely registering in current discussions around the increased use of facial technologies in schools? Indeed, a recent Monash survey of Australian public opinion found high levels of approval for facial recognition systems in schools being developed for the purposes of “monitoring attendance and ensuring student safety”. Yet these are clearly data-driven technologies that come with a number of valid concerns. As such, the Hartzog and Stark articles raise a range of pertinent issues underlying the burgeoning adoption of facial detection and facial recognition technologies in schools – issues that will help us make sense of these technologies when we encounter them in our research schools.

First is the ease with which facial recognition can be implemented and adopted across schools. This is technology that fits neatly with established school practices, processes and infrastructures. Two specific points from the Hartzog and Stark articles are of relevance here:

  • The re-use of existing photograph databases: Schools have long collected and kept photographic records of students’ faces, and facial recognition systems will often depend on re-using a school’s existing name-and-face photographic database. Coupled with schools’ relatively stable populations, this makes them straightforward places to implement such systems (in comparison to, say, hospital or library settings). This re-use of existing name-and-face data raises questions about the nature of any informed consent given when school photographs were originally taken.
  • The appropriation of existing CCTV infrastructures: Another factor hastening the implementation of facial recognition systems in schools is the prevalence of video monitoring and closed-circuit surveillance infrastructure. The past 20 years have seen the enthusiastic adoption of CCTV in US, UK and Australian schools. The large majority of schools now have surveillance camera systems, placed everywhere from playgrounds to student toilet areas. School enthusiasm for surveillance technologies has also seen the tentative adoption of teacher body-cameras, fingerprint enrolment and RFID-tagging of students. In addition, pushes for video analysis of classrooms as a pedagogical aid, and the use of facial detection techniques to ‘measure’ student concentration and engagement, have further normalised camera and facial technology. In short, facial recognition systems fit neatly into well-established school surveillance cultures while requiring little in the way of new data infrastructure or data sources.


Second are a number of points from the Hartzog and Stark articles regarding how facial recognition systems are likely to alter the nature of schools and what it means to attend school. Issues here include:

  • The inescapability of facial data: Unlike other forms of personal data (i.e. any piece of data connected to an individual’s name), facial data lends itself to constant and permanent surveillance. In short, people are always connected to their faces – especially while in school settings. For example, nearly all schools have dress codes that preclude students’ faces being covered by hair, hoods or other obstructions, making it difficult for students to obscure their faces from surveillance cameras. Thus, unlike other forms of personal data (such as emails and social media posts), there is no option for students to self-curate and restrict what data they ‘share’. While students might be able to abstain from the facial detection elements of their school’s learning systems (for example, the use of eyeball tracking or facial thermal imaging for learning analytics), there is no right to opt out of facial recognition systems (indeed, any opt-out would render a school’s facial recognition system ineffective).
  • The end of obscurity in schools: This constant surveillance means a substantial curtailment of students’ right to obscurity while in school. In short, students will find it increasingly difficult to blend into the background, take a back seat, and generally go about their business ‘under the radar’. These might seem like undesirable behaviours from an educational point of view, yet for specific groups of students they are legitimate coping strategies and an invaluable means of ‘doing school on their own terms’. Attempting to manage what is known and disclosed about oneself is a legitimate way of ensuring that one’s actions and intentions are correctly interpreted and understood. In contrast, facial recognition systems lead to what Hartzog and Selinger term “the normalized elimination of practical obscurity”.
  • The dehumanizing effect of facial recognition at school: Hartzog notes that our faces are perhaps the most personal of all our ‘personal data’. A person’s face is closely associated with who they ‘are’ in ways that are not the case with other embodied biometric data (such as iris patterns, fingerprints or gait). As such, the statistical processes through which facial recognition technologies quantify and frame an individual’s face are both reductive and dehumanising. These technologies work by assigning numerical values to schematic representations of facial features (such as the distance between different points on the face) and then making comparisons between those values (see the sketch after this list). This constitutes a very limited engagement with a person’s face in contrast to how other humans would see that face. A student is not ‘seen’ by facial recognition technologies in a manner that can discern their full range of facial expressions – for example, someone who is lost, utterly bereft, or showing a glimmer of recognition. Indeed, one of the likely tragicomic practical consequences of facial recognition and detection technologies is students and teachers having to contort their facial expressions in ‘unnatural’ ways that allow the technology to ‘detect’ and/or ‘recognise’ them.
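To make the reductive nature of this process concrete, the following is a minimal, purely illustrative sketch of the kind of comparison described above – a face collapsed into a short vector of measurements, with ‘recognition’ reduced to finding the closest stored vector. The feature values, student names and matching threshold here are hypothetical; real systems use learned embeddings with hundreds of dimensions, but the underlying logic of numerical comparison is the same.

```python
# Illustrative sketch only: a face reduced to a handful of numbers,
# and "recognition" reduced to nearest-neighbour matching against an
# enrolment database. All values and names are hypothetical.

import math

# Hypothetical enrolled students: each face is just a short vector of
# normalised measurements (e.g. eye spacing, nose length, jaw width).
enrolled = {
    "student_a": [0.42, 0.31, 0.57, 0.28],
    "student_b": [0.39, 0.35, 0.61, 0.25],
}

def euclidean(u, v):
    """Distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def recognise(probe, threshold=0.05):
    """Return the closest enrolled identity, or None if nothing is close enough."""
    best_name, best_dist = None, float("inf")
    for name, vector in enrolled.items():
        dist = euclidean(probe, vector)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# A camera frame, flattened to four numbers, is either 'recognised' or not.
print(recognise([0.41, 0.32, 0.58, 0.27]))  # -> 'student_a'
print(recognise([0.10, 0.90, 0.10, 0.90]))  # -> None
```

Even in this toy form, the pipeline captures the point made by Hartzog and Stark: whatever the camera captures of a student is immediately flattened into numbers to be matched against an enrolment database, with everything else about the face discarded.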


A third aspect of facial technologies arising from the Hartzog and Stark articles is their likely role in augmenting oppression within school settings. This might be manifest in two ways:

  • Making schools more authoritarian: Hartzog and Selinger contend that “surveillance conducted with facial recognition systems is intrinsically oppressive”. Indeed, facial recognition technologies are most likely to be implemented with the intention of controlling who enters a school and where they are subsequently located. Of course, schools are institutions founded upon regulation, control, and disciplining the minds and bodies of students. In this sense, facial recognition technologies fit well with the historical purposes and structures of schools. As such, these technologies are most likely to exacerbate (rather than mitigate) the authoritarian tendencies within some schools. As Hartzog and Selinger reason, “the sheer intoxicant of power will tempt overreach, motivate mission creep, and ultimately lead to systematic abuse”. There is also a good chance that facial recognition technologies will prompt students to act differently and normalise their conduct – as Hartzog and Selinger put it, “impeding crucial opportunities for human flourishing”.
  • Foregrounding attributions of race and gender in school decision-making: Finally, there are widely held concerns over the disproportionate impact of facial recognition on racial and gendered identification. As Stark observes, the ways in which these technologies schematize human faces foreground ‘calculations’ of race and gender as a means of arbitrarily dividing human populations. This has been highlighted in recent high-profile controversies over the inability of some facial recognition systems to ‘successfully’ discern non-white faces, due to the racially-skewed databases on which such systems are trained. Yet even if these identifications become more technically accurate, Stark argues that reducing students to socially-constructed racialized categories is discriminatory and ‘socially toxic’. These technologies will certainly foreground issues of race within schools, and therefore exacerbate pre-existing racially discriminatory practices. As Stark concludes: “If human societies were not racist, facial recognition technologies would incline them toward racism; as human societies are often racist, facial recognition exacerbates that animus” (Stark 2019, p.53).


Facial recognition and facial detection technologies continue to be taken up by schools with various intentions and justifications – from boosting school safety in the wake of campus shootings, through to better targeting teacher attention in the classroom. However, the underlying message from both the Hartzog and Stark articles is that any ‘added value’ or gained ‘efficiencies’ are outweighed by the wider implications of having these technologies in school. What might appear to be the relatively benign implementation of facial technology in school is a case of what Stark terms “trading off its enormous risks for relatively meagre gains” (p.54). This raises the concern that schools are being co-opted as sites for the normalisation of what is a ‘societally dangerous’ technology – what Stark (2019, p.55) describes as a “facial privacy loss leader”. Thus, it can be strongly argued that schools should not be places where communities become desensitized to being automatically identified, profiled, and potentially discriminated against. In contrast to our project ethos of ‘doing data differently’, this might be a form of data-driven technology that should not be ‘done’ at all.