Facial recognition tech in schools: an ongoing tussle between regulation and ‘innovation’

The current furore over the potential introduction of facial recognition technology into Melbourne’s schools offers a neat example of the vagaries of regulation and legislation that surround data-driven digital technologies in schools.

While other countries have moved to ban facial recognition technology in schools (e.g. the Swedish data protection authority ‘Datainspektionen’ recently fining a school for deploying an FR system), the response in Australia has been more ambiguous.

On one hand, a local start-up has been running six-week trials of its facial recognition roll-call and classroom attendance system in independent and Catholic schools. The start-up has recently been awarded a $470,000 Federal government ‘Accelerating Commercialisation’ grant to continue developing these products for market.

At the same time, the Victorian Department of Education and Training (DET) has pushed back. An initial rebuttal of the technology from the Federal Minister for Education led to an internal departmental review of facial recognition technology in schools, which concluded that the technology “posed a risk by using and storing students’ unique biometric data”.

This then prompted a set of ‘rules’ that were circulated in a memo to all government schools. These rules require any school considering facial recognition to first undertake “a rigorous privacy assessment”, conduct ‘significant consultation’ with parents and carers, and then report back to the Department’s privacy team.

This raises a number of interesting issues.

  • The reactive, ad hoc regulation of specific emerging data-driven technologies in the absence of a robust general regulatory framework. As the DET memo starts by acknowledging: “While privacy obligations do not absolutely prohibit the use of biometric technologies in schools …”
  • The immediate compartmentalising of these concerns in Australia under the aegis of ‘privacy’ and ‘consent’, whereas other countries are discussing the technology in terms of ‘safety’ and ‘risk’ (US), or ethics and rights (Europe)
  • The fragmented nature of Australian schooling, which means that such regulation does not extend to independent and Catholic schools, where ‘proof-of-concept’ can be established
  • The tension between (i) government priorities of science, innovation, and commercialisation, and (ii) concern for education and schools.
  • The inadequate nature of ‘informed consent’ in these instances – the systems being proposed here rely on complete sweeps of a classroom in order to operate, rendering ‘opt-in’ and ‘opt-out’ approaches counter-productive from the system provider’s point of view [the system has to scan your face before it can recognise that you have opted out!]. Tellingly, the Swedish decision was based partly on the logic that “students’ consent could not be freely given because the school administration has a moral authority over them” (Kayali 2019).


As Sven Bluemmel (the Victorian Information Commissioner) reasoned, such debates should not distract from the societal implications of the technology’s use:

“It’s not about the information security, whether something can be quite safe and encrypted. Although unfortunately we often see stories where despite the best effort to encrypt things they are often still vulnerable to unauthorised disclosure. The first issue is about what sort of society we want to live in, and do we want our children growing up thinking [facial recognition] is normal.

The second category of risk is the understandable and tangible risk of information being secure. We’ve seen lots of debate over security of the My Health Record, and it may well be that a system like that will be very secure. But you have to look at the ends of the system. What happens when reports are provided at the end, what happens when the biometrics are collected in the first place. And unlike a password you cannot reset your facial geometry.”