Joanne McNeil (2022) writes about the recent case of a US text-based counselling service offering potentially suicidal teens a confidential means of seeking guidance through text messaging.
Drawing on over $20 million in start-up funding raised from tech-related philanthropic sources (including Melinda Gates and Steve Ballmer), ‘Crisis Text Line’ (CTL) was subsequently found to be harvesting text data from its teen users with a view to developing other products – despite initial claims that it would ‘NEVER share data’ from the teens contacting the service.
Some of CTL’s leaders reasoned that it would be unethical not to use this data to gain insights into teen-suicide behaviour. Less nobly, the data was seemingly used to drive a spin-off service described as ‘a Grammarly for emotion’ – helping other online customer-service teams respond to texts in a caring, empathetic, warm manner.
This clash between the ambitions of tech developers and innovators and the complex nature of something like suicide counselling typifies the dangers of what McNeil (2022) terms “the Silicon Valleyfication of Everything”:
“Crisis Text Line put its market proposition above the needs of its vulnerable users: its dehumanizing data collection practices were part of a series of callous acts. Suicide prevention doesn’t look like the “speed of a private tech company” or “awesome” machine learning. It requires safety and care with no strings attached. This care includes generosity and expansion of public resources like access to housing, food, healthcare, and other basic needs; it can’t be measured in KPIs. The very purpose Crisis Text Line claimed to serve is incompatible with the Silicon Valley way of doing business”
**
Reference
McNeil, J. (2022) ‘Crisis Text Line and the Silicon Valleyfication of Everything’. Vice, 11 February. https://www.vice.com/en/article/wxdpym/crisis-text-line-and-the-silicon-valleyfication-of-everything
**

Sometimes it is perhaps better not to know … even if the data are available:
“I was surprised when I first started using it because I thought that kids were not stupid …. I didn’t think kids were on YouTube and things as much as they probably do. Occasionally you walk past and during the footy season catch kids looking at the footy scores but I didn’t think with some kids it was such a permanent thing.
The first time I used it I think I looked at it and we’d done a double period and I set them to work – the first part of the class was doing something simple, and then the second part of the class was really full on – okay now you’re well into the task that you’re doing.
One kid for almost the full first period had been sitting on some Manga website. I was like ‘what?’ I was really disappointed because I thought I had more of an idea … like, I can usually read faces. I always say to the kids, “I always know when you’re on YouTube because your eyes light up and you get this little smirk on your face.”
So, I was really disappointed [when I saw the data]. Because being a drama teacher, I feel like often I try and engage the kids and I often get them really engaged. And I was disappointed, because I thought that was a really good class … but that kid was sat and zoned-out for the whole time we were doing the activity.
[Brookdale teacher – interview 08_11_21]
**

First, Charlton McIlwain raised the discriminatory power of categorisation in pre-digital data systems, and how these harmful uses of categories are perpetuated in new digital systems. As he put it, “the categories precede the algorithm” – meaning that new AI systems latch onto the deeply entrenched categories already at play in society, such as categories of gender, race, ability and so on. As such, algorithmic systems are unlikely to lead to wholly different outcomes – such as liberation, or less oppressive social relations. Instead, algorithms are most likely to reinforce, intensify and amplify the long-established harms associated with using these categories to sort, classify and reach decisions.
danah boyd then responded to this point in terms of how we might set about establishing different ways of appropriating data. boyd raised the need for societies to move beyond categories – eschewing the use of group-based models altogether, and instead finding alternative ways of reflecting on people and their lives. Clearly, this is a huge shift with no obvious alternative readily to hand. However, boyd suggested at least beginning to encourage the intersectional use of categories – i.e. processing categorical data in ways that show categories as they relate to each other, thereby developing insights into how the patterning of social outcomes is multi-faceted and contingent on intersecting circumstances and forces.
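To make this idea more concrete, here is a minimal sketch in Python/pandas of the difference between single-category and intersectional analysis. Everything in it – the column names, category labels and outcome values – is an invented toy example, not real school or census data:

```python
import pandas as pd

# Toy data: every value below is invented for illustration only.
df = pd.DataFrame({
    "gender":    ["F", "F", "F", "F", "M", "M", "M", "M"],
    "ethnicity": ["A", "A", "B", "B", "A", "A", "B", "B"],
    "passed":    [1,   1,   0,   0,   1,   0,   1,   1],
})

# Single-category view: one attribute at a time.
print(df.groupby("gender")["passed"].mean())
# F 0.50 / M 0.75 - reads as a simple 'gender gap'

# Intersectional view: cross the categories so that outcomes appear
# contingent on intersecting circumstances rather than one attribute.
print(df.groupby(["gender", "ethnicity"])["passed"].mean().unstack())
# Here the apparent 'gender gap' is concentrated entirely in one
# intersecting cell (F x B) - a pattern invisible in the first view.
```

Even this trivial example shows how the patterning of outcomes can shift once categories are read against each other rather than in isolation.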
**

One possible change for schools looking to make better use of their data is to establish some form of ‘data trust’. This idea is outlined in some detail (alongside other alternative data governance mechanisms) in a joint publication from the Ada Lovelace Institute and the AI Council, titled ‘Exploring legal mechanisms for data stewardship’.
‘Data trusts’ are based around the long-standing legal tool of trust law – historically used in English common law for managing the estates of medieval soldiers serving overseas, the property of deceased people, and other forms of shared property. In essence, a ‘trust’ is a legal relationship in which one party (the trustee) manages the rights associated with an asset for the benefit of other parties (the beneficiaries). This arrangement is set up to ensure that different parties’ access and rights to the asset are determined in an equitable manner – i.e. according to principles of fairness and justice.
Centuries on from the first uses of trust law, digital data is beginning to be seen as an appropriate asset to be governed by trust arrangements. The Ada Lovelace Institute report makes the point that present uses of digital data tend to be beset by inherent structural power imbalances between individuals and ‘big tech’ actors. In short, the value of data lies in the aggregation of many individuals’ data into big data sets – something that is most readily done by platform providers or data companies as part of what individuals ‘consent’ to when agreeing to terms of service.
As noted throughout the critical data studies literature, this imbalance in scale leaves individuals in a vulnerable position – unable to fully understand what is being ‘consented’ to, and open to a range of discriminations, harms and other potential misuses of their data. Moreover, in an institution such as a school, individuals often have little practical leeway to ‘opt out’ of using a data-driven system.
Applying trust law to the governance of digital data therefore increases each individual’s capacity to exercise the data rights that they currently hold in law, while allowing groups of individuals to collectively define the terms under which their data should be used. As such, establishing a data trust as a means of data governance within a school is a way of balancing the obvious asymmetries of power between school authorities (in their powerful role as data controllers) and individual students, parents and even teachers (in their less powerful roles as data subjects).
A key element of any data trust is the role of the ‘trustee’ – tasked with acting in good faith in ways that advance the interests of the beneficiaries. Trustees can negotiate data-sharing agreements with other actors (such as for-profit data brokers, or non-profit research institutes) who want to make use of the pooled data held in the trust. The trustee role demands a high level of data and legal knowledge, along with a fiduciary duty to always act in the best interests of the trust’s beneficiaries. This is a serious undertaking, with trustees legally held to account by the constitutional terms of the trust. The success of any data trust therefore depends on the appointment of an appropriate trustee.
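As a purely hypothetical illustration (not drawn from the Ada Lovelace Institute report, and with all field names and rules invented), the decision point that a trustee occupies might be caricatured in code along these lines:

```python
from dataclasses import dataclass

@dataclass
class SharingRequest:
    requester: str       # e.g. a research institute or a data broker
    purpose: str         # the stated use of the pooled data
    commercial: bool     # is the proposed use for-profit?
    identifiable: bool   # are individual-level records being sought?

def trustee_screen(request: SharingRequest, permitted_purposes: set) -> bool:
    """Approve only requests that fall within the constitutional
    terms that the beneficiaries originally agreed to."""
    if request.identifiable:
        return False  # only aggregate, de-identified data leaves the trust
    if request.commercial and "commercial_use" not in permitted_purposes:
        return False
    return request.purpose in permitted_purposes

# Invented example terms and requests:
terms = {"learning_research", "product_improvement"}
print(trustee_screen(SharingRequest("uni_lab", "learning_research",
                                    False, False), terms))    # True
print(trustee_screen(SharingRequest("data_broker", "ad_targeting",
                                    True, True), terms))      # False
```

A real trust deed is a legal instrument interpreted by people, of course; the point of the sketch is simply that the decision point sits with a party whose duty runs to the data subjects, not with the platform vendor.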
The first chapter of ‘Exploring legal mechanisms for data stewardship’ actually concludes with a hypothetical description of how a high school might establish a data trust as a form of data governance. Here it is imagined that several schools might club together to form a data trust to pool the data generated from a commonly-used learning management system (this might include test scores, usage data, and other indications of learning progress). This group of schools is represented by a board – with one of the board members appointed as the data trustee. Through these mechanisms the data trust can then work to …
This echoes the neoliberal capitalist logic that increased knowledge of events and entities – gained through ever more specific pieces of information – confers an increased ability to control those events and entities.
This logic can be seen throughout the past century or more of administration and bureaucracy – from the filing cabinets of the 1890s to the digital data dashboards of the 2020s.
Robertson, C. (2021). The Filing Cabinet: A Vertical History of Information. University of Minnesota Press.
**
The importance of informed procurement of new technologies
“Administrators at all levels of education need to be much savvier about how they contract with edtech firms. They need to negotiate for control of data and for notice about revisions to programs. Without such safeguards, the ethical standards of educators will give way to the rough and tumble ‘whatever works’ ethos of tech firms” (Pasquale 2020, p.72)
“At present, we don’t have nearly the insight we need into the collection, analysis, and use of data by firms developing educational software” (Pasquale 2020, p.73)
**
Data profiling in schools as a continuity of established forms of grading
“Grades have become a near-universal disciplinary mechanism; soon, a ‘total behaviour record’, measuring students along axes of friendliness, attentiveness, and more, could easily be developed” (Pasquale 2020, p.75)
**
The tough choice of pushing for either ‘mending’ or ‘ending’ the datafication of schools
“Mending surveillance edtech means giving it more data, more attention, more human devotion to tweaking its algorithmic core. Ending it would reorient education reform toward more human-centred models. To decide between the two paths requires deeper examination of both these projects” (Pasquale 2020, p.75)
Striving for ‘better’ forms of data-driven technologies in education “would double down on the quest for data, looping in unorthodox data sources, such as constant self-monitoring and self-reporting of emotional states by data subjects” (Pasquale 2020, p.76)
“The datafication of teaching and learning has contributed to many troubling trends. And yet it would be foolish to dismiss AI in education wholesale” (Pasquale 2020, p.85)
**
Reference
Pasquale, F. (2020) New Laws of Robotics: Defending Human Expertise in the Age of AI. Belknap Press.
**

The increased presence of digital technologies in education means that ‘data’ is implicated in all aspects of contemporary schooling. This can be seen as a cyclical process. On one hand, data drives all of the digital technologies now being used in education; on the other, the increased use of digital technologies in education results in the production of ever more data. Digital technologies also expand schools’ capacity to collect, compute and circulate their own analyses of data. While it might often go unnoticed, data is an integral part of the digitised school.
Unsurprisingly, perhaps, teachers, administrators, students and principals might well be referring to very different things when talking about ‘data’ in their school. In the course of our research, we have noticed at least four distinct forms of digital ‘data’ in schools that people might be aware of and/or interested in.
First is the increasing use of system-level digital data as a form of education governance. The prominence of standardised measures such as PISA, TIMSS and NAPLAN over the past twenty years is now prompting the emergence of what can be termed ‘Algorithmic Governance’ and ‘Synthetic Governance’. This describes the use of data to underpin system-level governance – often in the form of ‘big data’ analytics and AI solutions developed by the tech industry in partnership with governments, corporate strategy consultants, and professional services firms such as Ernst & Young, KPMG, PwC, McKinsey and Bain.
This form of ‘datafication’ is obviously important to talk about, but the specific focus of our DSS research is the use of data at the individual school level. Here, then, there are three other distinct forms of ‘datafication’ to consider.
Perhaps most prominent among these are the ways in which digital technologies are becoming a key part of what schools refer to as ‘data-driven decision making’ (DDDM). The idea of DDDM has spread from the US to other countries over the past twenty years as part of the growing push for teachers to engage in ‘evidence-based’ practice. This is understood to involve data being generated within the school and individual classrooms – usually by managers or teachers – to inform their decision-making. This local data is generally focused on intra-school processes, and is primarily what most schools understand as ‘data’ and are most used to engaging with and talking about.
Over the 2000s and 2010s, teachers would often bemoan the pressure to collect data from their classes, and then to be seen to make use of it. Now, a range of digital applications and products are marketed to take care of this – from simple student surveying and ‘quiz’ software through to more sophisticated classroom monitoring systems. In this sense, ‘data’ is often understood by teachers as anything that informs their practice.
However, tellingly, this is not the ‘datafication’ that a lot of academic discussion in the ‘critical data studies’ space is interested in (ourselves included). More specifically, then, ‘data’ in schools also relates to the ‘trace data’ generated by the use of digital technologies. Here, two additional forms of ‘data’ arise. On one hand is the trace data generated from official systems and learning platforms, which is deliberately used to drive (mostly commercial) products that provide schools with some sort of analysis of and feedback on student performance. This includes the various forms of ‘learning analytics’, pupil dashboards and ‘personalised learning’ tools that make use of trace data to infer insights about people and processes in education.
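As a rough illustration of what such trace data can look like – and how thin the inferential leap from log to ‘insight’ often is – consider the following sketch. The event schema and the ‘time on task’ metric are assumptions invented for illustration, not any vendor’s actual format:

```python
import pandas as pd

# Invented clickstream events of the kind a learning platform might log.
events = pd.DataFrame([
    {"student": "s01", "timestamp": "09:00", "action": "open_task"},
    {"student": "s01", "timestamp": "09:24", "action": "submit"},
    {"student": "s02", "timestamp": "09:00", "action": "open_task"},
    {"student": "s02", "timestamp": "09:03", "action": "submit"},
])

# A naive 'learning analytic': treat the gap between opening and
# submitting as 'time on task', then rank students by it. Dashboards
# routinely present this kind of proxy as an insight about engagement.
events["time"] = pd.to_datetime(events["timestamp"], format="%H:%M")
time_on_task = (events.groupby("student")["time"]
                      .agg(lambda t: (t.max() - t.min()).seconds / 60))
print(time_on_task.sort_values(ascending=False))
```

The arithmetic is trivial; the consequential move is the inference – a handful of timestamps becoming a claim about a learner’s engagement.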
On the other hand, we need to remain mindful of the trace data generated routinely by the many other devices, applications, software and systems used in classrooms and schools. This data is not used to infer insights about learning and students. Indeed, schools mostly ignore, or are not conscious of, this data. Nevertheless, it is often extracted and used by technology developers to improve products, and/or sometimes sold on to third parties. Indeed, this re-use of data is a key element of why many digital education products remain ‘free’ for schools to use.
In this sense, one of the main conclusions from our DSS project is a very simple one – academic researchers, policy-makers, business interests, tech companies, administrators, school leaders, and teachers are often talking about very distinct and different forms of ‘data’ within schools and school systems.
**

“It can sometimes seem as though we expect the literature on DBR (design-based research) to be made up largely or entirely of success stories. From the state of our literature, it appears that as a community, we think of publication as a way of recognizing not only good scholarship but also designers’ achievement of their intended goals with learners. The ideal story arc for a DBR article, in this conception, is one in which the hard-working, well-meaning designer persists in the face of adversity (including the resistance of benighted clients), refining his or her theory and practices in tandem and ultimately making good” (O’Neill 2016, p.499).
Yet, as O’Neill reminds us, ultimately all designs will fail in one way or another. As such, it is important to pay close attention to understanding how even initially successful designs in education eventually break down. To this end, he points to Mike Cole’s mission statement of “Studying successful innovations until they fail.”
**
Reference
O’Neill, D. (2016). Understanding design research–practice partnerships in context and time. Journal of the Learning Sciences, 25: 497–502
**

An accessible pre-print version of the chapter can be downloaded from the Monash ‘Bridges’ Figshare portal.
**

“Good teaching is a combination of art and skill and experience. I’m of the firm belief that no amount of data capture is going to be able to reproduce that. This is apparently a fringe belief in educational technology circles. When I’m in a classroom, I’m not interested in predicting what I think the student is going to do or what grade they might get. That is actually completely irrelevant to me, but prediction is central to how this technology operates. So, I’m skeptical of what the future looks like for “smart” educational technology because it’s so invested in capture and control and prediction, rather than helping people maximize their own goals, what they think they’re there for”
(Chris Gilliard 2021, p.267)
**
Gilliard, C. (2021). “Smart” Educational Technology: A Conversation between sava saheli singh, Jade E. Davis, and Chris Gilliard. Surveillance & Society, 19(2), 262-271.