“Data activism can be seen as a form of socio-political mobilization, as it brings people (and information and technology) together for some kind of action variably contentious in nature, and explicitly addressing, confronting, or engaging with datafication”.
In their new paper, Crooks and Currie consider what such data activism usually looks like in practice, the tensions that arise, and what alternate approaches might be more effective in supporting genuine political change and resistance.
Throughout their discussions, the authors make deliberate use of the term ‘minoritized’ to describe those groups most disadvantaged and oppressed by dominant uses of data in US society. In contrast to demographic labels such as ‘minority’, ‘under-represented’ or ‘underserved’, the term ‘minoritized’ foregrounds the unequal power relations that have historically seen the dominance of white male interests in determining and maintaining the public sphere.
From this perspective, the fact that US society is beset by white supremacy, economic precarity, heteronormativity and misogyny is not an unintended consequence; rather, these are facets of society that are actively reproduced by the dominant political order. In other words, the idea of ‘minoritized’ groups recognises that these are inherent features (rather than unfortunate outcomes) of how US society is ordered and arranged.
The turn to grass-roots data activism
The deployment of numbers and statistics for activist ends has a long history preceding the recent rise of digital datafication. For example, Crooks and Currie look back to W.E.B. Du Bois’s ‘data portraits’ from the beginning of the twentieth century as an early instance of politically-motivated reworkings of official data. Since then, community activism has involved various data-driven efforts to tell alternate stories, highlight disparities and generally challenge the dominant use of ‘official statistics’. These tactics can involve the repurposing of existing data and the generation of new data – all with the intention of contending and disputing official accounts, drawing attention to underpublicized issues, and mobilising public opinion. Such use of statistics as a tool of resistance has been described as ‘statactivism’ (Bruno et al. 2014) – a neat term that encompasses the reuse of data about people and places by the communities that these data purport to represent.
There are many different instances of digital data being reused and repurposed in these ways – prompting growing enthusiasm for the democratic possibilities of grassroots ‘data activism’ amongst community groups around the world. Projects seeking to generate new forms of data will now often involve ‘citizen recordings’ and the ‘crowd-sourcing’ of new data-sets – where members of the public collaborate to measure and record evidence of issues otherwise not being recognised and documented by authorities. This can result in the production of data for community-produced maps, reports, dashboards and other documentation of local harms related to police violence, environmental degradation, provision of municipal services, and so on.
In particular, Crooks and Currie highlight the recent spate of local campaigners in US cities working to record all instances of racially-motivated police brutality – a prolonged process of “sifting through government documents such as police reports, court transcripts, and legal forms and adding this information to a database of officer-involved homicides in the community”. More mundane examples can be found in the educational sphere – for example, local parents banding together to document the depleted book stocks of libraries in disadvantaged areas.
The limits of data activism – ‘numbers will not save us’
Such grassroots examples of data activism are laudable, but are based on a clear tension between being critical of current official uses of data while retaining a faith in the potential of other data to achieve more equitable ends. As such, these forms of data activism perpetuate an uncritical framing of data as a means of evidencing and communication, and do not tackle the power relations inherent in the datafication of minoritized communities. As Crooks and Currie contend:
“Shifting community concerns into the register of the datalogical – a presumptive data positivism – poses risks to the very communities such a shift is supposed to empower”.
As such, Crooks and Currie offer a number of arguments why such grassroots data activism might not constitute a good use of community time, effort and resources. One argument relates to the ‘burden of producing evidence’. These burdens take a variety of forms. At a personal level, any form of data work is repetitive, mentally exhausting, and potentially traumatic when it involves (re)processing data relating to violent harms perpetrated against one’s own local community. Moreover, at an institutional level, communities’ willingness to undertake data work disincentivizes state authorities from doing this work themselves. Crooks and Currie note that state authorities are often happy to offload their recording and reporting responsibilities onto under-resourced community organisations.
Following this is the ‘burden of using evidence’. Here, Crooks and Currie note that the effective use of data usually demands high levels of technical expertise and technological resources and infrastructures. While community groups might be able to generate alternate forms of data, they are often unable to make full use of this data in the same ways as official statistical agencies. At best, the sectors of any community most likely to benefit and extract value from newly generated data are those who are better resourced. Crooks and Currie note that often the main beneficiaries of community-generated data are outside actors – such as non-governmental organization workers and data professionals.
Third is the contention that data-intensive technologies run the risk of imposing their own purposes on political messages. In other words, engaging in the additional production, circulation and processing of data ultimately serves to direct more resources toward data industries that are set up to marginalise and oppress. In global infrastructural terms, community production of data directs additional resources toward the ‘data economy’ and ‘platform capitalism’ that underpin digital data, “rather than the redress of structural inequality”. In terms of local culture, the production of data-based ‘evidence’ might also work to distract from and devalue other forms of community knowledge and argument-building.
Perhaps Crooks and Currie’s most serious concern is the point that the generation and analysis of data of any form is ultimately disadvantaging to minoritized groups, and cannot serve liberatory ends. The production of ‘crime’ statistics, health indicators, housing conditions and other similar metrics has long been used as a means of producing ‘evidence’ of racial difference and racial inferiority, and thereby encoding racial discrimination. Any attempt at data activism therefore faces a ‘double bind’ of digital data being “embedded in the structure of domination”. In this sense, the community-based production of additional forms of data is likely to only ever be little more than a self-inflicted form of additional oppression. As Crooks and Currie conclude: “Even in cases where such technologies tout beneficial or ethical uses, what results is ‘the datafication of injustice’”.
The promise of agonistic data
Here, then, Crooks and Currie begin to sketch out alternate ways of appropriating data for community empowerment. In particular, they propose the idea of ‘agonistic data practices’. These do not attempt to use data to liaise with official agencies, to persuade policymakers to adopt alternate viewpoints, or to ‘reconcile’ official accounts with community accounts. As Crooks and Currie reason, the realities of national and municipal governance do not involve official agencies acting logically in the interest of minoritized publics when faced with new evidence. Minoritized communities are simply not in a position to initiate productive data-driven dialogue with official institutions and governments. Instead, data is perhaps most productively used as a means of provoking intra-community solidarity and awareness:
“… to mobilize antagonisms that produce solidarity among their community … us[ing] data-intensive technology as a powerful affective device for shoring up the community in a confrontation with the powers that be”.
The idea of agonistic data is therefore rooted in a conflict (rather than consensus) approach toward data activism – drawing on an understanding that unequal social orders have to be resisted and reformed through ongoing clashes of power. In other words, data is not a means through which agreements can be reached – rather, data is better seen as a means through which disagreements can be articulated and amplified.
Following the likes of Chantal Mouffe, Ernesto Laclau and the idea of ‘agonistic pluralism’, Crooks and Currie argue that community use of data is best approached as a stimulus for generative and productive antagonisms. This requires the use of data in an adversarial manner to highlight difference, raise (rather than resolve) questions, and provoke grassroots political involvement and engagement with alternative futures.
Crooks and Currie conclude with a couple of alternate suggestions for approaching community appropriation of data – both of which move beyond the idea of data being representational, and instead look toward data as ‘supra-representational’. First, they raise the idea of using data as a means of affective stimulation – “enflam[ing] political differences” and mobilising communities to begin to act on their passions and interests. Here, Crooks and Currie draw on examples of evocative data sloganeering such as ‘We are the 99 percent’, or the guerrilla projection of ‘climate countdown’ timers onto government buildings. These examples illustrate the deployment of data to convey a sense of urgency, fear and affiliation against a common adversary, as well as acting as an invitation to action.
Second, Crooks and Currie raise the idea that data might also be used in an ‘aesthetic’ manner as part of the construction of narratives and stories. As Crooks and Currie conclude, in this manner:
“… data should be understood in the context of culturally available elements of story, as props that can be used to create a compelling scene, a happy ending, a devious villain, and so on. Agonistic data practices can amplify and sharpen a community’s narrative. Data makes for good stories, and stories … are vital to agonistic politics”
These alternate approaches therefore raise a new spectrum of possible forms of ‘school’ data for our own DSS research project. How might data be deployed along agonistic lines to agitate for the decolonisation of the curriculum, to raise awareness of teachers’ increasing work-intensification, or to highlight other similar injustices and unfairnesses that abound in schools? The idea of data not being used to provide better answers, but instead being used to provoke better questions and solidarity, is certainly worth pursuing further in our discussions of the data-driven futures of education.
**
REFERENCES
Bruno, I., Didier, E. and Vitale, T. (2014). Statactivism: forms of action between disclosure and affirmation. Partecipazione e Conflitto: The Open Journal of Sociopolitical Studies, 7(2): 198–220
Crooks, R. and Currie, M. (2021). Numbers will not save us: agonistic data practices. The Information Society [forthcoming]
Milan, S. and van der Velden, L. (2016). The alternative epistemologies of data activism. Digital Culture & Society, 2(2): 57–74
‘Queer’ as a challenge to dataism
In one sense, the idea of ‘queer data’ raises a straightforward challenge to the ambitions of data science – in short, to what extent can data ever ‘ethically represent’ queer people? This therefore foregrounds longstanding concerns in the social sciences over the representational limits of data. To what extent can data fully represent anyone who self-identifies as non-straight and/or non-cisgender? To what extent can data fully represent identities, experiences and political positions related to gender and sexuality that defiantly resist dominant societal norms?
Seen along these lines, queer people present an obvious challenge to the promised capacity of data science to accurately and insightfully model the social world in statistical terms. Indeed, as we have written about before, quantitative data is notoriously limited in its capacity to account adequately for the complexities of lesbian, gay, bisexual, transgender, and queer lives. As Ruberg & Ruelos (2020) put it:
“an individual’s sexual and gender identities, especially for LGBTQ people, cannot be understood as a set of static, fixed data points … traditional notions of demographic data do not allow for the fluidity and multiplicity of gender and sexual identities that characterize the lived experiences of many LGBTQ people”.
This epistemological challenge cannot be addressed simply by including additional categories of ‘Other’ or ‘X’ alongside traditional binaries of male/female, man/woman. Instead, this raises the more fundamental contention that perhaps some things cannot be subject to quantification if they are inherently fluid, non-fixed and polyvalent. For example, the complex relationship between gender identities and transgender identities fundamentally complicates any presumption that a ‘category’ of ‘gender’ might be effectively determined along a single axis of data.
Moreover, the idea of queer data also raises questions of intent and effect – even if it were possible, why would these be facets of a person that we would want to quantify, for what purposes, and with what anticipated outcomes? From a queer point of view, history suggests that most attempted acts of quantification constitute acts of control and marginalisation. This applies to even the most well-intentioned acts of quantification. For example, as Jen Jack Gieseking (2018) observes, the very premise of ‘big data’ privileges the perspectives and interests of (pre)dominant social groups, and further marginalises the viability and voices of other groups that have historically been made ‘small’ through administrative forms of (mis)measurement, erasure and violence.
The possibility of ‘queering data’
This fundamental incompatibility between queer lives and digital data therefore leads us on to the provocative move of rejecting conventional data science assumptions and instead exploring ways of ‘queering data’. This involves the use of ‘queer’ as a verb to challenge and resist expectations and norms that currently surround data and datafication – particularly in terms of challenging the entwinement of digital data with power and the politics of oppression (Jakobsen 1998).
In this light, then, data can be seen as profoundly oppressive and exploitative – an intentionally hostile act of dispossession rather than a passive, neutral act of measurement. Here Ruberg & Ruelos (2020) draw on Johanna Drucker’s (2011) observation that the etymological origins of ‘data’ lie not with the Latin term ‘datum’ (something which is given and observed, like a given truth), but with the Latin term ‘capta’ (something which is created and then captured or taken). Seen along these lines, it makes little sense to argue that we simply need to develop ‘better’, more ‘accurate’ or more ‘nuanced’ ways of categorising the true identities and desires of queer people. As noted earlier, these are not stable or fixed entities that can be classified in the form of a permanent measure or indicator. Moreover, attempting to classify these entities is itself an attempt to control, regulate and limit the identities and desires of queer people. As Ruberg & Ruelos (2020) reason:
“Even as we strive for social justice through and within data, we must acknowledge the worrisome tension in calling for marginalized lives to be better captured, translated into data, and put to use by corporations and regulatory bodies”
Instead, it is more productive to consider the tricky question of what – if anything – alternate forms of ‘data for queer lives’ might look like. Is datafication something that needs to be resisted in all instances and at all costs? Alternatively, is it possible to imagine what Ruberg and Ruelos speculate could be “a more nuanced approach that embraces a queer understanding of gender and sexuality—one that is more inclusive, acknowledges complexity, and affirms the identities of respondents”? This raises a series of challenges:
These are not challenges that are easily addressed, but they certainly point to the need to reimagine the basic premises of what has been described elsewhere as the dominant ideology of ‘dataism’. The challenge for anyone not wanting to dismiss the idea of data and society altogether (and a response of total rejection certainly does have merit from a queer perspective) is therefore how to reimagine and reinvent data science along non-objectivist and non-positivist lines. This suggests an initial step of stripping data of any ‘supposed objectivity’ – instead, acknowledging (and making use of) the incoherence of most aspects of people’s everyday lives, and rethinking the tools and ambitions of any quantitative analysis. In short, Ruberg & Ruelos argue that this requires us to ‘shake loose’ any heteronormative assumptions and “destabiliz[e] the very belief that demographic data can sufficiently reflect the realities” of our messy everyday lives.
**
Of course, rethinking data along these lines is not simply a queer issue. These are clearly conversations and contentions that need to be continued throughout all aspects of ‘critical data studies’. Reimagining data and datafication along these lines raises the need for our discussions of data to confront dataism from the intersectional perspectives of queer people who might be of colour, working class, or with different dis/abilities – in other words, paying close and prolonged attention to disrupting the ways in which data has traditionally impacted negatively on the “lives and bodies of those whom data has, across its cultural history, sought to regulate, surveil, devalue, and even dehumanize”.
**
REFERENCES
D’Ignazio, C., & Klein, L. F. (2020). Data feminism. MIT Press.
Drucker, J. (2011). Humanities approaches to graphical display. Digital Humanities Quarterly, 5(1)
Gieseking, J. (2018). Size matters to lesbians, too: Queer feminist interventions into the scale of big data. The Professional Geographer, 70(1): 150–156
Hamraie, A. & Fritsch, K. (2019). Crip technoscience manifesto. Catalyst: Feminism, Theory, Technoscience, 5(1): 1-33.
Jakobsen, J. (1998). Queer is? Queer does? Normativity and the problem of resistance. GLQ: a Journal of Lesbian and Gay Studies, 4(4): 511-536.
McGlotten, S. (2016). Black data. In: Johnson, P.E. (ed.) No Tea, No Shade: New Writings in Black Queer Studies. Duke University Press (pp.262–286)
Ruberg, B. & Ruelos, S. (2020). Data for queer lives: How LGBTQ gender and sexuality identities challenge norms of demographics. Big Data & Society, 7(1), 2053951720933286.
“If a major intellectual, indeed civic, battle about datafication and its implications for ‘society’ is under way, how well-placed are the social sciences to wage this battle? … Do we have the tools to get in view what is problematic about datafication for social life? Do we have a clear enough idea any more of what should count as critique, and on what empirical and normative resources it depends?” (Couldry 2020, p.1136)
Couldry’s concerns stem from the apparent dominance in recent critical studies of data of approaches rooted in STS (science and technology studies) – most notably work that draws upon Actor Network Theory (ANT) and its various iterations and incarnations.
On one hand, Couldry acknowledges that STS studies offer a welcome “attention to complexity” and build rich descriptions of the complex interconnections and relationships that underpin the people, actions and resources that constitute contemporary data infrastructures. In particular, STS approaches offer a powerful means of teasing out how these networks come together, and what roles they play in shaping data processes and institutions, as well as the formation of data publics.
Yet, Couldry also makes the point that STS is based around a distinct ‘flat ontology’ that lends itself to mapping and describing networks, while doing little to address more critical questions about the nature of social order. In particular Couldry highlights the tendency of STS studies to pay little attention to the distinctive position of human agents within data assemblages, as well as often failing to pay close attention to the larger social order in which a particular assemblage emerges and stabilizes.
Couldry therefore makes a case for paying “less deference” to STS, and more closely aligning critical data studies with critical sociology approaches that focus squarely on the role of data in reconfiguring the overall order of social life. This raises questions about the politics of datafication and the new forms of power that are exerted through data assemblages – not least the embodied agency of the reflexive human subject, as well as consequences of datafication for the constitution of social knowledge and the social world. As Couldry contends, this focuses our attention on the crucial question that underpins the critical data studies approach – i.e.:
“… how is the overall order of social life being reconfigured to promote particular corporate and governmental interests on the basis of new and radical forms of reduction – the reduction of human life to configurations from which profit through data can be maximally extracted?” (Couldry 2020, p.1140)
Couldry expands on this question in the form of three interlinked areas of inquiry that critical data studies needs to address through the extended use of critical social theory:
Couldry offers some suggestions of how established forms of critical sociology and critical theory can be brought to bear on these questions of social order, social values and social epistemology. For example, he suggests that accounts of how relative social order emerges from data infrastructures might draw on Norbert Elias’s notion of ‘figuration’ – thus highlighting the position of human beings within data-driven networks, and the moral tensions that arise from the resulting patterns of interdependence. Similarly, questions of how social order is sustained might explore issues of definitional power through theorists such as Luc Boltanski. They might also draw on ideas of representational and categorical power from the likes of Judith Butler – not least Butler’s work on how powerful institutions regularly recognize some people as significant, and others as not. Finally, Couldry reminds us that critical sociology has an inherent affinity toward questioning the forms of capitalism that underpin datafication. Here Couldry points to Marxian theorists such as Moishe Postone, whose work can foreground the larger social and economic forces shaping new data-driven social orders, and crucially raise the moral question of what consequences these social orders might have for human beings and the things they value.
All told, Couldry argues that these additional perspectives and theory-driven re-focusings might push critical data studies toward interrogating the ways in which data is becoming an organizing principle across all aspects of society, the ways in which corporate and government actors are contriving to build different types of social order through datafication, and the forms and dynamics of the data-driven processes that lie behind these reorganisations. In this sense, critical sociology can expand the horizons of critical data studies toward “processes of social formation themselves on the largest scale”.
**
Reference
Couldry, N. (2020). Recovering critique in an age of datafication. New Media & Society, 22(7), 1135-1151.
The notion of the data assemblage immediately foregrounds the idea of taking a ‘sociotechnical’ approach toward data – i.e. seeing the generation and processing of data as a coming-together of technical matters, scientific innovation, and economic and political forces, alongside social concerns (see Law 1987). Approaching any data assemblage in these terms therefore requires paying equal attention to technological artefacts and technical processes, alongside the ways in which social, political and economic factors influence datafication processes and practices. Indeed, Rob Kitchin and Tracey Lauriault (2018) point to a range of different technological, political, social and economic apparatuses that frame how a data assemblage might operate. These include:
Examples of such data assemblages might include the national census, standardised international measures of school performance (such as the OECD PISA statistics), national crime statistics, health indices, measures of unemployment, and so on.
Each of these assemblages will involve the bringing-together of various people, places, processes and practices. Take, for example, the data assemblage that might underpin higher education ‘rankings’ of universities around the world. Any such enterprise will be underpinned by an implicit positivist system of thought that presumes that educational activities can be quantified and be assigned a comparative value. Ideologically, this foregrounds values of managerialism and institutionalism, alongside an assumption of market-driven efficiencies (see Welsh 2021). The categories, measures and calculations of performance will have been negotiated and challenged by various stakeholders – not least individual universities and statistical commentators – and subsequently detailed in different forms of documentation. The data will be produced and provided by individual university administrative units, and national education agencies – all purporting to some form of objectivity and unbiased reporting. The rankings will be administered and promoted by organisations (such as Quacquarelli Symonds, Shanghai Ranking Consultancy and the Times Higher) seeking to profit from the generation of the data, alongside consultants seeking to assist individual universities in ‘gaming’ the generation of more favourable data. The ‘results’ will be publicised and circulated through a range of different media and infrastructures. These metrics will feed into wider market-driven uses of the data to construct rankings and measures of quality that, in turn, align with the broader political economy of university student enrolments, allocation of funding, and general public and professional understandings of ‘prestige’.
Critical data studies researchers will therefore often focus on detailing and documenting the constituent elements and apparatuses of any data assemblage – paying close attention to the connections that are formed between these apparatuses, as well as broader connections with wider data regimes. Yet, as with all forms of critical data studies, this is never simply a descriptive exercise. One key question to ask is how any data assemblage works to enhance and maintain the exercise of power within a society. In this sense, Kitchin makes explicit comparisons between the idea of the data assemblage and Foucault’s concept of the ‘dispositif’ – in particular, the question of how any specific data assemblage functions to produce what Foucault terms ‘power/knowledge’. Critical data studies researchers are thus driven to interrogate the types of knowledge that any data assemblage is set up to produce, how this knowledge functions to further the strategic goals and aspirations of dominant institutions, and what wider frameworks the produced data operates within. To illustrate this point, Kitchin quotes Foucault’s (1980, p.196) observation that …
‘… the apparatus is thus always inscribed in a play of power, but it is also always linked to certain coordinates of knowledge which issue from it but, to an equal degree, condition it. This is what the apparatus consists in: strategies of relations of forces supporting, and supported by, types of knowledge’
**
References
Foucault, M. (1980). Power/Knowledge. Pantheon Books.
Iliadis, A., & Russo, F. (2016). Critical data studies: an introduction. Big Data & Society, 3(2), 2053951716674238.
Kitchin, R. and Lauriault, T. (2018). Towards critical data studies: charting and unpacking data assemblages and their work. In Thatcher, J., Shears, A. and Eckert, J. (eds) Thinking big data in geography: new regimes, new research. University of Nebraska Press (pp.3-20)
Law, J. (1987). The structure of sociotechnical engineering—a review of the new sociology of technology. The Sociological Review, 35(2), 404-425.
Ribes, D. and Jackson, J. (2013). Data bite man: the work of sustaining a long-term study. in L. Gitelman (ed) ‘Raw data’ is an oxymoron. MIT Press (pp.147-166).
Welsh, J. (2021). A power-critique of academic rankings: Beyond managers, institutions, and positivism. Power and Education, 13(1), 28-42.
That said, critical data studies is a useful statement of intent to look beyond the depoliticised boosterism and hype that propelled discussions of ‘big data’ and ‘data science’ during the 2010s. As such, ‘critical data studies’ is a term that draws together researchers from across the social sciences, humanities, legal and policy fields, arts and design. Uniting these different academic perspectives is the intention to “avoid the hubris of pseudo-positivism and technological determinism, in favour of the nuanced and contingent” (Dalton et al. 2016, p.1).
There are perhaps three distinguishing foci within critical data studies that set it apart from previous sociologies of numbers and statistics.
In all these ways, then, critical data studies signifies a set of distinct epistemological and ontological approaches to making sense of the social issues relating to digital data in contemporary society – foregrounding familiar sociological concerns with post-positivist and interpretivist understandings, an interest in the social construction of data, and a reflexive concern with revealing and challenging dominant power structures. These ideas and concerns are therefore reflected in the ways in which leading proponents of critical data studies frame key areas of concern. For example, Dalton and Thatcher (2014) suggest several ways in which researchers can set about fully interrogating contemporary ‘regimes of data’, i.e.:
**
References
Dalton, C. and Thatcher, J. (2014). What does a critical data studies look like, and why do we care? Seven points for a critical approach to ‘big data’. Society and Space, May 12th
Dalton, C., Taylor, L. and Thatcher, J. (2016). Critical data studies: a dialog on data and space. Big Data & Society, 3(1), 2053951716648346.
Iliadis, A. and Russo, F. (2016). Critical data studies: An introduction. Big Data & Society, 3(2), 2053951716674238.
Neff, G., Tanweer, A., Fiore-Gartland, B. and Osburn, L. (2017). Critique and contribute: A practice-based framework for improving critical data studies and data science. Big Data, 5(2), 85-97.
The idea of ‘design justice’ has become a regular feature of socially-minded discussions of technology development. In practical terms, the ‘Design Justice’ Network formed in 2015 as a community of like-minded design practitioners, community organisations and tech-facing institutions. Now, to complement these activities, Costanza-Chock’s (2020) book for MIT Press lays out the principles, ideas and commitments underpinning the design justice approach – detailing the central tenets of an analytical framework that chimes with the DSS project’s interest in re-imagining the presence of data-driven digital technologies in schools and other educational settings.
The book addresses a series of questions about the design of sociotechnical objects and systems – from the values that get encoded and reproduced in design processes, through to fundamental questions of how design problems get framed, and who gets to design for them. Running throughout these discussions are two core beliefs relating to inclusive and active participation. First is the assertion that “those who are directly affected by the issues a project aims to address must be at the centre of the design process” (p.7). Second is the ethos that “absolutely anyone can participate meaningfully in design” (p.7). To illustrate this latter point, Costanza-Chock evokes Victor Papanek’s (1972) definition of design as “the conscious effort to impose a meaningful order”. Seen along these lines, then, design is clearly something that everyone does as part of their everyday lives. As such, there is no reason that people cannot engage in the (re)design of the emerging sociotechnical systems in their lives.
Basic definitions and principles
The book starts with a basic initial definition – i.e. “Design justice rethinks design processes, centres people who are normally marginalized by design, and uses collaborative, creative practices to address the deepest challenges our communities face”. This is then expanded into the following ten collective principles ….
Intellectual precedents and guiding ideas
Running throughout these design justice principles is the ambition to reimagine design along explicitly justice-orientated lines. As such, there are clear correspondences with Costanza-Chock’s own background in Participatory Action Research and the Scandinavian tradition of Co-Design – both of which emphasise the need to position design as taking place within communities of shared inquiry and action.
Nevertheless, ‘Design Justice’ is distinct in also taking its lead from Black feminist thinking. This includes ideas of intersectionality – foregrounding the idea of gender, race, class and disability as interlocking systems. In other words, these are not categories that operate independently, but are experienced in conjunction with each other by individuals and groups who exist at their intersections. As such, Costanza-Chock contends that designing an object along non-intersectional lines that focus on ‘single-axis’ conceptualisations of ‘fairness’ is not good enough. For example, designing a system solely in response to concerns over gender equity is unlikely to benefit all women. Instead, gender needs to be approached as one of many elements in a ‘matrix of domination’. In other words, gender interlocks with other systems of oppression (such as race and class), prompting different women to experience different forms of benefit/harm from a sociotechnical object or system depending on their location within this matrix.
Design justice also draws on Black feminist ideas of ‘situated knowledge’. This stipulates that any efforts to design for particular intersectionalities need to draw on the insights about power and oppression that come from those who occupy the same subjugated standpoints. In short, only those with lived experience of their particular positionality will have genuine insights into how oppression is experienced at the levels of personal biography, group/community, and the systemic level of oppressive social institutions. These experiences and insights need to be a central element of the design process.
In this sense, design justice clearly follows on from long-standing debates in the field of disability justice around inclusive and respectful design. Indeed, throughout the book, Costanza-Chock evokes disability justice slogans such as ‘Nothing About Us Without Us’ and the advocacy slogan “If You’re Not At The Table, You’re On The Menu” (p.84). As such, Costanza-Chock duly acknowledges design justice’s precedents in previous traditions of ‘value-sensitive design’, ‘universal design’ and ‘inclusive design’. For example, the universal design ambition for making designs accessible to the widest possible set of potential users raises the idea of deliberately designing for those who are currently systematically disadvantaged within the matrix of domination. This raises the idea of designing explicitly for people who might otherwise be deemed in conventional design practice as ‘edge cases’, and seeking to shift advantage by prioritising the needs of the usually marginalised. Similarly, the tradition of ‘inclusive design’ also explicitly recognises that user experience is shaped by specific context. This pushes for the design of products that are sensitive and responsive to the diversity of their users – e.g. in terms of ability, language, culture, gender and age – rather than expecting each individual to fit the requirements of the system. As the inclusive design mantra puts it, design needs to start from the expectation that perhaps only “one size will fit one” as opposed to “one size fits all”.
Issues & Challenges Raised
The design justice approach is intentionally provocative – raising a number of questions and advocating for a radical divergence away from how many designers might conceive the purpose and role of their work. In taking these ideas forward, the following issues recur throughout Costanza-Chock’s discussions:
i. Design Justice is not ‘blaming’ designers … but challenging them to rethink the outcomes of their work
First, it is important not to see design justice as blaming designers and prevailing design cultures. The starting point of the book is undeniably blunt: i.e. that the technology that we make “too often contributes to the reproduction of systemic oppression” (p.xvii). Yet, this is not to accuse designers of deliberately setting out with intent to systematically exclude or disadvantage already marginalised groups. As Costanza-Chock stresses, many designers undoubtedly consider themselves to be liberal (if not left-leaning) citizens, well aware of social justice imperatives, and perhaps even allies to marginalised groups. These are not people who set out to design biased, unfair and disadvantaging systems. Nevertheless, the fact remains that technology design is often beset by unconscious bias – reproducing norms, majority opinions, ideal types, ‘average’ users, and standardisations. This is especially the case when the primary consumer of a technology is the institution rather than the individuals within it. As Costanza-Chock puts it: “larger systems – including norms, values, and assumptions – are encoded in and reproduced through the design of sociotechnical systems” (p.4).
This slippage is illustrated if we unpack the popular idea of technology ‘affordances’ – i.e. what a technology is seen to allow users to achieve. Take for instance, the claim that a ‘Chat’ feature affords student/teacher communication, or that a ‘Shared Document’ platform affords collaborative co-writing of text. From a design justice point of view, these presumed affordances are not universally experienced, but depend on each person’s circumstances and context. As such, Costanza-Chock reminds us that the design of any presumed ‘affordance’ needs to be seen in terms of ‘affordance perceptibility’ and ‘affordance availability’. This first notion of ‘perceptibility’ relates to the issue of whether the user can literally see, hear and/or decode what the technology is offering. Is text visible to different forms of sightedness, or written in a script that the user can decode? Similarly, affordance ‘availability’ raises the issue that many digital objects are not equally available to all.
Crucially, Costanza-Chock also goes on to raise the converse notion of ‘disaffordances’ (where the design of an object actively prevents some users from doing something), and ‘dysaffordances’ (where some users have to mis-identify themselves in order to engage with an object). Think of a non-binary person having to ‘choose’ a binary gender in order to progress through an interface. All of these disadvantages, micro-aggressions and exclusionary features might not be deliberately designed into technologies, but they are nevertheless prevalent throughout the digital landscapes that we encounter during our day-to-day lives.
ii. Design Justice is primarily a process of community organising
Another key characteristic of the design justice approach is the emphasis placed on working with community groups, activists and others already involved in tackling the issues that any technosocial object is being designed to address. Costanza-Chock starts by reasoning that any designer is very unlikely to be the first person to have identified a problem. On the contrary, wherever there is a problem there are likely to be people already acting on it in some fashion. The key starting-point for any design intervention, therefore, is engaging with these pre-existing actions and working to develop a rich understanding of what people are already doing. This is especially important in terms of identifying likely forms of contextually-appropriate technology use. Costanza-Chock makes the point that many effective forms of new technologies and technology-based practices are first imagined and initiated within marginalised communities, activist groups and other social movement networks. Paying attention to how people have already developed technology products and practices ‘on the ground’ is therefore crucial.
In this sense, Costanza-Chock stresses the point that a large part of design justice is community organising. This does not simply involve designers seeking to work with individuals in various ‘participatory’ roles. In contrast, “design justice practitioners choose to work in solidarity with and amplify the power of community-based organisations” (p.91). This requires designers to be prepared to give away power during all stages of the co-design process – to partner with community actors and concede power while doing so. This might involve handing over responsibilities for convening the group, choosing who participates, structuring the work, and making other key decisions along the way.
iii. Design Justice should always look to build on existing subaltern uses of technology
One benefit of working with community organisations, social movements and local activists is the opportunity to explore resistant, subversive and oppositional forms of technology appropriation. As Costanza-Chock puts it, ‘Resistance Is Fertile’. Technological innovation can often arise from the ways in which lay-people ‘misuse’ systems, appropriate technologies for alternate purposes, and generally bend the rules and expectations of how technologies can be used. As such, “those whose needs have long been marginalised within the matrix of domination have a strong information advantage when it comes to anticipating those needs and developing possible solutions” (p.111).
That said, it is important for designers to enter into any exchanges of knowledge and ideas in an ethical and respectful manner. Design justice is not a case of designers looking to appropriate, ‘borrow’ or steal ideas. Instead, design justice involves designers working with marginalised design practices, attributing fairly and collaborating with lay-people as genuine co-designers and co-owners. Technology design has a shameful history of co-opting resistant practices and stripping them of their political intent (as has been the case with the mainstream adoption of hackathons, hacklabs and maker spaces). Instead, designers should strive to support existing ideals of grassroots innovation, convivial tools, alt-tech and other existing forms of locally-relevant and “socially useful production”. As Costanza-Chock concludes: “The most valuable ingredient in design justice is the full inclusion of, accountability to, and control by people with direct lived experience of the conditions designers claim they are trying to change” (p.25).
iv. Design Justice requires designers to call oppression out, rather than ignoring its existence
Finally, design justice compels designers to explicitly acknowledge and work to counter collective disadvantage and discrimination. This requires designers to move away from any temptation to adopt an approach that deliberately sets out to ignore issues – as is the case with ‘colour-blind’ or ‘gender-blind’ design, or buying into the logic of ‘individualised equality’ or ‘symmetrical treatment’. In contrast, design justice does not aim to somehow eliminate bias and unfairness, but to actively and intentionally address it. As Costanza-Chock puts it, “racial hierarchies can only be dismantled by actively antiracist system design, not by pretending they don’t exist” (p.62).
This suggests that designers should foster deliberately antagonistic relationships with issues of equity and equality. Rather than striving to treat all users as equal, and attempting to erase difference and deny discrimination, designers need to explicitly identify disadvantage and oppression, call it out and act upon it. This requires designers to actively address issues of marginalisation and oppression in their designs:
“Regardless of the design domain, design justice explicitly urges designers to adopt social justice values, to work against the unequal distribution of design’s benefits and burdens, and to attempt to understand and counter white supremacy, cisheteropatriarchy, capitalism, ableism, and settler colonialism” (p.68)
Implications for ed-tech designers
All these issues and ideas imply a number of practical shifts in how the design of new technologies is approached. As far as designers and developers of educational technologies are concerned, taking on the design justice approach implies deliberately refocusing their practices and work processes. This includes diversifying who gets to ‘do’ design within an ed-tech design team, who is involved in ‘participatory’ practices, who is featured in any user stories and narratives, and the diversity of any presumed ‘user personas’. This implies ed-tech designers spending more time developing intersectional user narratives, intersectional testing approaches, benchmarks and impact assessments. All told, design justice implies taking a slower, more circumspect and self-aware approach to all phases of the ed-tech design process. This is the opposite of the sociopathic compulsion within some sectors of the tech industry to ‘move fast, and break things’.
In this sense, design justice calls for more humility and less hubris amongst ed-tech design communities. This implies designers distancing themselves from the self-styled notion of coming into an educational context to ‘solve’ problems – what Costanza-Chock describes as a ‘Design Saviour’ mentality. Instead, designers need to see their work as just one component in a community’s ongoing ‘cycle of struggles’ (p.231). At the same time, design justice also implies that ed-tech designers actively embrace a radical set of aims and intentions – looking beyond goals of producing ‘inclusive’ or ‘fair’ ed-tech, and instead pursuing aims of technology that supports increased justice, autonomy and sovereignty for marginalised groups. Perhaps the most radical conclusion that design justice principles will sometimes lead to is making the decision to not design the technology at all. Evoking the #TechWontBuildIt movement that has sprung up amongst US tech workers, Costanza-Chock reminds us that “there are many cases where a design justice analysis asks us not to make systems more inclusive, but to refuse to design them at all” (p.19). These are all ideas that are not commonly expressed in ed-tech circles to date. In this sense, design justice implies a significant shift in mindset for everyone involved in designing, developing, producing and procuring emerging technologies in education.
Conclusions
Despite its strident appearance, design justice does not offer a neat treatise or set of solutions. Costanza-Chock is at pains to point out that design justice is an ongoing process rather than a prescription – raising questions and pointing us in different directions rather than explicitly instructing people how to do design better. As ever, it is important to acknowledge that there are limitations to any approach. For example, design justice might be criticised for perpetuating a belief that ‘design can save us’. Institutional racism, patriarchy and transphobia are issues and injustices that we need to learn to work around, mitigate and minimise – struggles that cannot be completely overcome and ‘won’ through better design. Socially-minded designers can never compensate fully for the horrors of contemporary society. As Ruha Benjamin (2019, p.175) writes, there is always a risk that ‘justice’ will be co-opted to legitimise “dominant notions of design as a universal good. I wonder how justice itself might be altered by its proximity to design as a buzzword and brand”.
Nevertheless, design justice certainly constitutes a useful rejoinder to the bleak analyses of technology that critical ed-tech studies can sometimes engender. It offers a ready way into thinking about how to disrupt the cycle of ed-tech working mainly to reinforce interlocking systems of structural inequality. It prompts us to take seriously “the relationship between design and power” (p.xvii), the ways in which ed-tech artefacts ‘have politics’ and are encoded with values, as well as the narrow ways, places and paradigms in which ed-tech products are conceived and developed. Perhaps the central message in the book is that good technology design is primarily a matter of social innovation – developing new communal formations and genuinely participatory processes within which different forms of interaction, mutual learning and innovation can occur.
Design justice certainly raises questions for our own research interests in exploring the extent to which compulsory schooling can ever be a fertile setting within which to pursue genuinely participatory design interventions. Compulsory schooling (and, in particular, the use of data-driven technologies within schools) is not a context that readily lends itself to redistributive action and active advancement of marginalised interests. Yet, this is not to say that we should not continue to try. Design justice certainly gives us a set of ideas and ideals to aim for. Whether we get there or not remains to be seen.
References
Benjamin, R. (2019). Race after technology. Polity
Costanza-Chock, S. (2020) Design justice: community-led practices to build the worlds we need. MIT Press
Papanek, V. (1972) Design for the real world: human ecology and social change. Thames & Hudson
At the same time, there is continued frustration over the socially-unaware and politically uninterested disposition that seems to pervade the computational and data sciences. It is argued, for example, that data scientists show little regard for the complex social contexts within which their work is implemented. Social commentators are infuriated by attitudes within the data sciences that technology is neutral and data is objective, as well as by the all-absolving claim of ‘I am just an engineer’.
Thus, while the humanities, arts and social sciences certainly need to up their game, there is mounting pressure for data science to become much more politically-oriented. This contention is worked through in admirable detail by the Harvard data scientist Ben Green in his call to approach ‘data science as political action’. In particular, Green develops two aspects of this thesis – first, considering why data scientists should recognize themselves as political actors, and second, reflecting on how data scientists might ground their practice in politics. In so doing, the following points are raised.
#1. Data scientists are not apolitical/ data is not neutral
Green starts by reminding us that attempting to present oneself (or one’s actions) as apolitical is itself a political stance. More specifically, attempting to claim neutrality is a ‘fundamentally conservative’ position that signals implicit support for maintaining the status quo and, therefore, the interests of dominant social groups and hegemonic political values.
As such, Green has little time for data scientists who claim neutrality or claim that they are somehow operating ‘outside of politics’. No knowledge or action is purely objective, and no data science can be carried out in the expectation of simply discovering ‘knowledge for knowledge’s sake’. Here Green draws on Donna Haraway to remind us that no branch of science is able to lay claim to providing a completely detached, objective “conquering gaze from nowhere”. Knowledge cannot be value-free – instead, any knowledge is aligned with the social contexts that generate it. As such, data science is not a matter of developing neutral technologies that are capable of being used for good or bad ends.
To reiterate this point, Green draws on Langdon Winner (1980) to argue for embracing the understanding that every artifact arising from data science ‘has politics’. All data systems, processes and procedures are based on design decisions that have impacts that are determinative for society. This is not to solely blame data scientists for the consequences of their design and development work. Yet it is important for the field to acknowledge some degree of responsibility – especially for how data scientists choose to interact with the aspects of society that their products interact with.
#2. The trap of turning to data ‘ethics’
Of course, we have recently seen well-publicized efforts to imbue data science with an awareness of ethics, and with professional understandings of fairness, accountability and transparency. Yet Green remains skeptical of these recent shifts in emphasis. In this sense, Green follows on from the burgeoning criticism of high-profile episodes of ‘ethics-washing’ amongst Big Tech actors, where ethics frameworks and ethics boards are established to no great effect (other than as an attempt to avoid regulation). While attuning data science toward issues of ethics is a welcome ‘first step’, it remains an insufficient response by itself.
For example, Green argues that recent calls for computational and technical professions to adopt a form of Hippocratic oath overlook the fact that professional codes rarely (if ever) result in social justice. These codes seldom contain clear normative directions of what data scientists should be doing (i.e. beyond vague allusions to ‘being aware’ of the social, cultural and political impact of their work). Moreover, these codes are rarely reinforced by mechanisms to ensure that programmers and engineers follow the stated principles, or to hold them accountable for any violation. It can also be argued that the idea of easily achievable ‘ethical’ action simply propagates the false dichotomy that technology and society are somehow distinct from each other rather than inherently entwined.
As such, there continues to be good grounds to be doubtful of calls for ethics training to be established within data science education. Indeed, there is growing pushback from within the data science community against the tokenism of ethics training. As Ellen Broad argues, the idea of training a new generation of data scientists who are highly technically-skilled and thoroughly-drilled in issues of fairness/ accountability/ transparency replicates the well-worn trope of the ‘unicorn data scientist’. These are characteristics that no one individual can be expected to possess as a matter of course. Instead, these qualities will usually only result from a mixture of people working together in collaborative teams.
#3. The trap of turning to ‘data science for social good’
Alongside data ethics, Green also problematizes the seemingly progressive position of pursuing data science for social good. This is the logic that while never capable of providing perfect solutions, data science can be used to improve current circumstances. On one hand, Green commends data scientists for being willing to engage in more nuanced thinking and engagement with social issues. After all, an ambition to ‘do good’ brings a human focus to what might otherwise be largely technical concerns. Nevertheless, Green laments how these efforts are hampered ultimately by their reliance on vague and unarticulated political assumptions about what ‘social good’ might constitute (let alone the question of whether an unproblematic ‘good’ might be achievable at all).
At best, Green argues, this approach to data science falls into a non-politicized “know it when you see it” approach to deciding what constitutes social good. This leads quickly to crude equivalencies such as ‘poverty = bad’ or ‘staying enrolled on a university course = good’. Couching one’s actions in such broad-brush presumptions is a convenient way of glossing over the fact that deciding what constitutes ‘good’ involves normative judgement, which ideally should be driven by an underpinning guiding political philosophy.
This lack of grounding principles means that the ‘social goods’ being pursued by data scientists can cover a wide (and sometimes conflicting) range of political positions. This can result in dangerous over-simplifications of issues that are actually politically complex and may lack any clear consensus over what is desirable. As such, data scientists run the risk of blithely “wading into hotly contested political territory” and acting in a contestable (perhaps regressive) manner. As Green concludes:
“By framing their notions of ‘good’ in such vague and undefined terms, data scientists get to have their cake and eat it too: they can receive praise and publications based on broad claims about solving social challenges while avoiding any actual engagement with social and political impacts”.
#4. Meaningful social change can only result from direct engagement with the politics of data
At best, then, Green reasons that talk of ‘data ethics’ or data science for ‘social good’ can only be opening points of complex conversations about what might be the most desirable applications of data science in any social context. Crucially, these conversations need to be framed by explicit sets of values, and ready to embrace the politics of negotiating between competing perspectives, goals, and agendas. As such, there can be no clear-cut ‘right’ or ‘wrong’ applications of data science that do not merit scrutiny. Instead, Green pushes for a cultural shift throughout the field to encourage a collective understanding amongst data scientists of being co-engaged in political action that has varying impacts on different groups of people over time.
Pursuing data science along these lines clearly requires additional time and effort. Indeed, if Green’s arguments are taken to their logical conclusion, any decision regarding how to apply data science to a social setting can only be taken after a considerable amount of deliberation, debate, dialogue and consensus building. These deliberations need to be especially mindful of the complexities of the social contexts in which any data system, tool or application is to be implemented. This is not to say that data scientists need to compromise their technical interests, expert knowledge, or passion for problem-solving and innovation. Yet, as Green reasons, any computational skills and passions need to be bolstered by a new concurrent acknowledgement that:
“… data science is a form of political action. Data scientists must recognize themselves as political actors engaged in normative constructions of society and, as befits political work, evaluate their efforts according to the material downstream impacts on people’s lives”.
#5. So what might a political data science look like?
So, what might a politically-engaged data science look like, and how might its practitioners think and act differently? Of course, answering this question first requires clarity on precisely what political agendas are to be pursued. Taking his own political agenda as a starting point, Green extends the idea of ‘social good’ by considering how the field might evolve toward a more deliberative and rigorous grounding in a politics of ‘social justice’. Along these lines, Green outlines four phases that can guide individual change and institutional reform across the data sciences.
These suggestions raise a number of interesting new directions that data science might wish to pursue. First is the need for data scientists to pursue clearly articulated visions of social benefit. This might require developing better understandings of the social contexts in which data science work will be implemented. For example, educational data scientists might well benefit from secondments to teach in secondary schools, administer humanities classes in a university, or work in the Global South.
Green’s suggestions also raise the prospect of consulting and/or conducting social research on the issues that data science is attempting to address. Here, Green recommends engaging with academic literature rooted in the STS tradition. He also suggests conducting studies that adopt ‘critical design’, ‘anti-oppressive design’ and other participatory approaches to framing data science problems and then developing data-led ways to address them. To these ends, Green evokes the South African disability rights mantra of “Nothing About Us Without Us”. In other words, participatory approaches to data science cannot merely rely on tokenistic ‘end user’ consultation, but must instead genuinely commit to the central involvement of those whose lives are directly impacted by the data science.
Conclusions
As a data scientist himself, Green is certainly not presenting an anti-data science rant. If anything, Green is simply attempting to imbue his chosen field with a heightened sense of perspective. It is important to recognize that data science does not sit outside of society – neither does it have any power (and/or responsibility) to take on the task of completely changing and/or ‘saving’ the world. Yet, by the same token, neither is data science wholly culpable for the detrimental impacts of its work. Data scientists are simply part of the same social milieu as non-data scientists.
In this sense, Green’s arguments culminate in a call-to-arms along similar lines to a few other recent commentaries in this area – in short, are data scientists seeking to work with the system or against the system? Here Green looks back to André Gorz’s distinction between “reformist reforms” (actions that limit their objectives to what is rationally achievable within a given system), and “non-reformist reforms” (actions which are driven by an interest in what should be made possible in terms of human needs and demands). In short, reformist reformers start with existing systems and strive to improve them, while non-reformist reformers start from a set of desired social conditions and seek ways to attain them (regardless of system constraints).
Green reasons that, to date, data science has almost always been focused on ‘reformist reforms’. After all, this is a field founded upon a standard logic of accuracy, efficiency and improving the performance of systems rather than substantively altering them. As such, conventional data science is an inevitably conservative pursuit. Nevertheless, Green raises the prospect that we perhaps need to set about developing more revisionist forms of data science (see also previous posts on Os Keyes and Roderic Crooks). For me, this is perhaps the most interesting conclusion for education data scientists to begin to seriously consider. How might a substantially different form of data-based education be established that undermines, usurps and utterly blows away the current conditions of the datafied classroom? This is not simply a case of appropriating data science ‘for good’, but a far more radical proposition of harnessing data science for revolt!
Reference
Green, B. (2018) Data Science as Political Action: Grounding Data Science in a Politics of Justice. https://arxiv.org/pdf/1811.03435
As we enter the 2020s, STS researchers are now addressing a diversity of topics – including a thriving literature on ‘number studies’. This subfield is developing various lines of discussion relevant to our own research – not least the epistemological and ontological implications of conceiving the world in terms of numbers and statistics. Also of particular relevance is recent interest in addressing questions of what numbers ‘do’ and what is being ‘done’ with numbers in contemporary social settings. There is much that we can take from such studies when looking to make sense of the datafication of schools.
Lippert and Verran frame this aspect of STS as addressing the topic of ‘After Numbers’ – reflecting a temporal perspective on the ‘doing’ of numbers and data. This involves considering all aspects of what Lippert and Verran term the ‘life-cycles’ and ‘narratives’ of any piece of data. This immediately draws attention toward what happens ‘behind’ and ‘before’ a data point is encountered in a school. For example, what is the story of this number’s emergence and its preceding history as a ‘becoming-number’? What interests, motivations, commitments and expectations were invested in the anticipated data?
Conversely, we also need to pay due attention to the ever-diversifying roles that any data-point has ‘after’ it has been generated – especially as it finds a place within the everyday life of the school. This involves studying the diverse ways that a piece of data is ‘brought to life’ within the school; the contrasting ways that actors might relate to the same piece of data; and how different people practically work with (and/or work around) the data. Such perspectives involve taking a long view of data – not least the lasting ramifications and iterations of a data point beyond its most obvious and visible uses (what Lippert and Verran describe as ‘nth-order calculations’ and social effects).
As these ‘before’ and ‘after’ perspectives imply, STS encourages us to think beyond the idea of data as a stable, fixed entity with a distinct substance and character. Instead, it pushes us to approach data in relational terms. This requires thinking about what is being done with data amongst different networks of actors and interests. This includes examining the interactions and transactions that take place around (and through) data, as well as the meanings that people ascribe to data at various times and in different contexts. It also involves ascribing data some form of ‘social life’ of its own – entwined with the activities of human and non-human actors. In all these ways, then, STS pushes us to consider data in terms of what data does within the social context of a school, rather than what data is (Day et al., 2014).
These questions can be directed toward various aspects of school ‘data’. For example, we might choose to explore the material and epistemic practices that shape data (and stories of data), as well as how people ‘live with’ data alongside the less conscious ways that they also ‘live in’ data (Day et al., 2014). All told, the numerical turn within STS foregrounds a distinct line of questioning that certainly relates to our own concerns around datafication. As Whitney and Kiechle (2017, p.4) summarise:
“Who quantifies, and to what purpose? Are numbers merely fact and/or rhetoric, or are they available as meaningful bodily experiences and stories about the past, present, and future? How do conflicting social forces attempt to make different meanings from numbers? How does the practice of quantifying nature differ between corporate, state, and non-state actors? How do narratives and bodies challenge or reinforce the centrality of numbers in understanding, representing, and regulating environments?”.
Underpinning these questions are a few broader principles that we would also do well to bear in mind when conducting our own research. For example, STS has long sought “symmetry” in telling all sides of the story of any technological artefact or phenomenon (be it a single data point or an entire school system). This implies paying attention to all ‘relevant social groups’ (from the most minor influences to the major shapers). It also implies giving equal consideration to dominant and peripheral opinions, and to successful and failed versions of the technology.
STS studies are also characterised by a disposition to belligerently address everyday aspects of a phenomenon while also remaining creative and well-humoured. Indeed, Gunderson (2016, p.46) describes the “aesthetic standard” in STS as including a “playful seriousness, attention to thick descriptions of the mundane, pleasure in subverting common assumptions”.
Finally, there is an underlying interest in identifying alternatives – particularly in terms of working out how technology might better act in the public interest. For example, there is a long-standing STS interest in ‘deliberative democracy’ – supporting the discussion and debate of significant science and technology ‘controversies’ that affect wider society. Through such representative means, STS looks to give greater prominence to marginalized and excluded versions of what technology could be.
REFERENCES
Day, S., Lury, C. and Wakeford, N. (2014) Number ecologies. Distinktion: Scandinavian Journal of Social Theory, 15:123-154
Gunderson, R. (2016) The sociology of technology before the turn to technology. Technology in Society, 47:40-48
Lippert, I. and Verran, H. (2019) After Numbers? Innovations in Science and Technology Studies’ Analytics of Numbers and Numbering. Science & Technology Studies, 31(4):2-12
Whitney, K. and Kiechle, M. (2017) Introduction: Counting on Nature. Science as Culture, 26(1):1-10
Regardless of whether or not these constitute a particularly Nordic sensibility, one interesting initial set of distinctions related to the motivations and justifications for the use of algorithms.
These can be seen as different institutional and organisational logics that might underpin the implementation of data-driven systems and processes – namely, logics of algorithmic control, algorithmic care, and algorithmic empowerment.
These three logics therefore underpin distinctly different justifications for the use of data-driven systems and processes within a school. For example, whereas the logic of algorithmic control is associated with justifications of necessity and protection, the logic of algorithmic care stresses a desire to support the thriving and well-being of students and teachers. Conversely, the logic of empowerment is grounded in ideals such as progress and development.
These distinctions provide a useful way into exploring the implementation of data-driven systems and processes in our research schools – moving our analysis toward more nuanced readings of how power is being exercised through these digital technologies. When are these logics being employed and with what outcomes? What different interpretations are there within a school for the same algorithmic system?
Crucially, it is important to see these logics in terms of institutional intentions for the individuals who make up ‘the school’. For example, the logic of empowerment certainly relates to a desire to increase the capacity of teachers and students to act – albeit to act in ways that relate to underlying principles of optimization and efficiency. Similarly, the logic of care relates to supporting forms of behaviour that the school considers as being in an individual’s best interests.
This therefore raises the additional question of what these logics might look like if developed from alternate perspectives – such as the collective interest of specific student or teacher groups. What forms of control, care or empowerment would these groups be striving toward? To what extent would these be compatible with the institutional logics of the school and education system?
REFERENCE
Pääkkönen, J., Haapoja, J. and Lampinen, A. (2019) Nordic Perspectives on Algorithmic Systems: Notes from a Workshop on Metaphors and Concepts. June 17th, https://rajapinta.co/2019/06/17/nordic-perspectives-on-algorithmic-systems-notes-from-a-workshop-on-metaphors-and-concepts/
Writing about the processing work that underpins the presentation of data on government portals, Helene Ratner and Evelyn Ruppert describe various ‘aesthetic practices’. This is data work that takes place around the collation of different data produced at different sites, and their subsequent presentation for public consumption. The notion of aesthetics does not relate to concerns over beauty per se – rather to the behind-the-scenes re-arranging and ‘neatening’ of data into a form that can be smoothly processed.
These practices include various data ‘cleaning’ decisions regarding which data are to be included and which are to be discarded (what Ratner and Ruppert describe as a process of ‘managing absence, inaccuracy and indeterminacy’). These practices also include various data ‘packaging’ decisions about how to overcome inconsistencies and ‘frictions’ within a dataset – the latter include decisions over how best to standardize, classify and label data (Leonelli 2016). As Ratner and Ruppert contend, these are all normative decisions that result in some data being made absent, and therefore some information remaining undocumented.
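As a rough illustration of how such decisions play out in practice, here is a minimal sketch (in Python, using entirely hypothetical records and field names – not anything from Ratner and Ruppert’s study) of routine ‘cleaning’ and ‘packaging’ steps that quietly determine which records survive into a published dataset:

```python
import pandas as pd

# Hypothetical attendance records submitted by different schools;
# the column names and values are illustrative assumptions only.
raw = pd.DataFrame({
    "school": ["North", "South", "East", "West"],
    "absence_rate": ["4.2", "n/a", "5.1", ""],
    "term": ["2019-1", "2019-1", "19/1", "2019-1"],
})

# 'Cleaning': records with missing or unparseable values are silently dropped.
cleaned = raw.copy()
cleaned["absence_rate"] = pd.to_numeric(cleaned["absence_rate"], errors="coerce")
cleaned = cleaned.dropna(subset=["absence_rate"])

# 'Packaging': inconsistent labels are forced into one standard scheme.
cleaned["term"] = cleaned["term"].replace({"19/1": "2019-1"})

# Two of the four schools never make it into the published table -- an absence
# that the resulting dataset gives no hint of.
print(cleaned)
```

Nothing in the resulting table signals that half of the submitted records were discarded – exactly the kind of managed absence that these ‘aesthetic practices’ produce.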
As such instances imply, it is important to understand the collation of data-sets, and their subsequent processing and presentation, as socially-shaped ‘translations’ of data rather than objective reflections. In this manner, any data portal or data visualization might be best approached as a performative ‘site of projection’. The important questions here, then, relate to what is being projected, by whom, with what intentions, and with what outcomes? Conversely, what aspects of the data are not being projected?
References
Leonelli, S. (2016). Data-centric biology. Chicago, University of Chicago Press.
Ratner, H. and Ruppert, E. (2019) Producing and projecting data: aesthetic practices of government data portals. Big Data & Society [forthcoming]