The use of data visualisations in school

An important (but often overlooked) feature of how schools ‘do’ data is the production and consumption of data visualisations. In this sense, we need to pay close attention to the role that computer-generated charts, maps, graphs and infographics play in how data is communicated and made sense of in schools.

As is the case in many organisations, schools tend to rely on a limited number of visualisation tools and techniques. For example, the majority of school data visualisations derive from the default settings of popular data-processing tools such as Excel, Tableau and Google Charts. As Cole Nussbaumer puts it, much of what passes for data visualisation is simply the result of “what happens if you put the data in Excel and [choose] ‘Chart Data’”.
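To make this concrete in code rather than a spreadsheet, consider the following minimal sketch. It uses Python's pandas and matplotlib libraries purely as stand-ins for Excel (neither is named in our case studies), and the attendance figures are invented. The point is that a single call hands every design decision over to the tool:

```python
# A hypothetical sketch of 'default-settings' charting: one call, and the
# chart type, colours, axis limits, fonts and labelling are all supplied
# by the library rather than chosen by the author.
# (The attendance figures below are invented for illustration.)
import pandas as pd
import matplotlib.pyplot as plt

attendance = pd.DataFrame(
    {"attendance_rate": [94.2, 91.8, 89.5, 92.1]},
    index=["Year 7", "Year 8", "Year 9", "Year 10"],
)

attendance.plot(kind="bar")  # the software's equivalent of 'Chart Data'
plt.show()
```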

Besides a reliance on ‘cookie-cutter’ outputs, schools might also have one or two other forms of data visualisation that have seeped into their local visual cultures. For example, in the state of Victoria (where our three research schools are located), many schools have developed a predilection for producing ‘box and whisker’ plots – aping the use of this graphing technique in official government reports, such as NAPLAN and MySchool. However, in general, it is fair to say that staff, students and parents are exposed to a limited range of data visualisation types.
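For readers unfamiliar with the format, a box-and-whisker plot condenses a distribution of scores into its median, quartiles and extremes. The following is a minimal sketch of how such a plot is typically produced; the Python library and the test scores are illustrative assumptions, not the schools' actual tools or data:

```python
# A sketch of a 'box and whisker' plot of the kind that echoes
# NAPLAN/MySchool-style reporting. All scores below are invented.
import matplotlib.pyplot as plt

scores_by_year = {
    "Year 3": [385, 398, 410, 420, 430, 455, 470],
    "Year 5": [460, 475, 480, 490, 505, 515, 520],
}

# Each box spans the interquartile range, the line marks the median,
# and the whiskers indicate the spread of the remaining scores.
plt.boxplot(list(scores_by_year.values()))
plt.xticks([1, 2], list(scores_by_year.keys()))
plt.ylabel("Scale score (invented data)")
plt.show()
```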

Indeed, the same familiar types of data visualisation are seen so often in schools that we rarely think to challenge them. Yet some of the key ways that data is ‘done’ in schools relate precisely to how that data is visualised. Of course, some data visualisations might be inappropriate for the type of data being used, or just plain incorrect. However, even when technically acceptable, a badly formulated visualisation can quickly hamper people’s understanding of the data being presented. For example, most data-processing packages make it easy to produce overly elaborate and unhelpfully detailed visualisations (littered with what the statistician Edward Tufte has termed ‘chartjunk’). In the wrong hands, Excel can be a prolific source of garishly coloured, excessively labelled 3D pie charts, ‘doughnuts’ and the like.
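To illustrate how little effort separates a restrained chart from a ‘chartjunk’ one, here is a hedged sketch in Python's matplotlib (again chosen only as a convenient stand-in for Excel; the subject shares are invented). The decorative excess arrives as one-keyword options:

```python
# The same invented data twice: once as a plain bar chart, once dressed
# up with the exploded slices, shadows and garish colours that Tufte
# would class as 'chartjunk'.
import matplotlib.pyplot as plt

labels = ["Reading", "Writing", "Numeracy", "Spelling"]
shares = [30, 25, 25, 20]  # invented percentages

fig, (plain, junk) = plt.subplots(1, 2)

# Restrained version: one visual encoding (bar length), no decoration.
plain.bar(labels, shares)
plain.set_title("Plain")

# Over-decorated version: every flourish is a single keyword away.
junk.pie(
    shares,
    labels=labels,
    explode=[0.1, 0.1, 0.1, 0.1],
    shadow=True,
    autopct="%1.0f%%",
    colors=["magenta", "lime", "yellow", "cyan"],
)
junk.set_title("Chartjunk")
plt.show()
```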

An important point to consider here, then, is the way in which specific software tools shape the information that is conveyed through data visualisations, as well as the associated meaning-making that results. Tufte’s well-known essay ‘The Cognitive Style of PowerPoint’ tackles one prominent example – methodically developing an argument that the unhelpfully hierarchical and linear nature of ‘slideware’ software leads to the over-simplification of information and arguments. As Tufte put it, “slideware often reduces the analytical quality of presentations. In particular, the popular PowerPoint templates (ready-made designs) usually weaken verbal and spatial reasoning, and almost always corrupt statistical analysis”.

So, when coming across data visualisations in our case study schools, it is important to pay close attention to the origins and underpinning assumptions of their production and presentation. As Wieringa et al. (2019) suggest, visualisations used to communicate school data will often be imbued with a misleading sense of objectivity: “the impression is that these visualizations show facts about, rather than interpretations of, data”. Instead, it is helpful to unpack the socially constructed nature of any data visualisation that we come across. In particular, the following questions need to be considered:

  • What software has been used to create the data visualisation? On what basis was this software chosen? What alternatives were considered?
  • What decisions were involved in configuring and producing these data visualisations?
  • What evidence is there of a software package’s default settings?
  • What are the ‘interpretative acts’ deployed in the production of the visualisation (both knowing and unknowing), and how does this shape meaning-making?
  • What decisions have been made in terms of appearance and aesthetics? For example, colour palettes, shapes, line thicknesses, planes, axes, scaling, and so on (a short sketch of the scaling point follows this list).
  • How do these production decisions privilege certain viewpoints and perpetuate particular power relations?
  • What material form does this visualisation take (for example, is it displayed on a foyer plasma screen, distributed as an un-editable ‘fixed’ PDF, or on photocopied sheets of paper)? How does this material form encourage and/or restrict people’s engagement with the visualisation, the underpinning data, and/or reuse of the information?
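As a concrete illustration of the scaling question above, the following sketch plots the same (invented) mean scores twice, changing nothing but the y-axis range. The Python library is once more an illustrative assumption rather than anything used in our case study schools:

```python
# One production decision -- the y-axis range -- and two different
# stories from identical (invented) figures.
import matplotlib.pyplot as plt

years = [2019, 2020, 2021, 2022]
mean_scores = [488, 490, 493, 495]  # invented data

fig, (zero_based, truncated) = plt.subplots(1, 2, sharex=True)

# Zero-based axis: the four-year change reads, correctly, as marginal.
zero_based.plot(years, mean_scores, marker="o")
zero_based.set_ylim(0, 600)
zero_based.set_title("Zero-based axis")

# Truncated axis: the same figures now suggest dramatic improvement.
truncated.plot(years, mean_scores, marker="o")
truncated.set_ylim(485, 497)
truncated.set_title("Truncated axis")
plt.show()
```

Neither chart is ‘wrong’ in a technical sense; the choice of range is precisely the kind of interpretative act that the questions above are designed to surface.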

 

As well as examining the visualisation artefacts themselves (e.g. an actual chart, paper-based report or school ‘data wall’), these questions also prompt us to subject the use of visualisation software in schools to a form of ‘tool criticism’. As Karen van Es and colleagues (2018) outline, this involves questioning the ‘epistemological affordances’ of the software tools that have been used. This draws on Lev Manovich’s notion of ‘software epistemology’ – i.e. an interest in the epistemic impacts of the tool(s) being used, and asking questions of what knowledge is (and becomes) in relation to a specific piece of software. So how might an Assistant Principal’s fondness for Excel bar charts frame (and perhaps restrict) the production and distribution of knowledge within her school? How do specific data visualisation tools differently shape the ways that data are understood and talked about in school? What alternative visualisations might be preferable, and how might they be encouraged?

 

REFERENCES

van Es, K., Wieringa, M. and Schäfer, M. (2018) Tool criticism: from digital methods to digital methodology. In: International Conference on Web Studies (WS.2 2018), 3–5 October, Paris. New York: ACM. https://doi.org/10.1145/3240431.3240436

Wieringa, M., van Geenen, D., van Es, K. and van Nuss, J. (2019) The fieldnotes plugin: making network visualisation in Gephi accountable. In: Daly, A., Devitt, K. and Mann, M. (eds) Good Data. Amsterdam: Theory on Demand #29