Data-driven decision making … what we already know about teachers’ use of data

[Neil Selwyn & Michael Henderson]


There is a growing literature exploring how teachers make use of data in schools. This ranges from teachers’ own internal record-keeping through to making use of data from standardised national testing. Of course, our research project is specifically interested in how teachers engage with digital forms of data. One of the premises of our study is that there is a wealth of digital data that teachers are often not aware of, and therefore make no use of at all. However, it is useful to consider the existing research literature on how teachers generally engage with the data-sets and data-sources that they are already being presented with (and often expected to engage with) during the course of their work.

So, what does the literature tell us about these ‘known’ data practices? How might we summarise what is already known about teachers as users of ‘data’? In this initial blog post we review the literature to address the following questions …

  1. What sorts of ‘data’ are teachers most commonly dealing with in school? For what purposes and with what expected outcomes?
  2. How can we make sense of the forms of data work that teachers engage in?
  3. What does previous research tell us about the realities of teachers’ engagement with data?


i) What sorts of ‘data’ are teachers dealing with … and why?

Teachers, school leaders and administrators have been making use of data as long as schools have been in existence. However, the imperative to make systematic use of data has grown over the past two decades with the rise of standardised testing and various other indicators, measurements and feedback related to what goes on in school. In the US, the ‘No Child Left Behind’ Act (2001) reinforced the importance of teachers’ use of data to inform their practice. All around the world, teachers are now working in school environments where “it is no longer acceptable to simply use anecdotes, gut feelings, or opinions as the basis for decisions” (Mandinach 2012, p.71).

This has led to a variety of data approaches being promoted to teachers as best practice. These include ‘data-informed instruction’, ‘data-driven instruction’, ‘data-based instruction’, ‘teaching analytics’ and ‘data-driven decision making’. Such labels reflect an over-riding logic that data can be used to guide the decisions and choices that teachers make when planning their teaching, and hence “drive significant instructional improvement” (Wardrip & Shapiro 2016, p.18). Of course, the growing significance of data-based teaching is not solely concerned with improving learning. In addition, data generated by teachers is also expected to be used for accountability purposes (i.e. reporting to parents, school authorities and inspectorates), as well as providing a basis for whole-school improvement.

Against this background, the types of data that now implicitly inform the working lives of teachers are varied. Indeed, understandings of what constitutes ‘school data’ tend to be loose and all-encompassing, such as Schildkamp’s definition of “information that is systematically collected and organized to represent some aspect of schools”. In these broad terms, the data that teachers are routinely expected to make use of include assessment and achievement data, as well as classroom observations, homework, student satisfaction measures and feedback, school inspectorate reports, attendance data and other ‘contextual’ data relating to student background.


ii) What forms of data work do teachers engage in?

It is broadly acknowledged that teachers need to engage in substantial data-work in order for these various forms of data to be of meaningful use to them. A common refrain in this literature is that ‘data does not speak for itself’. In this sense, teachers’ use of data is usually described as a process of interpretation, inferences and meaning-making. As Wardrip & Herman (2018, p.32) put it, teachers do not simply “receive data and then act on the data because the instructional implications are self-evident”.

While specific definitions are rare, the idea of teacher ‘data literacy’ is generally acknowledged to involve a combination of analytical and interpretative skills. In terms of data analysis, then, skills are usually reckoned to include the ability to:

  • formulate hypotheses;
  • identify valid proxy indicators of other phenomena;
  • identify variables that can be manipulated and variables that cannot (what might be termed independent and dependent variables);
  • combine different sources of data (either in terms of aggregation and/or triangulation);
  • drill down to the ‘item level’ to gain deeper understandings of aggregated data;
  • monitor outcomes – i.e. generate one’s own data relating to the outcomes of any implemented change.


In her conceptual framework for teachers’ data-driven decision making, Mandinach (2012) distinguishes these skills in terms of knowing how to identify, collect, organize, analyse, summarize, synthesize and prioritize data. These stages are summarised in the following iterative model:

[Figure: Mandinach’s (2012) conceptual framework for data-driven decision making]



At the same time, the other key aspect of teacher data-work is understood to be the capacity to interpret analysed data. This involves contextualising any analysis of data in a way that supports meaningful understandings that can then inform subsequent actions. These skills are seen to include knowing how to ‘notice’, ‘interpret’ and ‘construct implications’ from data (Coburn and Turner 2011).

This type of interpretive data-work involves teachers being able to relate data to other knowledge they might have about their students, classes, school and other contextual information. For example, in what ways does the data teachers have prompt additional insights into a student on the basis of what is already known about them? How does this data confirm or refute what is already known?

There are a few different ways this interpretation and contextualisation might take place. For example, Mandinach’s notion of ‘pedagogical data literacy’ suggests that teachers make best use of data when they can combine their ‘data literacy’ skills with their knowledge about the pedagogical and curriculum contexts that the data relates to. Elsewhere, Wardrip (2018) suggests that teachers make best use of data when they approach data as a means of addressing problems. Crucially, this does not imply that data can be used to solve problems – rather it can be used to support problem finding, problem setting and/or problem framing.


iii) The realities of teachers’ use of data

In theory, then, teachers should be making extensive use of school data along these complementary lines of analysis and interpretation. However, while ‘data-driven’ and ‘evidence-based’ practice is now part of professional teaching provision in many school systems, a number of limitations and tensions are beginning to be identified:

  • Teachers are presented with an overwhelming array of data sources and tools. Stretching back 10 years, studies have reported “a proliferation of technological tools” (Wayman et al. 2007) being provided to teachers to support data-driven decision-making. These range from data warehouses, student information systems, instructional management systems and assessment systems, through to classroom apps and devices that help teachers to diagnose students’ learning strengths and weaknesses. Indeed, Wayman et al.’s (2007) study of one moderate-sized US school district found more than 80 different technology tools that various educators were using to support data practices.
  • Teachers most commonly engage with data in the form of pre-processed, pre-analysed data. Studies of actual school data uses suggest that teachers are more likely to be passive consumers of other people’s data analysis, rather than active constructors of their own data analyses. For example, learning management systems and other teaching technologies offer a range of dashboards, indicators and other synopses of student and teacher data. Similarly, teachers receive annual synopses of standardised test scores from programs such as NAPLAN and other state-wide assessments. Thus, in terms of Mandinach’s (2012) DDDM model presented earlier, teachers are most likely to commence their engagement with data after the ‘information summarize’ stage (with outside actors taking prior responsibility for the stages of data collection, data organisation and information analysis). On the one hand, this presentation of pre-processed data overcomes any problems of limited data-handling and data-analysis skills. On the other hand, it limits a teacher’s capacity to fully interpret, contextualise and understand the implications of the data.
  • Teachers often perceive school data to lack usefulness and/or validity. A number of studies note teachers’ reluctance to make use of data that they consider to be unreliable, inauthentic and/or disconnected from the realities of their students, classrooms and instructional practices. As Wardrip and Herman put it, “teachers are reluctant to base decisions that impact students on data that they might see as unreliable, like testing data … When these sources of information about students are discrepant, teachers may tend to rely on their own interpretations about student competence, learning and performance”. In these instances, it is noted that teachers (particularly those in high-performing schools) tend to rely more on intuition and personal experience. As Mandinach puts it, educators must feel that they are “using the ‘right’ data”.
  • Only a limited number of teachers make use of school data. For example, the teachers making most extensive use of data are often found to be those with data-related skills and interests (such as those teaching maths and science subjects). When teachers do make use of data, this tends to be for compliance and accountability purposes, rather than for continuous improvement. This is reflected in the tendency for teachers to make occasional (rather than frequent) use of data. For example, Schildkamp reports that on average, teachers only use data for instruction between “yearly” and “a couple of times per year”.
  • Only a limited range of school data tend to be used. Studies find most teachers engage with a limited range of data – especially what Wardrip and Shapiro (2016) term ‘low-hanging-fruit data’ such as student performance data generated from standardized assessments. In addition, there is a tendency to use data to focus on particular groups of students. These are often reported to be the students who can help improve a teacher’s standing on accountability indicators – i.e. the middle-range ‘bubble kids’ (Lauren & Giddis 2016), rather than those at the lowest and highest levels.
  • The classroom impact of teachers’ use of data appears limited. Wardrip and Shapiro (2016) note the surprising failure of data-informed instruction to ‘penetrate the classroom’, even in school districts that are considered to be leading the use of educational data. These authors conclude that despite the theoretical arguments, we still do not know how teachers’ data-work might relate to productive changes in instruction and learning outcomes.



This literature on teachers’ use of ‘general’ school data throws up a number of interesting challenges for us to address in our work supporting schools’ use of digital data. It also raises a number of pointers toward likely ‘conditions of success’ and factors that might underpin ‘successful’ use of digital data by teachers (these will be reviewed in our second post). For the time being, we can conclude with a series of questions and challenges that arise from the material reviewed so far:

  • How can we widen the ‘user’ base of data in school? How can we support all teachers to engage in data-work? Moreover, how can access to data be extended beyond interested teachers and administrators to students, parents and other relevant community members who might also be able to use data for actionable purposes?
  • How can schools be provided with sources of data that are relevant to individual teachers’ work and classroom contexts? For example, what opportunities exist for teachers to unobtrusively generate their own data within their classroom practice?
  • Rather than simply using data to overcome problems by ‘improving’ current ways of doing things, how can teachers be encouraged to use data to challenge and problematize the status quo (as Herman puts it, moving beyond seeing school data as a means of helping things run more smoothly to also treating it as a potential ‘perturbing’ source of prompting change)?
  • How can teachers be encouraged to engage with data to address issues that are not necessarily driven by compliance and accountability objectives, but perhaps related to more personally meaningful issues? At the same time, how can teachers be encouraged to engage with data to address issues that are not related to learning, teaching and instruction, but might relate to other aspects of their work (such as teachers’ well-being)?



  • Mandinach, E. (2012) A Perfect Time for Data Use: Using Data-Driven Decision Making to Inform Practice, Educational Psychologist, 47:2, 71-85
  • Mandinach, E. and Gummer, E. (2013) A Systemic View of Implementing Data Literacy in Educator Preparation, Educational Researcher, 42:1, 30-37
  • Schildkamp, K., Poortman, C., Luyten, H. & Ebbeler, J. (2017) Factors promoting and hindering data-based decision making in schools, School Effectiveness and School Improvement, 28:2, 242-258
  • Wardrip, P. & Herman, P. (2018) ‘We’re keeping on top of the students’: making sense of test data with more informal data in a grade-level instructional team, Teacher Development, 22:1, 31-50
  • Wardrip, P. & Shapiro, R. (2016) Digital media and data: using and designing technologies to support learning in practice, Learning, Media and Technology, 41:2, 187-192