The difficulty of establishing any form of critical oversight and scrutiny over data-driven processes is illustrated by the problems encountered throughout New York City’s attempts to establish an ‘Automated Decision Systems Task Force’.
The city government was lauded in 2017 when it passed Local Law 49, which established a task force to scrutinize any error or bias implicit in the automated decision-making processes at work across government services. Like many municipalities, New York is increasingly reliant on automated processes – including algorithmic systems used to determine where students will be sent to school, and other forms of educational planning and resourcing.
Yet despite this political will – and widespread public support – a number of problems have since arisen that have largely prevented the task force from fulfilling its remit. Officials have been accused of failing to provide transparent access to information and data, citing issues of privacy and security. More fundamentally, city officials have failed to reach an agreed definition of automated decision systems, and have failed to identify any form of automated system with which the task force could begin to work.
This has led some task force members to denounce the task force as a publicity project rather than a serious form of accountability – raising accusations of ‘ethics-washing’ along similar lines to the recent efforts of tech industry actors keen to avoid increased regulation. Doubts have also been raised over the government’s willingness to allow the task force to meaningfully engage with city residents and the general public (supposedly a key element of the task force’s remit).
All told, even this well-resourced, high-profile effort has steadily bumped up against a number of seemingly entrenched organisational, technical and political barriers to establishing alternate, more equitable uses of data. The notion of ‘doing data differently’ is, it seems, far more challenging than might be imagined.