This blog is based on an article in the Journal of Social Policy by Paul Henman. Click here to access the article.
Computer technologies have been used by governments in managing social services for over 50 years but have received little research attention. We hear a lot now about Artificial Intelligence (AI) and the promises and problems it may herald. Yet it is already part of government operations, creating service improvements and efficiencies as well as problems of government over-reach, bias, and unfairness. In some areas, AI has replaced human bureaucrats in making legally binding decisions about people’s access to services or benefits.
Although now in its 50th year of publication, research in the Journal of Social Policy has largely ignored digital technology. Only 7 papers have been published that have the words ‘digital’, ‘computer’, ‘automation’, ‘electronic’, or ‘ICT’ in their title, abstract or keywords. Why is this, and how has computerisation changed social policy and its administration over this last half century?
Computerisation of social policy and its delivery is now widespread. It has transformed interactions with government social services from within 9-to-5, bricks-and-mortar offices to 24/7 online transactions. Payments and decisions are automated. What was a street-level bureaucracy has transformed into a screen-level or even a system-level bureaucracy. In many areas, human administrators have lost the discretion to determine outcomes according to the diversity of human need, and have thus been deskilled. In other ways, decision support systems and AI provide digested information to help humans make more informed decisions.
Automation has changed the substance of policy. Policy has become more complex, sometimes to better shape policy and services to be more individualised. Sometimes computers are used to increase the conditions and difficulty of getting benefits and services, to punish the poor. This complexity makes it harder for administrators and citizens to navigate, and reinforces a ‘digital divide’ between those who can access and use computers and those who cannot. Perhaps paradoxically, computerisation of complex policy can also help to reduce social exclusion, if governments choose to design it appropriately.
There are a few notable trajectories in new and emerging uses of digital technologies.
First, computers often enhance state surveillance and control. Governments in Australia, the Netherlands and the USA all introduced systems to detect social welfare fraud. Instead, however, these systems made it harder for people to get their entitlements and wrongly accused them of fraud. The Dutch Government resigned, and the Australian government admitted its system was unlawful and agreed to a AU$1.8 billion settlement. Yet we are still told that computers are accurate and always get it right, and we believe it when the computer says “no”.
Second, AI is increasingly being used to make social services more personalised, by identifying differences between people. It helps government officials profile people and predict their futures. Unemployed people are sorted into groups by their likelihood of finding a new job, children are assessed for their risk of family harm or abuse, and the chance of an offender re-offending is calculated. These approaches are helpful in directing time and financial resources to those most in need. But they can also get it wrong – by reinforcing racist and sexist perspectives.
Third, the decision-making processes of AI are not transparent. They are a ‘black box’. Governments do not provide independent reviews of the software code, and often hide behind commercial-in-confidence provisions when using off-the-shelf software. This means that governments’ decision making is becoming ever more hidden and unaccountable.
What should social policy and public administration researchers, and social welfare advocates be doing to ensure AI is used ethically, responsibly, and accountably?
Social policy scholars and advocates need to stop ignoring technology. It is not just humans that matter. As computers are now making legal decisions, we need to treat their actions seriously. This requires an appreciation of their role and of how they are designed, built, and deployed. Adopting conceptual approaches – such as Actor-Network Theory and affordance theory – can help, as can learning and thinking critically about algorithms and data.
Researchers can also adopt new digital research methods to better understand how people fare in the welfare state, and their experience of automation and AI. Accessing social media, analysing large bodies of text using computational methods, and visualising findings can all advance a digital social policy sub-discipline.
When governments provide services and benefits to society’s disadvantaged people, the onus should be on getting it right and helping people, not further punishing them and entrenching their powerlessness and disadvantage. This means we need to take seriously how automation and AI can enhance these social goals, rather than take us further from them.
About the author
Paul W. Fay Henman is Professor of Digital Sociology and Social Policy, University of Queensland, Australia, and Chief Investigator of the ARC Centre of Excellence for Automated Decision Making & Society. Recent publications include Administrative justice in a digital world, Improving public services using artificial intelligence, and Governing by algorithms and algorithmic governmentality.