Magic Numbers: Evidence and Universal Credit

This blog post is based on an article recently published in the Journal of Social Policy.

Much commentary has been made on the role (or absence) of evidence in policy development. This has tended to focus on the kinds of evidence that make the final cut and those that tend to be ignored. Previous research has also suggested that policymakers prefer certain kinds of evidence because of their perceived scientific rigour – what has been referred to as an ‘evidence hierarchy’. But, with only a few exceptions, these accounts usually miss the views on evidence use held by those involved in policy development.

We wanted to know how the Department for Work and Pensions (DWP) obtained and used evidence, particularly in relation to the development of its flagship policy, Universal Credit (UC). UC is the most recent attempt to address the difficult task of simplifying the UK’s complex social security and taxation systems. UC was introduced by the Coalition government (2010-2015) and continued by the current Conservative administration. However, it has attracted an avalanche of criticism, from ongoing problems with its rollout, to a growing number of devastating stories about its impact on individuals, to suggestions that it is ‘shambolic’ and unlikely to deliver value for money.

In addition to key policy changes such as UC, the DWP has also undergone significant organisational changes, mostly resulting from government austerity measures to cut public spending. It was during this critical time period that we undertook research with policymakers and analysts (social researchers, statisticians and economists) in the DWP’s key locations around the UK.

In our research, we found that three inter-related factors were critical to the development and use of evidence in relation to UC:

  • Evidence hierarchy – Although we found that the evidence hierarchy was important, the selection of evidence was constrained by the overarching austerity paradigm. In practice, this was a ‘Zeitgeist’ that resulted in policy officials and analysts focusing on quantitative evidence when advising Ministers.
  • Capability – Rather than being constrained by the evidence hierarchy as such, we found that DWP officials favoured certain kinds of evidence based on their own perceived capabilities to handle and develop it for policy.
  • Political feasibility – The context of Departmental team structures also had a bearing on this capability. It was also constrained by what officials saw as being politically feasible in relation to Ministers’ preferences. This added an additional layer of complexity into the selection and use of evidence.

Together, these three dimensions constituted significant ‘filtration mechanisms’ that determined the kinds of evidence that officials selected for policy development and those that were missed out. A key issue is that there is no clear guidance for policymakers or analysts on how evidence is to be used in policy. HM Treasury’s Magenta Book provides guidance for evaluations of government projects, policies, programmes and delivery but focuses on evidence selection, not use.

If the kinds of perceptions of capability and feasibility that we uncovered do drive evidence use, there is a danger that vital, emerging evidence will continue to be missed out. For example, there is growing evidence of the social (rather than purely economic) costs of austerity, such as the widespread and growing need for foodbanks and the impact of sanctioning on benefit claimants. But, despite providing clear lessons for policy development, these have not yet filtered into the policy process.

So, what can be done to improve the use of evidence in the policy process?

  1. It is essential that policymakers and analysts have the necessary resources and skills to better equip them to interpret, present and translate a broad range of evidence into policy.
  2. How policy and analyst teams are structured is critical to promoting better evidence use in central and local government departments (as well as government agencies).
  3. Policymakers and analysts would benefit from having criteria for good evidence use, in addition to existing guidance about evidence selection.
  4. We need more, and better, collaboration between the two ‘epistemic communities’ of government and academia.

The changing political environment (including Brexit and devolution) has the potential to give voice to new forms of evidence. But, unless more attention is paid to how evidence is translated into policy, this will continue to be mediated by the technical capacity of practitioners, as well as by (often arbitrary) ideas about what is considered feasible.


About the authors

Dr Jo Ingold is Associate Professor of Human Resource Management & Public Policy at Leeds University Business School. Jo has recently completed an ESRC-funded project focused on employer engagement in active labour market programmes in the UK and Denmark and has co-edited a Special Issue of Human Resource Management Journal on employer engagement.



Dr Mark Monaghan is Lecturer in Criminology and Social Policy at Loughborough University. Mark has a long-standing interest in the politics of evidence-based policymaking and has explored this in the areas of illicit drugs and social security policy.


Jo and Mark were awarded the Ken Young Prize for the Best Paper published in Policy & Politics in 2016 for their article on ‘evidence translation’.
