Rethinking Research Impact: How and Why Social Policy Academics Should Try to Change the UK’s Current Approach

This blog is based upon an award-winning article recently published in the Journal of Social Policy.  You can read an open access version of the article by clicking here.

Academics working in the UK are being increasingly encouraged and incentivised to seek research impact beyond the academy, and the consequences of these changes have caused alarm for some. This approach now appears to be influencing discussions about research impact in a range of other contexts, including Australia, Canada, Denmark and the Netherlands. It is therefore important to consider how the current approach is faring and how it might be improved. In an article in the Journal of Social Policy, which was recently awarded the Cambridge University Press Award for Excellence in Social Policy Scholarship, we review the multiple concerns about research impact highlighted in existing literature and then assess these via: (i) an interview-based case study of 52 academics working on health inequalities (an area of research in which most researchers have long tried to influence policy); (ii) impact-related guidance documents from research funders and REF2014 panels; and (iii) the REF impact case studies submitted to the Social Work and Social Policy panel in REF2014. Our findings highlight a range of problems with the current approach to measuring, assessing and rewarding research impact. In this blog, we briefly summarise key concerns that we identified, before making some tentative suggestions in response. This is not a comprehensive list; our goal is simply to open up a dialogue on improving the UK’s current ‘impact’ architecture.

Problem 1. The research impact agenda might encourage and reward what Ben Baumberg Geiger describes as ‘bad’ impact. The ESRC’s guidance states that “you can’t have impact without excellence”, but many interviewees felt that this was not only possible but, in many ways, actively incentivised. For example, current incentives seem to encourage researchers to seek impact for single studies, rather than seeking to promote the collective insights provided through syntheses of bodies of research.

How might existing science and policy knowledge help here? Science studies authors such as Knorr Cetina clearly demonstrate that academics work hard to develop grant applications that have the greatest chance of success and that, as a consequence, grant applications reflect academics’ perceptions of what funders are looking for. If major sources of academic funding (from the higher education councils, through REF, and UK research councils) are all stressing the importance of research impact, then we can expect that many academics will commit themselves to undertaking this kind of work, regardless of how appropriate they feel it is, for specific, single studies. If we want to encourage researchers and policy actors to learn from research syntheses (rather than single studies) then incentive structures need to shift accordingly.

Problem 2. Some of our interviewees recounted instances where their work had been reinterpreted, and sometimes completely misinterpreted, by research users in troubling (e.g. ethically problematic) ways. Currently, neither REF nor UK research councils do much to acknowledge the fact that impact may not always be desirable.

How might existing science and policy knowledge help here? Researchers across disciplines, but particularly those involving humans and animals in their research, have spent a lot of time developing guidelines for conducting ethical research. But undertaking research ethically does not guarantee that the results of that research will be used ethically. Now that impact is being incentivised, we need to build on these foundations to develop an ethical dimension to reward systems for research impact.

Problem 3. Even researchers supportive of the aims of the current research impact agenda expressed concerns about whether its current architecture captures impact in a meaningful way. Many described the disconnect between their knowledge of the complex ways in which evidence influences policy, and the experience of ‘playing the game’ of depicting much more straightforward impact: one interviewee wryly stated that one achievement of the impact agenda has been to make “people lie more convincingly on grant applications”. It is often the most far-reaching kinds of research impacts that are most difficult to demonstrably track (whereas it is the less ambitious forms of impact that more easily enable a clear audit trail). Or it may be, as Christina Boswell has demonstrated for immigration policies, that research is used by policymakers more for ‘symbolic’ than ‘instrumental’ reasons (i.e. research is used to lend authority to decisions that have already been made).

How might existing science and policy knowledge help here? If the research impact agenda operates to incentivise simplistic accounts of research impact achievements, then we can expect academics to be persuaded to produce such accounts. To avoid this, we could consider a system of incentives and rewards that focuses on demonstrable efforts to engage potential beneficiaries in research. Less radically, the criteria for demonstrating impact could be broadened, especially where research has contributed to substantial social or policy changes over long periods of time (e.g. research which informs debates which, in turn, inform social and policy change; as opposed to research which is directly cited in policy documents).

Problem 4. Looking at current guidance on research impact, the system currently seems to be predicated on the idea that improving the use of research in policy means increasing the flow of research into policy. Yet, empirically informed theories of the policymaking process, from Lindblom’s concept of ‘muddling through’ to Kingdon’s ‘policy streams’ model, paint a picture in which decision-makers face a daily barrage of information, with advocates and lobbyists working to pull their focus in different directions. Policymakers may therefore not necessarily welcome increasing numbers of academics sending more research outputs their way or seeking input into projects.

How might existing science and policy knowledge help here? Academic incentive structures ought to focus more on improving the use of research in policy, which may actually mean reducing the flow of research outputs into policy while improving their quality or accessibility.

Problem 5. Several interviewees, particularly those who were at an earlier career stage, suggested that impact reward systems may unintentionally entrench traditional academic elites. It is, after all, a lot easier to achieve research impact if you are already a senior academic with a strong reputation in policy circles; it’s even easier if you went to school with, or are otherwise personally connected to (e.g. as friends/neighbours/family), senior policy folk (see for example Ball; Ball & Exley). Further issues arise when we consider that the timing of key opportunities for ‘impact’ can be particularly difficult for academics with caring responsibilities (evening and weekend networking opportunities, for example). Since we know that women take on a disproportionate amount of caring work, this is a gender issue.

How might existing science and policy knowledge help here? Research on policymaking processes in the UK suggests that older, white men continue to dominate its structures. Surveys of higher education suggest the situation is similarly imbalanced for chair-level academic posts. If our interviewees are correct, then the research impact agenda is reinforcing this imbalance, encouraging what Les Back calls ‘impact super heroes’. This suggests, at the very least, that REF impact case studies ought to be subject to some form of equality assessment (as research outputs are).

Problem 6. Around a third of our interviewees discussed the challenges of maintaining critical, intellectual independence while trying to align one’s research ever more closely with policymakers’ concerns. The ‘fudge’ which several of our interviewees described resorting to involved phrasing policy recommendations in strategically vague ways, softening perceived criticism and (as one put it) “bend[ing] with the wind in order to get research cited”.

How might existing science and policy knowledge help here? Research on both academic and policy work has highlighted the value of critical and blue-skies academic work, and few seem to be actively suggesting that it is desirable to restrict this kind of work. Yet a research funding system that rewards demonstrable research impact does squeeze the space for such work, since it must now compete against proposals for empirical research offering research impact potential. We may need to call for more research calls/funding specifically for critical/theoretical and blue-skies research, remove the time limits for impact case studies in REF2021, and reward efforts at knowledge exchange and public engagement (rather than demonstrable impact). Demonstrable efforts to improve the quality of public or political debate with academic scholarship might, for example, be deemed as worthy of reward as a citation in a policy document (which may, after all, simply be an example of symbolic research use).

We expect that most of the concerns raised here will be familiar to colleagues who have come across contributions on this issue from Greenhalgh & Fahy, Pain et al, Back, or simply from discussions with colleagues in the offices, staff rooms and corridors of the UK’s universities. Our goal in writing this, as we all gear up to REF2021 (and as other countries look to the UK’s impact structures), is to encourage colleagues to develop suggestions for more constructive approaches to encouraging and rewarding research impact. The current architecture is not a ‘done deal’: researchers who care about the relationships between research and policy should try to improve how the UK incentivises and rewards research use.


About the authors

Professor Kat Smith is based in Social Policy at the University of Edinburgh, where she is currently Co-Director of SKAPE (the Centre for Science, Knowledge & Policy at Edinburgh) and Director of Research for the School of Social and Political Science. Kat’s email address is Katherine.Smith@ed.ac.uk and she tweets from @KatSmithInEd. Her research focuses on the interplay between research and policy, including researching who influences the policies that impact on people’s health, how and why. She has particular interests in public health and inequalities and is currently writing up research exploring public understandings of health inequalities (building on this recent meta-ethnography).

Dr Ellen Stewart is a Chancellor’s Fellow in the Centre for Biomedicine, Self & Society at the University of Edinburgh. She is Associate Director of SKAPE and an elected member of the Social Policy Association Executive Committee. Her research focuses on public roles in the governance of healthcare, and she is currently writing up a postdoctoral study of public involvement and evidence use in Scottish hospital closures. Ellen’s email address is e.stewart@ed.ac.uk and she tweets from @DrEllenStu.
