REF must trust scientists

Drawing on recent research into how the REF shapes research and academic publishing in the UK, Moqi Groen-Xu and Peter Coveney make the case for an assessment focused on researchers, not rules.


All systems of governance, whether states, parents, or research assessments, range from the authoritarian to the democratic. Authoritarian systems deal better with common vices such as incompetence and nepotism. Democratic systems work better when the actors are diverse, better informed than the evaluator, and engaged in activity that is difficult to measure.

The REF sits firmly at the authoritarian end of the spectrum: it imposes an exorbitantly complex system of rules on British academics, at an enormous cost of £471m (roughly the value of 1,000 research grants of £500,000 each). Although scientists are better informed about their own work than REF panels, they must present it in a prescribed format. The assessment rests on measurable outputs, which disadvantages novel, basic, and bold research. Finally, its one-size-fits-all structure penalizes not only research that goes beyond existing paradigms, but also standard research that does not meet its narrow criteria (for example, important but unoriginal applications).

Yet the REF fails to deliver the advantages of authoritarian systems. Rather than rewarding competence, it encourages mediocre, rushed work and relies on "peers" asked to evaluate thousands of outputs. Because it focuses on past outputs, it channels more funds to already well-endowed institutions and thereby entrenches the status quo. And its "research environment" category rewards investments that correlate with funding, encouraging predatory overheads.

In a recent co-authored article, one of us quantified the impact of the REF on research productivity, using 3,597,272 publications by UK academics over the period studied. In the year before a REF deadline, UK academics produce 4% more journal articles than in the year after it (29% more REF-submitted papers and 60% more books).

The publications produced in these REF-driven spikes are not the most innovative ones (Fig. 1). They receive 5% fewer citations over the following five years than those published immediately after the deadline (16% fewer among REF-submitted papers). They are 0.003% more likely to be retracted later (compared with publications from the years after the deadline), a difference equal to 79% of the average retraction rate. The smaller variation in their citation rates (5% lower than for post-deadline publications) and in the impact factors of their journals (6% lower) suggests that they are safe bets. The shorter cited half-life of their journals (9% lower than for post-deadline journals) suggests that they address shorter-term topics.

This persistent discouragement of novel work shows in the long run. Thirty-seven years after the introduction of research assessment, the UK's share of research in the Integrated Review priority areas of artificial intelligence, quantum technologies, and engineering biology is only 7%. Tony Blair and William Hague report that "expertise in today's most advanced AI technologies is largely found in frontier technology companies, rather than in the country's established institutions."

The current REF consultation acknowledges these concerns, but it does not go far enough. It is time to abandon the failed authoritarian approach and to trust the scientists being assessed.

Rules. The consultation seeks to solve these problems by introducing new rules. We recommend instead replacing rules with the freedom for scientists to explain their work.

Narratives. Cutting-edge research is difficult to assess without context. The proposed explanatory statement is a mere add-on to the existing rules. We propose instead to replace output-based submission with narratives that can refer to one or more outputs, place them in context, and explain how they meet the REF criteria.

Evaluation periods. Assessing outputs within only five years of publication risks over-reliance on metrics such as journal impact factors. To encourage projects with long-term potential, outputs should be allowed to count in more than one assessment period. Longer horizons also matter for impact case studies, which otherwise naturally reward short-term applied work.

Output weights. The heavy uptake (and provision) of double weighting highlights the mismatch between rewards and costs across different types of research. Change the unit of submission to bundles of one or more outputs, with a self-assessment of their merits (for example, as frontier, applied, or incremental research) and corresponding weights (perhaps in units similar to the current outputs per FTE). This would allow units to concentrate on their most important submissions and reduce costs.

Recognition of non-frontier work. As a nationwide assessment, the REF should encourage work that complements and builds on cutting-edge research. Removing the 2* minimum for the research underpinning impact case studies is a start, but it is inconsistent unless such work is recognized across the board. We propose a separate category recognizing capacity-building and development work, with lower rewards reflecting its lower cost compared with frontier work.

People, culture and environment. The metrics proposed for this category almost entirely (with the exception of the proposed EDI metrics) reflect reputation and funding, and therefore exacerbate existing inequalities. Assessment should focus on the opportunities actually created, not on the amounts invested.

  • This category should reward a culture of exploration, including a tolerance for failure.
  • Achievements at lower-ranked institutions should be rewarded more highly.
  • If the measurability problems cannot be resolved, the focus should shift to knowledge accumulation.

Panels. The predominance of senior scientists on panels fosters the entrenchment and over-rating typical of "marking one's own homework". The proposal to appoint EDI advisers is not enough.

  • Panels should include early-career researchers (ECRs), who are better informed about the most cutting-edge areas.
  • They should consult more with scientists outside the UK in order to evaluate the most innovative work and to avoid entrenchment and inequality.
  • Institutions should flag novel and controversial submissions to minimize the time burden on ECRs and external evaluators.

The consultation is a well-intentioned attempt to solve problems while avoiding disruption. But those goals do not justify keeping the existing setup. Instead of imposing ever more rules, the REF, if it is to continue at all (itself a contested proposition), should give institutions the leeway they need to put the UK back on track to not only participate in, but sometimes lead, cutting-edge research at the global level.

