In recent decades there has been a marked trend towards greater use of hyperbolic adjectives in academic writing. Drawing on a study of REF 2014 impact case studies, Ken Hyland finds that this genre contains an even greater degree of hype than research writing, and explores how this persuasive language is deployed differently across areas of research.
The claim that scientists advertise their research is not news. The use of subjective or emotive words that embellish, publicize or exaggerate results and promote the merits of research has been noticed for some time and has drawn criticism from researchers themselves. Some argue that promotion has reached a level where objectivity has been replaced by sensationalism and manufactured excitement. By overstating the importance of their results, writers risk undermining the impartiality of science, fuelling scepticism and alienating readers. In 2014, for example, the editor of the journal International Cell Biology lamented the increase in “dramatic words” such as sharp decline, new and exciting evidence and wonderful effect, which, he believed, turned science into “show business”. Nor are the humanities and social sciences immune: writers have been observed conspicuously stressing the significance of their work in both literary criticism and applied linguistics.
All of this is, of course, the result of a publishing boom fuelled by intensive audit regimes in which people are judged by the length of their CVs rather than the quality of their work. Metrics, financial rewards and career prospects have come to dominate the lives of academics around the world, creating more pressure, more explicit incentives and more intense competition to publish. The rise of hyperbole in medical journals was illustrated by Vinkers, Tijdink and Otte, who found a nine-fold increase in 25 “positive-sounding words” such as novel, amazing, innovative and unprecedented in PubMed abstracts between 1974 and 2014. Looking at four disciplines and a wider set of terms, Kevin Jiang and I found twice as many hype words per paper compared with 50 years ago. While growth has been most noticeable in the hard sciences, the two social science fields studied, sociology and applied linguistics, exceeded the sciences in their overall use of these words.
While hype now seems commonplace in the battle for attention and recognition in research papers, we should not be surprised to find it reappearing in other genres in which academics are judged. Chief among these are the impact case studies submitted to the UK's Research Excellence Framework (REF). In 2014 this research assessment exercise, which determines university funding and is familiar to academics around the world, was expanded beyond “academic quality” to include real-world “impact”: a more intrusive, time-consuming, subjective and costly process. The impact agenda aims not only to evaluate the contribution of published results, but also to ensure that funded research offers taxpayers value for money in terms of social, economic, environmental or other benefits.
For some observers, the fact that impact is now rewarded is a positive step away from the perception of research as an ivory-tower pursuit for its own sake. The big problem, however, is how the university funding bodies chose to define and “capture” impact. The format they settled on, a four-page narrative case study supporting claims of beneficial research outcomes, was almost an invitation to embellish submissions. With over £4 billion at stake, this is an intensely competitive, high-stakes genre, so it is no surprise that writers rhetorically talk up their bids.
Our analysis of 800 impact case studies submitted to the 2014 REF shows the extent of this hype. Using the cases on the REF website, we searched eight target disciplines for 400 hype words. We found that hype was significantly more common in these impact cases than in research articles: 2.11 items per 100 words compared with 1.55 in articles searched for the same set of items. Across the eight disciplines, social science writers were prolific hypers, but by no means the worst offenders. In fact, hype increases along a cline, with the most abstract fields, those furthest from practical application, working hardest to get the message across to assessors (Figure 1). Research in physics and chemistry, for example, is likely to be several steps further from informing real-world applications than social work and education.
We also found differences in the ways fields advertise their claims. Terms underlining certainty were the most frequent items, accounting for almost half of all forms. Words like significant, important, strong and key serve to reinforce statements with a commitment that almost compels agreement. Beyond certainty, fields diverged in their preferred promotional styles. STEM disciplines typically used novelty as a key part of their persuasive arsenal, with references to the work's originality and inventiveness (first, timely, novel, unique). Social scientists, on the other hand, emphasised the contribution made by their research, citing its value, results or real-world application (essential, effective, useful, critical, influential). So while social scientists may not be the guiltiest players, they have, unsurprisingly, bought into the game and invested heavily in rhetorically promoting their work.
The study shows how impact criteria are interpreted by academics and how they shape these narrative self-reports. The results point to serious problems with using individual, narrative case studies as a methodology for assessing research impact, since the format encourages not only the selection of the most impressive examples but also their promotional presentation. The obvious question is whether authors are really best placed to make claims for their own work. Impact is, in essence, the experience of the recipient: how target users perceive the effect of an intervention. We must ask how far, even in an ideal world, we should rely on the claims of those with a vested interest in positive outcomes. While the impact agenda is no doubt well-intentioned, there may be a lesson here: it may be unwise to set assessment objectives before being sure how best to measure them.
This post is based on the author's article, co-authored with Kevin Jiang, “Hyping the REF: promotional elements in impact submissions”, published in Higher Education.
The content generated on this blog is for information purposes only. This article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog) or of the London School of Economics and Political Science. Please review our comments policy if you have any concerns about posting a comment below.
Image Credit: LSE Impact Blog via Canva.