When it comes to cultural research, why do fairy tales carry more weight than evidence?

The academy is nominally an evidence-based profession, yet higher education institutions are more likely to rely on individual experience and anecdote than on systematic knowledge when developing policies for academic careers and management. Discussing his recent book The Quantified Scholar, Juan Pablo Pardo-Guerra argues that higher education still has a lot of work to do before it can put the findings of the “science of science” into practice.

Not too long ago, I found myself in a spirited conversation with fellow sociologists on the ever-important topic of how best to support doctoral students throughout their studies. The focus of our exchange was the research and publishing strategies of doctoral students. The argument was that, in order to prepare students for a competitive job market, we need to increase collaboration between graduate students and faculty. This meant creating additional structured opportunities for students to work with us, on the assumption that doing so would lead to more robust research training, a stronger publication record, and therefore a higher chance of successful placement.

The conversation I had with these sociologists was not unique. I have heard it time and again, at other times and in other places (at conferences, in busy corridors, and unhurriedly over dinner). What was intriguing about this instance, however, was how it crystallized scholars’ assumptions about how their workplaces should be organized. Joining this conversation after several years of immersion in the literature on metrics, scientific careers, and the sociology of science more broadly, it became clear to me that the ideas proposed, however well-intentioned, are disconnected from the growing body of evidence on how modern academia works.

There may be research into what works and what doesn’t, but making that research relevant (getting science to really “believe in science”) becomes a Sisyphean struggle.

You see, contrary to our best intentions, research on collaboration between doctoral students and their supervisors tends to emphasize the importance of independent publication paths. In a rigorous study of protégé–advisor relationships among 40,000 STEM scientists, Ma, Mukherjee, and Uzzi found that scientists who pursue research interests distinct from those of their immediate advisors, and who publish only a few papers with them, generally have more successful academic careers. This is echoed by research from computer scientists Rosenfeld and Maksimov, who concluded that “young computer scientists should be reasonably encouraged to stop collaborating with their supervisors, the sooner the better.” This is not to say that mentoring is unimportant: as Wuestman et al. show, more numerous and varied mentoring relationships have a positive effect on academic careers. It does mean, however, that betting everything on collaboration is not the smartest strategy.

For more than a century, scientists have treated their own labor and its products as legitimate objects of study. From the early bibliometric literature to the recent “science of science”, from philosophical debates over boundary demarcation to recent contributions in science and technology studies, from ethnography to large-scale computational analyses, we have amassed a wealth of evidence about our craft, its problems, and its various possible points of intervention and reform. We know science as we know the mechanisms of social class. We know science as we know the genomes of living beings. We know science as we know the complex climate patterns that portend our collective future. We know. Too often, however, this knowledge of our craft is divorced from its governance. There may be research into what works and what doesn’t, but making that research relevant (getting science to really “believe in science”) becomes a Sisyphean struggle.

While writing The Quantified Scholar, I encountered this as a recurring theme. The book shows how research assessments have shaped the production of social scientific knowledge in the UK. Its highest-level claim is that, evaluation after evaluation, the epistemic and organizational diversity of the British social sciences has been declining. I hoped this finding would serve as a tool for action. Yet in making sense of these effects, I find that colleagues fall back on something closer to the folk methods described by ethnomethodologists (general, implicit, and habitual ways of explaining the world) than on systematized knowledge about our fields. For example, when talking about how assessments have influenced our knowledge practices, scholars described a detached, independent process, whereas the literature documents a contested process of active shaping. In thinking about how evaluations play out in our organizations, scholars spoke almost naturalistically, without considering other documented best practices. And when I presented evidence of how we scientists, with our particular professional view of the world, jointly produce the results of research assessments, I was met with distrust. On one memorable occasion, while presenting the book to an audience, I was asked why I blamed our troubles on scientists rather than on managers. “We, too, have a responsibility for giving power to the numbers through which we are governed,” I replied, to grumblings of disapproval. Shortly thereafter, the next speaker was introduced along with a list of the highly cited journals in which they had published. The numbers, given life by the scientists in the room.

This is not entirely surprising. As scientists, few of us know any alternative. We may be taught methods and theories, coding languages and historiography, but we are rarely exposed to more reflexive knowledge about ourselves as a professional group. Many scientists learn to be scientists largely by going along: following what others have done, emulating what has worked (albeit imperfectly) in the past. Handbooks of academic life exist, but they are generally read by those who are already desperate, not those who are just starting out. This is, of course, ironic for a professional group dedicated to producing knowledge about the world. “Physician, heal thyself,” it is said, unless you are a real doctor.

We may be taught methods and theories, coding languages and historiography, but we are rarely exposed to more reflexive knowledge about ourselves as a professional group.

The central message of The Quantified Scholar is not descriptive (that research assessments make fields more homogeneous) but prescriptive (knowing how assessments and our profession work is a powerful tool for change). The kind of reflexive solidarity the book calls for is precisely the kind of evidence-based intervention that could make our working lives more balanced and fair. For example, we may think that the best way to help graduate students is to collaborate with them more, but that merely projects our habitual ways of doing academia onto them. The better form of solidarity, the stronger commitment to their work, would be to think seriously about how to foster successful independent research supported by mentoring. This may involve changes in training. It may require changes in program design. But it necessarily involves reviewing the data, evaluating what works, weighing the evidence, and testing interventions.

As a new round of research evaluation takes shape in the UK, reflexive solidarity becomes all the more important. This is a formative period in which decisions are made about internal institutional processes, the structure of mock assessments, the nature of administrative rules, and other consequential choices. But this formative moment can also be informed by the science of science, by what we know about how knowledge is produced and about the paths and alternatives available to British scientists at both the institutional and sectoral levels. What will the increased emphasis on impact do? Are there opportunities to introduce alternative indicators that mitigate some of the problems of those used in the past, if not explicitly then indirectly? While we have no crystal ball, we can count on a wealth of evidence. Indeed, the elements of a more reflexive professional practice are widely available, a prime example being the work coordinated by the Research on Research Institute at the University of Sheffield, as seen in the recent report by Curry, Gadd and Wilsdon, Harnessing the Metric Tide. Being attentive to this and other knowledge about ourselves is of fundamental importance. Let’s believe in science, with all its caveats.

The content on this blog is for informational purposes only. This article gives the views and opinions of the authors and does not represent those of the Impact of Social Sciences blog or the London School of Economics and Political Science. Please see our comments policy if you have any concerns about posting a comment below.

Image credit: Ryoji Iwata via Unsplash.com.
