Are Universities Too Slow to Handle Generative AI?

Like other trends in education technology, generative AI has provoked a lively debate, dividing those who champion the technology's disruptive potential from those with more critical viewpoints. Mark Carrigan argues that as generative AI is already being woven into professional and student practice, the higher education sector cannot remain on the sidelines and must find ways to respond more quickly to new developments.

How will universities cope with generative AI? Asking a question like this risks taking the hype at face value, assuming that even if the metaverse and blockchain disappointed, this really is the “next big thing.” There are huge economic interests at work in promoting generative AI, as a tech sector struggling to cope with a changing economic climate and the failure of its pandemic dreams seizes upon generative AI to maintain its influential position in society. However, if we do not investigate how universities are responding, there is a parallel risk that we will fail to address the practical problems associated with generative AI that universities are already beginning to face.

Machine production of cultural artifacts has a long history, with procedural text generation practiced since at least 1305. Narrative Science was offering systems for producing business and sports journalism more than a decade ago. What has changed with ChatGPT is the speed and flexibility of the offering, as well as the phenomenally successful marketing campaign that made it the fastest-growing consumer app in history. While the humanistic assumption that art and culture should emerge from human creativity has been eroding for some time, the current moment matters because of how widely the technology is understood and how low the barriers to entry are for those who want to experiment with cultural production.

One popular analogy compares ChatGPT to a calculator: only ignorance of the technology, it suggests, makes us see using the system as a substitute for creativity rather than an expression of it; once we come to terms with its affordances, we will treat it as we treat a calculator, a way to do arithmetic more easily that frees us for more important tasks. The problem with this analogy is that calculators are not embedded in global computing architectures locked in a multi-billion dollar arms race to dominate our sociotechnical future. The practical challenges universities will face in the near future, such as maintaining the integrity of assessment and recognizing automated contributions to publications, need to be seen in this broader ethical and political context.

Ignoring the issue risks creating chaos in assessment and failing students who will go on to work in environments where these systems are ubiquitous. Yet normalizing it builds platform capitalism into the core activity of the modern university. These systems are built on computing power and data collection as much as on scientific innovation, and their continued growth depends on ever-expanding mechanisms of user interaction and data extraction. OpenAI has been explicit about relying on “collective intelligence” to guide deployment and system improvement, leaving higher education in the awkward position of institutionalizing the company's business model within the university.

As danah boyd recently observed, a struggle for thought leadership and narrative control is underway, with competing utopian and dystopian visions laced with a rich vein of usually unacknowledged self-interest. Not only is the university sector no different in this regard, there is a particular form of discursive explosion to which the post-pandemic university is prone: just as the flood of publications about COVID-19 seems to be ebbing, another one is rising. Google Scholar already logs 629 results for the exact search term “Chat GPT,” despite the software only launching on November 30, 2022. It remains to be seen how generative AI might further accelerate this commentary and analysis. Obviously, this blog post is part of the explosion, though I feel sincere in writing it, as no doubt do the authors of each of those 629 articles.

boyd's warning is timely because the struggle leaves “little room for deep reflective thinking, for subtle analysis” of the core issue we face: how do we create structures for understanding and evaluating these transformations as they are deployed, and feed what we learn back into development cycles? This is what universities are currently confronting as they try to address pressing practical issues (such as guidance to students on the use of ChatGPT in summative and formative assessment) in a unified manner that lays the groundwork for responding to still unpredictable future developments. Part of the problem is that even a single system like ChatGPT encompasses a dizzying array of use cases for academics, students, and administrators, many still in the process of being discovered. Its core capabilities are expanding seemingly faster than universities can cope with, as evidenced by the launch of GPT-4 (and the all-important ChatGPT plugin architecture) while universities are still struggling with GPT-3.5. Moreover, generative AI is a broader category than ChatGPT, and generative systems for images, video, code, music, and voice are likely to reach the general public in the coming months and years.

In what Filip Vostal and I have called the Accelerated Academy, the pace of working life is increasing (albeit unevenly), yet policy development remains too slow to keep up. In sprawling, decentralized universities there is a perennial problem of distance from practice, with policies formulated and procedures designed without regard for realities on the ground. With the use cases of generative AI, and the problems it generates, being discovered daily, we urgently need mechanisms to identify and filter these problems across the university so as to respond on timescales faster than established bureaucratic routines allow. If policy formulation and decision-making cannot be accelerated, there is a risk that institutional measures will actually exacerbate the problems by conveying expectations out of step with a rapidly changing situation, such as moving towards creative forms of assessment without taking into account the growth of text-to-image and text-to-video systems.

There are many reasons why more agile decision-making is needed, but the most important, in my view, is the risk of an increasing burden on staff. For example, as Phil Brooker and I explored in our work on coding skills for sociologists, individualized models of “digital upskilling” (whether voluntary training or private use of open resources) can usefully be replaced by group work on real problems, with better intellectual results and less burden on academics. If universities cannot develop structures capable of dealing with the consequences of generative AI, then academics and professional services staff will be left, as Beck and Beck-Gernsheim once put it, “to seek biographical solutions to systemic contradictions.” There are exciting creative opportunities and serious ethical challenges on the horizon. I would like to be more confident in the ability of universities to realize the former and respond adequately to the latter.

The content generated on this blog is for information purposes only. This article gives the views and opinions of the author and does not represent the views and opinions of the Impact of Social Sciences blog or the London School of Economics and Political Science. Please review our comments policy if you have any concerns about posting a comment below.

Image credit: DeepMind via
