Preparing for the coming wave of generative AI in journalism

The public release of popular generative AI technology last year has prompted a range of responses, particularly in fields dependent on creativity and information processing. In this Q&A, Maayan Arad talks to Charlie Beckett about how generative AI is reshaping journalism.


Readers can hear more from Charlie and other researchers on a recent LSE iQ podcast, Is AI coming for our jobs?


What made you interested in the relationship between technology and journalism and how is AI changing this debate?

I’d like to say it’s because I’m a brilliant predictor of the future and I knew that this explosion of interest in AI was going to happen now. But it’s been much more gradual. Prior to joining LSE, I was a journalist for 20 years, and throughout that period technology was a constant driver of change. I was in television during the shift from film to video, and I saw first-hand how the arrival of mobile phones and then the internet changed journalism. In 2006 I joined LSE to set up Polis, a think tank based in LSE’s Media and Communications department focused on the intersection of journalism, research and policy, primarily to explore how social media was changing how we create and consume journalism.

I think in a way we’re going through a kind of golden age for journalism, with generative AI being the latest stage in this development, although it may not feel like that if you’re a hard-pressed hack in an under-resourced newsroom. At Polis we have been working on journalism and AI since 2019, and have created a project that works globally to help journalists understand these technologies and use them in a responsible way that boosts independent journalism rather than undermining it. This was going well, and then in the past year, with the release of large language models and generative AI tools like ChatGPT, DALL-E and Midjourney, there has been a vast increase in interest and new kinds of questions being asked of journalism.

How do journalists use AI and how do you see new generative AI tools changing the sector?

There are very few newsrooms that have got bigger in recent years. Yet they are creating more content than before and reaching more people than ever. Take the Guardian in the UK, for example: in the print era it had an audience measured in the hundreds of thousands; it now has a global audience of 40-50 million and over a million members who make a voluntary payment.

To operate at this kind of volume and rapidity, constantly trying to come up with news stories and engage your audience, requires incredible effort. Technology is already deployed to do this: it provides audience data, it helps personalise websites and newsletters, and it helps look for stories in the tsunami of data and information out there. AI basically helps to do all of this that much more efficiently. It is also more user friendly. Speaking to a video editor the other day, he described how it normally takes him 45-60 minutes to cut a two-minute piece for social media; now AI enables him to do it in half the time.

https://www.youtube.com/watch?v=_BB5QmyWWA0

So it’s an efficiency game. But, as Gina Chua, executive editor at Semafor, made clear in a big plea for the industry to use its imagination, journalists should not just be thinking, how can this make it easier to do what we’ve already done, but what new services and products can we build out of this technology? One example I have come across is Skift, a kind of travel business media outlet, which has just created a chatbot based on its own data. A similar example is a personal finance programme on NPR, the public service radio broadcaster in the US. They’ve done the same thing, so you can now interrogate all their data through a chatbot. That’s clever, that’s imaginative, and it is a way of getting your journalism to do things it couldn’t do before.

Is generative AI changing the practice of journalism itself?

Existing AI tools are used for tasks like automating reports on high school football scores or analysing leaked documents. These uses are important, but so is the way AI tools might generate a variety of content outright. The news organisations I’m speaking to, the better ones anyway, are saying it’s amazing, but they are not going to use it to publish because it makes mistakes. Yet they are thinking about strategy and how it’s going to change the structure of their business model. They’re also thinking hard about copyright. Who is responsible for the data being used in these large language models? Where do they get it from? Who is responsible if you create something with these tools? For instance, you want to see real pictures of the war in Ukraine, but would you mind if stock images or illustrations were AI generated?

I am more concerned with how you can become dependent on technologies and how they can distort your priorities

Journalists are also thinking about applications. AI tools can help summarise emails, draw up spreadsheets, create PowerPoint slides and so on. For a journalist, who is essentially somebody who processes information, they can be a time saver. However, at the moment we simply might not know, or might underestimate, the longer-term impacts. There is an optimistic scenario that says: if AI can do lots of the more routine, formulaic, general news, then that opens up the possibility for journalists to do the more rigorous, critical, empathetic, campaigning, ideological, value-driven news, perhaps even getting out and talking person to person.

I am more concerned with how you can become dependent on technologies and how they can distort your priorities. For example, if you’re doing stories based on data, then you are likely to be drawn towards stories that are more datafiable, stories with numbers in them, or spreadsheet journalism. You may then be less likely to do stories that can’t be automated, stories that are perhaps more subtle, more complex, more in the grey zone, stories about humans. There is a parallel here with social media, where journalists have tended to exaggerate the importance of social media, and especially Twitter (now X). Social media has been wonderfully useful for networked journalism, but it is not at all representative; it’s just a very narrow slice of people and current debate.

Does the ability to create news-like content present a challenge to established news media?

Journalism is part of a remarkable live experiment. We are in a kind of wild west at the moment as big corporates like Microsoft and Google race to see who can exploit it best and release a seemingly endless array of algorithms. Amongst some tech leaders there is also a view that we should simply wait and see when it starts doing bad things, so we can then correct it, which is an interesting take on public safety.

I think one of the key things is therefore going to be the way that the companies themselves introduce guardrails and, one hopes, transparency. The other is regulation. We have copyright laws, although they are territorial. We have online safety bills in places like the UK and across Europe, and privacy protection laws that relate to anything online. In the USA, there is a set of tools which is more about market equity. Realistically, the outcome is likely to fall somewhere between these two poles of industry self-regulation and government regulation.

I think one of the key things is therefore going to be the way that the companies themselves introduce guardrails and, one hopes, transparency.

News media has proven remarkably resilient over the past decade, but it should be bracing for a storm. It will have to invest in itself because, while some of the tech companies have an interest in news media being relatively successful, most don’t. Overall, news is a very small part of online content creation. So in that sense, news media has to look after itself.

I am probably biased, but I think the news media is disproportionately important because of the role that journalists can play in providing, if you like, a reliable brand: a place where at least there are rules in place, there’s institutional accountability, and there’s a competitive onus on news organisations to give people decent journalism that they find relevant. I would not go so far as to say it’s journalism’s job to clean up the internet, but it can provide sources for people to get more accountable information.

On the other hand, there is no doubt that if generative AI can make good information spread more quickly and efficiently, then of course it can do the same for mis- and disinformation. I don’t want to sound complacent, and I don’t think that the technology in itself can fix this, but it can help. You can use AI for fact checking. You can automate fact checking. You can automate filters. You can create watermarks. You can signal to people which sources are more reliable.

Some people thought that the Pope was really wearing a Balenciaga coat the other day, but most people didn’t.

There are ways of mitigating bad actors, but we need to pay less attention to the volume of mis- and disinformation and more to thinking strategically about media literacy and resilience. Recent research shows that fake news is less effective than you might think. Some people thought that the Pope was really wearing a Balenciaga coat the other day, but most people didn’t. And within a few hours, it was very clear that it was a fake, a delightful fake.

What is more concerning is when people try to flood certain online spaces with information that isn’t necessarily untrue, but is manufactured, potentially by AI. A good example is Russian media taking manufactured comments that imply public opinion in the UK is hostile to supporting Ukraine and then using them as part of a story saying: “Hey, look, British people don’t support the war in Ukraine.” There are real attempts to distort and swamp, and these are going to continue. I think we have to be careful that we don’t just see this as a tech problem. This is a problem about our politics and our education; it’s about understanding how people feel about politics as much as the information they’re getting.

What skills do you think will be important for journalists grappling with these changes?

I think for everyone, but especially journalists, there are some obvious steps to take. The first thing is just making sure you know a bit about it. For example, our Journalism AI project offers short introductory courses, and we have an AI starter pack where people can explore this technology. You need to know something about it so that you’re not just repeating the myths, misunderstandings and hype, and so you have some sense of what it’s about.

It is difficult, though, because the use cases for generative AI are incredibly varied. It would be easy for me to say that everyone should learn to code, but the joy of generative AI is that you don’t need to. You can just start experimenting with it, and I think that is the best way. Yet I would consider two things. One, application: a tool can be brilliant, but if it doesn’t fit your workflow, if it doesn’t give the outcomes you want, then what’s the point? Two, celebrate what it might do and how it might do something differently. It might make you think differently. It might make you think that your journalism itself needs to change.

Curiosity rather than creativity is something that machines will struggle with.

In journalism the job is not to make the journalist’s job sustainable per se; it’s to make journalism sustainable. I think in practice, the good, creative, independent, critical journalist who is deeply curious will survive. Curiosity, rather than creativity, is something that machines will struggle with. Generative AI can respond to prompts, but the human quality of wanting to know something that you don’t necessarily know, or to know more and to investigate, that, I think, is currently at least impossible to replicate through AI.

Generative AI will change journalism and journalists; whether it means a loss of jobs overall, I don’t know. And that’s not the point. The point is whether you are still getting good journalism. There is a danger of being too dependent on the technology, which is why I keep raising the hopeful idea that it might allow us to do more ‘human journalism’ and enable us to be more thoughtful and critical about the way we use these technologies. This is not limited to journalism: generative AI is already changing how universities approach teaching and research. Journalism should have a strong role in the ongoing debate around these technologies. It remains a good place for informed expert analysis that is public-orientated and tries to speak on behalf of citizens. Journalists are supposed to be experts in analysing information, understanding structures of power and information, and communicating that to the public. We mustn’t leave this debate to the politicians and the tech people.

 


For more analysis of the impact of AI, listen to the latest LSE iQ podcast, which includes input from the author. This Q&A is an updated and edited extract from an interview conducted for the podcast.

The content generated on this blog is for information purposes only. This article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns about posting a comment below.

Image Credit: Google Deepmind via Unsplash. 


 
