Newsrooms have been using artificial intelligence for years to automate tasks such as transcription and content personalization. But it is the advent of generative AI tools like ChatGPT that has reignited conversations about opportunities, risks, and ethics. Generative AI will disrupt the way journalism is produced. Rather than hope for the best, journalists must better understand how they can use this technology to their advantage and counter the threats it poses.
Summary
To help your newsroom prepare, FT Strategies, the consulting arm of the Financial Times, held a webinar. Journalism.co.uk brings you the highlights from George Montagu, senior manager and head of insights at FT Strategies, who spoke at the event.
The issue of intellectual property
Large language models (LLMs), such as ChatGPT or Bard, heavily depend on quality input, especially when accuracy matters. In other words, when it comes to training these models, it’s a case of “garbage in, garbage out.”
Read more: Eight tasks ChatGPT can do for journalists
But here’s the problem: these models are trained on hundreds of thousands of pieces of content from publishers such as the NYT, the Guardian or the Times. This also applies to content behind paywalls. Publishers have received no recognition or compensation for the use of their content, despite having invested heavily in its production.
If you want to see the glass half full, this is an opportunity for media organizations to highlight the value they are creating and try to find ways to demand some form of payment from generative AI model creators. For those who see the glass half empty, publishers urgently need to redefine the way they protect their intellectual property and address this situation, which isn’t going away. The principles of Digital Content Next are a good place to start.
Improve the user experience
For years, publishers have been shifting from simple content providers to platforms that give users some freedom in how they consume news. This trend extends beyond news: Spotify, for example, launched a personalization feature called AI DJ that takes the user from passively listening to songs to actively using a tool.
Publishers may eventually see their content separated from the format, which will be decided by the user. For example, users might choose the tone of an article, its length, or complexity.
But this isn’t necessarily good news. Generative AI could lead even more users to receive their news through secondary gateways instead of going directly to publishers. This could happen through ChatGPT or Bard summaries, and in such cases, fewer direct users will inevitably impact publishers’ profits and advertising revenue.
To avoid this, news providers need to prioritize direct relationships with their audiences and ensure they become a destination for their users, not mere content producers.
Emphasis on trust
There have been some high-profile mistakes caused by generative AI, aptly called "hallucinations," in which the tool fabricates something and a news website publishes it without checking.
Fears that generative AI will flood the internet with low-quality content are justified; it is already happening. Although disheartening, this is also an opportunity for news brands to become trusted destinations for quality content, where audiences can turn to make sense of the world.
There’s no escaping the fact that the role of the media is changing: from helping people keep up to date with what’s happening in the world, to providing reliable, high-quality content verified by people that helps them make decisions in their daily lives.
But this torrent of AI-generated junk will most likely reduce quality and further damage audience engagement. So we must reconsider what’s truly valuable for our readers and viewers, and double down on quality, reliable, original content that will help us stand out from AI-generated material.
Source: journalism.co.uk