As the influence of artificial intelligence and generative AI chatbots like ChatGPT in society continues to expand, concerns have mounted about their potential to displace workers. Because ChatGPT excels at producing written text, journalism jobs were widely believed to be among the first on the chopping block.
While newsroom layoffs are rife worldwide, the proliferation of AI tools so far has played a smaller part in them than expected. Most publications employing these AI tools still insist on retaining reporters on the payroll who can review outputs for accuracy, clarity, and legal compliance.
The New York Times approved AI use by editorial staff last month, for example. The Associated Press, The Guardian, and News Corp have also disclosed aspects of their AI strategies. According to a 2023 report from a London School of Economics think tank, over 75% of journalists, editors, and other media professionals use AI in news production.
Some Gannett-owned news outlets in Massachusetts have begun using AI-assisted reporters, who rely on generative AI tools to quickly turn community announcements and press releases into articles. One of these outlets, MetroWest Daily News, reported that the objective is to bring stories to the public more rapidly while “freeing up our full-time multimedia journalists to focus on enterprising news coverage.”
While AI may streamline interview transcripts or suggest compelling headlines, it cannot originate enterprising news coverage. When an Italian newspaper published an insert written entirely by AI, its article on Donald Trump opened with a vague statement about the president’s notoriety; the piece read like an aggregation of existing content, lacking both an Italian perspective and a distinctive editorial hook. This is why human journalists will always remain integral to the journalism process.
AI summaries might promote better journalism, if big tech will pay for it
The question remains: If AI isn’t replacing journalists, why are so many struggling to keep their jobs? The business models of non-subscription-based publications depend on advertising revenue, which in turn hinges on readers clicking through to articles and being exposed to ads. With an increasing number of people consuming news via social media — and now through AI-generated summaries displayed at the top of Google search result pages — click-through rates are plummeting, pushing news outlets into dire financial straits.
This is the model that favoured clickbait articles and provocative headlines to maximise website traffic at any cost, ultimately alienating readers and eroding trust in the journalism industry. But with Google’s blue links declining in relevance, news outlets may soon be competing for visibility within AI-generated summaries. This could paradoxically foster a return to quality journalism: if the new ecosystem favours outlets that break news, deliver exclusives, or offer unique insights, it would incentivise substance over sensationalism.
Still, the financial question for news outlets remains critical. Publishers including The New York Times and Condé Nast have initiated lawsuits against tech giants for unauthorised use of their content in AI training datasets, asserting that these companies have not paid for the rights to repurpose copyrighted material.
Should the courts side against these publishers, news organisations may continue to struggle to find sustainable revenue for high-quality journalism. Conversely, a ruling in the publishers’ favour could force AI firms to purchase costly licences, potentially rendering it financially unfeasible to train models on up-to-date news content and diminishing the quality of AI-generated summaries.
Other outlets, such as The Guardian, Associated Press, Reuters, LA Times, and Financial Times, have opted to license their content to tech firms for use in training AI models. If this becomes the standard, AI-generated summaries will likely favour content from these partner publications, introducing a new layer of bias.
AI-generated news: Not as neutral as it could be?
A core principle in OpenAI’s latest Model Spec, which defines how the company would like its AI models to behave, is to “seek the truth together”, meaning the AI and the user should collaborate in uncovering accurate information. OpenAI supports this by having models default to objectivity. In theory, news summarised by AI should be politically neutral, regardless of the source material’s editorial stance.
But language choice can introduce bias even into models designed to be neutral. Earlier this month, for example, an AI summarisation tool used by the LA Times misinterpreted a series of articles as expressing sympathy for the Ku Klux Klan, when in fact the coverage sought to highlight how historical narratives had downplayed the group’s ideological threat.
It is also possible to manipulate AI models so they lose their neutrality. In one notable case, NewsGuard found that the Moscow-based “Pravda” network deliberately flooded the internet with pro-Kremlin falsehoods in 2024, successfully influencing AI chatbots to disseminate Russian propaganda.
Where do we go from here?
How should journalism evolve in an era of AI summaries that pay publishers nothing? Burt Herman, co-founder of the media nonprofit Hacks/Hackers, argues that newsrooms must take the initiative by developing their own AI-powered products, ensuring their content works in synergy with “algorithmic feeds and conversational interfaces.”
“AI can help with the challenges that have made engagement features too expensive or risky in the past,” he told Nieman Lab, “pitching in to moderate community-generated content, or matching people with shared interests to have a constructive conversation.”