Imagine you’re on a website, reading the news about the recent passing of a major figure in the tech industry.
You discover details about when they passed, the names of some companies they were attached to, and which famous people shared their heartfelt condolences.
It’s only when you reach the end of the article that you realize the author never mentioned their name.
Stories like this have started appearing on sites across the internet. At first glance, they seem like normal news pieces, but when you look closer, you realize something is off with the details.
The reason, of course, is that it was written by an AI.
Do Journalists Use AI?
AI has been part of the newsroom for longer than we might expect.
According to What’s New In Publishing, newspapers such as the Washington Post have been using generative AI for the past several years to write hundreds of articles.
To be clear: rather than write whole articles, in-house bots have been condensing articles, creating short alerts of high impact news, and even composing the equivalent of Tweets about sports games.
In these cases, their use has largely been like that of a copy editor, helping churn out micro-content faster than most people could. Most of us have likely never noticed because the use was subtle and, if I may say so, appropriate.
After all, AI can be a powerful tool in the hands of the right person.
Given its current limitations, a human constantly needs to either supervise the delivered product or set its operational parameters tightly enough to constrain it without letting it go off the rails.
Where things get tricky is when AI is used to write articles wholesale – either from a prompt and headline provided by the editorial team, or to generate everything (topic, title, content) from scratch.
For anyone who has played around with ChatGPT, we all know what can happen if we give an AI too much room to be “creative” – we end up with what is often called a hallucination.
In sum, we can easily get bonkers results.
These mishaps can range from misunderstanding the prompt and producing something irrelevant to making things up – such as imaginary guitar chords.
Of course, terming these missteps hallucinations is a coy euphemism for what’s really going on: the large language model simply making something up because it sounds, well, like something that a person might write.
Will AIs Be Responsible for Fake News?
At the moment, I’m hesitant to call the hallucinations and made-up nonsense produced by AIs fake news.
Fake news is commonly used to describe what happens when “news” sites either make up stories wholesale or write them in a way that the takeaway is entirely misleading.
There’s an intentionality in fake news – whether it’s for creating divisions ahead of a political campaign, driving up support among tribal lines, or simply for exploitatively generating clicks and more ad revenue based on sensational headlines.
The AIs used for generating text do not have intentionality, so it is not as if these tools are responsible for everything they produce.
However, their operators certainly might be inclined to produce fake news, and in their hands these tools have the potential to supercharge their output.
A lot of the fake news out there is produced by the equivalent of sweatshops, where a team of numerous writers in a country with loose labour laws and low costs sits around and hammers out as much content as they can – effectively throwing all kinds of nonsense at the web and seeing what sticks.
Where the worst offenders were essentially limited by how fast a person can type, a bot such as ChatGPT could cut the time to produce a fake article from, say, 30 minutes to 30 seconds.
There are some limitations in most language models that mean a human editor has to go in and clean up their output afterwards. But there is already an “unethical ChatGPT” available on the dark web that can likely be used to produce more harmful content faster than before.
Will AI Replace Journalists?
The truth of it is that AI is already being used to write news articles and replace journalists.
If it seems like more and more of the content out there is machine-written, you’re not crazy – and the rate at which this content is published is likely accelerating.
The Guardian recently identified 50 news websites that appear to be composed of entirely (or close enough) AI generated content. Many of the articles are top 10 listicles, but others also contain advice columns and “news articles” that may not be grounded in actual events.
While most of the sites on that list look like quick traffic grabs or the kind of mid-to-low quality content a typical affiliate site offers, more reputable publishers are also posting articles written by machine learning tools.
The Daily Mirror and Express have both begun publishing articles written entirely by AI, though both claim they do not intend to replace their human writers. German newspaper Bild, however, has begun laying off staff as it transitions to more AI.
Technology providers are also doing little to stop the trend.
Google has started testing its own proprietary tool Bard’s ability to generate news articles without human input, while Microsoft is offering cash to outlets to begin implementing ChatGPT and other OpenAI tools in their newsrooms.
We’re not at the point where a bot can replace investigative journalism or original on-the-scene reporting, but the other duties that make up a journalist’s day – such as picking up stories from a wire or commenting on last night’s sports game – are already going the way of the dinosaur.