
Despite fears that the media is overhyping artificial intelligence, a new study reveals that professional journalists are surprisingly disciplined about stripping machines of personality.

Research from Iowa State University, published in Technical Communication Quarterly, examined more than 20 billion words of news content to assess whether writers were blurring the line between human and machine.

The study found that reporters rarely use “mental verbs” — words such as “think,” “know,” or “decide” — when describing AI, thereby refusing to imply consciousness where none exists.

‘Thinking’ machines

The researchers argue that using human language to describe software is dangerous. Words like “understand” or “want” suggest that AI has beliefs, desires, or an inner life, when in reality these systems merely generate outputs based on patterns.

“We use mental verbs all the time in our daily lives, so it makes sense that we might also use them when we talk about machines – it helps us relate to them,” said Jo Mackiewicz, a professor of English at Iowa State. “But at the same time… there’s also a risk of blurring the line between what humans and AI can do.”

Furthermore, saying “AI decided” acts as a shield for the real decision-makers: the humans who design and deploy the technology. Framing the AI as autonomous shifts responsibility away from its creators.

Restrained reporting

To test whether this was occurring, the team analysed the “News on the Web” (NOW) corpus, a large dataset of English-language news articles from 20 countries.

They expected to find rampant anthropomorphism. Instead, they found that news writers are largely sticking to functional descriptions.

  • The verb “needs” was the most common pairing with “AI,” appearing 661 times — but often in a mechanical sense, such as “AI needs data,” akin to “a car needs petrol.”
  • The verb “knows” was the most frequent pairing with “ChatGPT,” yet it appeared only 32 times in the entire dataset.
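To give a rough sense of what this kind of collocation count involves, here is a minimal Python sketch. It is not the study’s actual method or code: the verb list, subject words, and sample sentences are invented for illustration, and a real corpus analysis would involve part-of-speech tagging and far more data.

```python
# Illustrative sketch only: count which "mental verbs" directly follow
# "AI" or "ChatGPT" in a plain-text sample of news sentences.
import re
from collections import Counter

# Hypothetical verb list for illustration (not the study's list).
MENTAL_VERBS = {"thinks", "knows", "decides", "understands", "wants", "needs"}

def count_verb_pairings(texts, subjects=("AI", "ChatGPT")):
    """Count occurrences of '<subject> <mental verb>' for each subject."""
    counts = {subject: Counter() for subject in subjects}
    for text in texts:
        for subject in subjects:
            # Capture the word immediately after the subject, e.g. "AI needs".
            pattern = rf"\b{re.escape(subject)}\s+(\w+)"
            for match in re.finditer(pattern, text):
                word = match.group(1).lower()
                if word in MENTAL_VERBS:
                    counts[subject][word] += 1
    return counts

sample = ["AI needs data to learn.", "ChatGPT knows the answer, or seems to."]
print(count_verb_pairings(sample))
# {'AI': Counter({'needs': 1}), 'ChatGPT': Counter({'knows': 1})}
```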

The researchers suggest that strict guidelines from organisations such as the Associated Press, which advise against attributing human emotions to machines, are successfully keeping coverage grounded.

The study concluded that, while anthropomorphism exists on a spectrum, the media largely avoid the extreme end.

“For writers, this nuance matters: the language we choose shapes how readers understand AI systems, their capabilities and the humans responsible for them,” said co-author Jeanine Aune.
