
Despite fears that the media are overhyping artificial intelligence, a new study reveals that professional journalists are surprisingly disciplined about stripping machines of personality.

Research from Iowa State University, published in Technical Communication Quarterly, examined more than 20 billion words of news content to assess whether writers were blurring the line between human and machine.

The study found that reporters rarely use “mental verbs” — words such as “think,” “know,” or “decide” — when describing AI, thereby refusing to imply consciousness where none exists.

‘Thinking’ machines

The researchers argue that using human language to describe software is dangerous. Words like “understand” or “want” suggest that AI has beliefs, desires, or an inner life, when in reality these systems merely generate outputs based on patterns.

“We use mental verbs all the time in our daily lives, so it makes sense that we might also use them when we talk about machines – it helps us relate to them,” said Jo Mackiewicz, a professor of English at Iowa State. “But at the same time… there’s also a risk of blurring the line between what humans and AI can do.”

Furthermore, saying “AI decided” acts as a shield for the real decision-makers: the humans who design and deploy the technology. Framing AI as autonomous shifts responsibility away from its creators.

Restrained reporting

To test whether this was occurring, the team analysed the “News on the Web” (NOW) corpus, a large dataset of English-language news articles from 20 countries.

They expected to find rampant anthropomorphism. Instead, they found that news writers are largely sticking to functional descriptions.

  • The verb “needs” was the most common pairing with “AI,” appearing 661 times — but often in a mechanical sense, such as “AI needs data,” akin to “a car needs petrol.”
  • The verb “knows” was the most frequent pairing with “ChatGPT,” yet it appeared only 32 times in the entire dataset.

The researchers suggest that strict guidelines from organisations such as the Associated Press, which advise against attributing human emotions to machines, are successfully keeping coverage grounded.

The study concluded that, while anthropomorphism exists on a spectrum, the media largely avoid the extreme end.

“For writers, this nuance matters: the language we choose shapes how readers understand AI systems, their capabilities and the humans responsible for them,” said co-author Jeanine Aune.

