
Despite fears that the media are overhyping artificial intelligence, a new study reveals that professional journalists are surprisingly disciplined about refusing to grant machines a personality.

Research from Iowa State University, published in Technical Communication Quarterly, examined more than 20 billion words of news content to assess whether writers were blurring the line between human and machine.

The study found that reporters rarely use “mental verbs” — words such as “think,” “know,” or “decide” — when describing AI, thereby refusing to imply consciousness where none exists.

‘Thinking’ machines

The researchers argue that using human language to describe software is dangerous. Words like “understand” or “want” suggest that AI has beliefs, desires, or an inner life, when in reality these systems merely generate outputs based on patterns.

“We use mental verbs all the time in our daily lives, so it makes sense that we might also use them when we talk about machines – it helps us relate to them,” said Jo Mackiewicz, a professor of English at Iowa State. “But at the same time… there’s also a risk of blurring the line between what humans and AI can do.”

Furthermore, saying “AI decided” acts as a shield for the real decision-makers: the humans who design and deploy the technology. Framing the AI as autonomous shifts responsibility away from its creators.

Restrained reporting

To test whether this was occurring, the team analysed the “News on the Web” (NOW) corpus, a large dataset of English-language news articles from 20 countries.

They expected to find rampant anthropomorphism. Instead, they found that news writers are largely sticking to functional descriptions.

  • The verb “needs” was the most common pairing with “AI,” appearing 661 times — but often in a mechanical sense, such as “AI needs data,” akin to “a car needs petrol.”
  • The verb “knows” was the most frequent pairing with “ChatGPT,” yet it appeared only 32 times in the entire dataset.
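To make the method concrete, here is a minimal sketch in Python of the kind of collocation counting the study describes: tallying which verbs immediately follow “AI” in a body of text. This is not the authors’ actual pipeline (they queried the NOW corpus directly), and the sentence list, verb set, and pattern below are all illustrative assumptions.

    import re
    from collections import Counter

    # Hypothetical mental-state verbs to look for after "AI"; the study's
    # own verb list is not reproduced here.
    MENTAL_VERBS = {"thinks", "knows", "decides", "wants", "understands", "needs"}

    # Stand-in sentences; the study analysed the multi-billion-word NOW corpus.
    sentences = [
        "AI needs data to improve.",
        "Some say AI knows more than we do.",
        "AI needs regulation, experts argue.",
    ]

    counts = Counter()
    for sentence in sentences:
        # Grab the word immediately following "AI" and tally it if it is
        # one of the target verbs.
        for match in re.finditer(r"\bAI\s+(\w+)", sentence):
            verb = match.group(1).lower()
            if verb in MENTAL_VERBS:
                counts[verb] += 1

    print(counts.most_common())  # [('needs', 2), ('knows', 1)]

A real analysis would also need part-of-speech tagging and sense disambiguation, since, as the study notes, “AI needs data” uses “needs” mechanically rather than mentally.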

The researchers suggest that strict guidelines from organisations such as the Associated Press, which advise against attributing human emotions to machines, are successfully keeping coverage grounded.

The study concluded that, while anthropomorphism exists on a spectrum, the media largely avoid the extreme end.

“For writers, this nuance matters: the language we choose shapes how readers understand AI systems, their capabilities and the humans responsible for them,” said co-author Jeanine Aune.
