
Interactive mobile applications and AI chatbots can diminish users’ privacy concerns while increasing engagement through enhanced perceptions of playfulness, creating potential vulnerabilities for the extraction of personal data.

Penn State researchers studied 216 participants who used simulated fitness applications to examine how interactivity affects privacy vigilance during registration, with the findings published in the journal Behaviour & Information Technology.

The study examined two distinct types of interactivity: message interactivity, ranging from basic question-answer formats to sophisticated conversational exchanges that build on previous responses, and modality interactivity, involving visual engagement features such as image manipulation controls.

Professor S. Shyam Sundar, who leads Penn State’s Center for Socially Responsible Artificial Intelligence, warned that companies could exploit these behavioural patterns to extract private information without full user awareness.

“Interactivity does not make users pause and think, as we would expect, but rather makes them feel more immersed in the playful aspect of the app and be less concerned about privacy,” Sundar explained.

The research revealed that message interactivity, characteristic of how modern AI chatbots operate, was particularly effective at distracting users from considering the risks of disclosing information. This finding challenges assumptions that conversational interfaces increase cognitive alertness to privacy concerns.

Lead researcher Jiaqi Agnes Bao, now at the University of South Dakota, noted that engaging AI conversations cause users to forget the need for vigilance regarding shared sensitive information.

The study suggests design solutions could balance engagement with privacy protection. Researchers found that combining both types of interactivity could prompt user reflection, for example by inserting rating prompts during chatbot conversations to encourage users to consider their information-sharing practices.

Co-author Yongnam Jung emphasised that platforms bear responsibility beyond merely offering sharing options, stating that helping users make informed choices represents a responsible approach for building trust between platforms and users.

The findings hold particular relevance for generative AI platforms, which primarily rely on conversational message interactivity. Sundar suggested that inserting modality elements like pop-ups during conversations could interrupt the “mesmerising, playful interaction” and restore user awareness.

The research builds upon previous Penn State studies revealing similar patterns, highlighting a critical trade-off where enhanced user experience through interactivity simultaneously reduces attention to privacy risks.
