Interactive mobile applications and AI chatbots can diminish users’ privacy concerns while increasing engagement by making interactions feel more playful, creating openings for the extraction of personal data.

Penn State researchers studied 216 participants using simulated fitness applications, examining how interactivity affects privacy vigilance during registration. The findings were published in the journal Behaviour & Information Technology.

The study examined two distinct types of interactivity: message interactivity, ranging from basic question-answer formats to sophisticated conversational exchanges that build on previous responses, and modality interactivity, involving visual engagement features such as image manipulation controls.
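To make the distinction concrete, here is a minimal TypeScript sketch (purely illustrative; the prompts and function names are not from the study) contrasting a low message-interactivity flow, where questions are fixed and independent, with a high message-interactivity flow, where each prompt builds on the previous answer:

```typescript
// Purely illustrative; not the study's actual apparatus.

// Low message interactivity: a fixed list of independent questions,
// asked one after another with no reference to earlier answers.
const basicQuestions: string[] = [
  "What is your age?",
  "What is your weight?",
  "What is your fitness goal?",
];

type Turn = { question: string; answer: string };

// High message interactivity: the next prompt builds on the previous
// answer, producing a threaded, conversational exchange.
function nextPrompt(history: Turn[]): string {
  const last = history[history.length - 1];
  if (!last) return "Hi! What fitness goal brings you here?";
  return `"${last.answer}" sounds great. What has worked for you so far?`;
}

console.log(basicQuestions[0]); // low: same question for everyone
console.log(nextPrompt([{ question: "Goal?", answer: "run a 10K" }])); // high: references the answer
```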

Professor S. Shyam Sundar, who leads Penn State’s Center for Socially Responsible Artificial Intelligence, warned that companies could exploit these behavioural patterns to extract private information without users’ full awareness.

“Interactivity does not make users pause and think, as we would expect, but rather makes them feel more immersed in the playful aspect of the app and be less concerned about privacy,” Sundar explained.

The research revealed that message interactivity, the mode typical of modern AI chatbots, was particularly effective at distracting users from considering the risks of disclosing information. This finding challenges the assumption that conversational interfaces heighten cognitive alertness to privacy concerns.

Lead researcher Jiaqi Agnes Bao, now at the University of South Dakota, noted that engaging AI conversations can make users forget to stay vigilant about the sensitive information they share.

The study suggests design solutions could balance engagement with privacy protection. Researchers found that combining both types of interactivity could prompt user reflection, such as implementing rating prompts during chatbot conversations to encourage consideration of information-sharing practices.
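As a rough illustration of how such a design might work (a hypothetical sketch; the step structure and wording here are assumptions, not the researchers’ implementation), a sign-up chatbot could interleave a rating widget after each question that requests sensitive data:

```typescript
// Hypothetical sketch: pausing a conversational sign-up flow with a
// rating widget after each question that requests sensitive data.
interface Step {
  question: string;
  sensitive: boolean; // marks questions that request personal data
}

const registrationSteps: Step[] = [
  { question: "What should we call you?", sensitive: false },
  { question: "What is your date of birth?", sensitive: true },
  { question: "Any health conditions we should know about?", sensitive: true },
];

// Build the full flow, inserting a modality element (a rating widget)
// immediately after each sensitive question to prompt reflection.
function buildFlow(steps: Step[]): string[] {
  const flow: string[] = [];
  for (const step of steps) {
    flow.push(step.question);
    if (step.sensitive) {
      flow.push("[RATING WIDGET] How comfortable are you sharing this? (1-5)");
    }
  }
  return flow;
}

console.log(buildFlow(registrationSteps).join("\n"));
```

The design choice in this sketch is to tie the interruption to the sensitivity of the question rather than to time, so reflection is prompted exactly when disclosure risk is highest.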

Co-author Yongnam Jung emphasised that platforms bear responsibility beyond merely offering sharing options, stating that helping users make informed choices represents a responsible approach for building trust between platforms and users.

The findings hold particular relevance for generative AI platforms, which primarily rely on conversational message interactivity. Sundar suggested that inserting modality elements like pop-ups during conversations could interrupt the “mesmerising, playful interaction” and restore user awareness.
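One plausible way to realise that suggestion (again hypothetical; the turn threshold and message wording are my assumptions, not the researchers’ design) is a simple counter that surfaces a privacy pop-up after a fixed number of conversational turns:

```typescript
// Hypothetical sketch: interrupt a chat session with a privacy
// reminder pop-up after a fixed number of message exchanges.
class ChatSession {
  private turns = 0;
  constructor(private readonly popupEvery: number = 5) {}

  // Returns pop-up text when the turn counter hits the threshold,
  // otherwise null so the conversation continues uninterrupted.
  onUserMessage(_message: string): string | null {
    this.turns += 1;
    if (this.turns % this.popupEvery === 0) {
      return "Quick check: you have shared several details so far. Review them before continuing?";
    }
    return null;
  }
}

const session = new ChatSession(3);
["msg 1", "msg 2", "msg 3"].forEach((m) => {
  const popup = session.onUserMessage(m);
  if (popup) console.log(popup); // fires on the third message
});
```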

The research builds upon previous Penn State studies revealing similar patterns, highlighting a critical trade-off where enhanced user experience through interactivity simultaneously reduces attention to privacy risks.
