Interactive mobile applications and AI chatbots can diminish users’ privacy concerns while increasing engagement through enhanced perceptions of playfulness, creating potential vulnerabilities for the extraction of personal data.

Penn State researchers studied 216 participants using simulated fitness applications to examine how interactivity affects privacy vigilance during registration processes, with findings published in the journal Behaviour & Information Technology.

The study examined two distinct types of interactivity: message interactivity, ranging from basic question-answer formats to sophisticated conversational exchanges that build on previous responses, and modality interactivity, involving visual engagement features such as image manipulation controls.

Professor S. Shyam Sundar, who leads Penn State’s Center for Socially Responsible Artificial Intelligence, warned that companies could exploit these behavioural patterns to extract private information without full user awareness.

“Interactivity does not make users pause and think, as we would expect, but rather makes them feel more immersed in the playful aspect of the app and be less concerned about privacy,” Sundar explained.

The research revealed that message interactivity, characteristic of modern AI chatbot operations, particularly distracted users from considering information disclosure risks. This finding challenges assumptions that conversational interfaces increase cognitive alertness to privacy concerns.

Lead researcher Jiaqi Agnes Bao, now at the University of South Dakota, noted that engaging AI conversations cause users to forget the need for vigilance regarding shared sensitive information.

The study suggests design solutions could balance engagement with privacy protection. Researchers found that combining both types of interactivity, such as adding rating prompts during chatbot conversations, could prompt users to reflect on their information-sharing practices.

Co-author Yongnam Jung emphasised that platforms bear responsibility beyond merely offering sharing options, stating that helping users make informed choices represents a responsible approach for building trust between platforms and users.

The findings hold particular relevance for generative AI platforms, which primarily rely on conversational message interactivity. Sundar suggested that inserting modality elements like pop-ups during conversations could interrupt the “mesmerising, playful interaction” and restore user awareness.

The research builds upon previous Penn State studies revealing similar patterns, highlighting a critical trade-off where enhanced user experience through interactivity simultaneously reduces attention to privacy risks.
