Interactive mobile applications and AI chatbots can dull users’ privacy concerns while increasing engagement, as heightened perceptions of playfulness open the door to the extraction of personal data.
Penn State researchers had 216 participants use simulated fitness applications to examine how interactivity affects privacy vigilance during registration, publishing their findings in the journal Behaviour & Information Technology.
The study examined two distinct types of interactivity: message interactivity, ranging from basic question-answer formats to sophisticated conversational exchanges that build on previous responses, and modality interactivity, involving visual engagement features such as image manipulation controls.
Professor S. Shyam Sundar, who leads Penn State’s Center for Socially Responsible Artificial Intelligence, warned that companies could exploit these behavioural patterns to extract private information without full user awareness.
“Interactivity does not make users pause and think, as we would expect, but rather makes them feel more immersed in the playful aspect of the app and be less concerned about privacy,” Sundar explained.
The research revealed that message interactivity, characteristic of how modern AI chatbots operate, was particularly effective at distracting users from the risks of disclosing information. This finding challenges the assumption that conversational interfaces heighten cognitive alertness to privacy concerns.
Lead researcher Jiaqi Agnes Bao, now at the University of South Dakota, noted that engaging AI conversations can make users forget to stay vigilant about the sensitive information they share.
The study suggests that design choices could balance engagement with privacy protection. The researchers found that combining both types of interactivity could prompt users to reflect, for example by inserting rating prompts during chatbot conversations to encourage consideration of information-sharing practices.
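As a rough illustration of what such a design might look like in practice, the sketch below shows a chat loop that interrupts the conversational (message-interactive) flow with a modality-style rating prompt every few turns. The study did not publish an implementation; every identifier here (ChatTurn, promptRating, REFLECTION_INTERVAL) is hypothetical.

```typescript
// Hypothetical sketch of interleaving a modality element (a rating
// pop-up) into an otherwise purely conversational chatbot loop.
// Nothing here reflects the researchers' actual materials.

interface ChatTurn {
  userMessage: string;
  botReply: string;
}

// Assumed cadence: pause the conversation every 5 exchanges.
const REFLECTION_INTERVAL = 5;

async function runChat(
  getUserMessage: () => Promise<string>,
  generateReply: (history: ChatTurn[], msg: string) => Promise<string>,
  promptRating: () => Promise<void> // modality element: pop-up with a rating control
): Promise<void> {
  const history: ChatTurn[] = [];
  while (true) {
    const userMessage = await getUserMessage();
    const botReply = await generateReply(history, userMessage);
    history.push({ userMessage, botReply });

    // Periodically break the conversational flow so the user pauses
    // and reflects on what they have been sharing.
    if (history.length % REFLECTION_INTERVAL === 0) {
      await promptRating();
    }
  }
}
```

Whether such an interruption restores vigilance or merely annoys users is an empirical question; the interval and the form of the prompt would need testing.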
Co-author Yongnam Jung emphasised that platforms bear responsibility beyond merely offering sharing options, stating that helping users make informed choices represents a responsible approach for building trust between platforms and users.
The findings hold particular relevance for generative AI platforms, which primarily rely on conversational message interactivity. Sundar suggested that inserting modality elements like pop-ups during conversations could interrupt the “mesmerising, playful interaction” and restore user awareness.
The research builds on previous Penn State studies that revealed similar patterns, underscoring a critical trade-off: the same interactivity that enhances the user experience also reduces attention to privacy risks.