Image credit: Mohamed Hassan/Pixabay

Interactive mobile applications and AI chatbots can diminish users’ privacy concerns while increasing engagement through enhanced perceptions of playfulness, creating potential vulnerabilities for the extraction of personal data.

Penn State researchers studied 216 participants using simulated fitness applications to examine how interactivity affects privacy vigilance during registration, with the findings published in the journal Behaviour & Information Technology.

The study examined two distinct types of interactivity: message interactivity, ranging from basic question-answer formats to sophisticated conversational exchanges that build on previous responses, and modality interactivity, involving visual engagement features such as image manipulation controls.

Professor S. Shyam Sundar, who leads Penn State’s Center for Socially Responsible Artificial Intelligence, warned that companies could exploit these behavioural patterns to extract private information without full user awareness.

“Interactivity does not make users pause and think, as we would expect, but rather makes them feel more immersed in the playful aspect of the app and be less concerned about privacy,” Sundar explained.

The research revealed that message interactivity, characteristic of how modern AI chatbots operate, was especially effective at distracting users from considering the risks of disclosing information. This finding challenges the assumption that conversational interfaces make users more cognitively alert to privacy concerns.

Lead researcher Jiaqi Agnes Bao, now at the University of South Dakota, noted that engaging AI conversations lead users to forget the need to stay vigilant about the sensitive information they share.

The study suggests design solutions could balance engagement with privacy protection. Researchers found that combining both types of interactivity could prompt user reflection, such as implementing rating prompts during chatbot conversations to encourage consideration of information-sharing practices.
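As a concrete illustration of that suggestion, the sketch below shows a hypothetical console chatbot, not the researchers’ implementation, that pauses the dialogue every few turns and asks the user to rate their comfort with what they have disclosed so far. The interval, wording, and function names are assumptions for demonstration only.

```python
# A minimal sketch (illustrative, not from the study) of combining
# message interactivity with a reflective element: every few
# conversational turns, the chatbot pauses the dialogue and asks the
# user to rate their comfort with the details shared so far.

REFLECTION_INTERVAL = 3  # assumed: prompt every three exchanges


def chat_loop(get_bot_reply):
    """Run a console chat that periodically inserts a privacy-rating prompt.

    `get_bot_reply` is an assumed callable mapping user text to a reply.
    """
    turn = 0
    while True:
        user_text = input("You: ")
        if user_text.lower() in {"quit", "exit"}:
            break
        print("Bot:", get_bot_reply(user_text))
        turn += 1
        if turn % REFLECTION_INTERVAL == 0:
            # The reflective prompt: a rating scale that breaks the
            # conversational flow and nudges the user to pause.
            rating = input(
                "Pause: on a scale of 1-5, how comfortable are you "
                "with the personal details you've shared so far? "
            )
            if rating.strip() in {"1", "2"}:
                print("Bot: You can skip any question or review "
                      "what you've shared before continuing.")


if __name__ == "__main__":
    # Toy reply function standing in for a real chatbot backend.
    chat_loop(lambda text: "Interesting! Tell me more about that.")
```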

Co-author Yongnam Jung emphasised that platforms bear responsibility beyond merely offering sharing options, stating that helping users make informed choices is a responsible way to build trust between platforms and users.

The findings hold particular relevance for generative AI platforms, which primarily rely on conversational message interactivity. Sundar suggested that inserting modality elements like pop-ups during conversations could interrupt the “mesmerising, playful interaction” and restore user awareness.
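One way to read that suggestion in code: the minimal sketch below flags questions that appear to elicit sensitive data and prints a plain, non-conversational notice before asking them. The keyword heuristic and function names are illustrative assumptions, not part of the study.

```python
# A minimal sketch, under an assumed keyword heuristic, of inserting an
# interruption (a pop-up-style notice) before a conversational prompt
# elicits sensitive data. The cue list and names are illustrative.

SENSITIVE_CUES = ("weight", "health", "address", "income", "medication")


def needs_privacy_notice(bot_question: str) -> bool:
    """Heuristically flag questions likely to elicit sensitive data."""
    q = bot_question.lower()
    return any(cue in q for cue in SENSITIVE_CUES)


def ask(bot_question: str) -> str:
    """Show a disclosure notice before sensitive questions, then ask."""
    if needs_privacy_notice(bot_question):
        # The interruption: plain, non-conversational text that breaks
        # the "mesmerising, playful" flow before data is disclosed.
        print("[Notice] The next question asks for personal data. "
              "Answering is optional; type 'skip' to move on.")
    answer = input(bot_question + " ")
    return "" if answer.strip().lower() == "skip" else answer


if __name__ == "__main__":
    profile = {
        "name": ask("What's your first name?"),
        "weight": ask("What's your current weight?"),
    }
    print("Collected:", {k: v for k, v in profile.items() if v})
```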

The research builds upon previous Penn State studies revealing similar patterns, highlighting a critical trade-off where enhanced user experience through interactivity simultaneously reduces attention to privacy risks.
