Tilly Norwood. Photo credit: YouTube

Equity is planning coordinated subject access requests from thousands of members to compel tech companies and producers to disclose whether they have used performers’ data without consent in AI-generated content, escalating its response to the launch of synthetic actors.

The performing arts union said growing numbers of its 50,000 members have complained about infringements of copyright and misuse of personal data in AI material, with most complaints concerning AI-generated voice replicas. General secretary Paul W Fleming said the union would coordinate mass data requests to force companies resistant to transparency into collective rights agreements, reports The Guardian.

Fleming said Equity has already helped members make subject access requests to producers and tech companies that failed to provide satisfactory explanations about data sources for AI content. “AI companies need to know that we will be putting in these subject access requests en masse,” he said. “They have a statutory obligation to respond. If an individual member reasonably believes that their data is being used without their consent, we want to find out.”

The union last week confirmed support for Scottish actor Briony Monroe, 28, from East Renfrewshire, who believes her image was used to create AI actor Tilly Norwood. The digital character, launched at the Zurich Film Festival at the end of September by AI talent studio Xicoia, has prompted condemnation from both Equity and American union SAG-AFTRA, which represents 160,000 entertainment professionals.

In a statement, Equity said the creative process is a human prerogative and that generative AI must remain a tool to empower human creators rather than replace them. The union called for an end to the “Wild West” of AI development, stating that AI-generated performances are made by digitally imitating real work made by real people, with such training often done without performers’ permission.

SAG-AFTRA described Norwood as a character generated by a computer programme trained on the work of countless professional performers without permission or compensation, warning that it “doesn’t solve any ‘problem’ — it creates the problem of using stolen performances to put actors out of work, jeopardising performer livelihoods and devaluing human artistry.”

Actors including Emily Blunt and Melissa Barrera criticised the development, with Blunt calling it “really, really scary” and urging agencies to stop taking away human connection. Particle6, which launched Xicoia, denied Monroe’s claims and said Norwood was developed entirely from scratch using original creative design.

Under data protection law, individuals can request all information an organisation holds about them, with organisations normally required to respond within one month. Fleming said significant numbers of members making subject access requests would create a “hassle” for firms unwilling to negotiate, whilst companies that received earlier requests became willing to discuss compensation and usage.

Liam Budd, industrial official for recorded media at Equity UK, said the union was taking Monroe’s concerns seriously whilst noting that Norwood represented a new challenge as “we haven’t really seen the launch of a wholly synthetic actor” before. Voice replication technology has become more common because it requires fewer recordings to create digital replicas.

Equity UK has negotiated with UK production trade body Pact about AI, copyright and data protection for more than a year, demanding minimum standards for AI use across film and television whilst lobbying the UK government to strengthen performers’ rights. Fleming said producers privately admit using AI ethically is impossible because training data provenance remains unclear, with data typically used outside existing copyright and data protection frameworks.

Max Rumney, deputy chief executive of Pact, said members needed to use AI technology or face commercial disadvantage, but noted that tech companies provide no transparency on what content or data trained foundation models. “The foundational models have been trained without permission on the films and programmes of our members,” he said.
