
Artificial intelligence systems pose worldwide threats to human dignity by potentially reducing people to mere data points, according to new research from Charles Darwin University examining the technology’s impact on legal and ethical frameworks.

Dr Maria Randazzo from CDU’s School of Law analysed how AI development has outpaced regulatory responses across Western jurisdictions, creating substantial risks to fundamental human rights, including privacy, anti-discrimination protections and intellectual property.

The study identifies what Randazzo termed the “black box problem”, where algorithmic decision-making processes remain opaque to users and regulators. This lack of transparency prevents individuals from understanding when AI systems violate their rights, and from seeking appropriate redress when they do.

“AI is not intelligent in any human sense at all. It is a triumph in engineering, not in cognitive behaviour,” Randazzo stated. “It has no clue what it’s doing or why – there’s no thought process as a human would understand it, just pattern recognition stripped of embodiment, memory, empathy, or wisdom.”

The research examines contrasting regulatory approaches across major digital powers. The United States favours market-driven frameworks, China implements state-controlled models, whilst the European Union pursues human-centric regulation prioritising individual rights.

Protecting human dignity

Randazzo advocates for the EU’s human-centred approach, suggesting it provides a blueprint for other Western democratic nations, including Australia. However, she warns that fragmented global responses undermine effectiveness unless there is a universal commitment to protecting human dignity.

The academic argues that Western democracies require effective protective measures to shield human dignity from AI’s adverse impacts. She emphasises that current regulatory frameworks fail to address the technology’s unprecedented development speed and systematic reinforcement of existing societal biases.

Randazzo warns that without comprehensive regulation incorporating transparency, accountability and human rights protections, AI developments risk amplifying negative impacts on ethical and societal values.

Her research advocates for digital constitutionalism approaches that balance technological advancement with fundamental rights protection, ensuring AI serves humanity rather than reducing people to data points for commercial exploitation.

The findings contribute to mounting academic pressure for holistic regulatory frameworks prioritising human welfare over technological progress or commercial interests.

