Image: person seated in a library setting. Credit: Dr Maria Randazzo/CDU

Artificial intelligence systems pose worldwide threats to human dignity by potentially reducing people to mere data points, according to new research from Charles Darwin University examining the technology’s impact on legal and ethical frameworks.

Dr Maria Randazzo from CDU’s School of Law analysed how AI development has outpaced regulatory responses across Western jurisdictions, creating substantial risks to fundamental human rights, including privacy, anti-discrimination protections and intellectual property.

The study identifies what Randazzo terms the “black box problem”, where algorithmic decision-making processes remain opaque to users and regulators. This lack of transparency prevents individuals from knowing when AI systems violate their rights and from seeking appropriate redress.

“AI is not intelligent in any human sense at all. It is a triumph in engineering, not in cognitive behaviour,” Randazzo stated. “It has no clue what it’s doing or why – there’s no thought process as a human would understand it, just pattern recognition stripped of embodiment, memory, empathy, or wisdom.”

The research examines contrasting regulatory approaches across the major digital powers: the United States favours market-driven frameworks, China implements state-controlled models, whilst the European Union pursues human-centric regulation that prioritises individual rights.

Protecting human dignity

Randazzo advocates for the EU’s human-centred approach, suggesting it provides a blueprint for other Western democratic nations, including Australia. However, she warns that fragmented global responses undermine effectiveness unless there is a universal commitment to protecting human dignity.

The academic argues that Western democracies require effective protective measures to shield human dignity from AI’s adverse impacts. She emphasises that current regulatory frameworks fail to address the technology’s unprecedented development speed and systematic reinforcement of existing societal biases.

Randazzo warns that without comprehensive regulation incorporating transparency, accountability and human rights protections, AI developments risk amplifying negative impacts on ethical and societal values.

Her research advocates for digital constitutionalism approaches that balance technological advancement with fundamental rights protection, ensuring AI serves humanity rather than reducing people to data points for commercial exploitation.

The findings contribute to mounting academic pressure for holistic regulatory frameworks prioritising human welfare over technological progress or commercial interests.
