Dr Maria Randazzo/CDU

Artificial intelligence systems pose worldwide threats to human dignity by potentially reducing people to mere data points, according to new research from Charles Darwin University examining the technology’s impact on legal and ethical frameworks.

Dr Maria Randazzo from CDU’s School of Law analysed how AI development has outpaced regulatory responses across Western jurisdictions, creating substantial risks to fundamental human rights, including privacy, anti-discrimination protections and intellectual property.

The study identifies what Randazzo terms the “black box problem”, where algorithmic decision-making processes remain opaque to users and regulators. This lack of transparency prevents individuals from understanding when AI systems violate their rights, and from seeking appropriate redress.

“AI is not intelligent in any human sense at all. It is a triumph in engineering, not in cognitive behaviour,” Randazzo stated. “It has no clue what it’s doing or why – there’s no thought process as a human would understand it, just pattern recognition stripped of embodiment, memory, empathy, or wisdom.”

The research examines contrasting regulatory approaches across the major digital powers: the United States favours market-driven frameworks, China implements state-controlled models, whilst the European Union pursues human-centric regulation prioritising individual rights.

Protecting human dignity

Randazzo advocates for the EU’s human-centred approach, suggesting it provides a blueprint for other Western democratic nations, including Australia. However, she warns that fragmented global responses undermine effectiveness unless there is a universal commitment to protecting human dignity.

The academic argues that Western democracies require effective protective measures to shield human dignity from AI’s adverse impacts. She emphasises that current regulatory frameworks fail to address the technology’s unprecedented development speed and systematic reinforcement of existing societal biases.

Randazzo warns that without comprehensive regulation incorporating transparency, accountability and human rights protections, AI developments risk amplifying negative impacts on ethical and societal values.

Her research advocates for digital constitutionalism approaches that balance technological advancement with fundamental rights protection, ensuring AI serves humanity rather than reducing people to data points for commercial exploitation.

The findings contribute to mounting academic pressure for holistic regulatory frameworks prioritising human welfare over technological progress or commercial interests.

