Artificial intelligence systems pose worldwide threats to human dignity by potentially reducing people to mere data points, according to new research from Charles Darwin University examining the technology’s impact on legal and ethical frameworks.
Dr Maria Randazzo from CDU’s School of Law analysed how AI development has outpaced regulatory responses across Western jurisdictions, creating substantial risks to fundamental human rights, including privacy, anti-discrimination protections and intellectual property.
The study identifies what Randazzo terms the “black box problem”, where algorithmic decision-making processes remain opaque to users and regulators. This lack of transparency prevents individuals from understanding when AI systems violate their rights, or from seeking appropriate redress when they do.
“AI is not intelligent in any human sense at all. It is a triumph in engineering, not in cognitive behaviour,” Randazzo stated. “It has no clue what it’s doing or why – there’s no thought process as a human would understand it, just pattern recognition stripped of embodiment, memory, empathy, or wisdom.”
The research examines contrasting regulatory approaches across the major digital powers: the United States favours market-driven frameworks and China implements state-controlled models, whilst the European Union pursues human-centric regulation prioritising individual rights.
Protecting human dignity
Randazzo advocates the EU’s human-centred approach, suggesting it provides a blueprint for other Western democratic nations, including Australia. However, she warns that without a universal commitment to protecting human dignity, fragmented global responses will undermine regulatory effectiveness.
Randazzo argues that Western democracies require effective protective measures to shield human dignity from AI’s adverse impacts. She emphasises that current regulatory frameworks fail to keep pace with the technology’s unprecedented development speed or to address its systematic reinforcement of existing societal biases.
Randazzo warns that without comprehensive regulation incorporating transparency, accountability and human rights protections, AI developments risk amplifying negative impacts on ethical and societal values.
Her research advocates for digital constitutionalism approaches that balance technological advancement with fundamental rights protection, ensuring AI serves humanity rather than reducing people to data points for commercial exploitation.
The findings contribute to mounting academic pressure for holistic regulatory frameworks prioritising human welfare over technological progress or commercial interests.