[Image: a peacock eating ice cream. Photo credit: Carnegie Mellon University]

Researchers at Carnegie Mellon University are developing artificial intelligence systems that can understand causality, enabling machines to identify why events occur rather than simply predicting what will happen based on historical patterns.

The interdisciplinary team, led by Kun Zhang and Peter Spirtes at CMU’s Department of Philosophy, aims to build AI that can infer causes from data, with potential applications spanning healthcare, education and scientific discovery.

Current AI systems make educated guesses using patterns from the past to predict likely outcomes, an approach that works for many everyday tasks but can lead to misdiagnoses, ineffective treatments or misguided policies in critical areas where understanding the underlying cause proves essential.
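The gap between correlation and causation can be made concrete with a toy simulation (a hypothetical illustration, not CMU's actual system): a hidden confounder Z drives both X and Y, so X predicts Y well even though X has no causal effect on Y, and a pattern-based predictor is badly wrong about what happens when X is forced to a value.

```python
import random

# Hypothetical setup: Z causes both X and Y; X does NOT cause Y.
random.seed(0)
n = 100_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.1) for zi in z]   # X is caused by Z
y = [zi + random.gauss(0, 0.1) for zi in z]   # Y is caused by Z, not by X

# Pattern-based prediction: among samples where X is high, Y is high too.
obs_mean = sum(yi for xi, yi in zip(x, y) if xi > 1) / sum(xi > 1 for xi in x)

# Intervention do(X := 2): forcing X leaves Y's mechanism untouched,
# because the apparent X-Y link ran entirely through Z.
y_do = [zi + random.gauss(0, 0.1) for zi in z]  # Y still depends only on Z
do_mean = sum(y_do) / n

print(round(obs_mean, 2))  # well above 1: conditioning on high X selects high Z
print(round(do_mean, 2))   # near 0: intervening on X does nothing to Y
```

A model trained only on the observational data would predict a large change in Y after intervening on X; the causal model correctly predicts none.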

The CMU-CLeaR Group is tackling diverse challenges to achieve causal understanding. Graduate students Shaoan Xie and Lingjing Kong are working to make AI models more efficient and precise using data-defined causal relationships, particularly in computer vision and generative technologies.

Kong explained the challenge of teaching AI to combine unfamiliar concepts: “If you request that a model generate an image of a peacock eating ice cream, this combination of concepts might never have shown up in the training data. The model might struggle to generate this.” She noted that humans can naturally make such compositions, suggesting an underlying data structure that enables this ability.

By applying principles of causality, researchers can teach models to separate concepts or visual components from one another, resulting in more realistic outputs and more precise edits.

The approach extends beyond generative AI. Haoyue Dai, another CLeaR Group member, explored how causal discovery in genetic datasets can help treat disease. Humans possess approximately 20,000 genes that can affect each other, and discovering these relations enables interventions to change gene expression levels when specific combinations cause diseases like cancer.

The CLeaR group’s approach avoids mistaking correlation for causation by tapping into large datasets where cause-and-effect relationships have already been identified. With that foundation, models can simulate the removal of specific genes and predict the outcomes with high accuracy.
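The idea of simulating a gene's removal can be sketched with a deliberately tiny model (an assumed three-gene chain for illustration, not the CLeaR group's actual model): knocking out a gene fixes its expression to zero, and the known causal graph propagates the effect downstream.

```python
import random

random.seed(1)

def mean_expression(knockout=None, n=50_000):
    """Average expression levels under the toy causal chain g1 -> g2 -> g3,
    optionally with one gene knocked out (its level forced to 0)."""
    totals = [0.0, 0.0, 0.0]
    for _ in range(n):
        g1 = random.gauss(1.0, 0.2)
        if knockout == "g1": g1 = 0.0
        g2 = 0.8 * g1 + random.gauss(0, 0.1)   # g2 driven by g1
        if knockout == "g2": g2 = 0.0
        g3 = 1.5 * g2 + random.gauss(0, 0.1)   # g3 driven by g2
        if knockout == "g3": g3 = 0.0
        for i, g in enumerate((g1, g2, g3)):
            totals[i] += g
    return [t / n for t in totals]

baseline = mean_expression()
ko = mean_expression(knockout="g2")
print(baseline)  # roughly [1.0, 0.8, 1.2]
print(ko)        # g2 forced to 0 drags g3 down; g1 (upstream) is unaffected
```

The point of the knockout simulation is that the model answers an interventional question: which downstream expression levels change, and which upstream ones do not, when a specific gene is silenced.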

Graduate students in the CLeaR group earned top prize in a 2023 competition that asked teams to mine massive education datasets for causal relationships that could help students navigate curricula more effectively. The CMU team’s approach identified where students might need targeted support or guidance.

Zhang, a professor in CMU’s Department of Philosophy, outlined his vision: “My dream is to develop an automated platform for scientific discovery which can take all of the observed data or metadata as input, and output plausible hypotheses: what entities should exist, what they look like, how to measure them and how to manipulate them.”

The World Economic Forum has called causal AI “the future of enterprise decision-making,” suggesting it can bring AI closer to humanlike but more powerful decision-making and artificial general intelligence.

Carnegie Mellon’s work builds on a legacy dating to the 1970s when researchers explored combining computer systems with human thought through projects like ACT-R, a cognitive architecture initiative.

