University of Texas at Dallas computer science doctoral student Yili (Angel) Wen. Photo credit: University of Texas at Dallas

A new AI-assisted tool enables visually impaired programmers to create, edit, and verify complex 3D models independently, eliminating reliance on sighted colleagues for design tasks.

Researchers from the University of Texas at Dallas have developed “A11yShape,” a system that integrates with OpenSCAD, an open-source application in which 3D models are defined entirely in code. The tool addresses a critical barrier for the estimated 1.7 per cent of programmers who have visual impairments: screen readers let them read and write modelling code, but offer no way to verify the shapes that code produces.

The system uses GPT-4o to analyse rendered images of a 3D model captured from multiple angles, combining this visual information with the underlying code to generate detailed text descriptions. A11yShape tracks changes in real time, keeping the code, the descriptions, and the 3D renderings synchronised, whilst an integrated AI chatbot answers specific questions about the design.
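The paper does not publish A11yShape’s source, but the core round trip the article describes can be sketched in a few dozen lines. The sketch below is illustrative only, not the team’s implementation: it assumes the `openscad` command-line renderer and the official `openai` Python package are installed, and every file name, camera angle, and prompt in it is a placeholder rather than anything taken from the actual system.

```python
# Illustrative sketch (NOT A11yShape's code): render an OpenSCAD model from
# several camera angles, then ask GPT-4o to describe the geometry using both
# the rendered views and the source code, as the article describes.
import base64
import subprocess
import tempfile
from pathlib import Path

from openai import OpenAI

# A toy model standing in for the user's design.
SCAD_SOURCE = """
cylinder(h = 40, r = 5);          // rocket body
translate([0, 0, 40]) sphere(8);  // nose cone
"""

# Four viewpoints; OpenSCAD's --camera takes translate x,y,z / rotate x,y,z / distance.
CAMERAS = [
    "0,0,20,55,0,25,200",
    "0,0,20,55,0,115,200",
    "0,0,20,55,0,205,200",
    "0,0,20,55,0,295,200",
]

def render_views(scad_code: str) -> list[bytes]:
    """Render the model to a PNG from each camera angle via the openscad CLI."""
    images = []
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "model.scad"
        src.write_text(scad_code)
        for i, cam in enumerate(CAMERAS):
            out = Path(tmp) / f"view_{i}.png"
            subprocess.run(
                ["openscad", "-o", str(out), "--camera", cam,
                 "--imgsize", "512,512", str(src)],
                check=True,
            )
            images.append(out.read_bytes())
    return images

def describe_model(scad_code: str, images: list[bytes]) -> str:
    """Send the code plus the rendered views to GPT-4o for a text description."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    content = [{
        "type": "text",
        "text": "Describe this 3D model for a blind programmer. "
                "Relate each visible part to the OpenSCAD code below.\n\n"
                + scad_code,
    }]
    for img in images:
        b64 = base64.b64encode(img).decode()
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{b64}"},
        })
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": content}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(describe_model(SCAD_SOURCE, render_views(SCAD_SOURCE)))
```

A real tool would add the change tracking and the interactive chat loop the article mentions; the point here is only the basic pipeline from code, to rendered views, to an accessible text description.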

“This is a first step toward a goal of providing people with visual impairments with equal access to creative tools, including 3D modelling,” says Dr Liang He, assistant professor of computer science at the Erik Jonsson School of Engineering and Computer Science.

Struggles with assignments

The project was inspired by Dr He’s experience in graduate school, where he observed a blind classmate struggling with 3D modelling assignments.

“Every single time when he was working on his assignment, he had to ask someone to help him and verify the results,” says Dr He.

To validate the new technology, the team tested A11yShape with four visually impaired programmers, who successfully created and modified models of robots, a rocket, and a helicopter without assistance. Gene S-H Kim, a blind PhD student at MIT and co-author of the study, provided a user’s perspective throughout development.

“He used the first version of the system and gave us a lot of really good feedback, which helped us improve the system,” notes Dr He.

The research, presented at the ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2025), involved collaboration with the University of Washington, Purdue University, Stanford University, the University of Michigan, MIT, The Hong Kong University of Science and Technology, and Nvidia Corp.

The team now plans to expand the tool’s capabilities beyond on-screen modelling to physical fabrication.

“The next step is to try to support this process — this pipeline from 3D modelling to fabrication,” adds Dr He.
