
Researchers at Stanford Medicine have created CRISPR-GPT, an artificial intelligence tool designed to streamline gene-editing experiments and reduce the learning curve for scientists using CRISPR technology.

The AI system functions as a gene-editing assistant, helping researchers, even those without extensive gene-editing experience, design experiments, analyse data and troubleshoot potential issues.

Le Cong, assistant professor of pathology and genetics who led the development, aims to accelerate the production of lifesaving treatments through automated experimental design and refinement processes.

“The hope is that CRISPR-GPT will help us develop new drugs in months, instead of years,” said Cong. “In addition to helping students, trainees and scientists work together, having an AI agent that speeds up experiments could also eventually help save lives.”

The tool addresses traditional challenges in CRISPR training, which typically requires months of trial-and-error work for researchers to determine optimal DNA targeting strategies. CRISPR-GPT uses 11 years of published experimental data and expert discussions to predict successful approaches and identify potential off-target genetic effects.

A student in Cong’s laboratory successfully used the system to deactivate multiple genes in lung cancer cells on his first attempt, demonstrating the technology’s potential to flatten CRISPR’s steep learning curve.

The AI operates in three modes: beginner, expert and question-and-answer. Beginner mode provides detailed explanations alongside recommendations, whilst expert mode offers collaborative support for complex problems without additional context.
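The article does not describe CRISPR-GPT's internal design, but a minimal sketch can illustrate what a mode-aware assistant interface of this kind might look like. Every name below (Mode, build_prompt and the example request) is a hypothetical stand-in, not the tool's actual API.

```python
# Illustrative sketch only: CRISPR-GPT's real interface is not public in this
# article, so these names are hypothetical stand-ins showing how a mode-aware
# assistant might shape its instructions to an underlying language model.
from enum import Enum


class Mode(Enum):
    BEGINNER = "beginner"              # detailed explanations alongside recommendations
    EXPERT = "expert"                  # concise support without additional context
    QA = "question-and-answer"         # free-form follow-up questions


def build_prompt(mode: Mode, request: str) -> str:
    """Compose an instruction string that reflects the selected mode."""
    if mode is Mode.BEGINNER:
        guidance = ("Recommend an experimental design and explain each step, "
                    "including why the target site and delivery method were chosen.")
    elif mode is Mode.EXPERT:
        guidance = ("Give concise recommendations only; assume the user already "
                    "understands standard CRISPR workflows.")
    else:
        guidance = "Answer the user's question directly and invite follow-ups."
    return f"[{mode.value} mode] {guidance}\nUser request: {request}"


if __name__ == "__main__":
    # Example: a beginner asking for help knocking out a gene in a cancer cell line.
    print(build_prompt(Mode.BEGINNER, "Deactivate a target gene in lung cancer cells"))
```

In practice, the guidance text would be passed to a language model alongside the user's request; the sketch only shows how the three modes described above could change the framing of that request.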

Yilong Zhou, a visiting undergraduate from Tsinghua University, described his experience using the system to activate genes in melanoma cancer cells.

“I could simply ask questions when I didn’t understand something, and it would explain or adjust the design to help me understand,” Zhou said. “Using CRISPR-GPT felt less like a tool and more like an ever-available lab partner.”

The research team has incorporated safety measures to prevent misuse, including warnings for unethical applications such as virus or human embryo editing. Cong plans to engage government agencies to ensure responsible deployment.

The study was published in Nature Biomedical Engineering on 30th July, with lead authors Yuanhao Qu from Stanford and Kaixuan Huang from Princeton University.
