Claude Haiku 4.5
Photo credit: Anthropic

Anthropic has released Claude Haiku 4.5, its latest small language model, claiming performance comparable to that of Claude Sonnet 4, released five months ago, at one-third the cost and more than twice the speed.

The new model, available today to all users, delivers coding performance similar to what was recently considered state of the art, while offering greater cost-efficiency for real-time, low-latency use cases such as chat assistants, customer service agents, and pair programming.

Anthropic claimed Claude Haiku 4.5 surpasses Claude Sonnet 4 at certain tasks, including computer use, making applications such as Claude for Chrome faster and more useful. According to the company, users of Claude Code will find the model makes the coding experience, from multi-agent projects to rapid prototyping, markedly more responsive.

Claude Sonnet 4.5, released two weeks ago, remains Anthropic’s frontier model and is described by the company as the best coding model globally. Claude Haiku 4.5 provides users with a new option for near-frontier performance with substantially greater cost-efficiency, Anthropic stated.

The model also opens new possibilities for using Anthropic’s models together: Sonnet 4.5 can break complex problems down into multi-step plans, then orchestrate a team of Haiku 4.5 instances to complete the subtasks in parallel.
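In practice, that pattern maps onto a simple orchestrator/worker setup. The sketch below is illustrative only, not Anthropic’s reference implementation: it assumes the official anthropic Python SDK, an illustrative Sonnet model string and plan format, and uses the claude-haiku-4-5 identifier Anthropic publishes for the new model.

```python
# Illustrative sketch: a planner model drafts subtasks and several Haiku 4.5
# calls run them in parallel. The Sonnet model string and the plan format are
# assumptions; "claude-haiku-4-5" is the identifier from the announcement.
from concurrent.futures import ThreadPoolExecutor

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def plan_subtasks(goal: str) -> list[str]:
    """Ask the frontier model to split a goal into independent subtasks, one per line."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # assumed identifier for Sonnet 4.5
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": f"Break this goal into short, independent subtasks, one per line:\n{goal}",
        }],
    )
    lines = response.content[0].text.splitlines()
    return [line.strip("- ").strip() for line in lines if line.strip()]


def run_subtask(task: str) -> str:
    """Hand a single subtask to Haiku 4.5."""
    response = client.messages.create(
        model="claude-haiku-4-5",
        max_tokens=1024,
        messages=[{"role": "user", "content": task}],
    )
    return response.content[0].text


if __name__ == "__main__":
    subtasks = plan_subtasks("Add input validation and tests to the signup form")
    # Fan the subtasks out to parallel Haiku 4.5 workers.
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(run_subtask, subtasks))
    for task, result in zip(subtasks, results):
        print(f"== {task} ==\n{result}\n")
```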

Developers can access Claude Haiku 4.5 using the model string claude-haiku-4-5 via the Claude API. Pricing is set at $1 per million input tokens and $5 per million output tokens.
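As a rough illustration of that access path, the snippet below, which assumes the official anthropic Python SDK, makes a single request with the published model string and estimates the request’s cost from the token counts the API returns, at the listed rates; the prompt and the arithmetic are illustrative only.

```python
# Illustrative sketch: one request against the Claude API using the model
# string from the article, plus a rough cost estimate at the listed prices
# ($1 per million input tokens, $5 per million output tokens).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-haiku-4-5",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarise the trade-offs of caching API responses."}],
)

print(message.content[0].text)

# The API reports token usage on each response, so the request cost can be
# estimated directly from the published per-token rates.
cost = (message.usage.input_tokens * 1.00 + message.usage.output_tokens * 5.00) / 1_000_000
print(f"Approximate request cost: ${cost:.6f}")
```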

Anthropic conducted detailed safety and alignment evaluations on Claude Haiku 4.5. The model showed low rates of concerning behaviours and proved substantially more aligned than its predecessor, Claude Haiku 3.5. In automated alignment assessments, Claude Haiku 4.5 recorded an overall rate of misaligned behaviours that was lower, to a statistically significant degree, than that of both Claude Sonnet 4.5 and Claude Opus 4.1, making it Anthropic’s safest model by this metric, the company claimed.

Safety testing showed Claude Haiku 4.5 poses only limited risks around the production of chemical, biological, radiological and nuclear weapons. Anthropic has released the model under its AI Safety Level 2 (ASL-2) standard, compared with the more restrictive ASL-3 classification applied to Sonnet 4.5 and Opus 4.1.

The model’s efficiency enables users to accomplish more within usage limits while maintaining premium model performance. Claude Haiku 4.5 is available on Claude Code, Anthropic’s applications, the Claude API, Amazon Bedrock and Google Cloud’s Vertex AI, where it serves as a drop-in replacement for both Haiku 3.5 and Sonnet 4 at Anthropic’s most economical price point.
