Industry Trends

Claude Opus 4.7 Released: What's New vs. Opus 4.6

Anthropic released Claude Opus 4.7 on April 16, 2026. On paper, the jump from Claude Opus 4.6 to 4.7 looks like a routine point upgrade. In practice, it is the most meaningful step forward the Opus line has taken in almost a year. It is also the first generally available Claude model to carry capability improvements learned from Anthropic's frontier model, Claude Mythos Preview.

If this sounds like another "the new model codes a little better" announcement, stick around. Claude Opus 4.7 changes what an AI model can reliably finish without a human looking over its shoulder. For anyone building with the Anthropic API, using Claude or Claude Code for real work, or deploying LLMs in customer experiences, this release is a genuine inflection point.

Here is every upgrade from Opus 4.6 to Opus 4.7, written in plain language, with ICX's take on what each change means for real-world work.

What Is New in Claude Opus 4.7

The headline specs first. Claude Opus 4.7 is a direct upgrade to Claude Opus 4.6. Pricing is identical: $5 per million input tokens and $25 per million output tokens. The model is live right now across every Claude product, the Claude API, Amazon Bedrock, Google Cloud's Vertex AI, and Microsoft Foundry. Developers can call it via the model ID claude-opus-4-7.
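With the published pricing, per-call cost is simple arithmetic. A minimal sketch: the request dict mirrors the general shape of an Anthropic Messages API call with the claude-opus-4-7 model ID from the release notes, and the cost helper is illustrative, not part of the SDK.

```python
# Published Opus 4.7 pricing (per the release notes above).
INPUT_PRICE_PER_MTOK = 5.00    # USD per million input tokens
OUTPUT_PRICE_PER_MTOK = 25.00  # USD per million output tokens

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of a single Opus 4.7 call from its token counts."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_MTOK \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MTOK

# The shape of a Messages-style request using the new model ID.
request = {
    "model": "claude-opus-4-7",  # model ID from the release notes
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Summarize this diff."}],
}
```

For example, a call consuming 200,000 input tokens and 20,000 output tokens works out to $1.00 + $0.50 = $1.50.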

Inside that familiar wrapper, several things have changed meaningfully.

Much Better at Hard Software Engineering

This is the headline for developers. Opus 4.7 handles the kind of coding work that previously required a human watching every few minutes. Early testers are reporting that they now hand off their hardest, longest, most complex tasks to Opus 4.7 with confidence. The model pays close attention to instructions, stays on long-running tasks without drifting, and builds in its own verification steps before reporting back.

That last part is what enterprise engineering leaders have been asking for, loudly, for the past twelve months. Opus 4.7 does not just produce output. It checks its own work before handing it over.

Sharper Vision and High-Resolution Image Support

Claude's multimodal image processing has more than tripled in resolution. Claude Opus 4.7 accepts images up to 2,576 pixels on the long edge, roughly 3.75 megapixels, where prior Claude models topped out at about a third of that.

In plain terms, this means Claude can now actually read a dense screenshot. Computer-use agents that inspect user interfaces, data extraction from complex diagrams, product shots with fine print, technical schematics, receipt scans: anything that depends on pixel-level detail becomes dramatically more usable. No API parameter change is required. Images simply get processed at higher fidelity automatically.
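Teams that pre-process images client-side can size against the new limit with a small helper. A sketch, assuming only the 2,576-pixel long-edge figure stated above; the function name and interface are illustrative.

```python
MAX_LONG_EDGE = 2576  # Opus 4.7's stated long-edge limit, in pixels

def fit_to_long_edge(width: int, height: int, limit: int = MAX_LONG_EDGE) -> tuple[int, int]:
    """Scale (width, height) down so the longer side fits the limit,
    preserving aspect ratio. Images already within the limit pass through."""
    long_edge = max(width, height)
    if long_edge <= limit:
        return width, height
    scale = limit / long_edge
    return round(width * scale), round(height * scale)
```

A 4000x3000 screenshot, for instance, would come down to 2576x1932, while a 1920x1080 capture needs no resizing at all.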

Better Taste for Professional Work

Anthropic describes Opus 4.7 as "more tasteful and creative" on professional tasks, which shows up in higher-quality interfaces, slide decks, and documents. It is the kind of improvement that is hard to put on a benchmark chart and very easy to notice the first time a marketing team, a product designer, or a consultant runs it on a real deliverable. Opus 4.7 has better judgment about when a layout looks cluttered, when a sentence hits the wrong tone, and when something just is not quite right yet.

Real Memory Across Long Tasks

File-system memory is the quiet feature that may have the largest long-term impact. Opus 4.7 keeps meaningful notes across long, multi-session work, remembers what mattered in session three when it starts session four, and uses those notes to skip context it already understands. For agentic AI workflows that run for hours or across multiple days, this is a step change in usability. It also reduces the up-front context teams have to feed the model at the start of every session.

State-of-the-Art on Finance and Knowledge Work

Opus 4.7 scored state-of-the-art on the Finance Agent evaluation and on GDPval-AA, a third-party test of economically valuable knowledge work across finance, legal, and other professional domains. In ICX's view, the GDPval-AA result is the more interesting of the two. It suggests Opus 4.7 is measurably better at the kind of white-collar work enterprises actually pay people to do, not just at isolated coding problems.

Why Claude Opus 4.7 Is Better Than Opus 4.6

If the question is "Should I care about the Opus 4.6 to Opus 4.7 jump?", the answer depends on what the work looks like. For casual chat use, the difference is subtle and Opus 4.6 is still excellent. For heavy lifting, the difference is real, and the migration is worth the effort.

New xhigh Effort Level and Upgraded Claude Code

Claude Opus 4.7 introduces a new effort setting called xhigh ("extra high"), sitting between the previous high and max options. It gives developers finer control over the tradeoff between reasoning depth and latency. Claude Code has raised its default effort level to xhigh across all plans.

Claude Code also shipped two other upgrades worth flagging. The new /ultrareview slash command kicks off a dedicated code review session that reads through recent changes and flags bugs, design issues, and the sort of subtle problems a careful human reviewer would catch. Pro and Max users get three free ultrareviews to try it out. Separately, auto mode, which lets Claude make permission decisions automatically so long-running tasks do not stall out on approvals, is now available to Max users.

Task Budgets and Better Cost Control

On the Claude Platform API, task budgets just launched in public beta. Task budgets let developers guide how the model spends its tokens across a long job, which is especially valuable for agentic workflows where token cost can balloon if a model wanders. Combined with the new effort parameter, this gives teams better cost predictability on long-running tasks than they had before this release.
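The exact API parameters for task budgets are not shown here, but the underlying idea can be sketched client-side: an agent loop that tracks cumulative token spend and stops before blowing past a cap. The TokenBudget class below is illustrative, not part of the Anthropic SDK; the platform feature enforces the equivalent server-side.

```python
class TokenBudget:
    """Client-side guard for long agent runs: stop before spend exceeds a cap.
    Illustrative only; the platform's task-budget feature does this server-side."""

    def __init__(self, max_total_tokens: int):
        self.max_total_tokens = max_total_tokens
        self.spent = 0

    def record(self, input_tokens: int, output_tokens: int) -> None:
        """Add one completed call's token usage to the running total."""
        self.spent += input_tokens + output_tokens

    def can_continue(self, next_call_estimate: int = 0) -> bool:
        """True if an estimated next call would still fit within the budget."""
        return self.spent + next_call_estimate <= self.max_total_tokens
```

In an agent loop, the orchestrator would call record() after each model turn and check can_continue() before launching the next one.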

Stricter Instruction Following (and Why It Matters)

One important catch. Claude Opus 4.7 follows instructions substantially more literally than prior Claude models. Prompts that worked fine on Opus 4.6 may produce unexpected results on 4.7, because the new model does what the prompt actually says rather than what an earlier model inferred the user probably wanted. Anthropic is explicitly recommending that teams re-tune their prompts and harnesses when upgrading.

For prompt engineering and prompt design teams, this is simultaneously a win and a project line item. Better instruction following is what prompt engineers have been asking for. It also means every existing prompt library needs a review after migration. ICX treats prompt tuning as first-class engineering work, and the Opus 4.7 release is a textbook reason why.

What Opus 4.7 Signals About Anthropic's Roadmap

Claude Opus 4.7 is not Anthropic's most capable model. That title belongs to Claude Mythos Preview, the frontier model announced earlier this year alongside Project Glasswing. Mythos is being released on a limited basis while Anthropic tests new safeguards on less capable models first.

Opus 4.7 is, in effect, the first model in the Opus line to carry capability improvements from what Anthropic learned while building Mythos. Its cybersecurity capabilities were intentionally scoped down during training and are paired with automatic safeguards that detect and block high-risk cybersecurity prompts. Security professionals with legitimate use cases (vulnerability research, penetration testing, and red-teaming) are invited to apply to the new Cyber Verification Program to unlock broader access.

For enterprise AI teams, agentic AI developers, and anyone building production systems on Claude, the direction of travel is now clear. Frontier capability is being layered down into generally available models as Anthropic gets comfortable with the safety posture. Opus 4.7 is the first layer. Future releases will likely pull more Mythos-class capability forward on a similar schedule. That is a big deal for planning any 2026 AI roadmap.

On the safety side, Anthropic's published evaluation describes Opus 4.7 as "largely well-aligned and trustworthy, though not fully ideal." It shows lower rates of concerning behaviors such as deception and sycophancy than prior models, and is meaningfully more resistant to prompt injection attacks. Mythos Preview remains the best-aligned model Anthropic has trained to date. Full details are in the Claude Opus 4.7 System Card.

Should You Upgrade to Claude Opus 4.7? A Practical Checklist

A few things to know before migrating production workloads.

Opus 4.7 uses an updated tokenizer that processes text more efficiently but maps the same input to more tokens, roughly 1.0 to 1.35 times what Opus 4.6 produced, depending on content type. The model also thinks more at higher effort levels, particularly on later turns of agentic tasks. The combined effect is more output tokens on the same workload.

Anthropic's internal coding evaluation shows Opus 4.7 using tokens more effectively than 4.6 at every effort level, meaning the quality-per-token ratio improved even where absolute token counts rise. Every workload is different, though. Measure on real traffic before committing.
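Before measuring on real traffic, the stated 1.0x to 1.35x tokenizer multiplier gives a quick back-of-envelope range. A sketch using only the figures above; the helper is illustrative.

```python
MULTIPLIER_RANGE = (1.0, 1.35)  # Opus 4.7 tokenizer vs. Opus 4.6, per the notes above

def projected_input_tokens(opus_4_6_tokens: int) -> tuple[int, int]:
    """Project the low/high input-token range on Opus 4.7 for a workload
    whose token count was measured on Opus 4.6."""
    lo, hi = MULTIPLIER_RANGE
    return round(opus_4_6_tokens * lo), round(opus_4_6_tokens * hi)
```

A workload measuring 100,000 input tokens on Opus 4.6 would project to somewhere between 100,000 and 135,000 on 4.7, depending on content type.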

The short version of an Opus 4.7 migration checklist for most teams:

  1. Run Opus 4.6 and Opus 4.7 side by side on representative production traffic. Measure quality, latency, and token cost.
  2. Re-tune prompts for Opus 4.7's stricter instruction following. Existing prompts may underperform until rewritten.
  3. Review effort levels. If current workloads use high on Opus 4.6, test both high and xhigh on 4.7 to find the new sweet spot.
  4. For agentic workflows, test the new task budgets feature to control token spend on long runs.
  5. For computer-use or image-heavy workflows, re-evaluate whether the new high-resolution image support opens up use cases that were previously not feasible.
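Step 1 of the checklist amounts to collecting the same measurements for both models and comparing summaries. A minimal harness sketch: the field names latency_s and total_tokens are illustrative, and quality scoring is workload-specific, so it is omitted here.

```python
from statistics import mean

def compare_models(runs_4_6: list[dict], runs_4_7: list[dict]) -> dict:
    """Summarize side-by-side runs of the two models over the same prompts.
    Each run dict carries the measured 'latency_s' and 'total_tokens'."""
    def summarize(runs: list[dict]) -> dict:
        return {
            "mean_latency_s": mean(r["latency_s"] for r in runs),
            "mean_total_tokens": mean(r["total_tokens"] for r in runs),
        }
    return {"opus_4_6": summarize(runs_4_6), "opus_4_7": summarize(runs_4_7)}
```

Feeding the harness a few dozen representative prompts per model gives the quality/latency/cost baseline the rest of the checklist depends on.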

Anthropic has published a formal migration guide, and the full safety evaluation is in the Claude Opus 4.7 System Card. Both are worth reading before any production rollout.

ICX works with enterprise teams on Claude integrations, prompt engineering, agentic workflow architecture, and the UX strategy that makes AI deployments actually work in customer-facing contexts. Claude Opus 4.7 shifts the calculus on what agentic AI can reliably do in production. Teams that have been holding back on agentic deployments because of reliability concerns should take another look this quarter.

To discuss a Claude Opus 4.7 deployment, prompt migration from Opus 4.6, or agentic architecture design, visit the services page, read the FAQ, browse the resources library, learn more about ICX, or book a free discovery call. For Christi's background and portfolio, visit christi.io.

AI Transparency Disclosure

This article was created with the assistance of AI technology (Anthropic Claude) and reviewed, edited, and approved by Christi Akinwumi, Founder of Intelligent CX Consulting. All insights, opinions, and strategic recommendations reflect ICX's professional expertise and real-world consulting experience.

ICX believes in radical transparency about AI usage. As an AI consulting firm, it would be contradictory to hide the tools that make this work possible. Anthropic's Transparency Framework advocates for clear disclosure of AI practices to build public trust and accountability. ICX applies this same standard to its own content. Read more about why AI transparency matters.

Have a project in mind?

Book a Call