Anthropic announced on Thursday the release of Claude Opus 4.7, an updated iteration of its Opus line that the company says brings tangible gains in software engineering and visual processing. The new build is positioned as an enhancement over Opus 4.6, with particular emphasis on handling more complex coding tasks and higher-resolution images.
According to Anthropic, Opus 4.7 can process images with a long edge of up to 2,576 pixels, more than triple the limit of earlier Claude models. The company says this enlarged image capacity supports a broader set of vision-related use cases and helps the model navigate coding and debugging scenarios that previously required tighter human oversight.
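For developers preparing inputs, the practical implication of the long-edge limit can be sketched as a simple pre-resize check. The 2,576-pixel figure comes from the article; the resizing logic itself is a generic illustration, not Anthropic's documented preprocessing.

```python
# Scale an image's dimensions so its longer side fits within the reported
# 2,576-pixel long-edge limit, preserving aspect ratio.
MAX_LONG_EDGE = 2576  # limit reported for Opus 4.7

def fit_to_limit(width: int, height: int) -> tuple[int, int]:
    """Return (width, height) scaled down, if needed, to the limit."""
    long_edge = max(width, height)
    if long_edge <= MAX_LONG_EDGE:
        return (width, height)  # already within the limit
    scale = MAX_LONG_EDGE / long_edge
    return (round(width * scale), round(height * scale))

print(fit_to_limit(4000, 3000))  # (2576, 1932)
```

A 4,000 × 3,000 photo, for example, would need to be scaled to 2,576 × 1,932 before its long edge fits the stated ceiling.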
Anthropic cautioned that, despite the improvements, Opus 4.7 does not match the capabilities of Claude Mythos Preview, which remains the firm’s most powerful model. Mythos Preview continues to have a limited release because of safety concerns described in Project Glasswing, which Anthropic announced last week.
Security and cyber use were explicit priorities for the Opus 4.7 release. The model includes automated safeguards that detect and block requests that indicate prohibited or high-risk cybersecurity activity. Anthropic also reports it reduced the model’s cyber capabilities during training relative to Mythos Preview. For legitimate cybersecurity work, the company has established a Cyber Verification Program to grant security professionals access.
Opus 4.7 is being distributed across Anthropic’s product suite and on third-party cloud platforms. The model is available through Claude products, the Claude API, Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry. Anthropic said pricing remains unchanged from Opus 4.6: $5 per million input tokens and $25 per million output tokens.
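At those rates, per-request cost is straightforward to estimate. The prices below are the published figures; the helper function itself is illustrative and not part of any Anthropic SDK.

```python
# Estimate request cost at the stated Opus 4.7 rates:
# $5 per million input tokens, $25 per million output tokens.
INPUT_PRICE_PER_M = 5.00    # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 25.00  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the stated rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: 12,000 input tokens and 2,000 output tokens.
print(round(request_cost(12_000, 2_000), 4))  # 0.11
```

So a mid-sized request with 12,000 input and 2,000 output tokens would cost about eleven cents.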
In benchmark testing, Opus 4.7 outperformed its predecessor on several measures, including finance agent evaluations and GDPval-AA, a metric designed to assess economically valuable knowledge work in finance and legal domains. Anthropic also reported improved instruction following, while noting that some users may need to adapt prompts that were written for earlier models.
The release adds new user controls and developer tools. A new effort level, labeled "xhigh," sits between the existing high and max settings to give users additional control over the trade-off between reasoning depth and response speed. For API users, Anthropic launched task budgets in public beta, and it added an ultrareview command in Claude Code intended to aid bug detection.
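A request using the new controls might look like the sketch below. The "xhigh" effort level and beta task budgets are described above, but the field names (`effort`, `task_budget_tokens`), model identifier, and payload shape are assumptions for illustration only; consult the Claude API documentation for the actual schema.

```python
# Hypothetical request payload sketching the "xhigh" effort level and a
# task budget. Field names and the model string are assumed, not documented.
request_body = {
    "model": "claude-opus-4-7",       # assumed model identifier
    "effort": "xhigh",                # new level between "high" and "max"
    "task_budget_tokens": 50_000,     # assumed beta task-budget field
    "messages": [
        {"role": "user", "content": "Review this module for bugs."}
    ],
}

# The new effort setting slots into the existing ladder of levels.
assert request_body["effort"] in ("high", "xhigh", "max")
```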
Opus 4.7 uses an updated tokenizer. Anthropic said the new tokenizer can produce between 1.0 and 1.35 times more tokens for the same input, depending on content type, which may affect token accounting and processing behavior.
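The practical effect of the tokenizer change can be bounded with simple arithmetic. The 1.0x–1.35x multiplier range comes from Anthropic's statement; the helper below is an illustrative sketch for budgeting, not an official tool.

```python
# Bound the new tokenizer's output for text that previously tokenized to
# `old_count` tokens, using the reported 1.0x-1.35x multiplier range.
LOW_MULT, HIGH_MULT = 1.00, 1.35  # reported range, varies by content type

def projected_token_range(old_count: int) -> tuple[int, int]:
    """Return (min, max) projected token counts under the new tokenizer."""
    return (round(old_count * LOW_MULT), round(old_count * HIGH_MULT))

print(projected_token_range(10_000))  # (10000, 13500)
```

A prompt that previously consumed 10,000 tokens could therefore count as anywhere from 10,000 to 13,500 tokens, which feeds directly into the per-token pricing above.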
Key points
- Opus 4.7 improves software engineering and vision capabilities and handles images up to 2,576 pixels on the long edge, more than three times the limit of prior Claude models.
- The model includes automated cybersecurity safeguards, reduced cyber capabilities in training versus Mythos Preview, and a Cyber Verification Program for legitimate security work.
- Availability spans Claude products, the Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry, with pricing unchanged from Opus 4.6.
Risks and uncertainties
- Mythos Preview remains in limited release because of safety concerns outlined in Project Glasswing, restricting immediate access to Anthropic’s most capable model, a consideration for research and advanced deployments.
- Changes in instruction following and the updated tokenizer may require prompt adjustments and could alter token counts for given inputs, affecting development workflows and token-based costs.
- Automated blocks on high-risk cybersecurity requests may restrict legitimate security testing unless users are approved through the Cyber Verification Program.