Economy April 23, 2026 02:12 PM

OpenAI debuts GPT-5.5 with stronger coding, research and operational capabilities

New release promises higher benchmark performance, larger context window and tiered API pricing for professional users

By Priya Menon

OpenAI on Thursday released GPT-5.5, making the model available to Plus, Pro, Business and Enterprise customers via ChatGPT and Codex. The update delivers higher scores across several coding, research and professional benchmarks, retains the per-token latency of the previous iteration while using fewer tokens for comparable tasks, and introduces a 1-million token context window. OpenAI also published API pricing and described safety testing and controlled access measures for sensitive capabilities.

Key Points

  • GPT-5.5 is now available to Plus, Pro, Business and Enterprise users via ChatGPT and Codex, with a 1-million token context window - impacts cloud providers, enterprise software and developer tooling.
  • Benchmark gains include 82.7% on Terminal-Bench 2.0, 58.6% on SWE-Bench Pro and 73.1% on Expert-SWE; professional and operational benchmarks also posted strong scores (84.9% on GDPval, 78.7% on OSWorld-Verified) - relevant for software development, knowledge-work automation and IT operations.
  • Model development used NVIDIA GB200 and GB300 NVL72 systems; pricing tiers announced for API access may influence adoption across startups, enterprises and cloud infrastructure suppliers.

OpenAI announced Thursday that GPT-5.5 is now available to Plus, Pro, Business and Enterprise subscribers through the ChatGPT and Codex platforms. The company provided benchmark results and technical details that frame the model as optimized for coding, research and knowledge-work scenarios.


Benchmark performance and developer metrics

OpenAI reported that GPT-5.5 reached 82.7% accuracy on Terminal-Bench 2.0, a benchmark designed to evaluate command-line workflows. On SWE-Bench Pro, which measures GitHub issue resolution, the model scored 58.6%. For Expert-SWE - the company’s internal coding evaluation for tasks estimated to take about 20 hours - GPT-5.5 achieved a score of 73.1%.

The company also highlighted strong results on tests oriented to professional and operational tasks. GPT-5.5 scored 84.9% on GDPval, a benchmark spanning knowledge work across 44 occupations, and 78.7% on OSWorld-Verified, which evaluates capabilities for operating within computer environments.


Scientific and bioinformatics results

OpenAI stated the model shows improved performance in scientific research applications. GPT-5.5 obtained 25.0% on GeneBench, up from GPT-5.4’s 19.0% on the same test, and scored 80.5% on BixBench for bioinformatics analysis.


Latency, token efficiency and context

The release notes indicate GPT-5.5 retains the same per-token latency as GPT-5.4 while using fewer tokens to complete comparable tasks. The model supports a 1-million token context window, expanding the span of information it can consider within a single interaction.
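The efficiency claim can be illustrated with a back-of-the-envelope model: if per-token latency is unchanged, end-to-end generation time scales with the number of output tokens, so a model that completes the same task in fewer tokens finishes proportionally faster. A minimal sketch follows; the per-token latency and token counts are hypothetical illustrations, not figures from the release:

```python
def generation_time_s(output_tokens: int, seconds_per_token: float) -> float:
    """End-to-end decode time when per-token latency is fixed."""
    return output_tokens * seconds_per_token

# Hypothetical: same per-token latency for both models, per the release's claim.
SECONDS_PER_TOKEN = 0.02

old = generation_time_s(1_000, SECONDS_PER_TOKEN)  # prior model: 1,000 output tokens
new = generation_time_s(800, SECONDS_PER_TOKEN)    # newer model: 20% fewer tokens

speedup = old / new  # 1.25x faster on the same task, purely from token efficiency
```

Under these assumed numbers, a 20% reduction in tokens yields a 1.25x end-to-end speedup (and a matching reduction in output-token cost), even with identical hardware latency.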


Infrastructure and development

OpenAI said GPT-5.5 was developed using NVIDIA GB200 and GB300 NVL72 systems. The company emphasized the hardware used in training but did not provide additional infrastructure metrics in the release material.


Safety, access and specialized programs

Under its Preparedness Framework, OpenAI rated GPT-5.5’s biological and cybersecurity capabilities as "High." The company indicated it conducted safety evaluations that included external testing and feedback from approximately 200 early-access partners prior to the public release. For verified security professionals, OpenAI is offering specialized access through a "Trusted Access for Cyber" program.


Pricing and tiers

OpenAI published API pricing for GPT-5.5. The standard API rate is $5 per million input tokens and $30 per million output tokens. A Pro tier for GPT-5.5 is priced at $30 per million input tokens and $180 per million output tokens. The announced context window for the model is 1 million tokens.
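The published rates make per-request costs straightforward to estimate: total cost is input tokens times the input rate plus output tokens times the output rate, each divided by one million. A minimal sketch using the announced rates; the request sizes in the example are hypothetical, not figures from the release:

```python
# Per-million-token rates from OpenAI's GPT-5.5 pricing announcement.
RATES_PER_MILLION = {
    "standard": {"input": 5.00, "output": 30.00},
    "pro": {"input": 30.00, "output": 180.00},
}

def request_cost(tier: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the given tier's rates."""
    rates = RATES_PER_MILLION[tier]
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000

# Hypothetical example: a 100k-token prompt producing a 5k-token reply.
standard = request_cost("standard", 100_000, 5_000)  # $0.65
pro = request_cost("pro", 100_000, 5_000)            # $3.90
```

At these rates the Pro tier costs six times the standard tier for an identically shaped request, which is the gap the Risks section flags for smaller developers and research teams.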


Third-party evaluation and rollout

OpenAI did not publish the identities of the roughly 200 early-access partners who took part in pre-release testing, nor the specific scope of that external evaluation, beyond stating that their feedback informed the safety review ahead of general availability.



Risks

  • OpenAI’s own classification of the model’s biological and cybersecurity capabilities as "High" and the decision to offer specialized "Trusted Access for Cyber" suggest potential risks around misuse or the need for controlled access - relevant to cybersecurity and bioinformatics sectors.
  • Safety evaluations relied on external testing and feedback from roughly 200 early-access partners; the company did not disclose further details, leaving uncertainty about the comprehensiveness of pre-release validation - relevant to regulators, enterprise adopters and risk teams.
  • Tiered API pricing and the high cost of Pro output tokens may limit access for smaller developers or research teams, potentially affecting adoption patterns in startups and academic research.
