OpenAI locks GPT-5.5-Cyber behind velvet rope despite slamming Anthropic for doing exactly that
OpenAI is launching a restricted release of its new GPT-5.5-Cyber model for a select group of 'cyber defenders,' despite previously criticizing Anthropic for similar access controls. The move comes shortly after CEO Sam Altman publicly questioned the motives behind limiting AI model access, accusing rivals of profiting under the guise of safety. OpenAI claims the model will help proactively identify security vulnerabilities and strengthen critical infrastructure.
- OpenAI is releasing GPT-5.5-Cyber to a limited group of trusted cyber defenders rather than making it publicly available.
- CEO Sam Altman previously criticized Anthropic for restricting access to its Claude Mythos model, calling it fear-based marketing.
- GPT-5.5-Cyber is designed to perform penetration testing, detect bugs, exploit vulnerabilities, and analyze malware.
- Altman stated that OpenAI will collaborate with governments and the broader ecosystem to determine trusted access for cybersecurity applications.
- Anthropic released its cyber-focused model Claude Mythos to about 50 organizations under strict controls, prompting OpenAI's initial criticism.
Opening excerpt (first ~120 words)
Altman's crew now doing the same gatekeeping it recently mocked

Carly Page, Fri 1 May 2026 // 11:42 UTC

OpenAI is lining up a limited release of its new GPT-5.5-Cyber model to a handpicked circle of "cyber defenders," just weeks after taking a swipe at Anthropic for doing almost exactly the same thing. CEO Sam Altman said in a post on X that the rollout will begin "in the next few days," with access restricted to a group he described as trusted defenders working to secure critical systems.
…
Excerpt limited to ~120 words for fair-use compliance. The full article is at The Register.