OpenAI has launched GPT-5.4-Cyber, a new cybersecurity-focused model, expanding access to advanced tools for vetted defenders while tightly controlling how they are used.
GPT-5.4-Cyber is a version of OpenAI’s latest model adjusted for defensive security work and will not be widely available at launch. Instead, OpenAI is giving early access to selected security firms, organisations and researchers.
The release follows Anthropic’s recent launch of its own frontier model, Mythos. That system is being tested under a restricted programme known as Project Glasswing, where only approved groups can use it for cybersecurity tasks.
According to Anthropic, the model has already identified thousands of serious weaknesses across software systems.
OpenAI is taking a similar route, but with plans for a wider rollout over time. The company is expanding its Trusted Access for Cyber programme, which it introduced earlier this year. The scheme verifies users before granting them access to more capable tools.
Under the updated structure, more individuals and teams will be admitted, but access depends on how much information they provide to confirm their identity and role. Those in the highest tier will be allowed to use GPT-5.4-Cyber.
The company said the model has fewer restrictions when handling sensitive tasks such as vulnerability research and code analysis. It is designed to support security professionals who need to examine software more deeply, including analysing compiled programs without access to their source code.
At the same time, OpenAI is carefully monitoring how the system is used. Because the model allows more freedom, the company is limiting its release and adjusting safeguards as it learns from real-world use.
Tools like GPT-5.4-Cyber can be used for both defence and attack. OpenAI acknowledged that risk, noting that threat actors are already experimenting with artificial intelligence to find new ways into systems.
To manage that, the company said access will depend not just on the model itself, but on who is using it and for what purpose. Strong identity checks and clearer signals of intent are being built into the process.
The aim is to make security tools more widely available without opening the door to misuse. OpenAI said it does not want to decide centrally who gets to defend systems, but it still needs controls that can scale.
This latest release builds on earlier initiatives, including its cybersecurity grant programme and tools designed to scan and fix software vulnerabilities. The company said these systems have already helped address thousands of high-risk issues.
OpenAI expects both risks and benefits to grow, saying future models will likely require stronger protection, even as they provide more advanced support for those working to protect digital infrastructure.