OpenAI is developing a cybersecurity model and plans a limited release, mirroring Anthropic's approach with its Mythos model. Both companies are concerned that advanced AI could be misused for hacking and cyberattacks. Anthropic's Mythos, currently in a preview phase, is accessible only to a select group of cybersecurity and technology companies. OpenAI runs a similar pilot program, "Trusted Access for Cyber," which gives a select group of participants access to its most cyber-capable model, GPT-5.3-Codex.

The move to restrict access stems from security experts' concerns about AI's ability to autonomously disrupt critical infrastructure. Experts believe AI models can now perform actions that previously only humans could, and these models can already identify vulnerabilities even under restricted access.

The restricted releases echo the responsible-disclosure process cybersecurity vendors follow when reporting software vulnerabilities. However, similar capabilities will likely appear soon in other, widely available AI models. Anthropic may release other Mythos models later, with appropriate safeguards in place.
axios.com
