OpenAI has cautioned that as its AI systems grow more capable in cybersecurity, the company expects future models to carry a “high” risk classification in cybersecurity contexts, according to a recent blog post. The firm says these models could eventually create real, working zero-day exploits against well-defended systems, or help coordinate sophisticated enterprise-level intrusion operations with real-world impact, Cybernews reports.
To address these risks, OpenAI says it is ramping up investment in defensive tools and safeguards. This includes building features that help security teams audit code and patch vulnerabilities more efficiently, as well as strengthening internal protections such as access controls, hardened infrastructure, egress monitoring, and ongoing threat detection.