A Defining Moment at the Intersection of AI and Cyber Security

Anthropic’s launch of Project Glasswing represents precisely the kind of development that the AI and Cyber Security Association (AICSA) was founded to examine, contextualise, and help our community understand. At the centre of the initiative is Claude Mythos Preview, Anthropic’s most advanced AI model to date, which has already identified thousands of previously unknown zero-day vulnerabilities across every major operating system and every major web browser. The coalition supporting the project includes Apple, Microsoft, Google, AWS, CrowdStrike, Palo Alto Networks, Cisco, Broadcom, NVIDIA, JPMorganChase, and the Linux Foundation.

From an AICSA perspective, this is not simply a technology announcement. It is a signal that the relationship between artificial intelligence and cyber security has entered a new and consequential phase. There is also a risk that the aggressive marketing around Project Glasswing, backed by a coalition of large organisations, will generate significant "fear of missing out" (FOMO) among smaller organisations and those unable to compete at that scale.

The Promise: AI as a Force Multiplier for Defenders

The results from Project Glasswing's early work are remarkable. Thousands of critical zero-day vulnerabilities, some of which had existed undetected for decades, have been surfaced by a single AI model in a matter of weeks. The oldest bug discovered had been sitting in OpenBSD, widely regarded as one of the most security-focused operating systems in existence, for 27 years.

This demonstrates something the AICSA has long advocated: that AI has the potential to fundamentally shift the balance between attackers and defenders. When applied responsibly and at scale, AI-driven vulnerability discovery can achieve what decades of human-led code review have not. It can find flaws that are too subtle, too deeply embedded, or simply too numerous for human analysts to catch within realistic timeframes.

The collaborative model behind Glasswing is equally noteworthy. By bringing together major technology companies, open-source foundations, and financial institutions under a shared defensive mission, Anthropic has created a framework for responsible AI deployment in cyber security that others would do well to study and replicate.

The Tension: Capability and Risk Are Two Sides of the Same Coin

The AICSA has always taken a balanced view of AI’s role in cyber security, and Project Glasswing illustrates exactly why that balance matters. The same capabilities that make Claude Mythos Preview so effective at finding and fixing vulnerabilities could, if they were to reach malicious actors, be equally effective at exploiting them. Anthropic have been commendably transparent about this, acknowledging that frontier AI capabilities are likely to advance substantially in the coming months and that the defensive window Glasswing provides is finite.

This is not a reason for alarm, but it is a reason for urgency. The AICSA believes that the cyber security community must engage seriously with the dual-use nature of frontier AI models. Governance frameworks, responsible disclosure practices, and access controls for the most capable models are not optional extras. They are essential components of any strategy that seeks to use AI for defensive advantage without simultaneously accelerating offensive capabilities.

The Gap That Still Needs Closing

While the AICSA welcomes the ambition and early results of Project Glasswing, we also recognise that AI-powered vulnerability discovery is only one part of a much larger picture. The organisations most at risk from the next generation of AI-enabled threats are not the enterprises with mature security operations centres and well-resourced patching pipelines. They are the organisations running legacy operational technology, unmanaged IoT devices, and clinical or industrial systems that were never designed to be networked.

For these environments, the fundamental challenge is not whether AI can find a vulnerability. It is whether the organisation knows the vulnerable asset exists in the first place. Asset visibility, network segmentation, and accurate inventories remain the bedrock upon which all other security capabilities, including AI-driven ones, must be built. Without that foundation, the intelligence generated by initiatives like Glasswing has nowhere to land.

What the AICSA Believes Should Come Next

Project Glasswing sets an important precedent, but it also raises questions that the industry needs to address collectively. The AICSA would like to see continued and expanded investment in open-source security, building on the Linux Foundation donations that Anthropic has made as part of this initiative. Open-source software underpins much of the world’s critical infrastructure, and its maintainers have historically lacked the resources and tooling that large enterprises take for granted.

We would also encourage the development of clear, sector-specific guidance on how organisations should prepare for the increase in vulnerability disclosures that AI-powered discovery will generate. Patching cycles, risk prioritisation frameworks, and incident response plans will all need to evolve to keep pace with the volume and speed of AI-driven findings.

Finally, the AICSA calls for a broader industry conversation about governance and access controls for frontier AI models with significant cyber security capabilities. The decision not to make Claude Mythos Preview generally available is the right one, but as models of comparable capability emerge from other providers, the industry will need shared standards and norms for managing access responsibly.

A Call to Engage

Project Glasswing is a genuinely significant development, and the AICSA commends Anthropic and its partners for taking a proactive, collaborative approach to AI-driven cyber defence. But the work does not stop here. The cyber security community, from enterprise CISOs to open-source maintainers, from policy makers to AI researchers, must engage with these developments thoughtfully, urgently, and with a clear eye on both the opportunities and the risks.

The AICSA will continue to provide analysis, commentary, and community discussion on AI’s evolving role in cyber security. To join the conversation, please contact us via hello@aisec.org.uk.