Canada has spent the past decade building world-class artificial intelligence governance frameworks, but has yet to translate that into the kind of future-proofed preparedness that the arrival of Anthropic PBC's Mythos now makes urgently necessary in the financial sector.

The cybersecurity risks facing financial institutions have been steadily growing for years as data and technologies have become central to how the system works. This month, those risks took on a new and specific form, one that regulators in Washington, D.C., and Ottawa considered serious enough to convene emergency meetings with their country's top bank executives.

The meetings were about Mythos, and for good reason. Mythos is not simply a better version of the AI models that came before it. What Anthropic's testing has revealed is something categorically different: it performs highly skilled tasks autonomously. That is not an incremental development in cybersecurity; it is a structural one.

According to Anthropic’s early testing, its model autonomously identifies and exploits previously unknown software vulnerabilities across every major operating system and web browser without human involvement after the initial instruction.

In one documented case, it independently discovered and exploited a 27-year-old flaw in an operating system known primarily for its security. It successfully reproduces such exploits on the first attempt more than 75 per cent of the time. It has found critical vulnerabilities at a scale and complexity that exceed what even the most skilled human security professionals can do.

The Bank of Canada's meeting in April brought together the Canadian Financial Sector Resiliency Group, a public-private body that includes the Big Six banks, TMX Group Ltd., the Department of Finance and the Office of the Superintendent of Financial Institutions (OSFI), among others.

The group’s composition reflects the nature of Canada’s financial system — integrated, interdependent and historically concentrated — which is why the banking sector was able to navigate the 2008 global financial crisis with resilience.

But our major institutions share cloud infrastructure, third-party service providers and technology platforms, so a successful AI-assisted breach into any one of those shared layers does not stay contained. AI agents capable of weaponizing vulnerabilities across consolidated cloud providers could, in a worst case, trigger cascading exploits across the entire banking system. That risk is amplified in a highly concentrated system.

Those who have worked at the intersection of AI and financial regulation are not so much surprised by these developments as they are by how fast they have happened. AI governance cannot be an afterthought, layered onto systems once they are already deployed. It must be embedded from the design stage and built into the architecture of how models are developed, validated and monitored.

The EDGE (explainability, data governance, governance structures and ethics) principles that emerged from Canada’s Financial Industry Forum on Artificial Intelligence were calibrated for the risks of AI adoption within institutions. Mythos introduces a related, but distinct challenge: AI capabilities in the hands of external malicious actors.

Institutions without clear accountability structures and board-level oversight of AI exposure will not necessarily have weaker systems, but they will make slower decisions by design. In this environment, slow becomes the vulnerability.

Regulatory bodies have been attentive to these developments, and that is a positive. For example, OSFI's 2026-27 Annual Risk Outlook identifies AI adoption as a source of emerging institutional risk and commits to targeted cyber-preparedness reviews, intelligence-led cyber-resilience testing and ongoing assessments of how institutions manage technology and third-party risk.

OSFI has specifically mentioned Mythos, with cyber-resilience testing and risk reviews among the measures it has planned in response. It has also committed to sharing threat and mitigation information in coordination with the Canadian Centre for Cyber Security. These are substantial commitments.

Nonetheless, OSFI has said it does not plan short-term changes to its existing guidelines, a historical reflex to prioritize deliberation over reaction. But the traditional sequence of observe, consult and then act can become a liability when an AI system can identify and exploit a critical vulnerability faster than a supervisory review cycle can be completed.

Three things need to happen in the near future, and they need to happen in parallel.

First, the Canadian Financial Sector Resiliency Group could adopt an explicit AI cyber mandate with dedicated resources behind it. Coordination, information sharing and multidisciplinary collaboration are valuable, but not a substitute for structured, funded preparedness when the threat is systemic.

Second, Canadian institutions must start actively engaging with available AI-powered defensive tools. Anthropic has committed significant resources through Project Glasswing to help critical infrastructure operators use Mythos-class capabilities for defence. The same technology that creates the risk can be turned towards managing it, but engaging early is a necessity.

Third, Ottawa and OSFI should match the pace of the threat with their regulatory response. Canada's financial sector has strong data foundations, a grounded risk culture and some of the world's most respected AI governance expertise, but this advantage must be sustained with leadership that matches the pace of AI innovation.

Canada has built a world-class financial system based on forward-looking evolution. The AI era requires no less.

Manuel Morales, Ph.D., is an associate professor at Université de Montréal.