How Do I Run AI Workloads When My Regulator Says the Data Can’t Leave the Country?

This article was written by Dave Manning, our Director of Architecture. It was first published on LinkedIn.

I often find myself in conversations with Canadian executives who are wrestling with the same question: “We want to deploy AI. Our regulator says the data stays in Canada. Now what?”

It’s a fair question. And it’s one that most of the public conversation around AI is failing to answer clearly.

The vendor keynotes talk about transformation. The compliance team talks about risk. The board wants both. The people in the middle (architects, CIOs, and VPs of IT) are left trying to reconcile two mandates that feel like they’re on a collision course.

I lead an architecture team that designs and delivers cloud and AI environments for regulated Canadian organizations. We’re not theorizing about what’s possible. We’re building what works today, within the constraints that actually exist. This is the ground-level view.

“Data Can’t Leave the Country” Is More Complicated Than It Sounds

The first thing executives need to understand is that “Canadian data residency” isn’t a single rule. It’s a stack of overlapping obligations that vary by province, sector, and data classification.

At the federal level, PIPEDA doesn’t actually mandate that data stay in Canada. It requires organizations to inform individuals about cross-border transfers and obtain meaningful consent. That’s an important distinction, but it’s also just the baseline.

The real teeth come from the provinces and sector regulators. Quebec’s Law 25 imposes the strictest regime in the country: any transfer of personal information outside the province requires a privacy impact assessment, and the destination jurisdiction must offer protection “essentially equivalent” to Quebec law. The penalties, up to $10 million or 2% of worldwide revenue, are not theoretical.

Ontario’s PHIPA is widely believed to mandate that personal health information stay in Canada, but the reality is more nuanced. PHIPA restricts unconsented disclosures to independent third parties outside the province, not the hosting of data by a cloud provider acting as an agent under contract. A hospital using Azure to run an AI diagnostic model isn’t “disclosing” data to Microsoft in the PHIPA sense, provided the right contractual safeguards are in place. The distinction matters because it opens architectural options that many healthcare organizations assume are off the table.

Alberta’s PIPA adds yet another layer.

Then there are the sector regulators. OSFI’s Guideline B-13 (Technology and Cyber Risk Management) is often cited as requiring in-country processing, but it’s actually a principles-based risk management framework, not a geographic embargo. It requires rigorous oversight of third-party vendors, robust incident reporting, and comprehensive risk documentation. Many financial institutions choose to localize data processing because it simplifies their risk posture, but B-13 itself doesn’t forbid foreign processing if the controls are in place. What it does mean is that if you process data outside Canada, you own the burden of proving you’ve managed every associated risk.

Separately, OSFI finalized the updated Guideline E-23 (Model Risk Management) in late 2025, with a mandatory effective date of May 1, 2027. This is the one that changes the game for AI. E-23 now explicitly covers AI and machine learning systems, requiring enterprise-wide model identification, risk assessment, deployment controls, continuous monitoring, and formal decommissioning processes. If you’re in financial services, your AI models will need the same governance rigor as your credit risk models. This isn’t guidance. It’s regulatory expectation with a hard deadline.

The point is: when a regulator says “the data stays here,” the specific meaning depends entirely on who your regulator is, what province you’re in, and what kind of data you’re handling. Most organizations underestimate this complexity until they’re already deep into an AI initiative.

PBMM: The Standard That Gates Public Sector, and Increasingly Everything Else

If you work with the Canadian federal government, you already know Protected B, Medium Integrity, Medium Availability (PBMM). It’s the cloud security profile defined in ITSG-33, which has since evolved into the CCCS Medium Cloud Profile. It’s the bar your cloud environment has to clear to handle sensitive, non-classified government information.

What’s changed is that PBMM is no longer just a public sector concern. We see it referenced in provincial procurement, in RFPs from regulated private sector organizations, and as a proxy standard for “serious about security” in industries that don’t have their own cloud certification framework.

The major cloud providers have all invested in PBMM assessments. Azure was one of the first to qualify. AWS reports 162 services assessed against CCCS Medium requirements as of late 2025. Google Cloud is approved for Protected B Medium and High Value Asset workloads. IBM maintains PBMM in their Toronto and Montreal facilities.

But here’s the gap that matters: PBMM certification for AI-specific services (Azure OpenAI, Cognitive Services, the inference endpoints you actually need) is not published as explicitly as it is for general IaaS and PaaS services. General cloud PBMM is not the same as AI service PBMM. This distinction is where deals stall and projects get delayed. If you’re building an AI workload for a regulated environment, you need to know exactly which services have been assessed and which are running on assumption.

What You Can Actually Run Today

Let me be direct about the current state, because I think the market needs more honesty here and less aspiration.

Azure OpenAI offers models in Canada Central and Canada East, but model availability varies significantly by deployment type. Established models like GPT-4 and GPT-4o support standard regional deployments with strict data residency: both inference and data processing happen exclusively within Canadian data centres. However, newer frontier models (the GPT-4.1 series, advanced reasoning models like o1 and o3) are currently unavailable for standard regional deployment in Canada Central. To access these, architects are pushed toward “Global Standard” or “Data Zone” deployment types, which dynamically route inference across Azure’s global infrastructure, fundamentally breaking strict data residency. This is a critical distinction: you can run production AI in-region today, but the most capable models may require you to choose between cognitive performance and geographic isolation.

AWS Bedrock offers foundation model access from ca-central-1 (Toronto) and ca-west-1 (Calgary), including Anthropic’s Claude models. But there’s a critical nuance: AWS achieves this through cross-region inference. Your data at rest (logs, knowledge bases, stored configurations) stays in Canada, but the inference processing itself may route to US regions. For workloads where the regulator cares about where compute happens, not just where data sits, this distinction matters. You need to understand whether your compliance framework draws the line at data residency or processing residency.

Anthropic’s Claude API (direct, not through a cloud provider) currently processes all data in the United States. There is no Canadian data residency option. Anthropic has introduced EU data residency for API customers, which suggests regional expansion is on the roadmap, but nothing is announced for Canada. For organizations that need Claude’s capabilities within Canadian borders today, the path runs through Bedrock, with the cross-region inference caveat above.

Google Vertex AI offers Claude and Gemini models in the Toronto region, with regional endpoints that provide guaranteed data routing through specific geographic regions. This adds another option, though at a 10% pricing premium over global endpoints.

The honest summary: you can run production AI inference in Canada today, with real data residency guarantees, if you choose the right platform, the right models, and the right deployment configuration. But the most capable frontier models often require global routing that breaks strict residency, and the managed orchestration services are either deprecated or unavailable in-country. The gap is narrowing fast, but if you’re making architecture decisions right now, make them based on what’s certified and available, not what’s on a roadmap.
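The data-residency versus processing-residency distinctions above can be captured in a small decision aid. Here is a minimal sketch; the platform entries simply mirror the characteristics described in this section and are illustrative, not an authoritative catalogue of any vendor’s current posture:

```python
# Illustrative sketch: where data sits vs. where inference runs, per platform,
# based on the residency characteristics described above. Not an official
# or current catalogue of vendor capabilities.

PLATFORMS = {
    "azure_openai_standard": {"data_at_rest": "CA", "inference": "CA"},
    "azure_openai_global":   {"data_at_rest": "CA", "inference": "GLOBAL"},
    "bedrock_cross_region":  {"data_at_rest": "CA", "inference": "US"},
    "anthropic_direct_api":  {"data_at_rest": "US", "inference": "US"},
    "vertex_regional":       {"data_at_rest": "CA", "inference": "CA"},
}

def meets_residency(platform: str, require_processing_in_country: bool) -> bool:
    """True if the platform clears the stated residency bar.

    Some compliance frameworks only care about data residency; stricter
    ones also require processing (inference) residency.
    """
    p = PLATFORMS[platform]
    if p["data_at_rest"] != "CA":
        return False
    return p["inference"] == "CA" if require_processing_in_country else True

# A regulator that draws the line at processing residency rules out
# cross-region inference entirely:
strict_ok = sorted(n for n in PLATFORMS if meets_residency(n, True))
```

The point of the exercise is the second parameter: the same platform can pass or fail depending on whether your framework draws the line at data residency or processing residency.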

The Architecture Decisions That Actually Matter

These are the conversations that separate a successful sovereign AI deployment from one that stalls in procurement review.

Region selection is a governance decision, not a performance decision. Lock Canada Central as your AI compute region. Document the rationale. Make the decision auditable. This sounds obvious, but I’ve seen organizations leave region selection to a developer’s default configuration and then scramble to explain it to an auditor six months later.

Classify your data before you select your models. Know what’s Protected B, what’s sensitive personal information, what’s proprietary but not regulated, and what’s public. Then map each classification to the services that are certified to handle it. Most organizations want to start with the model and work backward to the data. That’s exactly the wrong order for regulated environments.
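The “classify first, then map to services” discipline can be made concrete as a simple allow-list check that gates which services a workload may touch. This is a hypothetical sketch: the classification labels and service names are placeholders, not an assessment of any real certification status:

```python
# Illustrative sketch of "classify the data first, then map each
# classification to certified services". Labels and the allow-list are
# hypothetical placeholders, not real certification findings.

ALLOWED_SERVICES = {
    "public":        {"any"},                                        # no restriction
    "proprietary":   {"azure_openai_ca", "bedrock_ca", "vertex_toronto"},
    "personal_info": {"azure_openai_ca", "vertex_toronto"},
    "protected_b":   {"azure_openai_ca"},  # only services assessed for PBMM
}

def permitted(classification: str, service: str) -> bool:
    """Return True if this data classification may flow to this service."""
    allowed = ALLOWED_SERVICES[classification]
    return "any" in allowed or service in allowed
```

Even a table this crude forces the right ordering: the data classification constrains the model choice, never the other way around.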

Understand that “available in Canada” means different things on different platforms. Azure OpenAI in Canada Central runs inference in-region for supported models, but frontier models may require Global routing that breaks residency guarantees. Bedrock cross-region inference from ca-central-1 keeps data at rest in Canada but may process inference in the US. Anthropic’s direct API runs entirely in the US. These are not equivalent from a compliance perspective, even though all three give you access to powerful models. Your architecture needs to reflect the actual data flow, not the marketing summary.

Stay current on platform deprecations. If you’ve been planning agent-style orchestration around Azure OpenAI’s Assistants API, stop. Microsoft has officially deprecated the Assistants API, with complete retirement scheduled for August 26, 2026. It has been replaced by the Microsoft Foundry Agents Service, a modernized framework with stronger observability, version control, and enterprise governance. Any organization building custom middleware while waiting for Assistants to land in Canada is investing in a dead end. Architects should evaluate Foundry Agents now and plan migration paths for any existing Assistants-based work. More broadly, this is a reminder that the AI platform landscape is moving fast enough to invalidate architectural assumptions within months. Build modularly, stay close to the deprecation notices, and don’t bet your architecture on a single service remaining stable.

Contractual data residency is not the same as technical data residency. Azure’s contractual commitments around Canadian data processing are strong. But your architecture has to enforce them. Private Endpoints, Azure Policy definitions that deny non-Canadian resource creation, network security groups, diagnostic settings that keep logs in-region: these are the controls your auditor will actually ask about. A contract tells your regulator you intend to keep data in Canada. Your technical controls prove you did.
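As one example of the enforcement controls mentioned above, a minimal Azure Policy definition can deny creation of any resource outside the Canadian regions. This is a sketch of the general shape; real deployments typically add exemptions (for example, the “global” location used by region-agnostic resources) and assign the policy at the management group level:

```json
{
  "properties": {
    "displayName": "Deny resources outside Canadian regions",
    "mode": "All",
    "policyRule": {
      "if": {
        "not": {
          "field": "location",
          "in": ["canadacentral", "canadaeast"]
        }
      },
      "then": {
        "effect": "deny"
      }
    }
  }
}
```

A policy like this is precisely the kind of artifact an auditor can point at: the region lock stops being a team convention and becomes an enforced, documented control.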

Treat model risk management as a first-class governance discipline. This one is specifically for financial services today, but it’s heading everywhere. OSFI E-23 takes full effect May 1, 2027, and it covers AI/ML explicitly. Model identification, risk assessment, deployment controls, performance monitoring, bias testing, explainability, and formal decommissioning are all in scope. Build these into your AI platform from day one. Retrofitting governance to satisfy an OSFI audit after the fact will be far more expensive and disruptive than doing it right up front.

The underlying principle: sovereign AI is not a product you buy. It’s an architecture you design. The decisions that matter most are the unglamorous ones: region locks, data classification, policy enforcement, model governance.

The CLOUD Act: The Elephant in the Room

I’d be doing a disservice if I didn’t address this directly, because every informed client raises it and too many advisors dodge it.

The U.S. CLOUD Act gives American authorities the legal right to compel U.S.-headquartered companies to produce data in their custody, regardless of where that data is physically stored. Microsoft, AWS, Google, and Anthropic are all U.S.-headquartered. Your data can be sitting in a Toronto data centre, running on Azure, with a Canadian data residency contract, and the U.S. government can still, in principle, issue a lawful demand for it.

Microsoft has responded with a five-point Digital Sovereignty Plan: a Threat Intelligence Hub in Ottawa, expanded confidential computing capabilities, enhanced data residency commitments, sovereign landing zone architectures, and contractual protections designed to keep Canadian data under Canadian legal authority. These are meaningful steps. AWS and Google have made similar commitments around data sovereignty, though with different implementation approaches.

Here’s what honest advisors tell their clients: the combination of Canadian data residency, encryption at rest and in transit, customer-managed keys, access controls, and contractual commitments significantly reduces the practical risk. For the vast majority of regulated workloads, this combination meets the bar that Canadian regulators are setting. For a narrow set of the highest-sensitivity workloads (think national security, certain government classified environments), it may not, and those organizations may need sovereign cloud options or on-premises deployments.

The pragmatic position: don’t pretend the CLOUD Act doesn’t exist. Don’t pretend it blocks everything, either. Understand where your data sits on the sensitivity spectrum and architect to the actual risk, not the headline.

What the Next 12 Months Look Like

The investment wave is real. Microsoft’s $7.5 billion. The federal government’s $925 million. The AI Compute Challenge calling for 100+ megawatt sovereign data centres. The University of Toronto receiving $42.5 million for AI compute infrastructure. This isn’t speculative. It’s funded and in motion.

But there are things the money doesn’t solve yet. Canada still has no comprehensive AI legislation. Bill C-27 and AIDA died when Parliament dissolved in 2025, and while a successor is expected, the timing is uncertain. In the meantime, sector regulators (OSFI, Health Canada, Transport Canada) are filling the gap with guidance that has practical regulatory weight even without a new law.

The AI platform landscape is also converging on Canada. Anthropic is expanding its regional data residency options (EU is live, other regions are expected to follow). AWS is broadening Bedrock’s in-region capabilities. Microsoft is pouring infrastructure dollars into Canadian capacity that comes online in H2 2026. Cohere, a Canadian-founded AI company, has models available on Azure, adding a sovereign-origin option to the platform.

What this means for executives making decisions today: the infrastructure is coming, the regulatory framework is converging, and the service gaps are closing. But you should not architect for next year’s availability. Build on what’s certified and available now. Design your governance framework so it can absorb new services and new regulations as they arrive. The organizations moving now, with proper controls, will have 12 to 18 months of operational learning and institutional knowledge that their competitors won’t.

Waiting for perfect regulatory clarity before deploying AI in a regulated Canadian environment means waiting indefinitely. The organizations that will lead didn’t wait for perfection. They moved with governance.

Reframing the Question

The question I started with (“How do I run AI workloads when my regulator says the data can’t leave the country?”) is actually the wrong question, or at least an incomplete one.

The better question is: How do I architect AI workloads that satisfy my regulator, my board, and my users, simultaneously?

The answer isn’t a single product or a single certification. It’s an architecture designed with intention: the right region, the right data classifications, the right technical controls, the right governance model, and an honest assessment of what’s certified today versus what’s on someone’s roadmap.

The infrastructure investment is happening. The regulatory frameworks are converging. The AI services are landing in Canadian regions. The window for competitive advantage is open right now, for the organizations willing to build thoughtfully rather than wait for someone else to make it easy.

I’d like to hear what you’re seeing in your own environments. If you’re navigating sovereign AI in a regulated Canadian industry, what’s working? Where are you stuck? The more practitioners share what they’re learning, the faster this market matures for everyone.

Centrilogic - AI Discovery Assessment

This assessment helps organizations identify and understand where AI can deliver meaningful value across their operations. We identify realistic use cases, assess data readiness, and outline practical steps to safely adopt agentic AI.

Are you ready to realize your digital potential?

Start with Centrilogic today.

Contact Us