By the end of 2024, over 70 countries had already published or were drafting AI-specific regulations — and their definitions of “responsible use” can vary dramatically. What counts as encouraged innovation in one market may invite enforcement in another.
The result is a growing patchwork of laws that global organizations must navigate as they scale AI across borders.
For example, the current US government’s AI strategy emphasizes the responsible adoption of AI across the economy, focusing on compliance with existing laws rather than creating new regulations and favoring the organic development of standards and responses to demonstrated harms over preemptive rules. Meanwhile, the EU AI Act introduces sweeping, risk-based classifications and imposes strict obligations on providers, deployers and users. A system compliant in California could fail the EU’s transparency tests; an algorithm trained in New York might trigger “high-risk” scrutiny in Brussels.
As AI systems, data and decisions travel across jurisdictions, compliance must be built into governance — from development to deployment — to avoid regulatory blind spots that cross continents.
Here are five key strategies for cross-jurisdictional AI risk management.
1. Map your regulatory footprint
Global AI governance begins with visibility not just into where your tools are developed but also into where their outputs and data flow. An AI model built in one country may be deployed, retrained or reused in another without anyone realizing it has entered a new regulatory regime.
Organizations that operate across regions should maintain an AI inventory that captures every use case, vendor relationship and dataset, tagged by geography and business function. This exercise not only clarifies which laws apply but also exposes dependencies and risks, such as when a model trained on US consumer data informs decisions about European customers.
Think of it as building a compliance map for AI, a living document that evolves as your technology stack and global footprint change.
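For illustration, a minimal inventory record might look like the sketch below. The schema, field names and example system are assumptions made for this article, not a standard; the point is that every entry carries the geographic tags needed to spot cross-border exposure.

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    """One record in an AI inventory, tagged by geography and business function."""
    system_name: str
    use_case: str                  # e.g., "consumer lending decisions"
    business_function: str         # e.g., "HR", "sales", "risk"
    vendor: str                    # "internal" or a third-party provider
    training_data_regions: list[str] = field(default_factory=list)
    deployment_regions: list[str] = field(default_factory=list)

    def crosses_jurisdictions(self) -> bool:
        """Flag entries whose training data and deployments span regulatory regimes."""
        return set(self.training_data_regions) != set(self.deployment_regions)

# The example from above: a model trained on US consumer data that
# informs decisions about European customers.
entry = AIInventoryEntry(
    system_name="credit-scoring-v2",   # hypothetical system
    use_case="consumer lending decisions",
    business_function="risk",
    vendor="internal",
    training_data_regions=["US"],
    deployment_regions=["US", "EU"],
)
print(entry.crosses_jurisdictions())   # True -> review for regulatory blind spots
```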
2. Understand the divides that matter most
The most significant compliance risks stem from assuming AI is regulated the same way everywhere. The EU AI Act classifies systems by risk level — minimal, limited, high or unacceptable — and imposes detailed requirements for “high-risk” applications, such as hiring, lending, healthcare and public services. Failing to comply can result in fines of up to €35 million or 7% of global annual revenue.
In contrast, the US does not have a single federal framework in place, so some individual states, such as California, Colorado and Illinois, have opted to implement policies focused on transparency, consumer privacy and bias mitigation. Federal agencies, including the Equal Employment Opportunity Commission (EEOC) and the Federal Trade Commission (FTC), are also using existing laws to police AI-related discrimination and deceptive practices.
For multinational organizations, this means one product may need multiple compliance models. A generative AI assistant rolled out to a US sales team might be low risk under local law but classified as “high-risk” when used in a customer-facing role in Europe.
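To make the contrast concrete, here is a deliberately simplified sketch of how a single system’s EU posture can shift with its use case. The tier logic and domain labels are illustrative assumptions, not legal advice; real classification is a legal determination, not a lookup table.

```python
# Placeholder domains loosely echoing the high-risk examples named above.
EU_HIGH_RISK_DOMAINS = {"hiring", "lending", "healthcare", "public services"}

def eu_risk_tier(domain: str, interacts_with_humans: bool) -> str:
    """Rough sketch of the EU AI Act's minimal/limited/high tiers for a use case."""
    if domain in EU_HIGH_RISK_DOMAINS:
        return "high"      # detailed obligations for providers and deployers
    if interacts_with_humans:
        return "limited"   # e.g., transparency duties for chatbots
    return "minimal"

# The same generative assistant, two regulatory postures:
print(eu_risk_tier("sales", interacts_with_humans=True))    # limited
print(eu_risk_tier("hiring", interacts_with_humans=True))   # high
```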
3. Ditch the one-size-fits-all policy
AI policies should establish universal principles — fairness, transparency, accountability — but not identical controls. Overly rigid frameworks can hinder innovation in some regions while still missing key compliance requirements in others.
Instead, design governance that scales by intent and geography. Set global standards for ethical AI, then layer in regional guidance and implementation rules. This approach creates consistency without ignoring nuance: the flexibility to meet EU documentation demands, the agility to adapt to state laws and the clarity to operate confidently in markets that haven’t yet defined their own AI regulations.
A “high watermark” approach — one that meets the strictest applicable standard — can help avoid costly rework when other jurisdictions catch up.
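One way to operationalize this layering, sketched below with assumed control names, values and jurisdictions, is to merge a global baseline with regional overlays, keeping the strictest value for each control:

```python
# Illustrative controls only; real policies have far richer structure.
GLOBAL_BASELINE = {
    "model_documentation": "summary",
    "human_oversight": "spot-check",
    "data_retention_days": 365,
}

REGIONAL_OVERLAYS = {
    "EU":    {"model_documentation": "full technical file", "human_oversight": "required"},
    "US-CO": {"human_oversight": "required"},   # hypothetical state overlay
}

# Ordering used for the "high watermark" comparison: later = stricter.
STRICTNESS = {
    "model_documentation": ["summary", "full technical file"],
    "human_oversight": ["spot-check", "required"],
}

def high_watermark(jurisdictions: list[str]) -> dict:
    """Merge regional overlays onto the baseline, keeping the strictest value."""
    merged = dict(GLOBAL_BASELINE)
    for j in jurisdictions:
        for control, value in REGIONAL_OVERLAYS.get(j, {}).items():
            order = STRICTNESS.get(control)
            # Replace only when the overlay value is stricter (or has no ordering).
            if order is None or order.index(value) > order.index(merged[control]):
                merged[control] = value
    return merged

print(high_watermark(["EU", "US-CO"]))
```

The design choice worth noting: regions can only tighten the baseline, never relax it, which is what makes the merged result a true high watermark.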
4. Engage legal and risk teams early and often
AI compliance is moving too fast for legal to be a final checkpoint. Embedding counsel and risk leaders at the start of AI design and deployment helps ensure emerging requirements are anticipated, not retrofitted.
Cross-functional collaboration is now essential: Technology, legal and risk teams must share a common language for assessing AI use, data sources and vendor dependencies. Too often, definitions of “AI,” “training,” or “deployment” differ between departments — a misalignment that creates governance blind spots.
By integrating legal perspectives into model development, organizations can make informed decisions about documentation, explainability and third-party exposure long before regulators start asking questions.
5. Treat AI governance as a living system
AI regulation won’t stand still anytime soon. As the EU AI Act’s obligations phase in, US states draft their own rules, and countries like Canada, Japan and Brazil introduce competing frameworks, compliance remains a moving target.
The organizations that stay ahead don’t treat governance as a one-time project — they treat it as an evolving ecosystem. Monitoring, testing and adaptation become part of everyday operations, not annual reviews. Cross-functional teams share intelligence between compliance, technology and business units so that controls evolve as quickly as the technology itself.
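As a small sketch of what “monitoring as everyday operations” can mean in practice, a governance team might tie its AI inventory to a log of regulatory changes and automatically flag systems due for re-review. The dates, jurisdiction codes and function names below are placeholders, not a regulatory calendar:

```python
from datetime import date

# Hypothetical log: the date each jurisdiction's AI rules last changed.
REG_CHANGES = {"EU": date(2025, 8, 2), "US-CA": date(2025, 1, 1)}

def needs_review(entry_regions: list[str], last_reviewed: date) -> bool:
    """Flag an inventory entry whose jurisdictions changed rules since its last review."""
    return any(REG_CHANGES.get(r, date.min) > last_reviewed for r in entry_regions)

# An EU-facing system last reviewed in mid-2024 gets flagged for re-assessment.
print(needs_review(["EU"], last_reviewed=date(2024, 6, 1)))   # True
print(needs_review(["EU"], last_reviewed=date(2025, 9, 1)))   # False
```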
The bottom line
AI’s reach is global, but its risks are intensely local. Each jurisdiction introduces new variables that can compound quickly if left unmanaged. Treating compliance as a static requirement is like treating risk as a one-time audit: It misses the moving parts.
The organizations best positioned for what’s next are those that see AI governance as risk management in motion — a strategy that identifies exposures early, mitigates them through clear controls and builds resilience into every stage of design and deployment.
This article is published as part of the Foundry Expert Contributor Network.

