The Reach of Brussels: Extraterritoriality and Jurisdictional Conflict Under the EU Artificial Intelligence Act
Key Takeaways for Practitioners
The EU AI Act asserts jurisdiction over any entity whose AI system's output is used within the EU — regardless of domicile, servers, or intent to target European users.
This "use of output" trigger goes further than the GDPR, which generally requires active targeting or monitoring of EU individuals.
US companies with no EU presence, employees, or servers can be classified as AI "providers" subject to conformity assessments and fines up to 7% of global annual turnover.
The Brussels Effect: by conditioning EU market access on compliance, the AI Act effectively sets global standards for US companies serving any international users.
The AI Act creates potential conflicts with US constitutional principles of free expression and due process for companies regulated without meaningful US nexus.
Compliance timeline: prohibitions applied February 2025; GPAI model rules August 2025; Annex III high-risk requirements August 2026; rules for AI embedded in Annex I regulated products August 2027.
The Texas Startup Hypothetical That Is Now a Legal Reality
Consider a Texas-based software startup that develops an AI writing assistant. No EU presence. No European employees. No servers in Europe. Marketing focused exclusively on North American consumers.
A researcher in Vienna accesses the tool via a standard web interface and uses it to generate a political op-ed during a European Parliament campaign.
Under the EU Artificial Intelligence Act — Regulation (EU) 2024/1689, which entered into force August 1, 2024 — the startup may suddenly find itself classified as a "provider" of a high-risk AI system, subject to pre-market conformity assessments and potential fines reaching 7% of its global annual turnover.
This is not a hypothetical warning about future risk. It is the current state of EU law. And it is the central challenge facing US AI practitioners, compliance officers, and in-house counsel advising technology companies with any international user base.
The Jurisdictional Architecture of the AI Act
The GDPR Model — and Why the AI Act Goes Further
The EU's extraterritorial regulatory approach is not new. The General Data Protection Regulation established a precedent: any company processing personal data of EU residents, regardless of domicile, must comply with European data protection law. US companies spent billions on GDPR compliance between 2016 and 2018.
The AI Act's jurisdictional reach is structured differently — and more broadly.
The GDPR generally requires active targeting or monitoring of EU individuals. A US company that merely makes a service available online, without directing marketing toward Europe, has at least a plausible argument that it falls outside GDPR's scope.
The AI Act contains no such limiting principle. Article 2 establishes that the Regulation applies to:
- Providers placing AI systems on the EU market or putting them into service in the EU
- Providers and deployers of AI systems whose output is used in the Union
That second clause — "output is used in the Union" — is the critical expansion. It does not require that the provider intended to serve EU users. It does not require EU market presence. It requires only that someone in the EU used the output. A passive web interface accessible globally satisfies this trigger.
The Risk Classification Framework
The AI Act organizes AI systems into four tiers:
Prohibited systems (Article 5): Banned outright as of February 2, 2025. Includes social scoring by public authorities, real-time remote biometric identification in public spaces (with narrow exceptions), subliminal manipulation, and systems exploiting vulnerabilities of specific groups.
High-risk systems (Articles 6–49, Annex III): Subject to the most stringent requirements — conformity assessments, technical documentation, human oversight, registration in an EU database. Applies to AI used in critical infrastructure, employment decisions, education, essential services, law enforcement, migration, and the administration of justice.
Limited-risk systems (Article 50): Transparency obligations only — disclosure when users interact with AI, labeling of AI-generated content.
Minimal-risk systems: No specific requirements.
For US companies, the critical question is classification. A legal research AI, HR screening tool, or credit scoring system may qualify as high-risk under Annex III — triggering the full compliance burden regardless of whether the provider has ever set foot in Europe.
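As a first-pass triage of that classification question, the four tiers can be sketched as a simple decision helper. This is an illustrative sketch only: the keyword sets below loosely paraphrase Article 5 and Annex III categories and are no substitute for a lawyer's element-by-element analysis of the actual legal tests.

```python
# Hypothetical triage helper for the AI Act's four risk tiers.
# Keyword sets are simplified paraphrases of Art. 5 / Annex III / Art. 50
# categories, NOT the legal definitions.
PROHIBITED_USES = {"social scoring", "subliminal manipulation",
                   "real-time remote biometric identification"}
HIGH_RISK_USES = {"employment screening", "credit scoring", "education",
                  "critical infrastructure", "law enforcement", "migration",
                  "administration of justice"}
TRANSPARENCY_USES = {"chatbot", "content generation", "deepfake"}

def triage_risk_tier(intended_use: str) -> str:
    """Return a provisional AI Act tier for a described use case."""
    use = intended_use.lower()
    if any(u in use for u in PROHIBITED_USES):
        return "prohibited (Art. 5)"
    if any(u in use for u in HIGH_RISK_USES):
        return "high-risk (Annex III) - full conformity assessment"
    if any(u in use for u in TRANSPARENCY_USES):
        return "limited-risk - transparency obligations"
    return "minimal-risk - no specific obligations"

print(triage_risk_tier("AI writing assistant for content generation"))
# → limited-risk - transparency obligations
```

A real classification exercise turns on intended purpose, deployment context, and the detailed Annex III use-case descriptions, so a helper like this is useful only for flagging products that need full legal review.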
ISSUE: Is the AI Act's Extraterritorial Reach Consistent with International Law and US Constitutional Principles?
Rule
Under international law, a state may assert prescriptive jurisdiction over foreign conduct on several bases: territoriality (conduct occurring on the state's territory), nationality (conduct by the state's nationals), passive personality (harm to the state's nationals), and the effects doctrine (conduct abroad producing substantial effects within the state's territory). The effects doctrine — most prominently used in US antitrust law since United States v. Aluminum Co. of America, 148 F.2d 416 (2d Cir. 1945) — permits jurisdiction when foreign conduct has direct, substantial, and reasonably foreseeable effects within the regulating state.
Application
The EU's Jurisdictional Theory
The EU relies on a market access theory: access to the EU single market is conditioned on compliance with EU standards. This is consistent with established international law — states have broad authority to set conditions on market entry.
The AI Act's "use of output" trigger, however, stretches this theory. When a US company makes no affirmative effort to enter the EU market and a European user unilaterally accesses the service, it is unclear whether the "market access" rationale applies. The provider has not sought access to the EU market — the EU user has sought access to a US service.
The Brussels Effect in Practice
Regardless of the legal theory's validity, the practical consequence is well-documented: when a major regulatory jurisdiction like the EU conditions market access on compliance with specific standards, companies serving any international users find it more efficient to apply those standards globally than to maintain separate compliance tracks. This is the Brussels Effect — European regulation becomes de facto global regulation.
For AI companies, this means: a US startup that wants any possibility of European users must design its system to EU standards. The Texas startup in our hypothetical faces a binary choice — comply with EU requirements for its entire product, or implement geographic blocking so rigorous that no European user can ever access the service.
US Constitutional Tensions
The AI Act's prohibitions on certain AI applications — including systems that "manipulate" users through "subliminal techniques" — raise First Amendment concerns when applied to US companies serving primarily US audiences. The definition of "manipulation" in the AI Act is broader than any category of unprotected speech under US First Amendment doctrine.
Similarly, the conformity assessment requirements — which demand pre-market documentation and third-party audits for high-risk systems — impose procedural burdens with no US constitutional analog for software products.
Conclusion
The AI Act's extraterritorial reach is legally defensible under the market access theory but operationally problematic for US companies without EU presence. The "use of output" trigger creates compliance obligations without meaningful nexus. US companies should assume the AI Act applies to any product accessible to EU users and conduct risk classification analysis accordingly.
Compliance Timeline: What US Companies Must Do Now
EU AI Act Compliance Deadlines for US Companies
Already in effect — February 2, 2025: Prohibited AI practices (Article 5) are banned. US companies must audit products for social scoring, real-time biometric identification, subliminal manipulation, and exploitation of vulnerabilities. Violations: fines up to €35 million or 7% of global annual turnover.
Already in effect — August 2, 2025: General-Purpose AI (GPAI) model obligations (Articles 51–55). US foundation model providers (OpenAI, Anthropic, Google, Meta) must comply with transparency, copyright policy disclosure, and — for models with "systemic risk" — additional safety evaluations.
Coming — August 2, 2026: General application date, including Annex III high-risk system requirements. This is the deadline that most directly affects US enterprise software, HR tech, legal AI, and healthcare AI companies.
Coming — August 2, 2027: High-risk obligations under Article 6(1) for AI embedded in Annex I regulated products (such as medical devices and machinery) become applicable.
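The staged schedule above can be encoded as a simple lookup — a minimal sketch, using the application dates set out in Article 113 of the Regulation, that compliance teams might adapt for internal dashboards or audit checklists.

```python
# Minimal sketch: which AI Act obligation phases are in force on a given date.
# Dates follow the staged application schedule in Art. 113, Reg. (EU) 2024/1689.
from datetime import date

PHASES = [
    (date(2025, 2, 2), "prohibited-practice bans (Art. 5)"),
    (date(2025, 8, 2), "GPAI model obligations (Arts. 51-55)"),
    (date(2026, 8, 2), "general application, incl. Annex III high-risk systems"),
    (date(2027, 8, 2), "high-risk obligations for Annex I regulated products"),
]

def phases_in_force(on: date) -> list[str]:
    """Return every obligation phase already applicable on the given date."""
    return [label for start, label in PHASES if on >= start]

print(phases_in_force(date(2026, 1, 1)))
# → ['prohibited-practice bans (Art. 5)', 'GPAI model obligations (Arts. 51-55)']
```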
Action items now:
- Classify your AI products under the Annex III high-risk categories
- Assess whether the "use of output" trigger applies to your user base
- Review GPAI obligations if you develop or fine-tune foundation models
- Implement EU AI Act clauses in vendor and customer contracts
- Monitor the preemption debate in the US — the proposed National AI Framework may limit state-level AI regulation but does not address EU compliance obligations
The Intersection with US AI Policy
The EU AI Act's extraterritorial reach now intersects directly with the Trump administration's proposed National AI Framework (March 2026), which seeks to preempt state AI regulation in favor of federal standards. The Framework's deregulatory approach — removing "policy barriers" to US AI innovation — is in direct tension with the compliance burdens the EU AI Act imposes on US companies serving European users.
US in-house counsel advising technology companies now face a genuine regulatory conflict: comply with EU standards to access European markets, or optimize for the deregulatory domestic environment the Trump administration is building. For most large US AI companies, the answer is both — but managing the tension will require explicit compliance strategies rather than passive alignment.
Legal Citation
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), OJ L, 12.7.2024.
This analysis is based on publicly available legislative and regulatory materials. It does not constitute legal advice. Companies should consult qualified EU law counsel for specific compliance guidance.