The AI Governance Trilemma: Why We Can't Have It All

2026-02-09 · 3,059 words · Singular Grit Substack

Regulators want to protect copyright, safeguard privacy, and demand transparency from AI companies. Here’s why that’s structurally impossible — and what it means for the future of artificial intelligence.


This post is adapted from my forthcoming article. The full article, with substantially more legal detail, will be available upon publication.


Let me start with a thought experiment.

You’re the CEO of an AI company. Your engineers have just finished training a new large language model on a dataset that includes millions of books, articles, images, and web pages. Some of that data is copyrighted. Some of it contains personal information — names, photographs, biographical details scraped from the open internet. Your model is powerful, commercially valuable, and ready to ship.

Now here’s your problem. Three different regulators show up at your door on the same morning.

The first is from the copyright office. She wants to know exactly what copyrighted works you used to train your model, and she’d like you to compensate the rights holders. The second is from the data protection authority. He wants you to delete every piece of personal data from your model — and prove you’ve done it. The third is from the AI transparency office. She wants you to publish a detailed account of your training data, your model architecture, and how your system makes decisions.

Each of these requests sounds reasonable in isolation. Each serves a legitimate regulatory objective that enjoys broad public support. But here’s the thing: you cannot fully comply with all three simultaneously. And that’s not because you’re lazy or evasive. It’s because the three objectives are structurally incompatible.

Welcome to the AI Governance Trilemma.


What Is a Trilemma?

Economists will recognise the structure immediately. In international finance, there’s a famous “impossible trinity”: you can’t simultaneously maintain a fixed exchange rate, free capital movement, and an independent monetary policy. You can have any two, but pursuing all three creates irreconcilable contradictions. Central bankers have known this for decades, and the insight has shaped monetary policy around the world.

Political scientists have identified analogous trilemmas in the governance of globalisation. Dani Rodrik’s famous argument is that you can’t simultaneously have deep economic integration, national sovereignty, and democratic politics — and that every country must choose which of the three to compromise.

I argue that AI governance faces the same kind of structural constraint. Three regulatory objectives — intellectual property protection, data privacy, and algorithmic transparency — are each desirable on their own terms. But maximising all three at once is impossible. Advancing any pair inevitably undermines the third.

This isn’t just an abstract theoretical claim. It explains why the United States and the European Union have adopted such radically different approaches to AI regulation — and why both approaches are generating frustration, litigation, and compliance nightmares. Each jurisdiction has implicitly chosen which two objectives to prioritise, and each is paying the price for what it’s sacrificed.


The Three Vertices

Before we get to the trade-offs, let’s be precise about what each objective involves.

Intellectual property protection encompasses several distinct but related concerns. First, there’s the copyright question: is it legal to train AI systems on copyrighted works without permission or payment? The US courts are currently wrestling with this through a cascade of lawsuits — The New York Times v. Microsoft, Andersen v. Stability AI, Concord Music Group v. Anthropic, Tremblay v. OpenAI, and many others. The legal arguments on both sides are sophisticated. AI companies invoke the fair use doctrine, arguing that training is “transformative” — the model doesn’t reproduce works, it learns statistical patterns from them. Rights holders counter that the sheer scale and commercial purpose of AI training distinguishes it from any prior fair use precedent, and that permitting unlimited copying will destroy licensing markets.

Then there’s the trade secret dimension. AI companies protect their model architectures, training datasets, curation methods, and hyperparameter configurations as proprietary information. The Defend Trade Secrets Act of 2016 and state equivalents have become the de facto intellectual property protection regime for AI — more important, in practice, than patent or copyright law. This has profound implications for the trilemma, because trade secrets are, by definition, incompatible with transparency. A system that must be kept secret to retain legal protection cannot simultaneously be made open to public scrutiny.

And there’s the output question: are AI-generated works copyrightable? The US Copyright Office says no — at least for purely AI-generated content — and the court in Thaler v. Perlmutter agreed that copyright requires a human author. This creates perverse incentives: companies may overstate the degree of human involvement in AI-assisted creation to secure copyright protection, undermining both transparency and accurate attribution.

Data privacy is about individual control over personal information. In Europe, the General Data Protection Regulation gives people the right to know what data is being processed about them, the right to have it corrected, and — crucially — the right to have it deleted (the famous “right to be forgotten”). In the United States, a patchwork of state laws (California’s CCPA/CPRA, Virginia, Colorado, Connecticut, Texas, and a rapidly growing list of others) provides analogous but less comprehensive protections. The Federal Trade Commission has filled some of the gaps through enforcement, developing the remedy of “algorithmic disgorgement” — ordering companies to destroy not just improperly collected data but the AI models trained on it.

The fundamental problem is technical. Modern AI systems are trained on datasets that inevitably contain personal information — names, faces, biographical details, health data, financial records — scraped from the open internet or obtained through data partnerships. And “deleting” someone’s data from a trained neural network isn’t like deleting a row from a database. The information isn’t stored in discrete, retrievable units. It’s encoded in the statistical weights of the model, distributed across billions of parameters. Research has shown that large language models can memorise and regurgitate verbatim passages from their training data, including personal information. Meaningful erasure might require retraining the entire model from scratch — a process costing millions of dollars and weeks of computation. The gap between what privacy law demands and what AI technology can deliver is enormous.
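To see why erasure is so different from deletion, here is a deliberately toy sketch in Python (my own illustration, with an invented "Alice" record; it is not any company's actual pipeline). Removing a person from a structured database is a one-line operation, whereas removing their influence from even a tiny trained model means retraining it without their row.

```python
# Toy illustration only: the records and numbers are invented for the example.
import numpy as np

# Structured storage: erasure is a one-line operation.
records = [
    {"name": "Alice", "income": 54_000},
    {"name": "Bob", "income": 61_000},
]
records = [r for r in records if r["name"] != "Alice"]  # done

# A trained model: each record's influence is spread across every weight.
rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 5))                     # one row per person
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 3.0]) + rng.normal(size=1_000)

weights = np.linalg.lstsq(X, y, rcond=None)[0]      # "training"

# There is no weights["row 0"] to delete; every parameter reflects every row.
# The faithful way to honour an erasure request is to drop the row and retrain,
# which is cheap for least squares and ruinously expensive for a frontier model.
weights_retrained = np.linalg.lstsq(X[1:], y[1:], rcond=None)[0]

print(np.round(weights, 3))
print(np.round(weights_retrained, 3))  # close, but not identical: the row mattered
```

The point is not the algebra but the asymmetry: the database can honour the request instantly, while the model can only honour it by being rebuilt.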

Algorithmic transparency is the demand that AI systems be explainable, auditable, and accountable. When AI makes decisions that affect people’s lives — denying bail, rejecting loan applications, filtering job candidates, diagnosing diseases, moderating speech — affected individuals and society at large have a legitimate interest in understanding why. Scholars like Frank Pasquale have written powerfully about the dangers of a “black box society” in which consequential decisions are made by opaque algorithms that no one can inspect or challenge. Researchers have documented case after case in which AI systems deployed in criminal justice, social services, and hiring have produced racially discriminatory outcomes — often without the knowledge of the officials relying on them.

The EU AI Act, which took effect in 2024, imposes the most ambitious transparency requirements in the world. High-risk AI systems must come with detailed documentation of training data, model architecture, design choices, and evaluation procedures. Providers of large language models must publish summaries of the content used for training. The penalties for non-compliance are formidable: up to €35 million or 7% of global annual turnover, exceeding even the GDPR’s already fearsome maximum fines.

Each of these three objectives is, on its own, entirely reasonable. The question is what happens when you try to pursue all three at once.


Why You Can’t Have All Three

Here’s where the trilemma bites. Consider the three possible pairings.

If you prioritise IP protection and privacy — you sacrifice transparency. Protecting trade secrets means keeping model architectures and training data confidential. Protecting privacy means restricting access to personal information in training sets. Together, these create a regime in which AI systems are impenetrable “black boxes” — shielded from scrutiny by overlapping layers of legal protection. You can’t audit what you can’t see. You can’t hold accountable what you can’t examine. This is roughly the current trajectory of the United States, where trade secret claims and the absence of comprehensive transparency legislation have produced a system in which the most consequential AI technologies are largely opaque to the public, to regulators, and even to the courts. In employment discrimination cases involving AI hiring tools, for example, defendants have resisted discovery requests by invoking trade secret protections — leaving plaintiffs unable to demonstrate how an algorithmic decision was made.

If you prioritise privacy and transparency — you sacrifice IP protection. If you require AI companies to be transparent about their training data and methods, you force disclosure of information that companies would otherwise protect as trade secrets. Their competitive advantage — the specific datasets they curated, the methods they used to clean and weight the data, the architectural innovations they developed — becomes visible to competitors. If you simultaneously enforce strong privacy rights — requiring deletion of personal data and restricting the repurposing of data for AI training — you constrain the data available for training, further reducing both the commercial value and the legal protectability of AI systems. This is approximately the EU’s trajectory: the combination of the AI Act’s transparency mandates and the GDPR’s privacy protections creates enormous compliance challenges for AI developers, particularly around the requirement to document training data whilst simultaneously minimising personal data processing. The interaction between these two regulatory regimes is itself a source of tension — complying with one can create conflicts with the other.

If you prioritise IP protection and transparency — you sacrifice privacy. If you protect AI innovations through strong IP rights whilst simultaneously requiring transparency about training data and methods, the transparency disclosures will inevitably reveal personal information embedded in training sets. You can’t publish a detailed account of your training data — what books, articles, websites, social media posts, images, and databases you ingested — without exposing the personal information those sources contain. Robust IP regimes that permit broad data collection for training override individual consent and erasure rights, because the data has been transformed into a commercially valuable asset whose integrity the IP system is designed to protect.

None of these outcomes is satisfactory. But one of them is inevitable. That’s the trilemma.


Two Continents, Two Choices

Comparative law reveals choices that might otherwise remain invisible. The United States and the European Union have both, whether consciously or not, made their choices — and the comparison is instructive.

The American Approach: IP and Innovation First. The US has essentially chosen intellectual property protection and commercial freedom at the expense of transparency. On copyright, it relies on the fair use doctrine — a flexible, case-by-case judicial standard. The trend in the courts has been towards finding fair use where AI training is “transformative,” building on the Supreme Court’s reasoning in Google v. Oracle. But the ruling in Thomson Reuters v. Ross Intelligence — the first substantive decision on AI training and fair use — went against the AI company, suggesting that transformative use has limits, particularly where the AI system competes directly in the market for the copied works. On trade secrets, the Defend Trade Secrets Act has become the primary shield for AI innovation. On privacy, the absence of a federal privacy law has produced a fragmented patchwork. On transparency, there are essentially no binding requirements — just voluntary frameworks like the NIST AI Risk Management Framework and executive orders that shift with each administration.

The European Approach: Transparency and Rights First. The EU has chosen transparency and privacy, accepting the costs to IP protection and commercial flexibility. The regulatory architecture is formidable. The AI Act imposes risk-based transparency requirements, demanding detailed documentation for high-risk systems and even general-purpose AI models. Providers of large language models must publish training data summaries — a requirement that directly conflicts with trade secret protection. The GDPR provides robust individual privacy rights that apply with full force to AI training data — as the Italian Garante’s temporary ban on ChatGPT demonstrated. The Copyright Directive takes a legislative approach to the training question, providing a text and data mining exception with an opt-out mechanism for rights holders. In theory, this is more predictable than the US fair use approach. In practice, the opt-out mechanism has proven extraordinarily difficult to implement — there’s no standardised protocol, no centralised registry, and no clear enforcement path. The Digital Services Act adds further transparency requirements for algorithmic recommendation systems on platforms.

Taken together, the EU’s regulatory stack is the most comprehensive AI governance framework in the world. But it also represents a particular resolution of the trilemma that prioritises the public’s right to understand and control AI systems, even at significant cost to commercial interests.

The “Brussels Effect” adds a critical dynamic. Companies serving European customers must comply with EU rules regardless of where they’re based. This means the EU’s resolution of the trilemma — transparency and privacy over IP — may become the de facto global standard, not because other countries formally adopt it, but because multinational companies find it impractical to maintain separate systems for different markets.


The Practical Consequences

The trilemma has immediate, practical consequences for everyone.

For AI companies, operating across the Atlantic is an exercise in navigating contradictory demands. Training practices that are legally defensible under US fair use may violate the Copyright Directive’s opt-out mechanism. Protecting your model as a trade secret is essential under US law, but disclosing your training data is mandatory under the EU AI Act. Complying with a GDPR erasure request might require retraining your model, destroying commercial value protected by US trade secret law. The compliance cost of navigating these contradictions is staggering — and falls disproportionately on smaller companies lacking the legal resources of the tech giants.

For regulators, well-intentioned regulation in one domain creates unintended consequences in others. Requiring transparency about training data may expose personal information. Enforcing privacy rights may undermine documentation requirements. Strengthening copyright protection may reduce data diversity and worsen model bias.

For individuals, the trilemma means your rights are in tension with each other. Your right as a creator to control your copyrighted works may conflict with your right as a citizen to understand AI systems that affect your life. Your right to have your data deleted may conflict with the public interest in maintaining auditable AI systems.


Toward a Way Forward

If the trilemma can’t be solved, can it at least be managed? I believe so, and in the full article I propose a graduated framework built on four pillars.

First, regulatory interoperability. The US and EU don’t need to harmonise — that’s politically impossible — but they need mechanisms for mutual recognition and cooperative enforcement. The EU-US Trade and Technology Council provides a starting point. Specific mechanisms could include mutual recognition of AI auditing standards, coordinated enforcement actions, shared risk classification taxonomies, and bilateral agreements on cross-border training data.

Second, graduated transparency. Rather than imposing one-size-fits-all transparency requirements, we should calibrate disclosure to risk. Basic labelling requirements for all AI systems — minimal burden on anyone. Confidential regulatory disclosures for high-risk systems — a “transparency firewall” that gives regulators what they need without exposing trade secrets or personal data to the public. Full public transparency only for systems implicating fundamental rights — criminal justice, immigration, child welfare — where accountability must take priority.

Third, reformed IP doctrines. Congress should enact a statutory AI training exception with three components: a mandatory opt-out mechanism with standardised technical protocols and a centralised registry (solving the Copyright Directive’s implementation chaos); a mandatory reporting requirement giving rights holders aggregate information about use of their works; and a compulsory licensing regime for high-value training data, with royalties distributed through collecting societies. Trade secret law should be modified to create a “regulatory disclosure” exception, so companies can comply with transparency mandates without forfeiting trade secret protection.

Fourth, a privacy-by-design safe harbour. Companies that invest in privacy-preserving techniques — differential privacy, federated learning, machine unlearning, synthetic data generation — should receive concrete regulatory benefits: reduced documentation burdens, streamlined approvals, and partial exemptions from erasure requests where technical measures provide equivalent protection. This creates a market incentive for privacy-preserving innovation, rather than treating privacy and AI development as zero-sum.
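To give a concrete flavour of what one such technique involves, here is a toy sketch of the Laplace mechanism from differential privacy in Python (an illustration of the general idea only, not a compliance-grade implementation; the dataset and query are invented for the example).

```python
import numpy as np

def dp_count(values, predicate, epsilon, rng=None):
    """Differentially private count: the true count plus Laplace noise.

    A counting query has sensitivity 1 (adding or removing one person changes
    the answer by at most 1), so the noise scale is 1 / epsilon.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(1 for v in values if predicate(v))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical example: how many people in a training set are over 65?
ages = [23, 41, 67, 35, 72, 29, 68, 54]
print(dp_count(ages, lambda a: a > 65, epsilon=0.5))
# Smaller epsilon means more noise: stronger privacy, less accurate statistics.
```

The trade-off here is explicit and tunable, which is precisely the property a safe harbour could reward: a regulator can ask what privacy budget a company used, rather than taking “privacy-preserving” on faith.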

None of this eliminates the trilemma. Nothing can. But it manages the trade-offs deliberately rather than accidentally, and creates mechanisms for adjusting the balance as technology evolves.


Why This Matters Now

There’s a tendency in technology policy to assume we have time — that we can study the problem, convene stakeholders, and develop considered responses before the truly consequential decisions need to be made. With AI, that assumption is dangerously wrong.

Large language models are already making decisions about hiring, lending, healthcare, criminal justice, and content moderation. Generative AI is already producing text, images, audio, and video indistinguishable from human-created content. Deepfakes are already being deployed for fraud, harassment, and political manipulation. International bodies — the OECD, the G7, UNESCO — have articulated principles of transparency, fairness, and accountability, but these instruments are non-binding and don’t address the trilemma’s structural tensions.

The regulatory choices being made right now — in courtrooms, legislative chambers, and regulatory agencies on both sides of the Atlantic — will shape the AI landscape for decades. The EU AI Act’s implementing regulations are being drafted as you read this. US courts are issuing rulings in AI copyright cases that will set precedents for a generation. State legislatures are passing AI transparency and privacy laws at a pace that Congress cannot match.

The trilemma won’t resolve itself. But by understanding its structure — by seeing these apparently separate debates about copyright, privacy, and transparency as manifestations of a single structural constraint — we can at least make our choices honestly. We can acknowledge what we’re giving up in exchange for what we’re gaining, rather than pretending we can have everything at once.

And that honesty is itself a form of governance.


Craig Wright is a member of the University of Leicester School of Law who recently completed his PhD in law. His article “The AI Governance Trilemma: Copyright, Privacy, and Transparency in the Regulation of Artificial Intelligence — A US-EU Comparative Analysis” is forthcoming.

If you found this analysis useful, please share it and subscribe for more writing at the intersection of law, technology, and policy.

