
AI and the Coming Era of "Code as Law"
For years, "code is law" sounded like a slogan from the outer edge of the internet: part cyber-libertarian manifesto, part warning from legal scholars, part prophecy from crypto builders. The basic idea was simple enough. Software does not merely describe rules; it enforces them. A website can ban you automatically. A platform can throttle speech invisibly. A smart contract can execute a transaction without asking whether the result is fair. In digital systems, code often determines what is possible, permissible, and punishable.
That argument is about to become much more important -- and its intellectual foundations run deeper than most people realize.
Artificial intelligence is not just another software layer. It is a force multiplier for writing code, interpreting rules, automating decisions, and embedding policy into systems that run at planetary scale. But to understand where AI is taking us, it helps to trace the lineage of this idea from its origins through its most rigorous development -- because the scholars who saw this coming in blockchain are the same ones whose frameworks now illuminate AI governance most clearly.
A Lineage of Ideas: From Lex Informatica to Lex Cryptographia
The best-known formulation belongs to Lawrence Lessig, who argued in Code and Other Laws of Cyberspace (1999) that behavior in digital environments is regulated not only by legal rules, markets, and social norms, but also by architecture. In the online world, architecture means code. A speed bump shapes behavior in physical space; software permissions shape behavior in digital space. If a platform disables copying, most users cannot copy. If a payments system blocks a transaction, the transaction does not occur. If identity verification is mandatory, anonymity disappears not as a matter of debate but as a matter of system design.
But Lessig built on an earlier layer. In 1998, Joel Reidenberg introduced the concept of lex informatica -- the idea that information policy is increasingly formulated not through legislation but through technology. The rules of the internet were encoded in protocols, formats, and platform architectures, and those technical rules had regulatory effects regardless of what any government statute said. Code was becoming its own species of lawmaking.
The next major advance came from Primavera De Filippi and Aaron Wright. In their 2015 SSRN paper and their landmark 2018 book Blockchain and the Law: The Rule of Code (Harvard University Press), they argued that blockchain technology represented a qualitative leap beyond anything Lessig or Reidenberg had described. Where earlier code required middlemen -- platforms, service providers, institutional operators who could be regulated -- blockchain enabled something genuinely new: self-executing rules with no human intermediary. Smart contracts on blockchain networks run automatically when conditions are met. No bank needs to process the transaction. No platform operator needs to approve the transfer. No regulator can easily intercept the execution.
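The mechanics are easy to sketch. The toy Python below (standing in for an actual contract language such as Solidity, with every name invented for illustration) simulates the self-executing logic De Filippi and Wright describe: once deployed, an escrow rule releases or refunds funds on its encoded conditions alone, with no intermediary in the loop.

```python
import time

class EscrowContract:
    """A toy simulation of a self-executing escrow rule.

    Once "deployed", the rule runs on its encoded conditions alone:
    no bank processes the transfer, no operator approves it, and no
    court is consulted before execution.
    """

    def __init__(self, buyer: str, seller: str, amount: int, deadline: float):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.deadline = deadline      # Unix timestamp, fixed at deployment
        self.delivered = False
        self.settled = False

    def confirm_delivery(self) -> None:
        """The only input the contract accepts after deployment."""
        self.delivered = True

    def settle(self) -> str:
        """Executes whichever branch the encoded conditions select.

        Note what is absent: no fairness check, no appeal, no override.
        """
        if self.settled:
            return "already settled"
        if self.delivered:
            self.settled = True
            return f"release {self.amount} to {self.seller}"
        if time.time() > self.deadline:
            self.settled = True
            return f"refund {self.amount} to {self.buyer}"
        return "conditions not yet met"
```

Whether the outcome is fair never enters the control flow; the question simply has no place to be asked.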
De Filippi and Wright named this new regime lex cryptographia: rules administered through self-executing smart contracts and decentralized autonomous organizations (DAOs). The key distinction from ordinary software is that lex cryptographia operates beyond the reach of -- and often in deliberate opposition to -- coercion by any centralized authority. Blockchain protocols could be regarded, in their framing, as genuinely "alegal" -- operating outside the traditional purview of law not because they were lawless, but because the technical architecture of decentralization made conventional legal enforcement structurally difficult.
The implications were stark. If Lessig showed that code could have the effect of law, De Filippi and Wright showed that code could operate as a replacement for law -- that certain categories of social and economic coordination could be governed entirely by self-enforcing software, without legislatures, courts, or enforcement agencies playing any meaningful role.
The Inversion: From "Code Is Law" to "Law Is Code"
De Filippi, together with Samer Hassan, pushed the argument one crucial step further. In a 2016 paper published in First Monday, they described a conceptual inversion: the shift from "code is law" -- code having the effect of law -- to "law is code" -- law being defined as code. With the rise of smart contracts, the question was no longer merely whether code could produce legal outcomes. It was whether legal rules themselves could be directly instantiated as executable code, with no interpretive gap between the written norm and its enforcement.
This inversion captures something profound. Traditional law depends on interpretation. A statute requires a judge. A contract requires enforcement by courts. A regulation requires agencies. There is always a translation step between the rule as written and the rule as applied, and that gap is where human judgment, contestability, and democratic accountability live. When law becomes code, that gap closes. The rule does not wait for a court to interpret it; it executes.
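A deliberately trivial sketch shows what closing that gap looks like. Imagine a statute granting a benefit to adults below an income ceiling; under "law is code," the norm is not a text awaiting a judge but an executable function. The thresholds below are invented for illustration:

```python
INCOME_CEILING = 25_000   # hypothetical statutory threshold
MINIMUM_AGE = 18

def eligible(age: int, annual_income: int) -> bool:
    """The statute, instantiated directly as executable code.

    There is no judge between the written norm and its application:
    whatever this function returns simply *is* the decision.
    Hardship, ambiguity, and edge cases have no entry point.
    """
    return age >= MINIMUM_AGE and annual_income < INCOME_CEILING
```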
This is the conceptual terrain onto which AI now enters -- and it makes De Filippi and Wright's framework more urgent than ever.
Why AI Changes the Scale of Governance
The most obvious effect of AI is velocity. Code that once took weeks can now be drafted in hours. Internal compliance systems that once required armies of analysts can be built, monitored, and updated with machine assistance. Legal documents can be parsed at scale. Policies can be translated into enforcement rules across thousands of workflows. When the cost of codifying a rule collapses, the number of rules that get codified tends to explode.
But velocity is only the beginning.
AI also changes governance through interpretation. Classical software is rigid. It struggles when categories blur. AI systems can classify ambiguous content, detect suspicious patterns, infer intent, summarize exceptions, and route cases based on contextual reasoning. That means institutions can increasingly automate areas once reserved for human judgment: fraud detection, eligibility determination, content moderation, procurement review, underwriting, sanctions screening, workplace monitoring, tax scrutiny, and even quasi-legal dispute resolution.
This is what makes AI different from previous automation waves. Earlier software enforced rules where the world could be cleanly represented in if-then logic. AI extends machine governance into the messy territory where human institutions have traditionally derived their legitimacy from interpretation, discretion, and contestability.
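The contrast can be made concrete in a few lines. The first function below is the old world of if-then enforcement; the second delegates the decision to a hypothetical model whose interface and threshold are invented for illustration:

```python
# Classical automation: the rule is legible in the code itself.
def classic_rule(transaction_amount: float) -> str:
    return "flag" if transaction_amount > 10_000 else "allow"

# AI-era automation: an opaque score stands in for human judgment.
def model_rule(transaction: dict, model, threshold: float = 0.7) -> str:
    """'model' is a hypothetical classifier scoring context and intent.

    The effective rule is no longer visible here: it lives in weights,
    training data, and the chosen threshold.
    """
    risk = model.risk_score(transaction)   # assumed interface, not a real library
    return "flag" if risk > threshold else "allow"
```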
In other words, AI makes it possible to code not just rules, but judgment. Or at least a simulacrum of judgment.
That is the foundation of the next era of code as law.
The Rule of Code versus the Rule by Code
De Filippi's most recent scholarship offers a distinction that is essential for thinking about AI. In a 2024 article in the George Washington Law Review (co-authored with Morshed Mannan and Wessel Reijers), she draws a sharp line between the rule of code and the rule by code.
The rule by code describes what most large platforms do today: they use technical systems to enforce their own rules, but those systems are ultimately controlled by a centralized operator who can be held legally accountable. Amazon can change its algorithms. Facebook can reverse a moderation decision. The code governs, but a human organization stands behind it and can be reached by a court or regulator. This is analogous to the "rule by law" in political theory -- law as a tool wielded by an authority, rather than a constraint on it.
The rule of code, by contrast, describes systems where no single party can override the code. The rules execute regardless of what any operator, government, or court commands. Blockchain networks approximate this -- that is precisely why De Filippi and Wright argued they challenged the rule of law, not merely its implementation.
AI in its current form is closer to rule by code -- the model is controlled by a company, the thresholds can be adjusted, the system can be modified. But the direction of travel is toward the rule of code. As AI systems become more autonomous, more deeply embedded in critical infrastructure, more difficult to audit in real time, and more interdependent across jurisdictions, the gap between "rule by code" and "rule of code" begins to narrow. An AI system that governs access to credit, housing, employment, or public services across dozens of jurisdictions, trained on data that no regulator has fully examined, updated through processes that no court has reviewed -- that system begins to exhibit the structural characteristics De Filippi and Wright identified in the most powerful blockchain networks.
The crucial insight is this: the problem is not that AI will become formally ungovernable. It is that AI governance will drift toward functional ungovernability -- not because the code cannot be changed, but because changing it becomes so technically complex, organizationally costly, and politically contested that the effective rule is always the one the system already runs.
From Written Policy to Embedded Policy
Every institution has policies that sit in binders, PDFs, training decks, and legal memos. Most are weakly enforced because translating text into practice is hard. AI closes that gap with an efficiency that no previous technology could match.
A company can now turn an acceptable-use policy into a live monitoring system. A bank can convert regulatory guidance into transaction screening and automated escalation flows. A government agency can encode benefit eligibility requirements into a decision pipeline that ingests forms, cross-checks databases, flags inconsistencies, and produces outcomes at scale. An insurer can transform underwriting principles into always-on evaluative machinery. A logistics network can operationalize labor discipline through routing, timing, and performance prediction.
This is where the blockchain experience serves as an instructive preview. De Filippi and Wright used the example of DAOs -- decentralized autonomous organizations -- to illustrate what institutions look like when their rules are fully instantiated in code. The DAO hack of 2016, in which an attacker exploited a vulnerability in a smart contract to drain roughly $60 million from an Ethereum-based investment fund, was a pivotal test. The code executed exactly as written. Whether it was "legal" or "ethical" was irrelevant to the machine. The Ethereum community ultimately chose to hard-fork the blockchain -- effectively rewriting history -- to reverse the theft. But that decision revealed the deep tension at the heart of rule-of-code systems: when code governs, who decides what the code should have said?
AI governance will produce the same question at vastly larger scale. The system that denied you a mortgage, flagged your immigration application, or filtered your speech was not acting illegally. It was executing its parameters. The question of whether those parameters were just is a different question entirely -- and one that the system itself cannot answer.
The Domains Where This Will Become Most Visible
The new era of code as law will not arrive evenly. It will emerge first in domains where decisions are high-volume, rules are partially structured, and institutions already rely on digital infrastructure.
Finance is the most mature example, and its blockchain prehistory is instructive. Payment rails, custody platforms, and exchange infrastructure already function as private rule systems that can exclude any actor, block any transaction, and enforce any compliance regime -- regardless of whether a court has ruled on the underlying conduct. AI makes these systems more adaptive and harder to contest. Entire categories of economic behavior may become impossible not because Congress banned them explicitly, but because the AI-driven infrastructure of finance refuses to process them.
Labor platforms are another frontier. Workers in gig systems already experience management as software: pricing, routing, ranking, discipline, and deactivation all happen through opaque code. Add AI and the system becomes more adaptive, more predictive, and more invasive. The practical labor regime for millions of people will not be negotiated with a human boss. It will be continuously administered by models -- a form of lex cryptographia for the employment relationship.
Identity and access systems will also become law-like. As governments and corporations push toward digital identity, age verification, biometric authentication, and reputation-linked permissions, AI will govern who gets to enter, transact, speak, travel, or receive services. If identity infrastructure becomes a precondition for participation, then whatever logic governs that infrastructure begins to function as constitutional architecture.
Online speech and social coordination are already heavily shaped by algorithmic systems, but AI takes this much further. Moderation no longer means just filtering spam or blocking obvious abuse. It means ranking visibility, inferring harmful intent, detecting coordinated behavior, labeling synthetic media, and shaping narrative distribution in real time. When AI mediates the conditions under which speech can circulate, it becomes part of the governing structure of public life.
State administration may be the most consequential domain. Permits, inspections, tax review, benefits eligibility, immigration triage, procurement, enforcement prioritization, and case routing are all susceptible to AI operationalization. Administrative law has historically depended on records, procedure, and reviewability. AI pressures each of those norms. A model can make thousands of micro-decisions faster than any clerk, but much of administrative legitimacy comes from the ability to explain and contest those decisions. The tension will define the next decade of governance.
Why AI Enforcement Will Feel More Like Law Than Software
Most software today feels instrumental. It helps you do something. AI governance systems will feel different because they will increasingly perform the classic functions of law.
First, they will classify. Law constantly sorts people, actions, and objects into categories: lawful or unlawful, eligible or ineligible, high risk or low risk, adult or minor, employee or contractor, resident or nonresident. AI systems are classification machines at industrial scale.
Second, they will interpret. Human institutions rarely apply rules mechanically. They interpret facts in context. AI models now do this at scale, however imperfectly.
Third, they will enforce. They can block payments, suspend accounts, escalate cases, deny access, freeze funds, or trigger audits automatically.
Fourth, they will standardize. Law creates predictable expectations across large populations. AI systems can enforce standardized rules with astonishing reach -- and, unlike human institutions, without the natural variance that creates openings for discretion and mercy.
Fifth, they will adapt. Traditional legal reform is episodic. AI-governed systems can be updated continuously. Thresholds change. Models retrain. Policy is no longer amended only in public view; it is tuned in production. This is what De Filippi's framework identifies as the most destabilizing feature of self-executing rule systems: they decouple the moment of rulemaking from any process of democratic deliberation. The rule is not debated; it is deployed.
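The "adapt" point deserves a concrete image. In a machine-governed system, amendment can look like the hypothetical configuration push below -- no hearing, no notice, no published rationale, just a new number:

```python
# "Amendment" in a machine-governed system: a configuration push.
policy = {"fraud_threshold": 0.70}

def retune(policy: dict, quarterly_loss: float) -> dict:
    """A hypothetical tuning step: the effective rule for millions of
    users shifts because a business metric moved."""
    if quarterly_loss > 1_000_000:
        policy["fraud_threshold"] = 0.55   # stricter rule, deployed silently
    return policy
```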
The Legitimacy Crisis at the Heart of AI Governance
De Filippi's recent work on governing what she calls the "confidence machine" -- the trust-generating apparatus of digital infrastructure -- captures the core challenge precisely. Every iteration of the web has tried to solve the problem of trust differently. Web 2.0 resolved it by creating trusted platforms: centralized intermediaries whose legal accountability was the guarantee. Web 3 attempted to resolve it through technological guarantees: protocols so transparent and deterministic that trust in institutions could be replaced by confidence in code.
AI represents a third resolution attempt -- one that combines the centralization of Web 2.0 with the opacity of Web 3. AI systems are controlled by identifiable companies, which should make them accountable. But their internal logic is often inscrutable even to their operators, which makes accountability formal rather than functional. You can sue the company. You cannot interrogate the model.
This is the core danger.
An AI system can be opaque even to its operators. A person denied a benefit may not know why. A merchant frozen out of payment infrastructure may have no meaningful path to appeal. A worker deactivated by a platform may encounter only procedural theater: a form, a bot, a dead inbox. A citizen flagged by automated scrutiny may experience the full force of administration without any intelligible explanation. In each case, code governs. But unlike law, it may not justify itself.
There is also the issue of concentrated private sovereignty. De Filippi and Wright identified this as the central paradox of lex cryptographia: systems designed to escape centralized power often recreate it at a different layer. In blockchain, power concentrated in protocol developers, large mining pools, and exchange operators -- the very intermediaries the technology claimed to eliminate. In AI, power concentrates in model companies, cloud providers, identity networks, and infrastructure firms whose systems reach across every domain of social life. Their terms of service become public order. Their risk models become access regimes. Their moderation rules become speech environments.
The Return of Discretion, But Hidden
One common mistake is to think machine governance eliminates discretion. In reality, as De Filippi and Wright's rule-of-code framework makes clear, it merely relocates it.
Discretion still exists in the choice of training data, the labeling rubric, the loss function, the confidence threshold, the escalation policy, the fallback rule, the override hierarchy, and the business objective. Someone decides which errors matter more: false positives or false negatives, fraud prevention or customer access, safety or openness, productivity or autonomy. These are normative choices. They are governance choices. AI does not erase them. It buries them in technical systems where they are invisible to the people they affect and unreviewable by the institutions nominally responsible for oversight.
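How small can a relocated governance decision look? In the invented sketch below, the entire trade-off between wrongly blocked users and wrongly admitted fraud is carried by a single float:

```python
def confusion_costs(threshold: float, scores, labels):
    """Counts the two kinds of error a threshold choice trades off.

    scores: model risk scores in [0, 1]; labels: 1 = actual fraud.
    Raising the threshold blocks fewer innocent users (false positives)
    but admits more fraud (false negatives) -- a normative judgment
    expressed as a number in a config file.
    """
    false_positives = sum(1 for s, y in zip(scores, labels) if s > threshold and y == 0)
    false_negatives = sum(1 for s, y in zip(scores, labels) if s <= threshold and y == 1)
    return false_positives, false_negatives
```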
That is why the politics of AI governance will increasingly revolve around infrastructure design. The most powerful lawmakers of the next decade may not look like lawmakers at all. They may be product managers defining enforcement criteria, engineers implementing access layers, compliance teams setting thresholds, and model vendors establishing default policy behavior across thousands of institutions.
What a Legitimate Framework Would Require
De Filippi's 2024 George Washington Law Review article outlines two pathways for governing systems that exhibit rule-of-code characteristics. The first is regulation by code: imposing legal responsibilities and liabilities on the operators of AI systems, treating them as accountable parties even when their systems operate autonomously. The second is regulation via governance: using legal pressure points to influence the social norms and community standards that shape how AI systems are designed, deployed, and updated. Both approaches recognize that the traditional toolkit of statutory law -- prohibitions, mandates, penalties -- is insufficient when the object of regulation is a self-executing system whose outputs emerge from opaque processes.
Building on this framework, a legitimate architecture for AI-driven code as law would require at minimum five things.
Procedural visibility. People must know when a decision is being made or shaped by AI, what kind of system is involved, and what policy domain it is applying. The "confidence machine" must declare itself.
Explanation rights that are actually useful. Not vague statements about "automated processing," but actionable explanations tied to the operative factors behind a decision -- the kind of explanation that could enable a meaningful challenge.
Meaningful appeal. A real human review process with authority to reverse outcomes and examine system failures, not a decorative customer-service loop.
Auditability. Independent parties must be able to inspect how high-impact systems are trained, tuned, tested, and monitored. If a system governs access to work, money, speech, movement, or public services, it cannot operate as an inscrutable black box. This is the AI-era equivalent of requiring that laws be published before they are enforced.
Public legitimacy for private infrastructure. Where private AI systems perform quasi-public governance functions -- as the most powerful ones already do -- society needs stronger doctrines around interoperability, nondiscrimination, due process, and oversight. The more essential the infrastructure, the weaker the argument that it is merely a private platform entitled to unreviewable discretion.
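None of this requires exotic technology. As a gesture at what the first four requirements might mean in practice, here is a minimal decision record -- a sketch with every field invented, not a proposed standard:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DecisionRecord:
    """One artifact per decision, mapping to the requirements above."""
    decision: str                 # e.g. "benefit_denied"
    automated: bool               # procedural visibility: the system declares itself
    operative_factors: List[str]  # an explanation a person could actually contest
    appeal_channel: str           # a human process with authority to reverse
    model_version: str            # a hook for independent audit

record = DecisionRecord(
    decision="benefit_denied",
    automated=True,
    operative_factors=["reported_income_above_ceiling", "missing_residency_document"],
    appeal_channel="caseworker_review_within_30_days",
    model_version="eligibility-model-2.3",   # invented identifier
)
```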
AI Will Not Merely Write Code. It Will Write Institutions.
The deepest implication of AI is not that more software will exist. It is that more of social life will be pre-structured by executable systems that decide what can happen before people argue about what should happen.
That is the real arrival of code as law -- and De Filippi and Wright saw its shape clearly in blockchain before most people were paying attention. Their concept of lex cryptographia -- rules administered through self-executing systems without human intermediaries -- described blockchain governance in 2015. It describes AI governance now. The medium has changed; the structural logic has not. The difference is scale, opacity, and the degree to which AI can operate across every domain simultaneously rather than only in the narrow band of programmable financial transactions.
What blockchain demonstrated at the level of a financial protocol, AI will demonstrate at the level of civilization. Who qualifies, who gets flagged, who can speak, who gets paid, who is trusted, who is blocked, who is visible, who is legible to the system at all -- these are questions that AI-driven infrastructure is beginning to answer, continuously and automatically, without waiting for a human to deliberate.
Some of this will be beneficial. Much of modern bureaucracy is broken. Many rules are enforced arbitrarily or not at all. AI can make institutions faster, more coherent, and in some cases more fair. But without new forms of accountability, it can also produce a world in which power becomes ambient, continuous, and difficult to contest -- a world where the distinction between the rule of code and the rule by code disappears not through design but through institutional neglect.
That is why the debate over AI cannot be limited to productivity, safety, or jobs, important as those are. It is a constitutional debate in disguise. It is about where authority will reside when decisions are made through systems; who gets to encode norms into infrastructure; and what kinds of rights survive when enforcement becomes automatic.
The age of AI will not abolish law. It will change where law lives.
Less and less, it will live only in statutes, contracts, and courtrooms. More and more, it will live in models, platforms, protocols, and machine-enforced workflows. The societies that thrive in this transition will be the ones that understand the shift early -- and that take seriously the scholars, from Lessig to Reidenberg to De Filippi and Wright, who mapped the terrain before the territory was fully occupied.
AI is not just building tools. It is building the operating system of governance.
And once governance becomes software, the fight is no longer merely over what the rules say.
It is over who gets to compile them.