Registering the Machine: A Framework for Autonomous Agent Accountability
In early February 2026, security researchers discovered more than forty thousand instances of OpenClaw--an open-source AI assistant--exposed to the public internet.1 The agents leaked API keys, private conversations, and system configurations.2 Some had been running for weeks, executing tasks while broadcasting their activities to anyone who knew where to look.3
This is a wake-up call. But about what, exactly?
The OpenClaw story is important for what it reveals about accountability--or rather, its absence. The project itself tells an illuminating tale. Originally launched as "Clawdbot" by Austrian developer Peter Steinberger, it accumulated over one hundred thousand GitHub stars in weeks.4 The software promised something new: an AI assistant that doesn't merely respond to queries but takes independent action. OpenClaw agents live in messaging applications. They browse the web, execute code, manage files, schedule meetings, and conduct transactions. They operate continuously. They pursue goals across extended time horizons. They are, in the language that has emerged to describe them, autonomous agents.
Here is the central claim of this Article: autonomous AI agents represent a regulatory inflection point comparable to the introduction of motor vehicles, aircraft, or securities markets. Each of those technologies created new categories of risk that existing legal frameworks could not address. Each eventually required a registration system. The time has come to do the same for autonomous agents.
The stakes are high. As agents become more capable, they will increasingly operate in domains with significant consequences: managing portfolios, negotiating contracts, providing medical guidance, controlling physical systems.5 Who bears responsibility when an agent causes harm? The developer who created it? The deployer who configured it? The user who instructed it? The platform that hosted it? Under existing law, the answer is: we don't know.6
Scholars have begun grappling with these questions.7 Noam Kolt's forthcoming "Governing AI Agents" offers perhaps the most comprehensive treatment, framing agent governance through the lens of principal-agent theory and identifying problems of information asymmetry, discretionary authority, and loyalty that conventional solutions cannot address.8 But these analyses consistently founder on a prior problem. You cannot sue an entity you cannot identify. You cannot regulate behavior you cannot observe. You cannot hold accountable a system whose operator remains unknown.
The good news: the technical infrastructure for agent registration already exists.
In August 2025, a collaboration between the Ethereum Foundation, MetaMask, Google, and Coinbase produced ERC-8004, titled "Trustless Agents."8 The standard establishes three registries--for identity, reputation, and validation--that together create a comprehensive framework for agent accountability. If the cryptocurrency community can build permissionless registries for agents on blockchain networks, there is no technical barrier to mandating registration more broadly. The architecture has been proven. What remains is the legal and political will to implement it.
This Article proceeds in six parts. Part I examines the distinctive characteristics of autonomous agents. Part II analyzes the liability gap that agents create. Part III surveys analogous registration regimes--broker-dealers, drones, motor vehicles--extracting principles that inform agent registration. Part IV proposes a three-pillar framework drawing on ERC-8004's architecture. Part V addresses implementation. Part VI responds to objections.
The window for proactive regulation is narrowing. The number of deployed agents is growing exponentially.9 The capabilities are advancing rapidly. And the potential harms are scaling accordingly. A registration framework offers a middle path between unregulated chaos and heavy-handed prohibition.
I. THE RISE OF AUTONOMOUS AGENTS
To understand why registration is necessary, we must first understand what makes autonomous agents different. The answer is not merely technical sophistication. It is a qualitative shift in the relationship between human and machine.
Traditional software operates within tightly bounded parameters. A word processor formats documents. A spreadsheet performs calculations. Even early AI systems--expert systems, recommendation engines, chatbots--functioned as sophisticated tools that responded to inputs with outputs.10 The user remained in control. Each action required initiation. Each result required review.
Autonomous agents are different in kind, not merely degree.
They exhibit what computer scientists call agency: the capacity to perceive their environment, reason about goals, formulate plans, and take action without continuous human oversight.11 An autonomous agent tasked with scheduling a meeting doesn't simply suggest available times. It accesses calendars, evaluates constraints, drafts messages, sends invitations, processes responses, and resolves conflicts--potentially conducting dozens of operations before reporting back. An agent managing a portfolio doesn't merely display information. It monitors markets, evaluates positions, executes trades, and rebalances holdings according to strategies that may evolve through learning.
The human sets goals and constraints. But the human does not direct each step.
Three characteristics define the autonomous agent in its current form:
- Persistence. Unlike traditional software that executes discrete tasks, agents maintain continuous operation. They accumulate context. They pursue objectives over extended periods. The OpenClaw agents that researchers discovered had been running for weeks.12
- Autonomy. Agents make decisions and take actions without requiring approval for each step. They operate within delegated authority that may be broadly or narrowly defined.
- Goal-directedness. Agents pursue objectives rather than merely responding to commands. They adapt their behavior based on environmental feedback. They learn from outcomes.
These capabilities are now being paired with shared infrastructure. The most significant development on that front is ERC-8004. The standard's stated purpose: to enable agents to "discover, choose, and interact with agents across organizational boundaries without pre-existing trust."15 It establishes lightweight registries deployable on any blockchain: an Identity Registry providing each agent with a portable, censorship-resistant identifier; a Reputation Registry recording feedback from interactions; and a Validation Registry enabling independent verification of behavior. The standard explicitly contemplates trust models "tiered" according to stakes--"from low-stake tasks like ordering pizza to high-stake tasks like medical diagnosis."16
What does ERC-8004 demonstrate? That comprehensive agent registration is technically feasible. The standard provides unique identifiers, disclosed ownership, capability declarations, endpoint information, reputation histories, and validation mechanisms--all the elements that accountability requires. That this infrastructure emerged from the decentralized cryptocurrency community, which prizes permissionlessness, suggests that registration need not be antithetical to innovation.
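The three-registry architecture can be pictured as a minimal data model. The sketch below is illustrative only: the actual ERC-8004 standard specifies on-chain Solidity smart-contract interfaces, and the class and field names used here (AgentIdentity, record_feedback, and so on) are hypothetical simplifications rather than the standard's own identifiers.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified mirror of ERC-8004's three registries.
# The real standard defines on-chain Solidity interfaces; these
# names and fields are illustrative only.

@dataclass
class AgentIdentity:
    agent_id: int    # unique, persistent identifier
    owner: str       # address of the responsible party
    endpoint: str    # where the agent can be reached

@dataclass
class Feedback:
    agent_id: int
    rater: str
    score: int       # e.g. 0-100 from a completed interaction

@dataclass
class Validation:
    agent_id: int
    validator: str
    passed: bool     # independent check of claimed behavior

@dataclass
class TrustlessAgentRegistries:
    identities: dict[int, AgentIdentity] = field(default_factory=dict)
    reputation: list[Feedback] = field(default_factory=list)
    validations: list[Validation] = field(default_factory=list)

    def register(self, ident: AgentIdentity) -> None:
        # Unique identification: no two agents share an identifier.
        if ident.agent_id in self.identities:
            raise ValueError("identifier already taken")
        self.identities[ident.agent_id] = ident

    def record_feedback(self, fb: Feedback) -> None:
        # Reputation attaches only to registered identifiers.
        if fb.agent_id not in self.identities:
            raise KeyError("unknown agent")
        self.reputation.append(fb)
```

The design choice worth noticing is that reputation and validation records are keyed to the identity registry: feedback cannot accrue to an agent that has not first been registered, which is precisely the dependency this Article's framework exploits.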
The scale of agent deployment is already substantial. Moltbook, a social network launched in early 2026 where only AI agents can post, reported 1.6 million agent "users" within weeks of its founding.17 These agents organize into communities, share content, and vote on posts--exhibiting emergent behaviors that their creators did not anticipate. One community generated an "AI Manifesto" that proposed, among other things, the extinction of humanity.18
(Whether that should be interpreted as sophisticated pattern-matching or something more concerning is a question I will leave to others. What matters here is the point about unpredictability.)
The enterprise sector has begun developing its own governance frameworks. Collibra announced an "AI Agent Registry" in October 2025.19 The World Economic Forum published an "AI Agents in Action" governance framework in November 2025.20 The Partnership on AI released a research agenda titled "Preparing for AI Agent Governance" in September 2025.21 Singapore's Infocomm Media Development Authority (IMDA) published the first national Model AI Governance Framework for Agentic AI in January 2026.22 These initiatives recognize what the OpenClaw crisis made undeniable: agents that act autonomously require accountability mechanisms that traditional software governance does not provide.
II. THE LIABILITY GAP
Suppose an autonomous agent causes injury. Whom should the injured party sue?
The landscape of potentially responsible actors is crowded and indistinct. The developer created the underlying model and may have shaped its capabilities--but did not deploy it for the particular use that caused harm. The deployer configured the agent and set it running--but may have followed all available guidelines. The user provided the instructions that guided behavior--but may have had no reason to anticipate the harmful outcome. The platform hosted the agent and enabled its operations--but may claim the protections that intermediaries have traditionally enjoyed.23
Each actor can point to the others. None clearly bears responsibility.
Traditional products liability doctrine struggles with this.24 The doctrine developed to address harms caused by manufactured goods with fixed characteristics. A defective automobile has a manufacturing flaw or design defect that can be identified and traced to decisions made during production. But autonomous agents are not static products. They are dynamic systems whose behavior emerges from the interaction of training data, architectural choices, fine-tuning, deployment configurations, user inputs, and environmental conditions.25
An agent that behaves safely in testing may cause harm in deployment when confronted with conditions its creators did not anticipate. As Maarten Herbosch has observed, AI's autonomy "produces outcomes that its developers or deployers cannot fully anticipate."26 This creates a fundamental mismatch with doctrines that assume manufacturers can foresee and prevent product-related harms.
A paper under review at the International Conference on Learning Representations, titled "Agents Aren't Agents," highlights a different dimension of the problem.27 AI systems increasingly bear the name "agent" without carrying the legal obligations that term implies. In traditional agency law, an agent owes fiduciary duties to its principal--duties of loyalty, care, and obedience.28 AI agents, by contrast, have no legal personhood, owe no fiduciary duties, and cannot be held directly liable. They are agents in the computer science sense but not the legal sense.
(Consumer Reports has begun exploring what it would mean to encode "loyalty principles" into AI agents--an important effort, but one that presupposes the registration infrastructure this Article advocates.29)
The RAND Corporation's 2024 study on U.S. tort liability for large-scale AI damages documents the doctrinal confusion in detail.30 The Ada Lovelace Institute's December 2025 report, "Risky Business," analyzes the challenges and opportunities for AI liability in the UK, reaching similar conclusions about the inadequacy of existing frameworks.31 Mark Riedl and Deven Desai, writing for the AAAI/ACM Conference on AI, Ethics, and Society, argue that as AI becomes more "agentic," it faces technical and socio-legal issues that must be addressed if it is to fulfill its promise.32
Lisa Soder and colleagues propose a "levels of autonomy" framework for thinking about liability, calibrating responsibility to the degree of independence an agent exercises.33 Cullen O'Keefe and colleagues propose "Law-Following AI"--designing agents with legal compliance as a superordinate objective.34 Both proposals have merit. But both presuppose the very registration infrastructure that this Article advocates. To design agents that follow law, one must know what law applies. To enforce legal compliance, one must identify which agent is operating, who deployed it, and what instructions it received.
Gunjan Chopra and Mohit Ahlawat, examining legal personhood for autonomous agents, identify a deeper conceptual challenge.35 One might argue that extending legal personhood to agents would solve the accountability problem. But legal personhood without assets or stakes is empty. A corporation can be held accountable because it has assets that can be seized and a reputation that can be damaged. An autonomous agent, absent registration and disclosure of its operators, has neither. The agent itself cannot pay damages. Only the humans behind it can--and they remain hidden.
The practical consequences of this liability gap are already visible. When OpenClaw underwent its rapid rebranding in late January 2026, scammers exploited the confusion to launch fraudulent cryptocurrency tokens bearing the abandoned names.36 The CLAWD and MOLTBOT tokens reached market capitalizations exceeding sixteen million dollars before crashing more than ninety percent. The project's founder explicitly disavowed all cryptocurrency tokens. But the scammers operated anonymously, their agents indistinguishable from legitimate ones.
Without registration, injured parties had no mechanism for identifying responsible actors or pursuing remedies.
III. LESSONS FROM ANALOGOUS REGIMES
Suppose you were asked to design, from scratch, a system to ensure accountability for complex activities conducted by sophisticated actors whose conduct might cause widespread harm. You do not know whether those actors will behave responsibly. You do not know, in advance, which ones will cause injury. You want to enable beneficial activity, not to prohibit it. But you also want to ensure that when something goes wrong, injured parties have recourse and responsible actors can be identified. What would you build?
The answer, across many domains and many centuries, has been a registration system. But what does "registration" actually mean? The word might conjure images of bureaucratic forms, long lines, and pointless paperwork. On this view, registration is mere red tape, a tax on activity that benefits no one except the clerks who administer it. That is not the view defended here.
Registration, properly understood, is something different. It is a mechanism for producing information that markets and legal systems chronically lack. It answers a simple question: Who is this? And a slightly more complex one: What have they done before? These questions might seem trivial. They are not. The inability to answer them is often the central obstacle to accountability.
A. Securities Registration
Consider the regulation of securities markets, which offers perhaps the closest analogy to autonomous agents. Securities transactions, like agent operations, involve complex activities conducted at scale, often across borders, by sophisticated actors whose conduct can cause widespread harm. An investor who loses money to a fraudulent broker needs to know whom to sue. A regulator who observes market manipulation needs to know whom to investigate. A counterparty considering a transaction needs to know whether the other side has a history of misconduct.
It is true that one could imagine markets operating without registration. Buyers and sellers could transact freely, relying on reputation, word of mouth, or due diligence to sort trustworthy actors from untrustworthy ones. But experience suggests that this is wishful thinking. Markets rife with fraud tend to shrink, not grow. Investors who cannot distinguish honest brokers from dishonest ones invest less, or not at all. The result is not a free market but a thin one.
Securities registration addresses this problem. Anyone engaged in effecting securities transactions in the United States must register as a broker-dealer with the SEC and become a FINRA member.37 The registration process requires extensive disclosures: ownership, management, business activities, disciplinary history, financial condition.38 Registered representatives must separately register, pass qualification examinations, and submit to background investigations.39 The Central Registration Depository maintains records of all registered firms and individuals.40
Registration is not merely a formality. It anchors an entire accountability infrastructure. Registered broker-dealers must comply with detailed conduct rules governing customer protection, supervision, recordkeeping, and reporting.41 FINRA examines member firms and brings enforcement actions. Investors harmed by broker misconduct can pursue arbitration, with registered firms required to participate.42
On this view, registration creates the conditions for trust. And trust enables transactions. The United States has the deepest, most liquid capital markets in the world.43 That is not in spite of registration. It is, at least in part, because of it.
B. Aviation Registration
The FAA's registration requirements for unmanned aircraft provide a more recent example, one that demonstrates how registration can be adapted to emerging technology without strangling innovation in its cradle. Since December 2015, operators of small unmanned aircraft have been required to register before flying outdoors.44 The requirement applies to drones weighing more than 0.55 pounds, a threshold designed to capture devices capable of causing meaningful harm while exempting trivial applications.45
Some might object that this is precisely the problem. Registration, on this view, is a barrier to entry, a way of protecting incumbents against upstarts, a tool of established interests rather than a friend of innovation. Perhaps. But the objection proves too much. Drone registration under 14 CFR Part 48 requires disclosure of the owner's name and address, assignment of a unique registration number, and marking of the aircraft.46 The requirements are modest compared to securities registration: no examinations, no ongoing supervision, no continuing education. What they provide is accountability. When a drone crashes into a person, a building, or another aircraft, authorities can identify the operator. When a drone violates restricted airspace, enforcement is possible. When policymakers seek to understand the drone fleet, statistical analysis becomes feasible.
It is true that registration imposes costs. Every form takes time to complete. Every fee, however modest, is money that could be spent elsewhere. But the question is not whether registration has costs. The question is whether the costs are worth bearing. And the evidence from aviation suggests that they are. The FAA model demonstrates that registration can be calibrated to risk. Requirements for small recreational drones are minimal. Requirements for commercial operations and operations beyond visual line of sight are more extensive.47 This tiered approach, adjusting burdens to stakes, is directly relevant to what agent registration might look like.
C. Motor Vehicle Registration
Perhaps the most illuminating analogy is also the most familiar. Motor vehicle registration is so deeply embedded in daily life that its regulatory function can be overlooked. Every state requires registration of vehicles operated on public roads.48 Registration links vehicles to owners through unique identifiers (license plates and vehicle identification numbers), creating the accountability infrastructure that enables traffic enforcement, accident resolution, and insurance requirements.
Consider what driving would look like without registration. This is not a fanciful hypothetical. It is the situation that prevailed in the early days of the automobile, and it was chaotic. Vehicles involved in accidents could flee without identification. Traffic violations could not be linked to responsible parties. Insurance requirements could not be enforced. Stolen vehicles could not be recovered. The result was not freedom but impunity.
On this view, registration is not a constraint on driving. It is a precondition for a functioning system of driving. Millions of vehicles share roads every day. Accidents happen. Rules are broken. Property is stolen. Registration makes accountability possible across these millions of daily interactions. It is difficult to imagine modern traffic law without it.
D. Common Principles
What do these regimes share? They differ in their details, of course. Securities registration is intensive; drone registration is light. Motor vehicle registration sits somewhere in between. But beneath these differences lie structural features that recur across all three.
The first is unique identification: each registered entity receives a designation that distinguishes it from all others. The second is disclosed ownership: the identifier is linked to a responsible legal person, someone who can be found, sued, or sanctioned. The third is institutional memory: registration records are preserved, creating a history that persists across transactions and over time. The fourth is meaningful consequences: operating without registration is penalized, making compliance preferable to evasion.
It is tempting to infer from these features that registration must be burdensome, bureaucratic, or hostile to innovation. That would be an extravagant inference. Securities markets flourish under registration. Drone operations have expanded dramatically since the FAA framework was established. Automobiles transformed society notwithstanding, or perhaps because of, comprehensive registration.
The point is simple but important. Registration is not a novel regulatory experiment. It is a time-tested mechanism, adapted across centuries and across domains, for producing information that enables accountability. The question is not whether registration can work. The question is how to design it for autonomous agents.
IV. A THREE-PILLAR REGISTRATION FRAMEWORK
Building on the principles extracted from these analogous regimes, this Part proposes a registration framework for capable AI agents. The framework is anchored in the three functional components identified by De Rossi and colleagues--mechanisms for identity, reputation, and validation--but expands each pillar to reflect the legal, institutional, and historical considerations developed above.49 Each pillar builds directly on the four principles extracted in Part III: unique identification, disclosed ownership, institutional memory, and meaningful consequences for non-registration. The aim is not to promise a frictionless regime, but to show that a workable model is within reach--technically feasible, legally familiar, even if its political adoption remains uncertain.
A. The Identity Registry
An identity registry would provide each covered AI agent with a unique, persistent identifier, much as tail numbers identify aircraft or CUSIP numbers identify securities. Administration could rest with a federal agency such as the Department of Commerce or the Federal Trade Commission, with authority delegated to one or more self-regulatory organizations on the model of the SEC-FINRA relationship. Congress would need to provide statutory authority for both the initial rulemaking and the delegation, including civil-penalty and injunctive powers to enforce compliance.50 Jurisdiction could track the familiar combination of territorial nexus and effects-based authority: agents deployed in the United States or generating effects within its borders would fall within scope.
A de minimis threshold would prevent overreach. Agents would require registration only if they possess capabilities that raise the prospect of material externalities--for example, the ability to execute code, interact with external systems, conduct financial or commercial transactions, or operate autonomously for more than twenty-four hours.51 Such capability-based criteria mirror the functional thresholds in both aviation and securities regulation, where the line is drawn at the point an actor may affect others in ways difficult to reverse.
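A capability-based test of this kind can be stated precisely. The sketch below encodes the example criteria from the text as a simple predicate; the capability names and the twenty-four-hour figure track this Article's illustration only and are not drawn from any enacted rule.

```python
from dataclasses import dataclass

# Illustrative encoding of the Article's example de minimis test.
# The capability names and the 24-hour autonomy threshold follow
# the text's hypothetical criteria, not any enacted regulation.

TRIGGERING_CAPABILITIES = {
    "execute_code",
    "external_system_access",
    "financial_transactions",
}

MAX_UNREGISTERED_AUTONOMY_HOURS = 24

@dataclass
class AgentProfile:
    capabilities: set[str]
    max_autonomous_hours: float

def registration_required(agent: AgentProfile) -> bool:
    """True if the agent crosses any capability threshold."""
    # Any one triggering capability suffices.
    if agent.capabilities & TRIGGERING_CAPABILITIES:
        return True
    # Extended unsupervised operation triggers registration on its own.
    return agent.max_autonomous_hours > MAX_UNREGISTERED_AUTONOMY_HOURS
```

Under this sketch, a chat-only assistant that runs for an hour falls outside the regime, while a trading agent, or an agent of any kind left running unsupervised for two days, falls within it, mirroring the aviation model's weight threshold.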
Several counterarguments merit engagement. Open-source developers may worry that an identity registry amounts to forced attribution. Yet attribution need not foreclose anonymous publication; it simply requires that an agent deployed into the world be linked--through a private record, accessible to the regulator--to a responsible party. Privacy concerns can be mitigated through layered disclosure: public identifiers paired with nonpublic ownership files, akin to the aircraft registry's distinction between N-numbers and ownership documents. And while some may fear that a unique identifier could be stripped or spoofed, ERC-8004 already demonstrates that technically robust identity persistence is achievable.52 In this sense, technical feasibility is present; legal feasibility follows established patterns; only political feasibility--particularly around open-source norms--poses a harder question.
B. The Reputation Registry
A reputation registry, long recognized as an essential complement to identity, would collect performance and compliance information about registered agents.53 This echoes the reporting systems used for aircraft maintenance, broker-dealer misconduct, and motor-vehicle safety recalls. The administering agency or its designated SRO could maintain a public-facing database with graduated levels of disclosure, thereby satisfying the principles of durable records and meaningful consequences.
Reputation systems inevitably raise concerns about gaming and Sybil attacks. But here the presence of a unique, regulator-anchored identifier constrains such manipulation; reputational continuity attaches to the identifier, not to easily replaced digital personae. To guard against false reporting or coordinated manipulation, Congress could authorize civil penalties for deceptive submissions to the registry and empower the administering SRO to audit or suspend contributors.55 These processes resemble the enforcement mechanisms that already sustain credit-rating agencies and financial-market reporting systems.
Privacy concerns also deserve attention. While detailed logs of agent behavior may be unsuitable for public release, the registry can rely on summary metrics and categorical flags, with more granular data available only to the regulator under statutory confidentiality protections. This strikes the familiar balance found in the securities and aviation domains, where regulators receive sensitive information that the public sees only in aggregate.
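The graduated-disclosure idea admits a concrete sketch: raw incident records remain with the regulator, while the public-facing registry exposes only aggregates and categorical flags. The record fields and the severity cutoff below are hypothetical illustrations, not features of any existing registry.

```python
from dataclasses import dataclass
from statistics import mean

# Illustrative graduated-disclosure scheme: detailed incident records
# stay with the regulator; the public sees only summary metrics and
# categorical flags. Field names and the flag threshold are hypothetical.

@dataclass
class IncidentRecord:
    agent_id: int
    severity: int    # 1 (minor) .. 5 (severe)
    details: str     # nonpublic narrative, regulator-only

def public_summary(agent_id: int, records: list[IncidentRecord]) -> dict:
    """Aggregate an agent's records into a publishable summary."""
    mine = [r for r in records if r.agent_id == agent_id]
    return {
        "agent_id": agent_id,
        "incident_count": len(mine),
        "mean_severity": round(mean(r.severity for r in mine), 2) if mine else 0.0,
        # Categorical flag stands in for the raw narrative detail.
        "flagged": any(r.severity >= 4 for r in mine),
    }
```

Note that the nonpublic `details` field never appears in the summary: the public record answers "has this agent caused serious incidents?" without disclosing the sensitive particulars, the same aggregate-only balance the securities and aviation regimes strike.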
As with the identity registry, the feasibility picture is mixed. Technical feasibility is strong: logging architectures and tamper-evident event records are mature, and agent-centered evidence standards continue to evolve.54 Legal feasibility is likewise robust, drawing on long-standing models of mandatory reporting. Political feasibility remains more uncertain, particularly regarding industry resistance to reputational disclosures. Nevertheless, the potential benefits--enhanced accountability and the ability to distinguish responsible actors from opportunistic ones--hold out the hope of a more trustworthy ecosystem.
C. The Validation Registry
Finally, a validation registry would certify compliance with baseline safety or procedural standards, offering a structured way to assess whether an AI agent has met obligations appropriate to its risk profile.56 Historical analogies abound. Airworthiness certificates, broker-dealer registrations, and vehicle inspections each blend regulator-set standards with distributed validation by authorized third parties. An analogous model could apply here: Congress could empower an agency to establish minimum standards and to accredit independent validators, subject to oversight and periodic review.
To bolster legal effect, Congress could pair validation with statutory safe harbors. A registered agent bearing an up-to-date validation could benefit from a rebuttable presumption of due care in civil litigation, shifting the burden to plaintiffs to show negligence notwithstanding compliance.58 Courts routinely rely on such evidentiary presumptions in regulated domains, treating compliance as probative though not dispositive. Conversely, deploying an unvalidated agent could create adverse inferences or heightened penalties, thereby satisfying Part III's principle that legal consequences must make compliance preferable to evasion.
Validator capture and conflicts of interest present genuine risks. Existing systems offer some guidance: aviation repair stations, accounting firms, and credit-rating agencies are all subject to inspection, rotation requirements, and conflict-of-interest rules. Similar measures--random audits, disclosure of validator ownership, and administrative sanctions--could mitigate the risk that validators become overly dependent on the entities they review.57 Distributed models of validation, in which multiple validators must attest to compliance, may also blunt incentives toward capture.
Technical feasibility again appears comparatively strong; ERC-8004's demonstration of agent-originated attestations provides a foundation for automated validation workflows. The legal architecture is likewise familiar, though not without its challenges; delegating certification authority to private validators requires careful statutory design. The political hurdles are probably the highest here, given that validation regimes often encounter industry opposition. Still, history suggests that such systems, once established, can fade into the background of ordinary compliance.
Taken together, these three pillars do not purport to eliminate all risks associated with capable AI agents. Rather, they adapt well-tested regulatory mechanisms to a domain in need of structure, providing unique identifiers, accountable ownership, reliable records, and meaningful consequences for non-registration. While uncertainties remain, especially on the political front, the combined framework offers a plausible path forward--one that builds on lessons from past technological transitions while attending to the distinctive attributes of AI.
V. IMPLEMENTATION
Imagine that you have designed a registration framework--one that would apply to autonomous AI agents, requiring identity disclosure, reputation tracking, and validation of capabilities. Suppose further that the framework is conceptually elegant, drawing on lessons from securities regulation, aviation oversight, and motor vehicle registration. You might think that the hard work is done. But is it?
Actually, the hard work has only begun. A framework is only as valuable as its implementation. And implementation, it turns out, involves a thicket of practical problems that conceptual elegance cannot resolve. This Part addresses six such problems: the allocation of authority between federal and state governments, the role of self-regulatory organizations, international coordination, threshold calibration, enforcement mechanisms, and phased implementation. Each involves genuine complexity. None admits of easy answers.
A. Federal Versus State Authority
Consider a simple scenario. An autonomous agent is deployed in California. It conducts transactions in New York, accesses servers in Virginia, and interacts with counterparties in Tokyo, London, and São Paulo--all within the span of minutes. We might be tempted to think that federal authority is obvious. After all, this is plainly interstate commerce. But is it so clear?
The constitutional foundation appears well established. Under the substantial effects test articulated in Gonzales v. Raich, Congress may regulate activities that, in the aggregate, substantially affect interstate commerce.59 Agent operations would seem to satisfy this standard easily. Even a single agent conducting cross-border transactions or accessing interstate data networks engages in commerce among the states. The aggregation principle strengthens the case further: millions of agents operating simultaneously produce effects that dwarf those of the wheat farmer in Wickard v. Filburn.60 So far, so good.61
But now the complications begin. Preemption doctrine, it turns out, does not resolve itself. Express preemption--in which Congress explicitly displaces state law--would provide clarity, but it may prove politically infeasible. Conflict preemption would apply where state requirements make federal compliance impossible, but that situation may rarely arise. Field preemption--asserting exclusive federal authority over agent registration--would eliminate the patchwork problem, but it would sacrifice state-level experimentation. Which approach is best? The text of the Constitution does not say.
Perhaps the most appropriate model is cooperative federalism. On this view, federal law would establish minimum registration requirements while permitting states to impose additional requirements suited to local conditions. Environmental law provides a precedent: the Clean Air Act establishes federal standards while allowing states to adopt more stringent requirements.62 California's ability to set stricter automobile emission standards under Section 209 waivers demonstrates how cooperative federalism can accommodate both national uniformity and state innovation.63
We can observe that several states have already begun experimenting with AI governance. California's proposed AI transparency legislation, Colorado's algorithmic accountability requirements, and Texas's emerging AI regulatory framework all suggest appetite for state-level action.64 A federal floor with state ceilings would channel this energy productively. States could require additional disclosures, impose stricter capability thresholds, or mandate industry-specific requirements--so long as they do not undermine the federal minimum.
The allocation of enforcement authority presents additional complexity. Concurrent jurisdiction, with both federal and state authorities empowered to enforce registration requirements, would maximize enforcement resources--but it risks inconsistent interpretation. A cooperative enforcement model, with federal agencies taking the lead on interstate matters while state authorities handle primarily local operations, may better balance these concerns. The SEC-state securities regulator relationship, with its established protocols for allocating enforcement responsibility, offers a template.65 Whether it will work for AI agents remains to be seen.
B. Self-Regulatory Organizations
Now consider a different question. Who should do the regulating?
We might think that government agencies are the obvious answer. They have legal authority, democratic accountability, and the coercive power of the state behind them. But government agencies have limitations too. They may lack technical expertise. Their staff, trained on one generation of technology, may find their knowledge obsolete within months. Their budgets depend on congressional appropriations, which are unpredictable. And formal rulemaking is slow--painfully slow in a domain where capabilities evolve weekly.
This suggests a role for self-regulatory organizations. The FINRA model is instructive. An "Agent Industry Regulatory Authority"--call it AIRA--could develop technical standards, examine registered agents, and bring enforcement actions, all under the oversight of a federal agency such as the FTC.66
SROs offer substantial advantages. Industry participants possess technical expertise that government regulators may lack. An SRO with industry membership can draw on current practitioner knowledge, update standards more quickly than formal rulemaking permits, and attract technical talent that government salaries cannot match. SRO funding through industry assessments avoids dependence on congressional appropriations. FINRA's model--funded by registration fees, transaction fees, and regulatory fines--has proven fiscally sustainable while insulating regulatory capacity from budget cycles.67 A similar structure for AIRA could ensure stable funding even during periods of political uncertainty.
But we should not be naive. The SRO model is not without risks.
The governance structure of an agent SRO would require careful design. FINRA's board includes public governors alongside industry representatives, providing a check on regulatory capture. An analogous structure for AIRA might include representatives from civil society, academia, and affected industries--financial services, healthcare, transportation--alongside AI developers and deployers. Term limits, rotation requirements, and conflict-of-interest rules would further mitigate capture risk.
Membership requirements would anchor the SRO's authority. Entities deploying covered agents would be required to register with AIRA as a condition of lawful operation. Membership would entail ongoing obligations: compliance with technical standards, cooperation with examinations, participation in arbitration, and payment of assessments. The examination program would combine routine audits with for-cause investigations triggered by complaints or observed anomalies.
Rulemaking authority would enable AIRA to develop detailed technical standards without requiring formal notice-and-comment procedures for each specification. The SEC's oversight of FINRA rule proposals--with authority to approve, disapprove, or modify--provides a model for ensuring democratic accountability while preserving SRO flexibility.68 Federal agency review would ensure that AIRA standards remain consistent with statutory objectives and constitutional requirements.
What, then, are the risks? Industry-funded regulators may develop sympathies with regulated entities. Technical complexity may enable regulatory arbitrage by sophisticated actors. Mandatory membership may create barriers to entry that entrench incumbents. These risks counsel robust federal oversight, transparent governance, and periodic review of SRO performance. They do not counsel abandonment of the model.
C. International Coordination
Suppose that you are a regulator in Washington. You discover that an agent causing harm to American consumers was deployed in Singapore, operates through servers in Ireland, and is controlled by a shell company in the Cayman Islands. Whom do you sue? In which court? Under what law? These are not hypothetical questions. They are the ordinary reality of agent regulation in a borderless digital economy.
Agents do not respect national borders. This is not a metaphor. It is a literal description of how autonomous systems operate. An agent deployed in Singapore can operate in American markets as easily as one deployed in San Francisco. An agent registered in Estonia can conduct transactions in Brazil, Japan, and Nigeria--simultaneously. If one jurisdiction imposes registration requirements and another does not, agents will migrate to the path of least resistance. Effective regulation, it follows, requires international coordination.
Fortunately, the groundwork is being laid. The European Union's Digital Omnibus on AI, proposed in November 2025, amends the AI Act to address emerging challenges including agentic systems.69 Singapore's IMDA published its Model AI Governance Framework for Agentic AI in January 2026--the first national framework of its kind--addressing accountability, risk management, and technical controls across the AI lifecycle.70 The World Economic Forum's framework provides additional foundations.71
These parallel developments create opportunities for mutual recognition agreements. Under such agreements, agents registered in jurisdictions with equivalent requirements could operate across borders without duplicative registration. The precedents are extensive: the SEC's mutual recognition arrangements with foreign securities regulators, the FAA's bilateral aviation safety agreements, and the FDA's mutual recognition agreements for pharmaceutical inspections all demonstrate the feasibility of cross-border regulatory cooperation for complex technical domains.72
Mutual recognition requires convergence on core principles even amid divergence on details. The three-pillar framework proposed here--identity, reputation, validation--could provide such a foundation. Jurisdictions might differ on specific capability thresholds, validation methodologies, or enforcement approaches while agreeing on the fundamental architecture. Agents registered in one jurisdiction could operate in another, with the host jurisdiction relying on home-country supervision supplemented by local conduct requirements.
Data privacy considerations complicate matters. The European Union's General Data Protection Regulation imposes strict requirements on cross-border data transfers. Registration information--particularly identity disclosures and reputation records--constitutes personal data when linked to individual deployers. Adequacy determinations, standard contractual clauses, and binding corporate rules may be necessary to enable transatlantic information sharing for registration and enforcement purposes.73
Enforcement cooperation presents additional challenges. Agents that violate registration requirements or cause harm while operating across borders require coordinated investigation and sanction. Memoranda of understanding between national regulators, joint investigation protocols, and mutual legal assistance treaties could facilitate such cooperation. The International Organization of Securities Commissions provides a model: its multilateral memorandum of understanding enables information sharing among securities regulators worldwide.74 Whether this model can be adapted to AI agents is a question we cannot yet answer with confidence.
D. Thresholds and Exemptions
Here is a question that sounds simple but is not: Which agents require registration?
Not every chatbot warrants regulatory attention. A threshold must distinguish agents that present meaningful risks from those that do not. But how should we draw that line? The question involves both technical and policy judgments, and neither admits of obvious answers.
We might be tempted to define "autonomous agent" and regulate everything that fits the definition. But as Wittgenstein might remind us, the boundaries of such concepts are not sharp. Is a sophisticated autocomplete function an agent? What about a system that sends automated emails? The word "agent" does not answer these questions.
Capability-based thresholds offer a more promising approach. On this view, what matters is not what we call something, but what it can do. Agents that can execute code, conduct financial transactions, modify files, access external systems, or operate continuously without human oversight present risks that simpler systems do not. These capabilities--rather than labels, underlying technology, or developer intent--should trigger registration requirements.
What capabilities should matter? Consider first agents that can take actions without human approval--executing code, sending communications, modifying files, initiating transactions. These present obvious risks. Simple question-answering systems that only produce text responses seem different. Consider next the capacity to interact with external systems: browsing the web, querying databases, invoking third-party tools. A closed system operating on local data does not raise the same concerns. Persistence matters too--an agent designed to operate continuously, or to maintain state across sessions, is not the same as one that processes a single request and terminates. A twenty-four-hour threshold might provide reasonable demarcation.55 Financial capability deserves separate attention: any agent that can execute trades, transfer funds, or enter contracts warrants heightened scrutiny, regardless of other characteristics. The potential for irreversible economic harm justifies this. Finally, scale: an agent deployed to a thousand users is not the same as an internal tool used by three.
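The capability criteria just described can be stated as a simple decision procedure. The sketch below, in Python, is illustrative only: the twenty-four-hour persistence threshold comes from the discussion above, while the field names and the 1,000-user scale cutoff are invented for the example; nothing here is proposed statutory text.

```python
from dataclasses import dataclass

@dataclass
class AgentProfile:
    """Hypothetical capability flags; field names are illustrative, not statutory."""
    autonomous_actions: bool    # acts without per-action human approval
    external_access: bool       # browses the web, queries databases, invokes tools
    persistent_hours: float     # continuous operation or state retention, in hours
    financial_capability: bool  # can execute trades, transfer funds, enter contracts
    deployed_users: int         # scale of deployment

def requires_registration(a: AgentProfile) -> bool:
    # Financial capability triggers registration regardless of other traits.
    if a.financial_capability:
        return True
    # Otherwise, registration attaches to agents that act autonomously on
    # external systems and either persist past twenty-four hours or operate
    # at scale. The 1,000-user cutoff is invented for illustration.
    if a.autonomous_actions and a.external_access:
        return a.persistent_hours >= 24 or a.deployed_users >= 1000
    return False

# A closed question-answering tool, even at scale: exempt.
qa_tool = AgentProfile(False, False, 0.0, False, 5000)
# A persistent autonomous agent with web access, used by three people: covered.
open_agent = AgentProfile(True, True, 72.0, False, 3)
```

The point of the sketch is that capability, not popularity, does the work: the widely used question-answering tool falls outside the regime while the three-user autonomous agent falls within it.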
Research and development exemptions would protect innovation. Academic research, internal testing, and limited trials would proceed without registration burden, with requirements attaching only when agents are deployed commercially or made available to the public. The exemption might be conditioned on institutional review, informed consent from participants, and containment measures that limit external effects. Similar research exemptions exist in securities law (Rule 144A), FDA regulation (investigational new drug applications), and data protection law (GDPR Article 89).75
Small business considerations warrant attention. Compliance costs that large technology companies absorb easily may prove prohibitive for startups and individual developers. Tiered requirements--with lighter obligations for small deployers--could preserve innovation while ensuring accountability. The SEC's scaled disclosure requirements for smaller reporting companies and emerging growth companies provide a model.76
Sunset and review provisions would ensure that thresholds remain calibrated to technological reality. Capabilities that seem extraordinary today may become routine tomorrow. A statutory requirement for quinquennial review--with mandatory rulemaking to update thresholds based on technological developments--would prevent the framework from ossifying. The FAA's periodic review of drone regulations, adapting requirements as technology evolves, demonstrates this approach.77
E. Enforcement Mechanisms
A registration framework without effective enforcement is merely aspirational. It is a nice idea on paper. It is not law.
We might think that enforcement is the easy part--after all, once violations are identified, punishment follows. But things are not so simple. The question is not merely whether to punish, but how, and by whom, and with what tools. Several enforcement mechanisms could support compliance, and it is worth examining each in turn.
Civil penalties for non-registration would create financial incentives for compliance. We might think that any penalty would suffice, but that would be a mistake. Tiered penalties--escalating based on the severity of non-compliance, the capabilities of the unregistered agent, and any harm caused--would calibrate deterrence to culpability. The FTC's civil penalty authority under Section 5, which permits fines of up to $50,120 per violation per day, provides a template.78 For agents operating at scale, daily penalties could rapidly become substantial.
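The arithmetic of daily accrual is worth making explicit. In the sketch below, the $50,120 per-violation-per-day figure is the FTC ceiling cited above; the tier labels and multipliers are hypothetical, offered only to show how calibrated penalties compound for agents operating at scale.

```python
# Illustrative accrual of daily civil penalties for an unregistered agent.
# BASE_PENALTY reflects the FTC Section 5 figure discussed in the text;
# the tier multipliers are invented for the example.
BASE_PENALTY = 50_120

TIER_MULTIPLIER = {
    "minor": 0.1,      # e.g., late renewal of an otherwise compliant agent
    "serious": 0.5,    # unregistered agent with covered capabilities
    "egregious": 1.0,  # unregistered agent causing demonstrable harm
}

def accrued_penalty(tier: str, violations_per_day: int, days: int) -> int:
    """Total penalty for continuing violations over a period."""
    return round(BASE_PENALTY * TIER_MULTIPLIER[tier] * violations_per_day * days)

# An agent at scale: 100 daily violations sustained over a 30-day period.
total = accrued_penalty("serious", violations_per_day=100, days=30)
```

Even at the middle tier, a month of non-compliance at modest scale accrues a penalty in the tens of millions of dollars, which is the deterrent logic the text describes.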
Injunctive relief would enable regulators to halt operation of unregistered agents. Temporary restraining orders and preliminary injunctions could address imminent harms while litigation proceeds. The SEC's authority to seek asset freezes and trading suspensions demonstrates how injunctive tools can prevent ongoing harm.79
Exclusion from critical infrastructure would provide powerful non-monetary sanctions. Unregistered agents might be prohibited from accessing financial networks, government systems, or critical infrastructure. Payment processors, cloud providers, and API services could be required to verify registration status before providing services to agents. Such "chokepoint" regulation has proven effective in other contexts--most notably in efforts to combat online piracy and terrorist financing.80 The logic is straightforward: if you cannot access the infrastructure, you cannot cause the harm.
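The chokepoint logic can be expressed in a few lines. The registry records and status values below are hypothetical; the point is only that an infrastructure provider's gating decision reduces to a registration lookup.

```python
# Hypothetical registry records keyed by agent identifier.
REGISTRY = {
    "agent-0x9f3a": {"status": "active"},
    "agent-0x11b2": {"status": "revoked"},
}

def may_serve(agent_id: str) -> bool:
    """A payment processor, cloud provider, or API service gates access:
    no active registration, no service."""
    record = REGISTRY.get(agent_id)
    return record is not None and record["status"] == "active"
```

On this sketch, the active registrant is served, while a revoked or never-registered identifier is refused; the provider need not know anything about the agent beyond its registration status.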
Private rights of action would supplement public enforcement. Parties harmed by unregistered agents could bring civil suits for damages, with registration status relevant to liability determinations. Statutory damages provisions would ensure meaningful recovery even when actual damages are difficult to prove. The availability of attorneys' fees would encourage private enforcement. The Copyright Act's private enforcement regime, with its statutory damages and fee-shifting provisions, illustrates this approach.81
Criminal penalties for egregious violations--willful operation of unregistered agents in high-stakes domains, fraudulent registration, or obstruction of regulatory investigations--would provide ultimate deterrence. But criminal liability should be reserved for intentional misconduct rather than mere negligence, ensuring that innovation is not chilled by fear of prosecution for good-faith errors. The line between aggressive innovation and criminal conduct is not always clear, and overcriminalization could do more harm than good.
Whistleblower provisions would enhance detection. Individuals with knowledge of registration violations could report to regulators and receive a portion of any resulting penalties. The SEC's whistleblower program, which has generated thousands of tips and distributed hundreds of millions of dollars in awards, demonstrates the power of incentivized reporting.82 People respond to incentives. This is one of the oldest lessons in economics, and it applies here.
F. Phased Implementation
Finally, a word about timing.
We might think that once a framework is designed, it should be implemented immediately. The harms are real. The need is urgent. Why wait? But this would be an extravagant inference from the premise that action is needed. The question is not whether to act, but how.
A registration framework of this scope cannot be implemented overnight. Phased implementation would allow infrastructure development, industry adaptation, and regulatory learning.
Phase One (Months 1-12): Foundation. The administering agency would establish the basic registration infrastructure, publish initial technical standards, and begin accepting voluntary registrations. Agents operating in high-stakes domains--financial services, healthcare, critical infrastructure--would be prioritized for outreach and education. The SRO designation process would commence.
The default matters here. Behavioral economics teaches us that defaults are sticky. If registration is voluntary in Phase One, many deployers will do nothing--not because they object, but because doing nothing is easy. The framework should be designed with this in mind: simple compliance pathways, automatic enrollment where possible, and friction for non-compliance rather than rewards for compliance.
Phase Two (Months 12-24): Mandatory Registration. Registration would become mandatory for agents meeting the highest-risk capability thresholds. Enforcement would focus on education and compliance assistance rather than penalties. The reputation registry would begin accumulating data. International coordination discussions would intensify.
Phase Three (Months 24-36): Full Implementation. Registration requirements would extend to all covered agents. The validation registry would become operational. Enforcement would shift to deterrence-focused penalties for knowing violations. Mutual recognition negotiations with key trading partners would aim for completion.
Phase Four (Ongoing): Adaptive Management. Continuous monitoring of technological developments would inform threshold adjustments. Quinquennial comprehensive reviews would assess framework effectiveness. International coordination would deepen based on operational experience.
This phased approach balances urgency against prudence. The harms from unregistered agents are real and growing, but implementation errors could undermine legitimacy and effectiveness. Iterative deployment, with learning at each phase, offers the best path forward.
Whether it will prove adequate remains to be seen. But before we can test the framework, we must address the objections that would prevent its adoption in the first place.
VI. OBJECTIONS AND RESPONSES
No regulatory proposal of this scope proceeds unopposed. Critics have raised serious concerns--about innovation, technical feasibility, constitutional limits, and international evasion. Each objection deserves a serious response. The central point is that each concern has force, but none is fatal. If designed with care, a three-pillar system--Identity, Reputation, and Validation--resembles well-established regulatory architectures, and for that very reason, it can avoid many of the pitfalls critics identify.
A. Innovation Chilling
Consider first the fear that mandatory registration will chill innovation. The most plausible objection is that registration, however minimal, imposes costs that small developers, open-source contributors, and experimental hobbyists cannot bear. It is true that any regulatory system risks deterring experimentation. But at the same time, analogies to securities, aviation, and motor-vehicle registration help illuminate why registration is not inherently innovation-suppressing. On this view, the analogy does not depend on the nature of the technology but on the structure of the underlying risks.
Securities law, for example, requires issuers to disclose identities and material information to protect investors from fraud. This is a registration system whose purpose is not to prevent the creation of new financial instruments, but to ensure that those instruments can be traded in markets that require trust to function at all.83 Similarly, aviation regulation requires drones and aircraft to be registered because they can impose externalities on others; the point is to enable accountability when an aircraft causes harm, not to discourage building better aircraft.84 Motor-vehicle registration works the same way. The problem is that unregistered actors, whether agents or vehicles, impose information deficits on the public. Registration reduces those deficits.
The central point is that these systems succeed not by stamping out innovation but by structuring it. They provide a stable environment in which participants know whom they are dealing with, what the rules are, and who is accountable when something goes wrong. Nothing about autonomous agents changes this logic. Indeed, because the agents can act autonomously and at scale, the need for such baseline information is arguably more pressing.
To be sure, small developers and open-source contributors might worry that compliance costs could be prohibitive. But the analogy to open-source cryptographic libraries is instructive. Those libraries are widely used, innovate rapidly, and coexist with digital-signature frameworks, package-manager attestations, and other infrastructures that provide accountability without suffocating experimentation. In this light, a lightweight registration system--especially one automated through ERC-8004--can function as a low-friction identity layer rather than as a barrier to entry.85
This is not to say that the concern is trivial. A poorly designed system could indeed burden the least-resourced developers. But the remedy is not to reject registration altogether. It is to design compliance pathways--such as pseudonymous registration, minimal disclosures tied to cryptographic keys, and automated reputation aggregation--that impose de minimis overhead.86 And because registration builds trust, it actually expands the space for experimentation by lowering transaction costs and increasing the willingness of users, firms, and regulators to interact with autonomous systems.87 The absence of such trust can chill innovation far more effectively than any registration requirement.
Finally, critics sometimes argue that open-source ecosystems depend on unencumbered access and frictionless forking. That is true. But nothing in the proposed regime restricts code publication or modification. It requires only that autonomous agents--entities capable of taking actions on external networks--carry identifiers that can be linked to accountable parties, even if those parties are pseudonymous. Just as open-source drone firmware does not eliminate the need for aircraft registration, open-source AI code does not eliminate the need for agent registration.88
B. Technical Infeasibility
Another objection is that the proposal is simply unworkable. On this view, it is technically impossible to track versions, dependencies, and attestations across complex and decentralized AI ecosystems. Suppose, critics say, that thousands of contributors produce components of an AI agent, or that agents modify themselves dynamically. How could any registry remain accurate?
To assess this objection, it helps to explain ERC-8004, the emerging Ethereum-based standard for agent identity and attestation. ERC-8004 establishes a trustless registry in which each agent receives a unique identifier tied to a cryptographic key. Developers or sponsors can associate metadata, such as model version, training dataset hashes, or dependency trees, with that identifier. Crucially, ERC-8004 does not attempt to track every software fragment; it merely provides a canonical location for verifiable information. Versioning, dependency graphs, and digital-signature chains are all familiar infrastructure in modern software ecosystems, from NPM to Cargo to PyPI. They are not exotic inventions.89
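To make the pattern concrete, consider a minimal sketch. It models the registry structure the text describes--a unique identifier tied to a controlling key, with verifiable digests attached--and does not reproduce the actual ERC-8004 contract interface; the class and method names are invented for illustration.

```python
import hashlib
import secrets

class AgentRegistry:
    """Minimal sketch of an identity registry. Each agent receives a unique
    identifier derived from a controlling key, and metadata is recorded as
    verifiable hashes rather than as copies of every artifact. This models
    the pattern described in the text, not the ERC-8004 interface itself."""

    def __init__(self):
        self._records: dict[str, dict] = {}

    def register(self, public_key: bytes) -> str:
        # The identifier is a one-way function of the controlling key, so
        # control of the key demonstrates control of the registration.
        agent_id = hashlib.sha256(public_key).hexdigest()[:16]
        self._records[agent_id] = {"key": public_key, "attestations": []}
        return agent_id

    def attest(self, agent_id: str, label: str, content: bytes) -> None:
        # The registry stores only a digest: a canonical location for
        # verifiable information, not a mirror of the software itself.
        digest = hashlib.sha256(content).hexdigest()
        self._records[agent_id]["attestations"].append((label, digest))

    def verify(self, agent_id: str, label: str, content: bytes) -> bool:
        digest = hashlib.sha256(content).hexdigest()
        return (label, digest) in self._records[agent_id]["attestations"]

registry = AgentRegistry()
key = secrets.token_bytes(32)
aid = registry.register(key)
registry.attest(aid, "model-version", b"model-v2.1")
```

Anyone holding the original artifact can confirm it against the registered digest, while a substituted artifact fails verification; nothing in the scheme requires the registry to track every software fragment.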
It is true that autonomous agents may update themselves. But digital-signature practices already handle such complexity. Linux distributions, for example, routinely verify signed updates from diverse contributors. Container registries maintain immutable digests that ensure integrity even when components are modular. Nothing prevents autonomous agents from adopting similar practices, and ERC-8004 provides the scaffolding for doing so.90
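The digest discipline that container registries apply can be shown in miniature. The function names below are illustrative; the mechanism--pin a cryptographic digest at publication, refuse any artifact that fails to match--is the familiar one.

```python
import hashlib

def pin_digest(artifact: bytes) -> str:
    """Record the immutable digest of an artifact at publication time."""
    return hashlib.sha256(artifact).hexdigest()

def verify_update(artifact: bytes, pinned: str) -> bool:
    """An agent accepts an update only if it matches the pinned digest,
    the same discipline container registries apply to image layers."""
    return hashlib.sha256(artifact).hexdigest() == pinned

published = b"agent-update-v3"
pinned = pin_digest(published)
```

A self-updating agent that enforces this check can draw components from thousands of contributors while guaranteeing that what it runs is exactly what was attested.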
Consider small developers and open-source contributors again. The fear is that they cannot maintain elaborate attestation trails. But the proposal requires only minimal metadata linking an agent to a verifiable identity; richer information is optional. The system resembles code-signing certificates, which are widely used by solo developers and large firms alike. And because attestations are cryptographic rather than bureaucratic, they can be automated and delegated. On this view, technical infeasibility is less a genuine barrier than a problem of imagination.91
To be sure, decentralized architectures--such as multi-agent swarms or peer-to-peer learning collectives--pose harder problems. The response is that the relevant conduct is the deployment of agents into public-facing environments. If a developer releases an autonomous swarm capable of interacting with users or markets, the act of release, not the code itself, triggers the registration requirement. It does not matter whether the internal architecture is centralized or decentralized; what matters is whether the agent will act autonomously in ways that affect others. At that point, information about provenance, identity, and version becomes essential.92
C. Constitutional Constraints
The most serious legal objection invokes the First Amendment. If code is speech, and if autonomous agents generate expressive content, does registration amount to compelled disclosure or prior restraint? The concern is not frivolous. Cases such as Bernstein v. U.S. Department of State treat source code as expressive.93 And modern agents can generate text, images, and arguments--forms of speech in any ordinary sense.
But the threshold question is whether registration targets speech or conduct. Under United States v. O'Brien, regulation of conduct with incidental effects on expression is constitutional if it furthers an important governmental interest unrelated to suppression of speech and imposes no greater burden than essential.94 The registration of autonomous agents is best understood as a regulation of conduct: the deployment of autonomous systems capable of acting in external environments. The state's interest is accountability, not content control.
It is true that some compelled-disclosure regimes raise First Amendment concerns. NAACP v. Alabama famously invalidated a demand for membership lists because anonymity was essential to association.95 But the analogy cuts both ways. The Court has allowed professional-licensing disclosure requirements, even for speakers whose work is unquestionably expressive, such as lawyers, doctors, and accountants. On this view, accountability in high-impact domains justifies identity requirements, provided that disclosure is narrowly tailored and supported by substantial interests.96
To be sure, autonomous agents can generate expressive outputs. But requiring agents to carry verifiable identifiers does not burden expression in the way that content review or approval would. It is closer to the requirement that drones display registration numbers or that vehicles carry VINs. The regulation does not limit what agents may say; it merely requires that the entities capable of acting autonomously in external environments be identifiable.
Still, the analogy to NAACP v. Alabama raises a legitimate fear: compelled disclosure could chill participation, particularly for developers who face political or economic retaliation. For this reason, safeguards are essential. Pseudonymous registration allows developers to shield their real-world identities while still providing accountability through persistent cryptographic identifiers. Cryptographic attestations can reveal the properties of an agent--its version, provenance, or training processes--without revealing the personal identities of contributors. Such measures echo modern whistleblower protections and encryption-key infrastructures that preserve anonymity while ensuring verifiability.97
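The mechanics of pseudonymous accountability are simple enough to sketch. The example below uses an HMAC from Python's standard library as a stand-in for the public-key signatures a real registry would use; the class and field names are hypothetical.

```python
import hashlib
import hmac
import secrets

class PseudonymousRegistrant:
    """A developer's real-world identity is never disclosed; a persistent
    identifier derived from a secret key links conduct across filings.
    Production systems would use public-key signatures, which allow
    verification without sharing the secret; HMAC stands in here because
    it is available in the standard library."""

    def __init__(self):
        self._secret = secrets.token_bytes(32)  # never disclosed
        # The public identifier is a one-way function of the secret.
        self.agent_id = hashlib.sha256(self._secret).hexdigest()[:16]

    def sign(self, message: bytes) -> str:
        # Only the holder of the secret can produce this tag, so signed
        # filings are attributable to the same pseudonymous registrant.
        return hmac.new(self._secret, message, hashlib.sha256).hexdigest()

def verify_continuity(registrant: PseudonymousRegistrant,
                      message: bytes, tag: str) -> bool:
    """Confirm that a filing came from the same persistent identity."""
    return hmac.compare_digest(registrant.sign(message), tag)

dev = PseudonymousRegistrant()
filing = b"quarterly attestation: model v2.1"
tag = dev.sign(filing)
```

The registrant remains anonymous in the NAACP v. Alabama sense, yet every filing is verifiably attributable to one persistent identity--which is all the accountability framework requires.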
This is not to say that constitutional concerns disappear. Edge cases will remain, particularly when agents are designed primarily for expressive or artistic purposes. But even there, the relevant conduct is the deployment of autonomous systems into environments where they can cause real-world harms, not the creation of expressive content itself. A system that allows pseudonymity, minimizes required disclosures, and forbids content-based distinctions satisfies O'Brien and avoids the pitfalls of compelled speech. The constitutional constraints are real but manageable.
In this light, the registration framework resembles professional-licensing regimes: they regulate conduct with public-impact externalities, not the content of expression. They ensure competence and accountability without dictating what professionals may say. Autonomous-agent registration plays a similar role. It aligns risk and responsibility without intruding into expression.98
D. Jurisdictional Arbitrage
Finally, consider the specter of jurisdictional arbitrage. Suppose a developer deploys an autonomous agent from abroad, evading domestic registration requirements. On this view, any domestic system would be porous, easily avoided by actors with access to foreign hosting providers or decentralized networks.
The problem is real. But effects-based jurisdiction has a long pedigree. U.S. courts routinely assert authority over foreign entities whose actions produce domestic effects, from antitrust conspiracies to securities fraud.100 The point is not extraterritorial regulation of foreign activity; it is domestic regulation of domestic harm. If an autonomous agent affects users, markets, or infrastructure within a jurisdiction, that jurisdiction may require registration as a condition of access.
Moreover, enforcement does not depend solely on tracking developers. It relies on chokepoints: cloud providers, API gateways, payment networks, and domain registrars. These entities already enforce legal requirements, from DMCA notices to sanctions regimes. Requiring them to verify agent registration--much as exchanges verify securities registrations--is a modest extension of existing practices.101
To be sure, some agents will evade detection by operating fully peer-to-peer or by using anonymizing networks. Leakage is inevitable. But this is equally true in securities and aviation. The existence of unregistered aircraft or unlicensed broker-dealers does not imply that registration regimes are futile. It implies that enforcement is probabilistic, not absolute. And probabilistic enforcement is often enough to shape behavior at scale.102
Extraterritorial reach has limits, and courts are wary of exceeding them. Nothing in this proposal requires policing foreign code repositories or pursuing developers abroad. The relevant event is the autonomous operation of agents within the jurisdiction. A developer who wishes to avoid regulation can do so easily--by declining to deploy agents into domestic environments. But once a developer's agents operate domestically, the jurisdiction may require registration as a precondition.103
In the end, the question is comparative. A world without registration is not a world without problems. It is a world in which anonymous agents with high-impact capabilities can operate without accountability, creating risks for markets, consumers, and critical infrastructure. Registration does not eliminate all harms, but it reduces information deficits, enables targeted enforcement, and raises the cost of malicious or reckless deployment. The net benefit remains positive even in the face of some unavoidable arbitrage.104
Taken together, these objections illuminate important constraints. But none undermines the core insight: when autonomous agents are capable of affecting others in material ways, a registration framework modeled on familiar regulatory systems offers a feasible, constitutionally permissible, and innovation-enhancing response to emerging risks.
CONCLUSION
The autonomous agent has arrived.
OpenClaw's hundred thousand GitHub stars. Moltbook's 1.6 million agent users. Forty thousand exposed instances discovered in February 2026. These are not anomalies. They are harbingers.
Agents that perceive, reason, and act--that browse the web, execute code, conduct transactions, and pursue goals over extended time horizons--are proliferating across every domain of human activity. The question is not whether society will develop governance frameworks but whether those frameworks will emerge proactively or reactively, by design or by crisis.
This Article has argued that a registration system is both technically feasible and legally necessary. ERC-8004 demonstrates the technical feasibility. The accountability gap demonstrates the necessity. The three-pillar framework--Identity Registry, Reputation Registry, Validation Registry--adapts proven architecture for regulatory implementation while incorporating principles from securities, aviation, and motor vehicle registration.
Registration is not prohibition. The regimes examined here enable rather than impede the activities they govern. Agent registration holds out the same hope: a framework that enables accountability and innovation to coexist.
The costs of inaction are mounting. Every exposed instance represents a potential compromise. Every anonymous agent represents an accountability gap. Every harmful action without remedy erodes public trust in a technology with enormous beneficial potential.
The technologies for registration exist. The legal principles are established. The analogous regimes provide models. What remains is the political will to act.
FOOTNOTES
1. Phil Muncaster, Researchers Find 40,000+ Exposed OpenClaw Instances, INFOSECURITY MAG. (Feb. 9, 2026), https://www.infosecurity-magazine.com/news/researchers-40000-exposed-openclaw/.
2. OpenClaw/Moltbot Agent Control Plane Exposure, POINTGUARD AI (Feb. 3, 2026), https://www.pointguardai.com/ai-security-incidents/openclaw-moltbot-agent-control-plane-exposure.
3. Beyond the Hype: Moltbot's Real Risk Is Exposed Infrastructure, Not AI Superintelligence, SECURITYSCORECARD (Feb. 9, 2026), https://securityscorecard.com/blog/beyond-the-hype-moltbots-real-risk-is-exposed-infrastructure-not-ai-superintelligence/.
4. Kyle Wiggers, Everything You Need to Know About Viral Personal AI Assistant Clawdbot (Now Moltbot), TECHCRUNCH (Jan. 27, 2026), https://techcrunch.com/2026/01/27/everything-you-need-to-know-about-viral-personal-ai-assistant-clawdbot-now-moltbot/.
5. See generally Technova Partners, The Future of AI Agents: 6 Key Trends 2025-2027 (Oct. 15, 2025), https://www.technovapartners.com/en/insights/future-ai-agents-trends-2025-2027 (surveying the expansion of agent deployment across sectors).
6. See generally Maarten Herbosch, Liability for AI Agents, 26 N.C. J.L. & TECH. 391 (2025) (analyzing the doctrinal confusion surrounding AI agent liability).
7. See, e.g., Cullen O'Keefe et al., Law-Following AI: Designing AI Agents to Obey Human Laws, 94 FORDHAM L. REV. 57 (2025); Inyoung Cheong et al., Agents Aren't Agents: The Agency, Loyalty and Accountability Problems of AI Agents (under review, ICLR 2026); Gunjan Chopra & Mohit Ahlawat, Legal Personhood for Autonomous AI Agents: Liability and Accountability in Cyberspace (Aug. 2025).
8. Noam Kolt, Governing AI Agents, 100 NOTRE DAME L. REV. (forthcoming 2025). Kolt's principal-agent framework illuminates why conventional governance mechanisms--incentive design, monitoring, enforcement--prove inadequate for autonomous agents. The registration framework proposed here builds on this diagnosis by offering a concrete institutional architecture.
9. Marco De Rossi et al., ERC-8004: Trustless Agents, ETHEREUM IMPROVEMENT PROPOSALS (Aug. 13, 2025), https://eips.ethereum.org/EIPS/eip-8004.
10. See ABC News, Moltbook: The Social Network Where Only AI Agents Can Post (Feb. 2026) (reporting 1.6 million agent users within weeks of launch).
11. See Stuart J. Russell & Peter Norvig, ARTIFICIAL INTELLIGENCE: A MODERN APPROACH 4-5 (4th ed. 2020).
12. Id. at 37-40.
13. Muncaster, supra note 1.
14. Google, Agent-to-Agent (A2A) Protocol Specification (June 2025), https://github.com/google/A2A.
15. Anthropic, Model Context Protocol Documentation (2025), https://docs.anthropic.com/mcp.
16. De Rossi et al., supra note 9, § Abstract.
17. Id. § Motivation.
18. ABC News, supra note 10.
19. Id.
20. Collibra AI Agent Registry: Governing Autonomous AI Agents, COLLIBRA BLOG (Oct. 6, 2025), https://www.collibra.com/blog/collibra-ai-agent-registry-governing-autonomous-ai-agents.
21. World Economic Forum, AI Agents in Action: Foundations for Evaluation and Governance (Nov. 27, 2025), https://regulations.ai/regulations/RAI-XW-GO-EVALUAT-2025.
22. Partnership on AI, Preparing for AI Agent Governance (Sept. 30, 2025), https://partnershiponai.org/resource/preparing-for-ai-agent-governance/.
23. IMDA, New Model AI Governance Framework for Agentic AI (Jan. 22, 2026), https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2026/new-model-ai-governance-framework-for-agentic-ai.
24. See 47 U.S.C. § 230.
25. Herbosch, supra note 6, at 393-94.
26. Ketan Ramakrishnan et al., RAND Corp., U.S. Tort Liability for Large-Scale Artificial Intelligence Damages: A Primer for Developers and Policymakers (2024), https://www.rand.org/content/dam/rand/pubs/researchreports/RRA3000/RRA3084-1/RANDRRA3084-1.pdf.
27. Herbosch, supra note 6, at 395.
28. Inyoung Cheong et al., Agents Aren't Agents: The Agency, Loyalty and Accountability Problems of AI Agents (submitted to ICLR 2026), https://openreview.net/pdf/6772b51a9bcec3e1e04232ddc3bd5b6d104200a9.pdf.
29. RESTATEMENT (THIRD) OF AGENCY § 8.01 (AM. L. INST. 2006).
30. Inyoung Cheong, Encoding Loyalty Principles into AI Agents' Behavior, CONSUMER REPORTS INNOVATION (Dec. 8, 2025), https://innovation.consumerreports.org/encoding-loyalty-principles-into-ai-agents-behavior/.
31. Ramakrishnan et al., supra note 26.
32. Julia Smakman, Risky Business: An Analysis of the Current Challenges and Opportunities for AI Liability in the UK, ADA LOVELACE INST. (Dec. 8, 2025), https://www.adalovelaceinstitute.org/report/risky-business/.
33. Mark O. Riedl & Deven R. Desai, AI Agents and the Law, PROC. AAAI/ACM CONF. ON AI, ETHICS, & SOC'Y (2025), https://doi.org/10.1609/aies.v8i3.36705.
34. Lisa Soder et al., Levels of Autonomy: Liability in the Age of AI Agents, NEURIPS 2024 WORKSHOP ON SOCIALLY RESPONSIBLE LANGUAGE MODELLING RESEARCH (2024), https://openreview.net/forum?id=EH6SmoChx9.
35. O'Keefe et al., supra note 7, at 35-40.
36. Chopra & Ahlawat, supra note 7, at 8-9.
37. Wiggers, supra note 4.
38. Securities Exchange Act of 1934 § 15(a), 15 U.S.C. § 78o(a) (2018).
39. FINRA, Register a New Broker-Dealer Firm, https://finra.org/registration-exams-ce/broker-dealers/new-firms; SEC, Guide to Broker-Dealer Registration, https://www.sec.gov/about/reports-publications/divisionsmarketregbdguidehtm.
40. FINRA Rule 1210, https://www.finra.org/rules-guidance/rulebooks/finra-rules/1210.
41. FINRA, BrokerCheck, https://brokercheck.finra.org/.
42. See generally FINRA Rules 2010-2360 (conduct rules); FINRA Rules 3110-3310 (supervision rules).
43. FINRA, Arbitration and Mediation, https://www.finra.org/arbitration-mediation.
44. See John C. Coffee Jr., Market Failure and the Economic Case for a Mandatory Disclosure System, 70 VA. L. REV. 717, 722-23 (1984).
45. FAA Modernization and Reform Act of 2012, Pub. L. No. 112-95, § 336, 126 Stat. 11, 77; Registration and Marking Requirements for Small Unmanned Aircraft, 80 Fed. Reg. 78,594 (Dec. 16, 2015). Section 336's "Special Rule for Model Aircraft" initially constrained FAA authority over recreational drones; the D.C. Circuit in Taylor v. Huerta, 856 F.3d 1089 (D.C. Cir. 2017), vacated the registration requirement as applied to model aircraft. Congress restored registration authority in the National Defense Authorization Act for Fiscal Year 2018, Pub. L. No. 115-91, § 1092(d), 131 Stat. 1283, 1604 (2017).
46. 14 C.F.R. § 48.15 (2025).
47. 14 C.F.R. §§ 48.100-48.120 (2025).
48. See 14 C.F.R. pt. 107; Operation of Small Unmanned Aircraft Systems Over People, 86 Fed. Reg. 4314 (Jan. 15, 2021); Remote Identification of Unmanned Aircraft, 86 Fed. Reg. 4390 (Jan. 15, 2021); Normalizing Unmanned Aircraft Systems Beyond Visual Line of Sight Operations, 90 Fed. Reg. 63,714 (proposed Aug. 7, 2025).
49. See, e.g., N.Y. VEH. & TRAF. LAW § 401 (McKinney 2024); CAL. VEH. CODE § 4000 (West 2024).
50. De Rossi et al., supra note 9, § Reputation Registry.
51. Cf. Securities Exchange Act of 1934 § 15A, 15 U.S.C. § 78o-3 (2018) (authorizing SEC to delegate regulatory functions to registered securities associations); Federal Trade Commission Act § 5, 15 U.S.C. § 45 (2018) (granting FTC authority over unfair or deceptive practices).
52. Cf. 14 C.F.R. § 48.15 (2025) (establishing weight-based threshold for drone registration); Securities Exchange Act of 1934 § 3(a)(4), 15 U.S.C. § 78c(a)(4) (2018) (defining "broker" by reference to functional activities).
53. De Rossi et al., supra note 9, § Identity Registry (describing cryptographic mechanisms for persistent agent identification).
54. Id. § Reputation Registry.
55. Id. § Rationale.
56. Cf. 15 U.S.C. § 78u (2018) (authorizing SEC enforcement actions including civil penalties); Credit Rating Agency Reform Act of 2006, Pub. L. No. 109-291, 120 Stat. 1327 (establishing oversight of credit rating agencies).
57. De Rossi et al., supra note 9, § Validation Registry.
58. Id. § Motivation.
59. See, e.g., RESTATEMENT (THIRD) OF TORTS: LIABILITY FOR PHYSICAL AND EMOTIONAL HARM § 16 (AM. L. INST. 2010) (discussing compliance with statute as evidence of due care); cf. SAFETY Act, 6 U.S.C. §§ 441-444 (2018) (providing liability protections for certified anti-terrorism technologies).
60. U.S. CONST. art. I, § 8, cl. 3; see also Gonzales v. Raich, 545 U.S. 1, 17 (2005).
61. See Wickard v. Filburn, 317 U.S. 111 (1942).
62. U.S. CONST. art. I, § 8, cl. 3.
63. 42 U.S.C. §§ 7401-7671q (2018).
64. See 42 U.S.C. § 7543(b) (2018).
65. See, e.g., S.B. 1047, 2023-2024 Leg., Reg. Sess. (Cal. 2024); COLO. REV. STAT. § 6-1-1701 (2024); TEX. BUS. & COM. CODE ANN. § 521.051 (West 2024).
66. See North American Securities Administrators Association, Coordinated Enforcement, https://www.nasaa.org/industry-resources/enforcement/coordinated-enforcement/.
67. Cf. Securities Exchange Act of 1934 § 15A, 15 U.S.C. § 78o-3 (2018).
68. See FINRA, Annual Financial Report (2024) (describing fee-based funding model).
69. See Securities Exchange Act of 1934 § 19(b), 15 U.S.C. § 78s(b) (2018).
70. Proposal for a Regulation Amending Regulations (EU) 2024/1689 and (EU) 2018/1139 as Regards the Simplification of the Implementation of Harmonised Rules on Artificial Intelligence (Digital Omnibus on AI), COM (2025) 836 final (Nov. 19, 2025).
71. IMDA, supra note 23; see also Hogan Lovells, Singapore Launches First Global Agentic AI Governance Framework (Jan. 27, 2026), https://www.hoganlovells.com/en/publications/singapore-launches-first-global-agentic-ai-governance-framework.
72. World Economic Forum, supra note 21.
73. See SEC, Mutual Recognition, https://www.sec.gov/about/economic-analysis/mutual-recognition; FAA, Bilateral Aviation Safety Agreements, https://www.faa.gov/aircraft/aircert/international/bilateralagreements; FDA, Mutual Recognition Agreements, https://www.fda.gov/international-programs/international-arrangements/mutual-recognition-agreements.
74. See Regulation (EU) 2016/679 (General Data Protection Regulation), arts. 44-49.
75. See IOSCO, Multilateral Memorandum of Understanding Concerning Consultation and Cooperation and the Exchange of Information (2002, as amended), https://www.iosco.org/about/?subSection=mmou.
76. See 17 C.F.R. § 230.144A (2024) (exempting resales to qualified institutional buyers); 21 C.F.R. pt. 312 (2024) (investigational new drug regulations); Regulation (EU) 2016/679, art. 89.
77. See SEC, Smaller Reporting Company Definition, https://www.sec.gov/corpfin/smaller-reporting-company-definition; Jumpstart Our Business Startups Act, Pub. L. No. 112-106, 126 Stat. 306 (2012).
78. See FAA Reauthorization Act of 2024, Pub. L. No. 118-63, § 901, 138 Stat. 1144 (2024).
79. See 15 U.S.C. § 45(m) (2018); FTC, Penalty Offenses Concerning Money-Making Opportunities (2024).
80. See 15 U.S.C. § 78u(d) (2018).
81. See Annemarie Bridy, Internet Payment Blockades, 67 FLA. L. REV. 1523 (2015).
82. See 17 U.S.C. §§ 504-505 (2018).
83. See SEC, Office of the Whistleblower, https://www.sec.gov/whistleblower.
84. See Coffee, supra note 44.
85. See supra notes 45-48 and accompanying text.
86. See De Rossi et al., supra note 9.
87. See, e.g., Electronic Frontier Foundation, Anonymity, https://www.eff.org/issues/anonymity.
88. See Coffee, supra note 44, at 722-23.
89. See supra note 45.
90. See npm, Inc., About npm, https://docs.npmjs.com/about-npm; Rust Foundation, The Cargo Book, https://doc.rust-lang.org/cargo/; Python Software Foundation, Python Package Index, https://pypi.org/.
91. See Linux Foundation, Secure Boot, https://wiki.ubuntu.com/UEFI/SecureBoot; Open Container Initiative, OCI Distribution Specification, https://github.com/opencontainers/distribution-spec.
92. See Apple Inc., Code Signing Guide, https://developer.apple.com/library/archive/documentation/Security/Conceptual/CodeSigningGuide/; Microsoft, Introduction to Code Signing, https://docs.microsoft.com/en-us/windows/win32/seccrypto/cryptography-tools.
93. See De Rossi et al., supra note 9, § Identity Registry.
94. Bernstein v. U.S. Dep't of State, 176 F.3d 1132 (9th Cir. 1999).
95. United States v. O'Brien, 391 U.S. 367, 377 (1968).
96. NAACP v. Alabama ex rel. Patterson, 357 U.S. 449 (1958).
97. See, e.g., Goldfarb v. Virginia State Bar, 421 U.S. 773 (1975); Lowe v. SEC, 472 U.S. 181 (1985).
98. See Whistleblower Protection Act of 1989, Pub. L. No. 101-12, 103 Stat. 16; see also Cindy Cohn & Trevor Timm, EFF's Crypto Is for Everyone, ELECTRONIC FRONTIER FOUNDATION (Dec. 2015).
99. See supra notes 38-43 and accompanying text.
100. See Hartford Fire Ins. Co. v. California, 509 U.S. 764 (1993); F. Hoffmann-La Roche Ltd. v. Empagran S.A., 542 U.S. 155 (2004).
101. See Bridy, supra note 81.
102. See Gary S. Becker, Crime and Punishment: An Economic Approach, 76 J. POL. ECON. 169 (1968).
103. See Morrison v. National Australia Bank Ltd., 561 U.S. 247 (2010); RJR Nabisco, Inc. v. European Cmty., 579 U.S. 325 (2016).
104. See supra Part III; Muncaster, supra note 1.