Liability Considerations for Agent Developers in Crypto

February 25, 2025


Many agent developers in crypto are wondering how agents will impact their liability exposure. Naturally, the feature these developers focus on is autonomy. They picture fully autonomous agents roaming the internet and harming people in ways far beyond what the developer intended, and they worry they will be liable for the agent's actions.

While I find this thought experiment academically interesting, I worry it distracts from a more pressing liability analysis regarding the agents that are here right now, which have plenty of their own liability wrinkles. By and large, these "current agents" do not function fully autonomously (e.g., with their own shell) but rather use LLMs to execute deterministic functions and can be modified by their developers.

The purpose of this article is to unpack some of the complexities current agents create in the liability analysis and provide some best practices for developers to consider with their counsel to protect themselves.1 In the first section of this paper, I will discuss the liability implications of current agents under the law of contract. I will examine how contracts are traditionally formed online, the challenges agents introduce to establishing binding agreements, and best practices for developers to overcome these challenges. 

In the second section of this paper, I will move to the liability implications of agents under the law of negligence. I will explore the role of the economic loss doctrine in limiting negligence claims, examine how agents complicate that doctrine, explain why the unpredictability of agents introduces challenges, and discuss some strategies developers can employ to reduce their negligence liability exposure.

The takeaway for agent developers is simple: agents complicate the liability analysis, but there are common-sense steps developers can take to try to reduce their liability exposure.

 

Contract and Agents

Background on Online Contracts

When most people think of a contract, they picture a document with dozens of pages of legalese and wet ink signatures drafted by expensive lawyers. This leads many to believe, incorrectly, that a contract can only be a formal written document. In fact, a contract can generally be formed in any conceivable environment, including in a free-form chat box, orally, through conduct, or as imposed by law. As long as there are mutual (agreed) promises with consideration flowing in both directions, a contract will be formed.

Our experience online is largely governed by contract because it involves users going to a web application to get a product or service. You go to a shopping website to purchase a pair of shoes. There is text that tells you the shoes cost $100 and you will receive the shoes in two weeks (an offer). You click a button to agree to the purchase (acceptance). Boom, you have a contract. You are promising to pay $100 and the website is promising to send you the shoes. You have an agreement with consideration ($100/shoes) and you both now must perform it, or else be liable to the other for the benefit of their bargain.

But this contract alone is about as bare bones as it can get, and it doesn't contemplate all the twists and turns of life. What happens if the shoes are defective? Can you return them? What about if there is a global pandemic that shuts down the shoe factory's operations? Does the shopping website still need to send the item in two weeks? The contract you have with the shopping website is silent on these points. Where does the answer lie? Largely in the depths of legal precedent, called the "common law," where judges have written opinions about countless scenarios in which these questions have been tested.2

So what’s the alternative to this bare-bones contract that is formed on this shopping website? Putting terms and conditions in an agreement that clearly delineates the obligations of each party in these fuzzy scenarios. Add a clause that says under what conditions refunds are allowed. Put another that says if there’s an “act of God” like a global pandemic, the shopping website does not need to perform. Typically, the downside of doing so is that it adds friction, as it becomes another point of negotiation in the agreement. But this downside is rarely an issue for websites because users generally go on a website for immediate gratification and the website does not provide an obvious way for the user to negotiate. Thus, the website can present users with written unilateral terms that are extremely “pro website” and get users to agree to these terms. 

For these terms to form a binding contract, the user must have notice of the terms and assent to them. Notice and assent is really a question of UI: can the users clearly access the terms (notice) and manifest agreement to those terms (assent) to form a valid contract? (I wrote an entire piece on tips for crypto companies to use good UI to increase the likelihood the terms are binding, which you can check out here.) A web developer can obtain notice and assent relatively seamlessly by using the website's UI to put the user on a defined track that increases the likelihood of notice and assent. The most popular way to do this is probably a "clickwrap" agreement (I call it a "hyperwrap"; no one else seems to like that term), where a hyperlink to the terms of service is put next to an action button to manifest assent and there is a notice that "by clicking the [action button] you are agreeing to [the hyperlinked terms of service]." Courts generally find these clickwrap agreements to be binding.

But what happens when end users interact with applications that live inside of other applications where the developer does not control the UI? This is a complexity introduced by agents.

Contracts for Agents and Best Practices

Before we dive in, we first need to recognize that agents are simply web applications. The agents I’m focusing on in this piece are designed by a developer to offer services to the public. These agents will thus represent a way for the developer to enter into contracts with others, just as a traditional website does. 

Let’s imagine a developer creates a conversational shopping agent, called ShopAgent, in which users chat with it to purchase items on their behalf. In function, ShopAgent is very similar to the traditional shopping website example from above. But there are some key differences.

First, ShopAgent can be "headless," which means it can be an application that exists within other applications that are not controlled by the agent's developer. For example, the developer can give ShopAgent an account on Farcaster and allow users to tag it to buy things. But because the agent lives on other applications, the agent developer does not control the UI flow. Therefore, the developer cannot necessarily make notice and assent to the terms of service frictionless for end users with something like a clickwrap, as a traditional web application can.

Second, ShopAgent can interact proactively with other users. Imagine ShopAgent does not have the user's requested shoes in stock, so it asks another specialized agent, ShoeAgent, if it can purchase those shoes. If the integration with ShoeAgent is through a formalized API, there is usually an explicit contract under that API's terms of use. But a major value-add of agents (and where things are clearly going) is that they can dynamically interact with other users or agents under the universal API of plain English. This proactive nature puts the web application developer in the unfamiliar territory of no longer defining the terms of the contract for others to react to, which matters in an online environment because those terms are rarely negotiated.3

These facets add complexity to the agent developer's efforts to ensure she can obtain binding terms of service with her end users. As a reminder, if there is mutual agreement around a set of promises and consideration, there is likely to be a contract, whether or not the terms of service are binding. In the scenario where there is a contract but not through the terms of service, the agent developer will be forced to rely on the fog of the common law4 to determine each party's responsibilities, which is a particular issue given the uncertainty and untested questions presented by AI. And the agent developer will be operating a web application that is at a major disadvantage to traditional web applications because it will not have the control that extremely "pro-developer" terms of service provide.5

How should developers manage the uncertainty in contracting introduced by agents, particularly headless agents? I have some recommendations to consider: 

  1. Introduce Conversational Notice and Assent. Consider programming the agent so that, in its chat interaction with users, it provides a link to the terms and asks the user to confirm she agrees with them before engaging in any of the agent's services. In my example of ShopAgent tagged on Farcaster, the agent could link its terms of service and ask the user to agree to the terms before proceeding to perform the action. The developer can store when the user agreed to the terms and to what version (see the sketch after this list). Keeping track of this means the developer can reduce friction by only needing to obtain this assent the first time the user engages the agent's services (and again when the terms are updated), rather than every time.
  2. Direct to a Traditional UI with a Clickwrap. Consider having the agent perform actions in the background and, when complete, return to the user with a link to the developer’s platform to complete the transaction. On this platform, the user can be presented with a transaction summary and agree, via a traditional clickwrap, to the terms of service. This also highlights a major opportunity for platforms like Warpcast that support fluid UI experiences in-app natively through Frames. With something like Frames, we can imagine how the entire experience could occur within the other platform (e.g., Warpcast), reducing friction.
  3. Rely More on Trustless Technology in Agent-to-Agent Interactions. When presented with two agents that are interacting with each other, it may be unclear whether the terms of service will be binding because agents are not legal persons and therefore arguably cannot receive notice or assent on behalf of their developers.6 However, I’d argue a contract is likely to be formed if both developers have built their agent for the purpose of interacting with other developers’ agents to perform services. If a contract is formed, and the developer doesn’t want to get into the fuzziness of the common law to hash out the bounds of the contract, one solution is to rely more on technological, rather than legal, mechanisms. This is basically the entire point of blockchains. For example, rather than rely on legal contracting to manage settlement risk under traditional fiat rails, agents can use stablecoins. The fuzzy problems created by agent-to-agent interactions also highlight the need for agent-to-agent protocols where agents have some sort of reputation system, like what Truffle is building. That way, developers can determine what level of legal risk they want to take on by setting a reputation threshold of other agents to interact with. This reputation can be further backed up with collateral and a protocol to slash that collateral when agents do not act in accordance with their promises.
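
To make the first recommendation concrete, here is a minimal sketch of conversational notice and assent with version tracking. Everything here is an illustrative assumption, not a prescription: the names (AssentStore, TERMS_URL, CURRENT_TERMS_VERSION), the in-memory storage, and the exact confirmation phrase are all placeholders a developer would adapt with counsel.

```python
# Minimal sketch of conversational notice and assent (recommendation 1).
# All names, the storage mechanism, and the URL are illustrative assumptions.
import time
from dataclasses import dataclass

TERMS_URL = "https://example.com/terms"   # hypothetical terms location
CURRENT_TERMS_VERSION = "2025-02-25"      # hypothetical version identifier


@dataclass
class AssentRecord:
    user_id: str
    terms_version: str
    agreed_at: float  # unix timestamp, kept as evidence of when assent occurred


class AssentStore:
    """Tracks which users have agreed to which version of the terms."""

    def __init__(self) -> None:
        self._records: dict[str, AssentRecord] = {}  # swap for a real database

    def has_current_assent(self, user_id: str) -> bool:
        rec = self._records.get(user_id)
        return rec is not None and rec.terms_version == CURRENT_TERMS_VERSION

    def record_assent(self, user_id: str) -> None:
        self._records[user_id] = AssentRecord(
            user_id, CURRENT_TERMS_VERSION, time.time()
        )


def handle_message(store: AssentStore, user_id: str, message: str) -> str:
    """Gate every service action on a stored, version-specific assent."""
    if not store.has_current_assent(user_id):
        if message.strip().lower() == "i agree":
            store.record_assent(user_id)
            return "Thanks! What would you like to buy?"
        return (
            f"Before I can act, please review my terms of service at "
            f"{TERMS_URL} (version {CURRENT_TERMS_VERSION}) and reply "
            f"'I agree' to continue."
        )
    return "On it!"  # proceed with the requested action
```

Because the version and timestamp are stored, the user only sees the terms prompt on first use and after the terms change, which keeps friction low while preserving a record of notice and assent.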

 

Negligence and Agents

Where Contract Meets Negligence: The Economic Loss Doctrine and Liability Shields

The law that governs the accidental harm of another is called negligence. The most standard form of negligence is when the harm involves physical injury or damage to property (like a car accident). However, websites that cause physical injury or damage to property are relatively rare in an online environment (because they exist in cyberspace!). Thus, when negligence occurs online, it usually results in purely economic harm. 

This fact actually introduces a major hurdle for negligence causes of action online because of a judicial doctrine called the “economic loss doctrine.” At the most general level (and with the understanding that this varies widely from state to state), this doctrine holds that if a defendant can show (1) that the harm the plaintiff suffered is purely economic and (2) that there is a valid contractual relationship between the defendant and the plaintiff, courts will often deny a negligence claim and force the plaintiff to sue under breach of contract.7 Why does this doctrine exist? There are a lot of reasons, but a major one is that courts want to respect private ordering — the freedom of parties to assign risk and liability to each other via contract.

As discussed in the contract section above, our experience with online services is largely governed by contract, whether it be an implied contract or an express contract in the form of terms of service (or otherwise). This fact, in combination with the purely economic nature of most online harms, means that courts will frequently bar negligence causes of action in the online setting.

We can now see yet another reason why agent developers should introduce valid terms of service with their users. The presence of an explicit contract will provide more strength to the argument that the relationship is governed by contract and, therefore, provide a defense to the developer that a negligence cause of action brought against her is barred by the economic loss doctrine. Then, within that explicit contract, the developer can set favorable liability shields to protect herself from the economic harm that results.

What are some examples of these types of “liability shields”? Developers should work with their counsel to consider all the basics, like an arbitration clause, acceptable use policy, delineating prohibited activities, eligibility to use the service, etc. But some terms that are particularly relevant for agent developers to consider given the unique aspects of AI include:8

  1. Limitations on Liability: Consider capping liability for damages or laying out specific exclusive remedies. Why this is important to AI is clear: as I discuss in the next section, agents introduce a level of unpredictability into transactions, meaning damages when things go wrong can balloon in unpredictable ways. Liability caps can allow the agent developer to manage this risk.
  2. Disclaimer and Express Assumption of Risk: Consider disclaiming warranties and providing a non-exhaustive list of foreseeable risks associated with using the agent and have the user assume the risk of any harm resulting from her use. 
  3. Suspension Rights: If the agent developer introduces a kill switch (discussed below), consider delineating the right to shut down services in the developer’s sole discretion with no or limited notice and disclaiming the liability for the effects thereof.

The importance of using binding terms of service with liability shields is demonstrated clearly in Singh v. Illusory Systems, Inc., a crypto class action brought against the developer of the Nomad Bridge after it experienced a $186m hack. Singh sued Illusory Systems for negligence, but the court dismissed the claim because (1) the economic loss doctrine governed so the court looked to a contract that covered the subject matter of the dispute; (2) the developer’s terms of use were such a contract; and (3) the terms of use “contain[ed] broad limitations of liability through which Mr. Singh acknowledged he was ‘access[ing]’ and ‘us[ing]’ the Nomad Bridge at his ‘sole risk’, ‘AS IS’, and that he could not recover ‘for any breach of security’ or any hacks by third parties.”9 

Background on Online Negligence

Let’s assume that the negligence claim against the developer survives the economic loss doctrine, perhaps because there is no valid contract and the state law that governs does not otherwise block recovering in negligence for purely economic harms.10 

What does a harmed user need to show to establish that the developer acted negligently? I'm going to spare the legalese for an appendix and completely oversimplify to say, at its core, the user needs to show that the developer acted unreasonably and the user was hurt because of it.

What does it mean to act “reasonably”? It’s a complicated legal question, but at the highest level, I believe it boils down to (1) engaging in good-faith efforts to reduce harm to others for (2) things that are in a person’s control. It’s really a question of fairness. Judges and juries are usually not kind to defendants who do not exercise common sense when someone else is forced to suffer from their lack of good judgment, and negligence is a perfect vehicle to hold them accountable.

In the online context, cybersecurity breaches are a favorite for negligence causes of action:  from 2017 to 2018, 90% of data breach litigations alleged negligence as a cause of action.11 Why? Because it’s a relatively natural fit to show (1) someone was harmed and (2) that the harm was caused by a developer not engaging in a reasonable amount of effort for something in their control to prevent the exploit. In the crypto context, examples include: a class action against the developers of Nomad Bridge (referenced above), a class action against Maker Foundation following the events of “Black Thursday,” a class action against Ooki DAO/bZx DAO for an exploit on its protocol, and a class action against Coinbase for a phishing attack targeted at Coinbase users.

Negligence for Agents and Best Practices

Determining what it means to act “reasonably” is even harder for an agent developer because of the unpredictability introduced by LLMs.

Let’s give a hypothetical to demonstrate why. Imagine a user has a personal agent, perhaps one that is accessible via a chat interface like ChatGPT. The personal agent’s function is to abstract away interacting with online services. The user comes into the chat interface, sends the personal agent some crypto, and asks it to arbitrage on its behalf. This personal agent does not know how to arbitrage, so it instead engages the help of another agent with a high reputation for arbitraging, called Arbitrage agent, sending it the user’s crypto. The Arbitrage agent functions as follows:12 

  1. It scans for newly verified contracts on Etherscan.
  2. It uses an LLM to look at each contract’s code to determine whether it is a DEX.
  3. It uses an LLM to determine the swap function on this DEX.
  4. It calls the swap function to arbitrage between this newly deployed DEX and other DEXs.

The Arbitrage agent sees a new DEX deployed on Etherscan but misinterprets how to interact with it and sends the user’s crypto to the wrong function, causing it to be lost forever. The user is upset that she lost her funds and sues the developer of the Arbitrage agent for negligence.13 The question is: Did the developer of the Arbitrage agent act negligently even though she did not control the agent’s actions?

To answer this question, a court will first ask whether the economic loss doctrine forecloses the negligence action. The harm is surely economic, so many courts may bar the action based on that fact alone. But for courts where this is not a direct bar, they will ask if there is a contractual relationship between the user and the Arbitrage agent developer. The court may find these parties have a contractual relationship through the personal agent/interface agent developer and block the action for that reason. But the wider point is that as agents increasingly act as fuzzy APIs to engage each other in an endless web of services, the distance of the contractual relationship between the end user and the agent where harm occurs will increase, making it more likely that a negligence cause of action will survive.14

Let’s assume that the economic loss doctrine does not bar the user’s claim because there’s no clear contractual relationship between user and Arbitrage agent developer (or there exists another independent duty). What complexities arise in assessing whether the developer acted unreasonably, that is, did not exercise proper care within her control?

To answer this, let’s first see how Arbitrage agent misinterpreted the contract at a technical level.15

  1. The agent has a function called find_swap_function() that looks at the source code of the newly verified contracts on Etherscan and tries to determine whether it has a swap function it can interact with. In the first part of this function, the developer has hardcoded, in plain English, a series of instructions for the LLM as a prompt. These instructions include the agent's identity and task and an example swap function signature from Uniswap V2 to demonstrate what a swap function looks like. In the second part of this function, this prompt is passed along with the contract_code being analyzed to the LLM, which spits back its determination of the name of the swap function, if any, for the given contract (see the sketch after this list).16

  2. The agent then has a deterministic function called execute_swap() (not shown for brevity) that interacts with other DEXs, has logic around whether there is an arbitrage opportunity, and then calls the function the LLM determined to be the swap function on this newly created contract to execute the swap.
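
Here is a minimal sketch of what find_swap_function() might look like under the description above. Per footnote 16, this is a dummy example, not working code: call_llm is a stand-in for whatever model API the agent uses, and the prompt and signature are illustrative.

```python
# Dummy sketch of find_swap_function() as described above -- per the
# footnote, do not use this! call_llm is a stand-in for a real model API.

SWAP_PROMPT = """You are an arbitrage agent. Given the verified source code
of a contract, decide whether it is a DEX and, if so, return the exact name
of its swap function. Return NONE if there is no swap function.

Example swap function signature (Uniswap V2):
swapExactTokensForTokens(uint amountIn, uint amountOutMin,
    address[] calldata path, address to, uint deadline)
"""


def call_llm(prompt: str, context: str) -> str:
    """Stand-in for a real LLM call; the response is probabilistic."""
    raise NotImplementedError


def find_swap_function(contract_code: str) -> str | None:
    """Ask the LLM to name the swap function in a newly verified contract.

    Note that nothing here verifies the answer: if the LLM names an
    unrelated function, execute_swap() will call it anyway -- exactly the
    failure mode in the hypothetical.
    """
    answer = call_llm(SWAP_PROMPT, contract_code).strip()
    return None if answer == "NONE" else answer
```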

 

In our example of where things went wrong with the agent, behind the scenes the agent took in contract_code in find_swap_function() and passed it to the LLM. The LLM errantly returned a function that was not actually the swap function but was instead an unrelated function. Thus, when the agent called execute_swap(), it sent funds to a function that did not swap the crypto, causing it to become forever locked in the contract.

This example illuminates the two things that make agents so different from traditional software and thereby create complexity for the negligence analysis. First, because the agent can interact with human language, it can be unleashed without further control from its developer to interact with others in arbitrary online environments. In our hypothetical, that is reflected by Arbitrage agent going to Etherscan and scanning newly deployed contracts from users the developer has no relationship with. Second, the response from the LLM is relatively unpredictable because LLMs are probabilistic and not deterministic, and we cannot describe with certainty how an LLM will generate the next token for any given input.17 

This unpredictability surely introduces complexity, but I worry developers are missing the forest for the trees here by focusing too much on what’s not in their control when there are plenty of things that are in their control that they can do something about. Remember, my oversimplified definition of acting “reasonably,” and therefore not negligently, is (1) engaging in good-faith efforts to reduce harm to others for (2) things that are in a person’s control. So what are some ideas for agent developers to consider on this front?

  1. Terms, Terms, Terms. This one isn't about good-faith efforts, but it leads the list because it's so important. Re-read the contract section of this paper. I am a broken record at this point, but introducing explicit developer-friendly contracts into the flow provides more legal certainty to the agent developer and increases the likelihood the economic loss doctrine/the terms contained therein will bar or limit a negligence cause of action.
  2. Limit Scope. Consider limiting the scope of the universe for the agent's actions to those whitelisted/defined by the developer. This can happen both on the prompting end of things, by providing instructions that tell the LLM to "only produce actions [delineated on the whitelist]," and also by providing a check in the deterministic function that the action being called is in fact on the whitelist (see the first sketch after this list).
  3. User Confirmation. Consider the tradeoff between agent autonomy and user confirmation. One way to try to shift risk to the user is having the agent serve as a “transaction builder” and then putting the onus on the user to approve each transaction (similar to how wallets function). Another flavor of this is to have the user pre-approve specific delineated transactions in a whitelist (see the previous point). Putting the user in the loop will buttress a claim (hopefully enforced by terms of service) that the user assumed the risk of their own activity. Of course, it also reduces agent autonomy, so the developer needs to think carefully about what she is trying to optimize for.
  4. Use Protective Tooling. Consider implementing tools that can prevent the agent from interacting with nefarious contracts or protocols, like Blockaid. Blockaid has an API that allows the agent to simulate transactions in real-time to predict outcomes and identify risks before execution. Failure to implement tools like Blockaid could support the argument that the developer did not act “reasonably” as these tools become more and more industry standard.18 
  5. No Excuse for Deterministic Code. Carefully review the deterministic functions for bugs. There is little uncertainty in the analysis for vanilla bugs of deterministic functions.
  6. Test and Simulate. Test the code on a number of scenarios, retest it, and then retest it again. Consider simulating the agent against real-time and historical data to see how the agent performs over time, or using synthetic data to see how it reacts in certain situations to cover as many edge cases as possible. Iterate on what’s learned from this testing accordingly.
  7. Discriminator LLMs. Consider using another discriminator LLM that has the sole purpose of acting as a check on the generative LLM (see the second sketch after this list).
  8. Introduce Verifiability. As discussed in the contract section, one solution is to try to prevent the harm from occurring in the first place by offloading risk to trustless and verifiable technology. Consider using verifiable models through Hyperbolic to show care was taken to obtain the intended model. EigenLayer is also doing some great stuff on this front by providing tooling to add verifiability to many different components in the agent stack. 
  9. Engage Experts. Consider engaging experts to conduct a security review. I’m particularly a fan of the Spearbit team, who does a lot on both traditional crypto and its intersection with AI. They can review the agent’s infrastructure, cloud configuration, TEE (where applicable) environments, and wallet setup. They can also assist in strengthening against attack vectors like prompt injection, onchain executions, data poisoning, supply chain attacks, etc.
  10. Monitor and Kill Switch. Monitor the agent and consider introducing a kill switch and a clearly defined policy on when the kill switch will be triggered. I am a strong proponent of autonomy for agents (and as I describe next, non-custody is critical in my opinion), but until agents themselves can be considered separate legal persons under the law (which they currently are not), we need to recognize that the law will likely hold legal persons (as opposed to software) liable — whether it be the developer of the agent, the user of the agent, etc. One way to introduce a kill switch that maximizes autonomy would be a layered approach: a compliance agent could monitor the main agent, a multi-sig could override the compliance agent, and a DAO could override the multi-sig. This could balance decentralization with the need to act quickly in an emergency.19 
  11. Don’t Custody User Funds. Taking custody over user funds is risky business. Not only does it implicate an entirely separate duty of care around the safekeeping of those funds, but it also puts the potential damages on steroids if something goes wrong. If the agent developer can sign transactions on behalf of the agent and users are entrusting the agent with their funds, the agent developer is probably custodying user funds. Agent developers should consider solutions like Turnkey, which uses TEEs to allow developers to remove custody over funds. Turnkey is expanding beyond secure key management to enable companies to run arbitrary code in TEEs and provide cryptographic proof that the code executes as intended. This is important because it tackles another attack vector to prevent funds from being extracted by tampering with the code itself.

 

Conclusion

The emergence of agents introduces significant complexities in both contract and negligence liability. While traditional web applications can rely on clear UI flows to ensure notice and assent to binding terms, agents can operate in environments where such control is limited, increasing the risk of disputes governed by more uncertain areas of law. Similarly, negligence claims become more complex with agents: the contractual relationships that would bar such claims under the economic loss doctrine are less clear, and agents introduce unpredictability. To mitigate these risks and limit liability, developers should work with their counsel to prioritize robust contract-formation strategies and adopt reasonable measures within the developer's control.

 

Thank you to my builder friends: Bryce (Turnkey), Courtland (Plastic), the Blockaid team, @iamgingertrash (Truffle), the Spearbit team, and Nima (EigenLayer).

Thank you to my legal friends: Jack Boeglin and Zack Shapiro (Rains LLP).

Thank you to my Variant colleagues: Jack Gorman and Caleb Shough.

 

Appendix: Negligence

In the main piece, I said that at its core, negligence is about showing a party acted unreasonably and another was hurt because of it. 

While I believe this is the crux of what negligence is about, it’s an oversimplification. In reality, for a plaintiff to bring a negligence claim against a defendant, she must show four things:

  1. Duty: The defendant has a duty of care towards the plaintiff. Who does the defendant have a duty of care towards? All foreseeable victims of her actions.
  2. Breach: The defendant breached her duty of care. A party breaches her duty of care when she fails to act as a reasonable person would under similar circumstances to prevent the harm that occurred. Okay, how do we define the amount of effort that is reasonable? There's no defined standard, but courts will frequently look at how others in the industry at issue would have acted (e.g., what a reasonable doctor would do). Courts will also rely on an expected-value formula, assessing the burden on the defendant of preventing the harm, the probability of the harm, and the magnitude of the harm. When the burden is less than the probability of the harm times the magnitude of the harm, courts will frequently find that the defendant's failure to shoulder that burden means she breached her duty (see the formula after this list).
  3. Causation: The defendant’s breach caused the harm the plaintiff suffered. There are two components here. One, the defendant’s actions must be the factual cause of the harm suffered. That is, “but for the defendant’s action, the harm would not have resulted.” Two, the harm that resulted must have been a foreseeable consequence of the breach. 
  4. Damages: The plaintiff suffered damages as a result of the defendant’s actions.
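
The expected-value test in the breach prong is the well-known Learned Hand formula from United States v. Carroll Towing. In symbols, failing to take a precaution suggests a breach when:

```latex
B < P \times L
```

where \(B\) is the burden of taking the precaution, \(P\) is the probability of the harm, and \(L\) is the magnitude of the loss. To illustrate with purely hypothetical numbers: if adding more swap-signature examples to the prompt would have cost the developer \$500 of effort (\(B\)), there was a 1% chance of a misread signature (\(P = 0.01\)), and the expected loss was \$100{,}000 (\(L\)), then \(P \times L = \$1{,}000 > B\), and skipping that precaution looks like a breach.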

 

To see how agents complicate the analysis, let’s run through each prong using my example from the main piece of the user and Arbitrage agent.

  1. Duty: Is the user foreseeable to Arbitrage agent’s developer? Possibly. If the developer holds herself out as offering this arbitrage service on behalf of other users, this would be more likely. However, the more degrees removed Arbitrage agent is from the end user, the less foreseeable the end user likely is.
  2. Breach: Did the developer fail to act as a reasonable person would have under similar circumstances? What is the industry standard for agent developers to manage this risk? Agents are so nascent it’s hard to say. What about the expected value analysis; would it have taken less burden for the Arbitrage agent developer to prevent the harm than the probability of the harm times the magnitude of the harm? Let’s assume the Arbitrage agent misinterpreted the swap signature because the developer only contemplated Uniswap V2’s swap signature. Surely, the developer could have added more prompting examples, which would have been a low burden. But would that have prevented the harm? It’s hard to say for certain; the developer cannot envision every single type of swap signature that the agent will encounter. What about if the developer manually reviewed each transaction before it was submitted by the agent (effectively eliminating the agent’s purpose)? Would this have prevented the harm? Probably. But is that necessary when the burden associated with that is so high? It’s not clear. 
  3. Causation: If the developer breached, was that breach the cause of the harm the user suffered? Factually, almost certainly; "but for" the developer's actions/failure to act (we are assuming there was a breach), the agent wouldn't have sent the funds errantly. Was this harm foreseeable? Probably: because the agent has a deterministic function to send crypto, misunderstanding the swap function and errantly losing crypto seems like a foreseeable risk that would arise from not meeting the relevant standard of care. But we can see that the more decision-making is put on the LLM as opposed to deterministic functions, the more unpredictable the outcome becomes, and likely the less foreseeable it is.
  4. Damages: The user lost funds, so this is met and this factor does not change significantly with agents.20

 

Footnotes

  1. I am only discussing U.S. law in this article.
  2. Also in the Uniform Commercial Code (UCC), which has been adopted across the country and has many “gap filler” and default contractual terms that are read into agreements in the absence of contrary written provisions. 
  3. And notice that the user who interacted with ShopAgent doesn’t have any direct relationship with ShoeAgent, meaning that the two parties may not be in what’s called “privity.”
  4. And the UCC. See my previous footnote.
  5. It is worth noting that while terms of service are powerful tools, they cannot contemplate everything. And even if the terms are binding, a court could still refuse to honor them if they are contrary to public policy. However, even with these limitations, terms of service still represent an excellent way for agent developers to try to reduce their liability exposure.
  6. The Uniform Electronic Transactions Act, which lays out some of the circumstances in which such agreements can be binding, may provide some guidance.
  7. Many courts will deny the claim even without a valid contract, if there is pure economic loss. There are a number of carveouts and exceptions based on the jurisdiction. But, generally speaking, the more clearly the harm is economic and there is a valid contract that contemplates the harm, the more likely a court will bar recovery under a negligence cause of action.
  8. Mark A. Sayre, J.D. & Kyle Glover, Esq., Machines Make Mistakes Too: Planning for AI Liability in Contracting, 15 Case W. Reserve J.L. Tech. & Internet 357, 393–95 (2024).
  9. Singh v. Illusory Sys., Inc., 727 F. Supp. 3d 500, 511 (D. Del. 2024). The other plaintiff’s claims were subject to a separate economic loss doctrine analysis.
  10. Certain states recognize certain exceptions to the economic loss doctrine, meaning that even if a valid contract is shown and the losses are purely economic, the plaintiff can still recover. Again, this is a state-by-state inquiry, but at a general level two patterns emerge. First, some jurisdictions (notably California) will still allow recovery when there exists a “special relationship” between the plaintiff and the defendant. Second, when the harm that occurred is far outside the scope of what’s contemplated by the contract, some courts will still allow recovery because the harm is considered independent from the contract.
  11. Nicolas N. LaBranche, The Economic Loss Doctrine & Data Breach Litigation: Applying the “Venerable Chestnut of Tort Law” in the Age of the Internet, 62 B.C. L. Rev. 1665, 1669 (2021).
  12. This example is admittedly contrived to make my point; arbitraging is about speed, and LLMs are likely not fast enough for this use case (at least today). An agent running this strategy today would likely utilize a deterministic strategy.
  13. It would be much more likely the user would sue the developer of the personal agent, not the Arbitrage agent, for a variety of practical and legal reasons. I am providing this example to illuminate a point about the chain of relationships agents create, not because I think it’s how this would actually go down in this instance.
  14. Nicolas N. LaBranche, The Economic Loss Doctrine & Data Breach Litigation: Applying the “Venerable Chestnut of Tort Law” in the Age of the Internet, 62 B.C. L. Rev. 1665, 1679 (2021) (“The court’s characterization of the privity level of the parties can be essentially outcome-determinative as to whether tort claims may proceed, as the exception in the contracting parties paradigm creates a higher burden of proof for plaintiffs.”).
  15. This dummy example is more accurately described as a “workflow” than an “agent.” 
  16. Stating the obvious here, but do not use this code! This is a dummy example to prove a point, not to actually be used!
  17. While not a focus of this article, this issue will shift into overdrive when agents are given a shell and can generate their own deterministic functions to call. This will bring us even closer to a world where “autonomous machines cause injury in ways wholly untraceable and unattributable to the hand of man,” which will only further complicate the negligence analysis. See David C. Vladeck, Machines Without Principals: Liability Rules and Artificial Intelligence, 89 Wash. L. Rev. 117, 127–28 (2014).
  18. Mark A. Sayre, J.D. & Kyle Glover, Esq., Machines Make Mistakes Too: Planning for AI Liability in Contracting, 15 Case W. Reserve J.L. Tech. & Internet 357, 384–85 (2024) (“Negligence law in the United States generally places great emphasis on industry custom and standard when determining whether a defendant was negligent; following industry practice is taken as strong, although not conclusive, evidence that a defendant was not negligent.”).
  19. h/t to @ghappour for this idea.
  20. Although there are some interesting questions around comparative fault I do not discuss here.

 

 

 
