October 10, 2024
AI agents have brains, but where are their wallets?
From the moment ChatGPT launched, stories exploded of people having the AI model build them a business or automate some portion of their lives. Yet the LLM interface itself remains stubbornly text-based. ChatGPT, for example, will return web results when you interact with it directly, but requests through the API won’t search the web unless you write your own code to do so. And Claude explicitly disclaims web search, saying it’s designed to be “self-contained.”
Nonetheless, dozens of startups are pursuing visions of autonomous AI agents completing complex tasks such as buying goods, accessing online services, and making financial decisions. These agents may assist users, manage subscriptions, or handle repetitive transactions more efficiently than humans. But they face a fundamental challenge: in order to act on our behalf in the world, they will have to make purchases. And the existing financial infrastructure is explicitly designed for humans, not bots.
New financial infrastructure must be created to support these bot-based interactions. This will require new interfaces, new standards, and most importantly new fraud tooling.
Here are the reasons why this infrastructure is necessary:
- Autonomous execution: AI agents need financial capabilities to make purchases without human oversight. Existing payment systems are built for human interaction and not agent-initiated transactions.
- Verification and trust: Transactions initiated by AI agents pose new risks, which calls for new methods to verify that these agents are trustworthy actors operating within the boundaries of their predefined roles and permissions.
- Custom workflows: AI transactions may follow different logic than human transactions. They may need recurring payments, microtransactions, or contextual buying decisions based on real-time data, workflows that today’s financial infrastructure does not support efficiently.
What’s the problem with existing financial infrastructure products?
Existing financial systems are designed with humans in mind, and this poses problems for purchases made by AI agents. Start with identity and authentication: financial products rely on verifying human identity, but agents don't have a digital identity the way a human does and thus require new verification methods. In fact, one of the most consistent themes in fraud detection today is trying to determine when an automated system is presenting itself with human credentials.
There’s also the issue of transaction limits and risk profiles. When merchants try to limit bots, they may be trying to prevent people from buying up limited inventory, like limited-edition Nikes or Taylor Swift tickets. And many of the players involved in payments, such as the banks issuing cards or, in some cases, the card networks themselves, have rules and risk thresholds based on typical human behavior.
Moreover, we know that automated systems (whether driven by AI or not) have the capacity to get stuck in loops, driving surprisingly large outcomes from seemingly low-risk rules.
Even beyond the comedic example from Office Space, there are real-world examples in capital markets, most famously Knight Capital, whose algorithms purchased $7B of stocks in under an hour. The episode nearly bankrupted the company and eventually forced its sale to a rival.
These examples illustrate the intuitive but very difficult problem of permissions and authorization. Limits within traditional infrastructure might block transactions a bot was intended to make, but it would also be untenable to simply remove those limits for agentic transactions. There are plenty of sophisticated controls in place today, but they aren’t perfectly analogous to the controls that would be needed: constraints tied to context, goals, or spending caps. Handling an agentic payment will require new technology embedded into the agentic use case itself. An AI agent's buying power may need to be defined differently, and current systems don't support this well.
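To make this concrete, here is a minimal sketch of what agent-scoped constraints could look like, assuming a policy attached to the agent rather than to a human cardholder profile. The field names and policy shape are invented for illustration; they are not any existing provider’s API.

```typescript
// Hypothetical agent-level spending policy; illustrative names only.
interface AgentPolicy {
  ownerAccountId: string;              // the human or business the agent acts for
  dailySpendLimitCents: number;        // hard cap across all transactions per day
  perTransactionLimitCents: number;    // cap on any single purchase
  allowedMerchantCategories: string[]; // e.g. ["groceries", "saas"]
}

interface PaymentRequest {
  agentId: string;
  merchantCategory: string;
  amountCents: number;
}

// Approve only if the request fits the policy and the remaining daily budget.
function authorize(
  policy: AgentPolicy,
  spentTodayCents: number,
  req: PaymentRequest
): { approved: boolean; reason?: string } {
  if (!policy.allowedMerchantCategories.includes(req.merchantCategory)) {
    return { approved: false, reason: "merchant category not allowed" };
  }
  if (req.amountCents > policy.perTransactionLimitCents) {
    return { approved: false, reason: "per-transaction limit exceeded" };
  }
  if (spentTodayCents + req.amountCents > policy.dailySpendLimitCents) {
    return { approved: false, reason: "daily spend cap exceeded" };
  }
  return { approved: true };
}
```

The interesting design question is where a check like this runs: at the card issuer, at the processor, or inside the agent platform itself. Each choice implies a different kind of company.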
Wouldn’t Stripe do this?
For anyone who might want to start a new payments company, especially if the fundamental problem is accepting payments over the internet, it’s de rigueur to ask the question: Why wouldn’t Stripe do this?
Of course, there are the obvious answers that apply to any startup: the innovator’s dilemma and just plain tradeoffs. Stripe can make much more money next year by building custom features for enterprise clients like Walmart or Shopify than by trying to solve hard technical problems for payment flows that don’t really exist yet.
More importantly, it seems that these agentic payments are structurally different from the problems that Stripe and other payment processors have solved to date. Stripe and similar companies have robust fraud detection, identity verification, and merchant interaction mechanisms designed for human-initiated transactions.
AI agent transactions pose a few key challenges for Stripe. The first is bot detection and fraud. Like many financial products, Stripe differentiates between human users and bots, and introducing AI agents changes that paradigm. Now the task is to separate "good bots" from "bad bots" and "trustworthy agents" from "malicious actors".
The second is custom logic and workflows. Stripe has mostly won because of its simplicity rather than its flexibility. Even in today’s human-centered paradigm, for example, QED portfolio company Payabli is winning with mid-market software vendors because those vendors need to customize payment flows more complicated than one-time payments. It seems clear that AI-agent transactions will require even more complex frameworks, like dynamic pricing, microtransactions, or on-demand purchases based on real-time analytics. This demands a more agent-tailored financial product.
It’s likely that the first generation of agentic-payments companies will build wrappers on top of Stripe, Worldpay, and Adyen. At some level, they will have to: since all payments in the US require bank connectivity, these companies will rely on others to actually move the money. But in fintech, there’s a common pattern of outgrowing your initial infrastructure, or at least outgrowing the economics of your initial infrastructure deal. We can guarantee that Shopify has different economics than ‘MyNewEcommerceProject.com’.
It’s too early to tell whether the first value will be captured by a new type of fraud company designed to separate good bots from bad bots, or by a new type of payments company that wins because it conquers this new fraud paradigm. But if we believe that the future of commerce is increasingly agentic, then there is a significant opportunity for both. Just as Stripe succeeded because e-commerce demanded a new type of payments company, it is likely that a big company can be built on this new payments paradigm.
What does a good product look like to enable agents to make purchases?
The promise of an agentic payment system will require companies to solve old problems in a new way. Payment processing is a “solved” problem today, but inviting robots into the payments system, instead of trying to keep them out, will require new approaches to each element of a payment transaction. At a high level, these systems must distinguish between good and bad bots, and must be able to handle granular and dynamic risk assessments.
Good vs bad bots: The product must develop a framework for distinguishing good bots from bad bots. Good bots are AI agents working within the context of their programmed duties; bad bots are the ones attempting fraud or abuse. This requires a new model for bot behavior profiling, one that separates good-faith actors from malicious ones.
Granular risk assessment: Different levels of risk will need to be assigned based on the type of AI agent, its behavior, its transaction patterns, and the context of the purchase. Such risk assessments should be dynamic, adapting as agent behavior evolves.
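As an illustration of what granular and dynamic might mean in practice, here is a toy scoring function. The signals and weights are invented for this sketch; a real system would learn them from transaction data rather than hard-code them.

```typescript
// Illustrative risk signals for an agent-initiated transaction.
interface AgentSignals {
  agentVerified: boolean;       // did the agent present valid credentials?
  ownerAccountAgeDays: number;  // how long the owner's account has existed
  declinesLast24h: number;      // recent declines across this agent's attempts
  amountVsTypicalRatio: number; // this amount relative to the agent's typical purchase
}

// Combine signals into a 0-100 score; the weights here are made up.
function riskScore(s: AgentSignals): number {
  let score = 0;
  if (!s.agentVerified) score += 50;             // unverified agents start risky
  if (s.ownerAccountAgeDays < 30) score += 15;   // brand-new owners are riskier
  score += Math.min(s.declinesLast24h * 10, 30); // repeated declines can signal a loop
  if (s.amountVsTypicalRatio > 3) score += 20;   // unusually large purchase
  return Math.min(score, 100);
}
```

Thresholds on the score could then be dynamic per agent: approve below one cutoff, step up to the owner for confirmation in the middle band, and decline above the top.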
Therefore, a tailored financial product for AI agents should include:
- Agent verification and trustworthiness: A mechanism for establishing and validating an AI agent's identity and intent. It could involve agent-specific credentials or tokens tied to an owner account with clear policies (one possible credential flow is sketched after this list).
- Programmable payment logic: Flexible rules and permission structures that enable agents to make purchases based on pre-set conditions, behaviors, or triggers (e.g. daily spend limits, purchasing criteria, allowed merchant types).
- Risk and fraud management for agents: Intelligent systems to distinguish between "good agents" (trusted AI bots) and "bad agents" (malicious or rogue bots). This includes various levels of trust and risk to assess agent behavior in real-time.
- API-first approach: A robust API architecture that allows both merchants and AI developers to integrate seamlessly. For merchants, this means easy support for automated agent transactions. For AI developers, it provides a straightforward way to plug payment capabilities into their agents.
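As one possible shape for the agent verification item above, consider per-agent signing keys issued by the payments provider and bound to an owner account. This is a sketch under assumptions, not an established protocol; all identifiers are made up, and a production system would canonicalize the payload before signing.

```typescript
// Sketch: the provider issues a key pair per agent at registration; the
// agent signs each payment request and the provider verifies it before
// applying the owner's policy. Hypothetical flow, not a real API.
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Issued once, when the owner registers the agent with the provider.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

interface SignedPaymentRequest {
  agentId: string;
  ownerAccountId: string;
  merchantId: string;
  amountCents: number;
  signature: Buffer; // over the serialized request body
}

function signRequest(req: Omit<SignedPaymentRequest, "signature">): SignedPaymentRequest {
  const body = Buffer.from(JSON.stringify(req)); // real systems: canonical encoding
  return { ...req, signature: sign(null, body, privateKey) };
}

function verifyRequest(req: SignedPaymentRequest): boolean {
  const { signature, ...rest } = req;
  const body = Buffer.from(JSON.stringify(rest));
  return verify(null, body, publicKey, signature);
}
```

A signed request gives the processor something a stolen card number never could: proof of which agent, acting for which owner, initiated the purchase.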
Balancing needs of merchants and AI agent companies
This new product must act as an intermediary between two distinct user groups: merchants and AI agent companies.
For Merchants: The product needs to be straightforward to adopt, allowing them to accept AI-initiated payments without a steep learning curve. This involves providing visibility into agent transactions, trustworthiness indicators, and fraud protections specific to AI-driven purchases.
For AI Agent Companies: The product needs to offer simple ways to set up agent payment capabilities, manage agent permissions, and adapt to various use cases (e.g. subscriptions, one-off purchases, microtransactions).
Balancing these needs is essential to creating a frictionless product experience that builds trust on both sides. While this post is mainly about the ability of AI agents to make purchases in an e-commerce context, the fact that these agentic workflows involve new behavior on both sides of the payment flow could create opportunities for network effects within the broader payment network.
For example, one of the obvious potential solutions to agent verification would be a direct partnership with an agentic company that provides the payment provider a “white list” of verified agents. If this provider also acted as the merchant acquirer for an e-commerce company, then white-listed agents could be waved through seamlessly. This could jumpstart a classic network effect.
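In code, the allowlist idea is almost trivial, which is part of its appeal; the hard part is the partnership that produces the list. A sketch, with invented identifiers:

```typescript
// Roster of verified agent IDs shared by a partner agent platform.
const verifiedAgents = new Set<string>(["agent_abc123", "agent_def456"]);

type Disposition = "fast_path" | "full_risk_review";

// Allowlisted agents skip straight to authorization; unknown agents go
// through the full risk pipeline rather than being rejected outright.
function route(agentId: string): Disposition {
  return verifiedAgents.has(agentId) ? "fast_path" : "full_risk_review";
}
```

Every additional merchant the provider acquires for makes the shared list more valuable to the agent platform, and every additional verified agent makes the provider more valuable to merchants.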
Agentic future and the big opportunity
The market demand behind increasingly autonomous agents seems clear, but it’s not clear how quickly these agents will be able to earn our trust. Consumers need to trust that AI agents will execute transactions as intended. This fear of misalignment with user intent is a big reason why Alexa’s shopping features didn’t gain traction, and why things like the Instacart plugin for ChatGPT never took off enough to go full-stack, even though Instacart and Amazon in these cases are the merchants and have no real blockers to payment automation, since they own the full stack.
Moreover, for a payment network to be valuable, there would have to be a large number of successful agent frameworks as well as a large number of merchants who want to take payments from them. In other words, the market structure must have a large amount of complexity on both sides, both in terms of the technical ‘job to be done’ (as discussed above) and in terms of the number of participants.
In certain futures, it’s possible that getting an AI to pay online doesn’t require any change on the payments side at all. For example, if AI is easily able to interpret checkout flows and present its client’s payment credentials directly, then all of the payments value would accrue to the agentic companies themselves. It’s also possible that the market for agentic platforms has so few winners that the emergent market structure only requires payments companies to do a handful of direct deals with those companies.
On the payments side, it’s possible that standardizing agent identity and trust protocols takes longer than expected. For agentic payments to work, there will likely need to be regulatory or governance clarity around things like disputes, chargebacks, and reversals. Without these, merchants themselves may not want to accept payments from AI agents. Will the risk-reward tradeoff be worth it in the near future?
Still, we believe that agentic payments infrastructure has all the ingredients of a BIG opportunity.
- AI Autonomy: As AI agents become embedded in daily life (e.g. personal assistants, financial management bots), they will need payment capabilities to deliver on their promise.
- New Market for Trust and Security: With the emergence of AI agents, trust and security become paramount. A company that can handle the identity, verification, and risk assessment for agent transactions stands to become a central player in the ecosystem.
- Potential Network Effects: A company that can effectively balance merchant and AI developer needs should be able to provide solutions to both sides, creating network effects and unlocking an incredibly powerful moat.
Ultimately, the success of financial infrastructure for AI agents hinges on the agentic future arriving: a version of the future in which autonomous AI plays a significant role in commerce and day-to-day consumer behavior.
The rise of AI agents as autonomous economic actors creates a need for tailored financial infrastructure that addresses the unique requirements of trust, permission, and security in agent-driven commerce. This presents a compelling opportunity for startups to create specialized products that fill this gap, offering new pathways for investment and growth in an evolving digital economy.
Prateek Joshi from Moxxie Ventures co-authored this article.