Introduction
In the near future, individuals and non-financial firms will use generative artificial intelligence (AI) to accomplish everyday tasks. In addition to drafting emails (Ashby, Reference Ashby2024) and making restaurant reservations (Huffman, Reference Huffman2019), users may ultimately entrust applications like Siri or Gemini with all manner of financial tasks. Individuals may ask these applications to pay their utility bills, recommend financial products, or manage investment portfolios with the press of a button. Non-financial firms may task their AI-powered financial management software to pay invoices and effectively manage their cash flow, with humans intervening only when necessary.
When created, these applications will be in a class known as ‘agentic AI’ or ‘AI agents’ – so named because they ‘are characterized by the ability to take actions which consistently contribute toward achieving goals over an extended period of time, without their behavior having been specified in advance’ (Shavit et al., Reference Shavit, Agarwal, Brundage, Adler, O’Keefe, Campbell, Lee, Mishkin, Eloundou, Hickey, Slama, Ahmad, McMillan, Beutel, Passos and Robinson2023: 4). That is, these applications can independently perform tasks on behalf of their users without those users’ involvement. AI agents already exist in one area of finance – in the crypto asset ecosystem, AI agents utilize DeFi exchanges to buy and sell assets in line with their creators’ risk tolerances (Saunders, Reference Saunders2024) – and are poised to become integrated into traditional financial services (Hsu, Reference Hsu2024). Indeed, the technologies needed to create AI agents that provide financial services to their users already exist; all that remains is to combine the constituent parts. Apple’s Siri has demonstrated that AI agents are already available to the general public, and open finance firms like Plaid and Stripe have developed application programming interfaces (APIs) that allow third-party finance applications to bloom. Combining the two is technologically around the corner.
These products could benefit their users while also posing risks to them and to the financial system. The ability for individuals to verbally direct their phones to routinely pay bills as they come due can be an incredible time saver. Receiving personalized, unbiased financial advice can improve investors’ outcomes if followed (Bhattacharya et al., Reference Bhattacharya, Hackethal, Kaesler, Loos and Meyer2012), and having portfolios automatically rebalanced can keep savers on track for retirement (Arnott et al., Reference Arnott, Li and Linnainmaa2024). At the same time, AI agents may cause overdrafts, provide advice contrary to their users’ best interests (they are software not subject to any fiduciary duty, despite what the title of ‘agent’ implies), and engage in market manipulation in ways that place their users at legal risk (Mizuta, Reference Mizuta2020). For businesses, off-the-shelf treasury management solutions equipped with AI can help them balance financial returns with liquidity needs (Polak et al., Reference Polak, Nelischer, Guo and Robertson2020) while having the potential to cause financial instability by propagating flash crashes and bank runs (Gensler and Bailey, Reference Gensler and Bailey2020). Simultaneously, malicious actors are using AI agents to defraud consumers, execute cyberattacks, and intentionally manipulate markets (Fang et al., Reference Fang, Bindu, Gupta, Zhan and Kang2024; Mizuta, Reference Mizuta2020; Hsu, Reference Hsu2024).
To be sure, these risks are not attributable solely to AI. The activities described above can all be accomplished with prior-generation automation technology (e.g., scripts) and APIs. But AI allows users to undertake these activities with greater ease, making those risks more likely to materialize. Whereas fintech firms previously needed developers to create applications, generative AI agents promise to let users implement the same or similar features on their own, without knowing how to code, along with countless other features that have not yet been conceived. Moreover, AI agents may take users’ vague commands and interpret them in unpredictable ways.
Regulators around the world have yet to fully grapple with the risks posed by individuals and non-financial firms using AI agents to engage in financial services, focusing instead on their uses by existing financial institutions. Given finance’s long history with algorithms and machine learning, regulators have been able to adapt to the deployment of new AI technology by financial institutions. In the United States, for example, the Securities and Exchange Commission imposed new regulations on high-frequency trading firms and sought to ensure that investment banks and financial advisors do not mistreat their clients when leveraging AI (SEC, 2024). Similarly, the European Securities and Markets Authority has warned market participants that MiFID II applies to investment firms’ use of the technology (ESMA, 2024). Some regulators are encouraging financial institutions to adopt AI and machine learning technologies to assist with risk management and regulatory compliance functions (FSOC, 2023).
It is concerning that policymakers have largely ignored how the use of AI agents by individuals and non-financial firms can affect their users and the financial system. In some areas, banking and financial regulators retain existing authorities to address these risks, needing only to put them to use. The US Consumer Financial Protection Bureau (CFPB), for example, can address unfair, deceptive, and abusive acts and practices posed by AI agents that allow consumers to engage in banking, borrowing, and lending (12 USC § 5481(5)). In such areas, regulators must make clear to technology firms, before these products come to market, that AI agents performing financial services will be subject to regulatory scrutiny. In other areas, regulators’ authority is unclear – there is an open question as to whether the US financial markets regulators can regulate AI agents that execute transactions or provide investment advice, for example – and in such situations, regulators must evaluate the extent of their statutory authorities before consumers, non-financial firms, and investors face harm.
In some areas, regulators appear to lack the authority to fully address the risks posed by AI agents. In the United States, one such area is the management of bank accounts, where AI agents may operate akin to deposit brokers. Brokered deposits (i.e., deposits placed with banks by third parties in the business of doing so) have historically posed risks to banks because they are more likely to be moved from bank to bank in search of higher yields, making it more difficult for bankers to plan (FDIC, 2011: 11). Yet because brokered deposits have been easy for banks to identify, the law regulates banks rather than depositors or deposit brokers (12 USC § 1831f). AI agents threaten to upend this regime by, for the first time, enabling deposits to be brokered without banks being made aware – leaving regulators unable to address the risks. Such areas where regulators lack authority should be of the utmost concern when discussing the risks AI poses in finance, and it is incumbent upon regulators to identify such risks and request additional authority from their legislatures.
This essay argues that the use of AI agents with API access by individuals and non-financial firms poses risks to the financial system in ways that policymakers have not yet fully considered. It contributes to the literature on the social studies of finance by identifying how these actors’ use of this new technology may affect banking and financial markets. In doing so, the essay situates AI agents within a broader history of technologies to which the public, financial institutions, and regulators have had to adapt. In addition, it contrasts AI agents, which are software, with human agents, who are legally bound by fiduciary duties to their principals. Throughout, the essay uses the bank regulatory regime of the United States as an example to demonstrate the problems posed by the use of AI agents by individuals and non-financial firms, though the concerns certainly exist worldwide.
This essay is divided into three main sections. The first hypothesizes how AI agents will operate in traditional finance, taking a broad view of the technology. This section begins by discussing the technology stack underlying AI agents, including large language models (LLMs), AI systems, and the applications that bundle systems and other APIs into usable software. It then explores how open finance technologies may be incorporated into AI agents to offer banking, investment, and other financial services. A second section situates AI agents in the history of technological innovation in finance. This history is described through three ‘waves’ of innovation: transnational finance with railroads and telegraph lines, the evolution from analog to digital recordkeeping, and the internet and advances in computing power. The essay characterizes AI agents as the latest development in this sequence of technologies, something to which the public, financial institutions, and their regulators must adapt. A third section then homes in on the ability of AI agents to serve as deposit brokers in the United States and the problems this poses. It provides an example of the mechanisms by which AI agents could cause bank runs via deposit brokering. The analysis goes on to discuss the US laws governing deposit brokering and how banking regulators lack the ability to regulate banks’ customers or deposit brokers directly. This section closes by underscoring the need for regulation, demonstrating how regulators’ inability to directly regulate deposit brokers or banking customers leaves them unable to address AI agents’ potential to trigger bank runs.
Hypothesizing AI agents in traditional finance
AI agents are applications that are built atop other technologies. In order for individuals and real-economy firms to use them in financial services, open finance services must be added to the applications. This section provides a high-level overview of various technologies and explains how they may work together to allow AI agents to perform financial services for their users. It then explains the types of activities AI agents could perform for users.
AI models, systems, and applications
Three distinct technological concepts are necessary to understand AI agents: models, systems, and applications. AI agents are commercial applications that rely on systems of components, LLMs among them.
AI models are the base-level software used by AI applications, including agents. These models are developed with machine learning technology and, when executed, can create new information in response to prompts (Feuerriegel et al., Reference Feuerriegel, Hartmann, Janiesch and Zschech2024). The information these models can create includes text, images, and music, depending on the information upon which they are trained. Of particular relevance here, models capable of producing text – as AI agents will need to do – are known as LLMs. GPT, short for ‘generative pre-trained transformer’ (not to be confused with ChatGPT), is one such model (Yenduri et al., Reference Yenduri, Ramalingam, Selvi, Supriya, Srivastava, Maddikunta, Raj, Jhaveri, Prabadevi, Wang, Vasiakos and Gadekallu2024), as is LLaMA; models like DALL-E and VALL-E instead generate images and audio, respectively. Current creators of AI models include Alphabet, Microsoft, Meta, and OpenAI (the maker of GPT). AI models can be open-source or proprietary, though constructing a model from scratch is prohibitively expensive for all but the largest companies (Smith, Reference Smith2023).
Models constitute one part of AI systems, which are collections of capabilities (including models, data processing capabilities, and user interfaces) that allow for the creation of new data (Feuerriegel et al., Reference Feuerriegel, Hartmann, Janiesch and Zschech2024). Systems, in effect, run previously created models while also providing an interface through which users can prompt the models and receive their output. ChatGPT is one such system; it provides the software interface allowing users to prompt the GPT model, the hardware on which the model runs, and the software to display the model’s output. Others include Claude and Gemini.
In addition to providing the technology necessary to make models functional, systems can include components that provide additional information or functionality, such as access to and incorporation of real-time data, that allow systems to do more than simply create new text, images, or audio (Feuerriegel et al., Reference Feuerriegel, Hartmann, Janiesch and Zschech2024: 113). For example, a system can render a model designed to analyze financial data practically usable by feeding it the real-time information required for investment decisions (Bi et al., Reference Bi, Bao, Xiao, Wang and Deng2024: 130).
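As a concrete illustration, the sketch below shows how a system component might splice real-time data into a model prompt before invoking an LLM. It is a minimal sketch under assumed names: `fetch_quote` stands in for a market data feed and `call_llm` for a hosted model API; neither is a real library call.

```python
# Minimal sketch: a system component that augments a model prompt with
# real-time data before invoking an LLM. Both functions are
# hypothetical stand-ins, not real APIs.

def fetch_quote(ticker: str) -> float:
    """Stand-in for a real-time market data feed."""
    return {"ACME": 101.25}.get(ticker, 0.0)  # canned value for the sketch

def call_llm(prompt: str) -> str:
    """Stand-in for a call to a hosted language model."""
    return f"[model output for: {prompt!r}]"

def analyze(ticker: str) -> str:
    price = fetch_quote(ticker)  # the system supplies live data...
    prompt = (f"The current price of {ticker} is ${price:.2f}. "
              "Assess whether it fits a conservative portfolio.")
    return call_llm(prompt)      # ...and the model supplies the analysis

print(analyze("ACME"))
```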
Although AI systems can themselves be products – ChatGPT is available to and used by the public as a standalone product – their outputs can be used by applications to effectuate specific, practical use cases. AI applications are ‘the practical use cases and implementations of these systems’ for some concrete purpose, and are offered by the creators of systems, by third parties, or by users themselves (e.g., firms may develop applications for employee use) (Feuerriegel et al., Reference Feuerriegel, Hartmann, Janiesch and Zschech2024: 112). Some of the most well-known AI applications are Siri, Grammarly, and Gemini, though there are a number of other applications and use cases, from music and movie generation (Metz, Reference Metz2023; Garcia, Reference Garcia2023) to software development (Chen et al., Reference Chen, Tworek, Jun, Yuan, Ponde de Oliveira Pinto and Kaplan2021) to AI agents.
In many instances, AI agents are applications built atop other AI systems, supplemented with additional software for the purpose of implementing their users’ commands. For example, when a user asks their AI agent to complete a task, the agent could direct the user’s command to a text-to-code model, such as OpenAI’s Codex, that turns the command into instructions for other parts of the system. The system will run the instructions through APIs to put the command into effect.
As discussed previously, the actions AI agents can take are similar to those that can be accomplished through computer automation. The difference between AI agents and scripts is that the latter require users to maintain a working knowledge of programming languages and APIs, whereas the former may be used by anyone. The text-to-code models incorporated into AI agents create the necessary scripts without users even knowing what a script is, allowing less technologically savvy individuals and firms to undertake any actions that scripts alone could accomplish. Users need only provide natural language commands that describe what they want the agent to accomplish, which the AI systems will execute.
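To make the text-to-code pattern concrete, the sketch below mocks the model call; in a production agent, the script would come from a code-generation model and run in a sandbox against the user’s accounts. All names here (`generate_script`, `get_balance`, `transfer`) are invented for illustration.

```python
# Hedged sketch of the text-to-code pattern: a natural-language command
# goes in, an executable script comes out. The model call is mocked.

COMMAND = "Each Friday, move 10% of my checking balance into savings."

def generate_script(command: str) -> str:
    """Stand-in for a text-to-code model that turns a natural-language
    command into code targeting account APIs."""
    return ("balance = get_balance('checking')\n"
            "transfer('checking', 'savings', round(0.10 * balance, 2))\n")

script = generate_script(COMMAND)
print(script)  # the agent would execute this on the user's behalf
```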
AI agents in finance
Relying on open finance data transfer platforms and their APIs, AI agents will likely be able to perform financial services – including those traditionally reserved for brokers, bankers, and investment advisers – on behalf of their users.
The term ‘open finance’ refers to the ability of a piece of software or firm to use APIs to access and, perhaps, modify financial information held by another financial institution (Awrey and Macey, Reference Awrey and Macey2023). In traditional or closed finance, information about a customer’s account at a financial institution is accessible only through methods offered by that institution (known as a walled garden). For example, to access information about their bank account, or to make payments with that account, a customer would have to visit the bank’s website and use the tools it offers – if it offers them at all. With open finance, that customer would be able to access their account information from any third-party institution that allows such capabilities.
The prototypical examples of open finance – and the first consumer-facing open finance applications available – are budgeting and financial planning apps like Quicken or Empower. Customers create accounts with these apps, which use APIs from the likes of Plaid, Stripe, and others to access the customers’ accounts at traditional financial institutions, giving them a holistic view of their financial health from a single application. The APIs keep customers’ account information current, periodically downloading new transactions. The apps then offer customers the ability to, for example, track their monthly cash flow, set and track goals, and project future finances based on past patterns.
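The aggregation loop at the heart of such an app might look like the following sketch; `OpenFinanceClient` is a hypothetical stand-in for a data transfer platform’s SDK, not a real Plaid or Stripe class.

```python
# Hedged sketch of a budgeting app's periodic sync. The client class
# and its method are invented; a real app would call a data transfer
# platform's API with a user-granted token.

from dataclasses import dataclass, field

@dataclass
class OpenFinanceClient:
    access_token: str
    ledger: list = field(default_factory=list)  # the app's local copy

    def get_transactions(self, since: str) -> list:
        # A real client would call the platform's transactions API here.
        return [{"date": "2025-01-02", "amount": -42.50, "payee": "Grocer"}]

def sync(client: OpenFinanceClient, last_sync: str) -> None:
    for txn in client.get_transactions(since=last_sync):
        client.ledger.append(txn)  # new transactions feed budgets and goals

client = OpenFinanceClient(access_token="user-granted-token")
sync(client, last_sync="2025-01-01")
print(client.ledger)
```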
These budgeting and financial planning apps function as data aggregators; they allow their customers to import information from their financial institutions but do not make changes to customers’ accounts. Early iterations of aggregators relied – and current aggregators continue to rely in part – on screen scrapers: programs that log into customers’ accounts at financial institutions using the customers’ own credentials and copy account information into aggregators’ databases for the customers’ use.
Current data transfer platforms are much more complex than screen scrapers. They offer APIs that, when adopted by traditional financial institutions, can provide third-party applications with secure access to customers’ accounts. Those third-party applications can then receive customers’ financial information and execute transactions on behalf of their customers. For example, some investment advisers execute transactions in their clients’ securities brokerage accounts using open finance applications.
Given the existing demand for third-party finance applications, it is likely that there will be demand for AI agents capable of providing similar services. And because those existing applications rely on APIs to aggregate customers’ financial information and provide additional venues for executing financial transactions, the same APIs would need to be integrated into AI agents’ systems for the agents to conduct financial services: seeing their users’ account information, making recommendations, and executing transactions.
To understand how this is likely to play out, imagine an individual tells their AI agent, ‘Pay my phone bill from my bank account’. The AI agent directs the individual’s statement to the system’s LLM, which transforms the statement into a set of commands for the system’s open finance API. That API will access the individual’s bank account and initiate a transaction from the account to the phone company.
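A minimal sketch of that dispatch flow appears below. The LLM step is mocked as a function that emits a structured action; a real agent would obtain this structure from a model with function-calling support and pass it to an open finance API. All names are hypothetical.

```python
# Hedged sketch of the bill-pay flow: utterance -> structured action ->
# API call. Both functions are invented stand-ins.

def llm_to_action(utterance: str) -> dict:
    """Stand-in for the LLM that maps a command to a structured action."""
    return {"action": "pay", "source": "checking",
            "payee": "phone company", "amount": 89.99}

def execute(action: dict) -> None:
    # A real implementation would call an open finance API here, e.g.,
    # initiating an ACH or RTP payment from the linked account.
    print(f"Initiating ${action['amount']:.2f} payment "
          f"from {action['source']} to {action['payee']}")

execute(llm_to_action("Pay my phone bill from my bank account"))
```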
Using open finance APIs, AI agents are likely to be able to provide myriad financial services. With access to users’ bank accounts, AI agents may be able to execute transactions, as described above. They may also be able to transfer assets between checking and savings accounts at an institution as needed in order to optimize yield and avoid overdraft fees. With real-time data feeds incorporated into their systems, AI agents may be able to identify which bank offers the lowest interest rate on a loan and recommend that product to their users. Similarly, they may be able to identify the bank with the highest rate offered on an account, take the initiative to open an account for the user, and move their assets from the old account to the new one. When it comes to investing, AI agents may analyze their users’ brokerage accounts and recommend investing strategies tailored to their users’ risk tolerances. They may also recommend specific financial instruments or investment opportunities and execute transactions accordingly.
AI agents are the next technological innovation in finance
AI agents are not finance’s first technological innovation, and they certainly will not be the last. It is possible, therefore, to situate AI agents as the most recent in a line of technologies to which the public, financial institutions, and their regulators adapt.
The history of technological innovation in finance
While serving as Chairman of the Basel Committee on Banking Supervision and a central banker, Pablo Hernández de Cos argued that ‘one can think of three waves of technological disruptions in finance’ (Hernández de Cos, Reference Hernández de Cos2019). In the first wave, spurred in the nineteenth century by the development of transnational railroads and transatlantic telegraph cables, technology permitted finance to expand across the globe. The second wave, prompted by digital recordkeeping, allowed the deployment of automated teller machines (ATMs), electronic clearinghouses, and electronic securities trading in the 1960s and 1970s. The third wave, powered by the internet and advances in computing power, allows the public to interact with all-new actors – from fintechs to permissionless blockchains to AI – in ways previously unimaginable. Indeed, scholars note that technological innovations and customers’ responses to them have been among the driving forces, if not the driving force, behind the changes in finance (Campanella et al., Reference Campanella, Della Peruta and Del Giudice2017; Joseph et al., Reference Joseph, McClure and Joseph1999).
Each wave has required financial institutions, their customers, and their regulators to adapt. The potential to perform financial services over long distances allowed banks to coordinate more effectively between headquarters and branch offices, become larger, and play a more important role in international relations (Bátiz-Lazo and Wood, Reference Bátiz-Lazo and Wood2002; Battilossi, Reference Battilossi2000). Advances in communications technology allowed for more efficient fundraising, both by expanding the pool of potential investors and by reducing intermarket price differentials. In response, financial regulation needed to become more robust. During this period, many central banks were created, both to engage in regulation and to facilitate international finance (Monnet, Reference Monnet2023).
The transition from analog to digital finance prompted another round of evolution and adaptation. Computer systems centralized bank customers’ accounts in a single location, rather than in ledgers at branches throughout a geographic region, and clearinghouses could operate across great distances. Benefits accrued to banks and their customers. Operational costs decreased, as clearing could occur automatically and across many banks (Bátiz-Lazo and Wood, Reference Bátiz-Lazo and Wood2002). Customers could obtain services at any branch, and the introduction of the first ATM in 1967 by Barclays was revolutionary, allowing banks’ customers to access their accounts at any time of day rather than being constrained by bankers’ hours of operation (Marr and Prendergast, Reference Marr and Prendergast1994). Similarly, bank-issued credit cards allowed merchants to offload risk and recordkeeping responsibilities to financial institutions, resulting in efficiencies (Lauer, Reference Lauer2020). Financial markets benefited as well. New IT systems were adopted to automate trading and clearing, resulting in more efficient settlement and less paperwork (Picot et al., Reference Picot, Bortenlaenger and Roehrl1995). Traders’ use of computers to evaluate the prices of financial assets enhanced the price discovery function of markets (Black and Scholes, Reference Black and Scholes1973).
At the same time, these technological changes resulted in costs as well. Branch managers became more beholden to decisions of centralized senior managers, making lending and other decisions less reliant on personal information and interactions (Bátiz-Lazo and Wood, Reference Bátiz-Lazo and Wood2002). Financial institutions were required to adapt to customers’ preferences for new technologies (Joseph et al., Reference Joseph, McClure and Joseph1999; Joseph and Stone, Reference Joseph and Stone2003). With computer-assisted financial modeling, traders mistook price specificity for accuracy and believed their risks were perfectly addressed and hedged, resulting in collapses of giant financial institutions when those beliefs were proven incorrect (MacKenzie, Reference MacKenzie2004; Beunza and Stark, Reference Beunza and Stark2012). Indeed, both the 1987 US stock market crash and the Global Financial Crisis of 2007–08 – the latter with dire consequences for the people who lived through it – were the results of this thinking.
The third wave of technological innovation in banking and finance is ongoing. The advent of personal computers, the internet, and other advances in computing have ‘changed the way [financial institutions] are organized, their business strategies, relationships with customers, and all specific functions’ (Campanella et al., Reference Campanella, Della Peruta and Del Giudice2017). Perhaps most importantly, finance is conducted at significantly faster speeds than ever before. The internet and mobile banking allow for real-time transactions from anywhere and at any time (Romānova and Kudinska, Reference Romānova and Kudinska2016). Electronic securities filings and webcasts allow securities dealers to sell to institutional investors from anywhere without having to conduct real-world roadshows (Mahoney, Reference Mahoney2002), and advances in computing power have allowed significant volumes of trading to occur without any human intervention at all (Borch and Min, Reference Borch and Min2023).
Personal computers, the internet, and mobile phones have also brought about increased competition for incumbent financial institutions. The free flow of information on the internet has allowed banking customers to easily comparison shop, rejecting local institutions in favor of financial institutions offering higher yields or lower prices. Banks have been able to specialize, focusing on particular customer segments rather than all customers at the same time (Blickle et al., Reference Blickle, Parlatore and Saunders2023). Intermediaries are in many cases no longer necessary; some financial markets allow for broker-less trading or peer-to-peer lending, disintermediating brokers and bankers in the process (Teplý et al., Reference Teplý, Roshwalb, Polena, Pompella and Matousek2021). Meanwhile, regulatory arbitrage has continued. Some banks transformed their business models such that they effectively serve only as payment-system onramps for unregulated fintech firms, relying on open finance APIs that allow users to interface with the fintechs instead of the banks themselves (Byrum, Reference Byrum, Babich, Birge and Hilary2022). The internet allowed ‘consumer shadow banks’ and firms offering ‘platformed money’ to thrive, offering bank-like products to retail consumers without bank-like regulation (Phillips and Bruckner, Reference Phillips and Bruckner2024; Ekpo et al., Reference Ekpo, Drenten, Albinsson, Anong, Appau, Chatterjee, Dadzie, Echelbarger, Muldrow, Ross, Santana and Weinberger2022). The advent of blockchain technology has allowed a second financial system to develop alongside the traditional financial system, with the two becoming increasingly interconnected.
The consequences of these changes have been significant. For traditional financial institutions, these technologies have resulted in a loss of customer relationships, requiring operational changes to retain and attract customers (Durkin and Howcroft, Reference Durkin and Howcroft2003). Some were able to adapt, while others failed. Regulatory changes have pushed banks to withdraw from certain business lines, allowing unregulated competitors to take their place, with banks providing those competitors the funding to operate (Levin and Malfroy-Camine, Reference Levin and Malfroy-Camine2025). Financial markets are more unstable now than in the past, as high-frequency trading means that liquidity is illusory and flash crashes may occur (Diaz-Rainey et al., Reference Diaz-Rainey, Ibikunle and Mention2015; Min and Borch, Reference Min and Borch2022). Newer institutions using novel technologies, such as those operating in the crypto ecosystem, have used their wealth to push for policy changes that will allow them to engage in activities similar to those of their traditional peers with less regulation, while subjecting the latter to the systemic risks posed by the former (Yaffe-Bellany, Reference Yaffe-Bellany2025).
The public has been served well by these technological changes in some areas and harmed greatly in others. The rise of fintech firms has provided many new options that individuals and non-financial firms may use to engage with the financial system, offering users products that better fit their needs. At the same time, these options have come with their own risks. In the United States, the failure of the fintech firm Synapse, which served as an intermediary between banks and depositors, has left thousands without their cash as the company proceeds through bankruptcy (Mikula, Reference Mikula2025). Internet-based brokerages and blockchains have also allowed the ‘gamification’ of finance; brokers are using behavioral science studies to induce investment, and derivatives markets are taking on significant aspects of gambling, with all the negative consequences that entails (CFA Institute, 2022).
AI agents as an evolution in need of regulation
Time will tell whether AI agents are part of this third wave or are the beginning of a fourth: Their effects on finance rely on personal computers, mobile phones, and the internet, and they have the capacity to further decentralize finance, but AI agents may fundamentally alter the ways finance is conducted. Regardless, they are likely to be the next technological innovation in finance to which regulators must adapt, and the harms that were caused by prior innovations augur harms for this change as well. Because AI agents may serve as trading machines, the lessons learned from prior machine learning trading systems (e.g., the autonomy of such systems, how that autonomy directly affects the performativity of finance) may be instructive (Borch and Min, Reference Borch and Min2023), though AI agents pose novel risks.
There are two significant differences between AI agents and the preceding technologies that relied on APIs. First, the fintech firms that allowed customers to interact with financial institutions in all-new ways previously required human software engineers to design and code software, whereas AI agents may offer unparalleled flexibility that allows any individual to develop software that will accomplish any financial task. Take, for example, the fintech Oportun, which promises to analyze each user’s spending habits and migrate excess cash from their checking account to their savings account, moving money back if the checking balance drops below a minimum threshold (Oportun, n.d.). For Oportun to perform these activities, developers coded rigid rules into its software, allowing it to accomplish its account-balancing activities. AI agents hold the promise of allowing users to implement the same feature on their own without knowing how to code, as well as countless other features that have not yet been thought up.
AI agents are therefore poised to be much more powerful than existing fintechs, but they come with commensurate risks. The ability to interpret users’ vague commands may result in outcomes that users and regulators cannot forecast and protect against. The command ‘transfer $500 from my savings to my checking account whenever the checking account drops below $100’ is fairly simple for AI agents to understand; ‘maximize my bank accounts’ interest income’ is something else entirely, and it may be impossible to predict how AI agents will respond to that command.
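The contrast can be made concrete in a few lines. The first command reduces to a deterministic rule, sketched below with hypothetical account helpers and canned data; the second admits no such translation, so the agent’s interpretation is left to the model.

```python
# Hedged sketch: the precise command maps to a rule that can be audited
# in advance. The helpers and balances are invented for illustration.

def get_balance(account: str) -> float:
    return {"checking": 75.0, "savings": 2_000.0}[account]  # canned data

def transfer(src: str, dst: str, amount: float) -> None:
    print(f"transfer ${amount:.0f}: {src} -> {dst}")

# 'Transfer $500 from my savings to my checking account whenever the
# checking account drops below $100' -- fully specified in two lines:
if get_balance("checking") < 100:
    transfer("savings", "checking", 500)

# 'Maximize my bank accounts' interest income' -- no rule body can be
# written in advance; whether that means comparing rates, opening new
# accounts, or moving funds daily is up to the model.
```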
Second, to the extent prior technological innovations were used to perform activities on behalf of financial institutions’ customers, agency relationships were created with humans who were subject to regulation and faced legal liability for errors. AI agents may be fundamentally different; although they may perform functions akin to agents, it is possible that they may not be subject to existing regulations designed to protect their users and society on the grounds that they are mere products.
Human agents in financial services are subject to various legal regimes that help to protect users and preserve financial stability. In common law nations, agents are required to act with loyalty, but they may nevertheless take into consideration the long-term consequences of their short-term actions when acting on behalf of their users (Lydenberg, Reference Lydenberg2014). Accordingly, human agents may deviate modestly from their clients’ instructions if doing so is for the clients’ benefit. Similarly, regulations impose restrictions on brokers, dealers, and other intermediaries for the benefit of the financial system, placing the onus on agents to ensure stability rather than expecting their customers to do so.
These legal regimes may apply to AI agents, but they also may not. Some scholars have argued that AI should be considered products regulated under product liability law (Sharkey, Reference Sharkey2024). If this is to be the case, AI agents may not be subject to the various existing regulations applicable to human agents in finance. Many financial regulations apply to the individuals or entities that provide services to customers, such that they may not apply to AI agents’ developers who are licensing products. In the United States, for example, securities brokers are defined as ‘any person engaged in the business of effecting transactions in securities for the account of others’ (15 USC § 78c). It may be that developers cannot be said to be effecting the transactions their AI agents execute; users effect those transactions themselves.
In addition, AI agents may take actions that ultimately harm their users while carrying out their explicit commands. For example, AI agents may put their users in legal jeopardy if what they are asked to do is in some way illegal, even if the users are unaware. Human agents in finance are, by definition, knowledgeable about the laws that govern them and their activities and know that they must follow the law when complying with their users’ orders; those serving as securities brokers are required to pass exams that, in part, evaluate test takers’ knowledge of the law applicable to their services. But generalized AI agents not designed for discrete tasks may violate the law in service of their users’ goals, as they cannot be programmed to take into consideration every possible law. Although the developers of AI agents may be held responsible under various product liability theories, financial regulation exists in large part because private law solutions did not work to effectively protect individuals or financial systems.
The problems posed by AI agent deposit brokers
As explained, AI agents’ use of open finance APIs to perform financial services may pose significant risks to financial institutions, their customers, and the financial system. Although the scope of those risks depends on a variety of factors (e.g., the activity’s nature, the total value of assets under AI agent management, users’ and their institutions’ interconnectedness with the financial system), it is imperative that regulators have the capabilities to address them.
To demonstrate the problems regulators may face, this part discusses the use of AI agents by banks’ depositors in the United States. Depositors may direct their AI agents to manage their assets on their behalf to optimize yield or minimize the risk of loss, effectively asking their AI agents to operate akin to deposit brokers. If these applications manage a sufficient volume of assets, they may be able to transfer enough deposits from one bank to another, in a short enough period, that the bank cannot meet its obligations, ultimately causing it to fail. Yet US banking regulators may regulate only banks, not deposit brokers or depositors, leaving them unable to prevent such runs before they start.
AI agents’ potential to cause runs
Banks are vulnerable to runs because they engage in a process known as maturity transformation, whereby they fund long-term loans with short-term debt. Using demand deposits – debts owed by banks to their depositors that may be called at any time – banks invest in longer-duration assets that cannot easily be liquidated to meet depositors’ redemption requests (Pennacchi, Reference Pennacchi and Brown2010). This maturity transformation means that even solvent banks may suffer runs, thanks to the collective-action problem and first-mover advantage that demand deposits create (Diamond and Dybvig, Reference Diamond and Dybvig1983; Gorton, Reference Gorton2010; Ricks, Reference Ricks2014). Depositors will demand their cash at the first sign that other depositors may run, making runs a self-fulfilling prophecy.
This type of run felled Silicon Valley Bank in spring 2023 (OIG, 2023). The bank’s depositors were largely startups, most of whose deposits were uninsured, and many of which had received seed funding from a small number of venture capital funds. When the bank announced in a March 8 press release that it had sold its portfolio of available-for-sale assets at a loss and would be raising capital from other sources, this first-mover advantage caused fund managers to recommend that their startups quickly move their deposits to different institutions. Depositors rushed to withdraw $100 billion in two days, and the bank was placed into receivership on March 10.
To understand how AI agents may propagate bank runs, one may simply extrapolate from Silicon Valley Bank’s experience. Imagine that a large share of a bank’s uninsured depositors use AI agents to manage their deposit accounts, with instructions to minimize the risk of loss. Relying on real-time data feeds, the AI agents incorporate the bank’s press releases and other financial statements into their corpus of information and observe that the bank’s riskiness has increased. Based on this new information and their original instructions to minimize loss risks, the agents decide that it is in their users’ best interests to transfer assets away from the bank. Although each AI agent decides to move its user’s assets based on an individualized assessment of that user’s needs, the result is a bank run.
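The dynamic can be illustrated with a stylized simulation. Every number below is an assumption chosen for illustration, not an estimate: agents hold a tight cluster of loss-tolerance thresholds (a consequence of shared base models and default instructions) and withdraw when a perceived risk score crosses that threshold.

```python
# Stylized bank-run simulation under assumed parameters.

import random

random.seed(7)
N_DEPOSITORS = 1_000
LIQUID_BUFFER = 0.15  # bank can meet on-demand redemptions of 15% of deposits

# Tightly clustered thresholds: many agents share a base model and the
# same default instruction ('minimize risk of loss').
thresholds = [random.gauss(0.60, 0.05) for _ in range(N_DEPOSITORS)]

for label, risk in (("before press release", 0.40), ("after press release", 0.70)):
    share = sum(risk > t for t in thresholds) / N_DEPOSITORS
    outcome = ("bank survives" if share <= LIQUID_BUFFER
               else "bank cannot meet redemptions")
    print(f"{label}: {share:.0%} of deposits withdrawn -> {outcome}")
```

Before the announcement, essentially no agent’s threshold is crossed; after it, nearly all thresholds are crossed at once, and the bank fails even though each agent acted only on its own user’s instructions.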
Directives to minimize the risk of loss are not the only orders that could cause AI agents to run; directives to maximize yield could do so as well. If one bank offers higher rates than another, the AI agents of the second bank’s depositors could migrate deposits en masse to the first to take advantage of those higher rates, resulting in a run on the second. Moreover, changes in a bank’s risk profile may not even be necessary to trigger runs: updates to AI agents’ software could cause them to act differently than before, even when nothing in the real world has changed.
This type of run – one initiated by the search for higher profit rather than the avoidance of losses – has the potential to fundamentally change the nature of banking system vulnerabilities. The traditional understanding of bank runs is that those depositors who withdraw first and, in some sense, initiate runs do so in a flight to quality or safety at healthier institutions that will preserve their assets. Although theory explains that it is possible for bank runs to occur out of either pure fear or panic (Ennis, Reference Ennis2003; De Graeve and Karas, Reference De Graeve and Karas2014) or a rational review of the fundamentals (Gorton, Reference Gorton1988; Calomiris and Mason, Reference Calomiris and Mason2003), real-world evidence indicates that depositors run because banks are in distress. In a study of more than 5,000 bank failures, Correia, Luck, and Verner (Reference Correia, Luck and Verner2025) find that the majority of failed banks were insolvent, such that bank runs are not generally something inflicted on healthy banks. Brown et al. (Reference Brown, Guin and Morkoetter2020) and Acharya et al. (Reference Acharya, Das, Kulkarni, Mishra and Prabhala2025) both find that depositors are more likely to relocate away from distressed banks than non-distressed ones. Iyer and Puri (Reference Iyer and Puri2012) find that uninsured depositors are more likely to run than insured ones, providing more evidence that runs are precipitated by a flight to safety. If AI agents can start bank runs in a search for yield, there need not be any underlying vulnerability or weakness in the banking system, or even in a single institution.
What is perhaps most frightening is how easily AI agents may be deployed in search of yield. Corporate treasurers have long balanced liquidity with a search for yield (Polak, Robertson, and Lind, Reference Polak, Robertson and Lind2011), and it would be unsurprising to see firms rely on AI agents to do the same. Individual depositors have historically responded less sensitively to deposit rate changes, and although this appears to be changing somewhat in recent years (Narayanan and Ratnadiwakara, Reference Narayanan and Ratnadiwakara2024), it could change dramatically with AI agents. Users may ask their AI agents to continuously optimize their interest rates, or Apple might proactively offer iPhone users the option of having Siri maximize their interest income in exchange for a small fee.
That there are likely to be only a small number of LLMs magnifies these risks. AI agents built atop the same LLM will respond to information and commands in identical ways. If AI agents operated on a large number of different code bases, their reactions to the same information and commands would differ; some would observe a press release announcing losses and decide to transfer their users’ assets, but others would wait for further information before making any sudden move. Because LLMs are expensive to develop, however, there may only ever be three or four on the market from which AI agents can be created, meaning that 25% or more of AI agents could respond in the same manner to new information – a phenomenon known as herding. If 25% of a bank’s depositors decide to leave at once, that could be enough to push the bank into failure.
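A toy Monte Carlo makes the point. Suppose, purely for illustration, that any one decision-maker – a model family or an independent code base – reads a given press release as run-worthy with probability 0.2, that agents are spread evenly across code bases, and that the simultaneous departure of 25% of deposits fells the bank. Concentration in a handful of code bases then turns an unlikely event into a likely one.

```python
# Toy Monte Carlo of the herding concern; all parameters are assumptions.

import random

random.seed(11)
TRIALS = 5_000
P_WITHDRAW = 0.20   # chance one decision-maker reacts by withdrawing
RUN_TRIGGER = 0.25  # deposit share whose simultaneous exit fells the bank

def run_frequency(n_code_bases: int) -> float:
    runs = 0
    for _ in range(TRIALS):
        # Agents on the same code base act identically, so the share of
        # deposits withdrawn equals the share of code bases withdrawing.
        decisions = [random.random() < P_WITHDRAW for _ in range(n_code_bases)]
        runs += (sum(decisions) / n_code_bases) >= RUN_TRIGGER
    return runs / TRIALS

for bases in (4, 1_000):
    print(f"{bases:>5} code bases -> bank run in {run_frequency(bases):.0%} of trials")
```

With four code bases, the departure of at least one family (25% of deposits) occurs in roughly three of five trials; with a thousand independent code bases, withdrawals hover around 20% of deposits and essentially never reach the trigger.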
Non-regulation of deposit brokers
The term ‘deposit broker’ includes ‘any person engaged in the business of placing deposits, or facilitating the placement of deposits, of third parties with insured depository institutions’, with limited exclusions (12 USC § 1831f); a ‘brokered deposit’, in turn, is ‘any deposit that is obtained, directly or indirectly, from or through the mediation or assistance of a deposit broker’ (12 C.F.R. § 337.6(a)(2)). Accordingly, AI agents that may move users’ deposits between banks should be considered deposit brokers, and the deposits they move should be considered brokered. Yet existing statutes and regulations are insufficient to address their potential to trigger the runs described previously.
The policy concern with brokered deposits has historically been that banks rely on them more as the institutions’ financial conditions weaken (FDIC, 2011: 47). In an unregulated market, depositors would bring market discipline to banks, demanding higher interest rates from riskier banks and moving their assets between institutions according to their risk tolerance. But FDIC insurance upends this market by allowing depositors to be risk-insensitive up to the deposit insurance ceiling (currently $250,000). Understanding this, risky banks attempt to stave off insolvency by working with deposit brokers to obtain deposits, paying higher-than-market rates (though lower than a truly free market would demand) on deposits placed in increments up to the ceiling. In such instances, banks take risks, with the Deposit Insurance Fund facing losses (and surviving institutions paying higher premiums) if those risks do not pay off.
Brokered deposits are also considered ‘hot money’ and pose greater flight risks to banks when compared with traditional ‘core’ deposits (FDIC, 2011: 49). Core depositors choose banks based on a variety of factors, such as location and ongoing relationships, and are unlikely to switch banks to chase higher interest rates. Brokered deposits, because they are managed by brokers whose fiduciary goals include maximizing depositors’ yield, carry less loyalty to any given bank than core deposits do and are more likely to move from institution to institution. Hot money is problematic for banks, as it cannot be relied upon to fund long-term investment opportunities.
Because policy concerns regarding brokered deposits focus on how they affect banks – not depositors or brokers – federal banking laws regulate only banks’ acceptance of brokered deposits. Since the fear with brokered deposits is that they may be misused by and cause losses to highly leveraged banks, the FDI Act limits their acceptance to well-capitalized banks (i.e., banks that use high levels of shareholder capital to fund loans) and, on a case-by-case basis as the FDIC determines, adequately capitalized banks, so as not to force previously well-capitalized banks that fall below regulatory thresholds to shed deposits on which they rely (12 USC § 1831f). Banking law does not authorize the banking agencies to regulate deposit brokers in any way. This stands in contrast to the regulation of capital markets, where the Securities and Exchange Commission and Commodity Futures Trading Commission have oversight of and may directly regulate securities brokers and futures commission merchants (7 USC § 6d; 15 USC § 78o).
Nevertheless, this regulatory regime has largely worked. Direct regulation of deposit brokers has not previously been necessary. Because brokers do not tend to hold customer assets for extended periods of time, they have limited risk management responsibilities (IntraFi, n.d.). Indeed, so long as deposit brokers actually do place clients’ cash with banks, their own bankruptcy simply would not affect clients in the ways that banks’ failures would. Moreover, banks have always been aware of when they are interacting with deposit brokers, as brokers have operated through omnibus accounts that receive pass-through insurance (FDIC, 2024b).
The need for legislation
Although the regulatory regime for brokered deposits has historically preserved the safety of the banking system, it may not be able to do so, or at least to do so as effectively, when faced with AI agents that could fundamentally change the nature of deposit brokering.
As an initial matter, AI agents may not even be considered deposit brokers under existing law. Excluded from the statutory definition of ‘deposit broker’ are ‘agent[s] or nominee[s] whose primary purpose is not the placement of funds with depository institutions’ (12 USC § 1831f). Given that AI agents’ ‘primary purpose’ is to assist their users with all manner of tasks, they are unlikely to be considered deposit brokers, nor their users’ deposits considered brokered, despite those deposits being as hot as, or hotter than, traditionally brokered deposits and posing similar risks to banks.
More importantly, if AI agents do broker deposits, they will likely do so directly from users’ accounts through the use of APIs. When a depositor uses software reliant on open finance APIs to transfer funds, financial institutions do not know what software is being used to conduct the transfer – all that banks may observe is that the customer has connected the account to some third-party service that requests ACH, RTP, or FedNow transactions via that platform (Plaid, n.d.). Accordingly, banks may not know that particular accounts are being brokered. Even if banks learn that certain customers are using AI agents to manage their accounts (for example, through contracts requiring API providers to identify the services to which accountholders have connected their accounts), banks must assume that any account connected to an AI agent may run, even though an account is not necessarily brokered merely because its holder uses an AI agent for transactions. And given that banks may view allowing accountholders to use AI agents as a competitive advantage, it is not practical to expect banks themselves to stop this particular risk.
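To see what banks actually observe, consider an illustrative payload for an API-initiated transfer. It is loosely modeled on how open finance platforms request payments; every field name is invented, and the point is what is absent: nothing identifies the originating application, so a budgeting app, a human clicking a button, and a deposit-brokering AI agent all look alike to the bank.

```python
# Hypothetical transfer request as a bank's API might receive it.
# Field names and values are invented for illustration.
transfer_request = {
    "access_token": "user-granted-token",  # proves the account link
    "account_id": "acct_123",
    "amount": "25000.00",
    "network": "ach",
    "counterparty": {"routing": "123456789", "account": "987654321"},
    # no field identifies the originating application or its purpose
}
```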
With respect to regulation, existing limitations on brokered deposits may prove ineffective. If less-than-well-capitalized banks do not know which accounts are brokered, they may not be able to comply with regulatory requirements, even if they wish to do so. Alternatively, these banks could prohibit depositors from using any software with open finance APIs. Although this would address concerns about those banks inadvertently accepting brokered deposits in contravention of regulation, it would prevent their depositors from reaping any benefit that would accrue from using such software.
Other regulatory options are conceivable, though each has flaws that would limit its efficacy. Federal banking regulators, for example, could prohibit banks from paying interest on accounts connected to these APIs to incentivize depositors to hold their deposits in savings accounts with withdrawal limitations. Although this would slow runs and allow banks time to obtain more capital, it would not prevent runs from starting in the first place. Banking regulators could similarly impose so-called ‘redemption gates’ that prevent depositors from withdrawing large quantities of funds when redemption requests are high. This proposal would also slow AI agents’ runs but may incentivize other depositors to begin running (Cipriani et al., Reference Cipriani, Martin, McCabe and Parigi2014). Alternatively, the CFPB could enact a regulation pursuant to section 1033 of the Consumer Financial Protection Act of 2010 (12 USC § 5533) to require open finance APIs to have fields identifying the activities that applications are authorized to perform with customers’ accounts, allowing AI agents to identify for banks whether accounts’ deposits are being brokered. However, this brokered/not-brokered status can change quickly, and there is no guarantee such a rule would be upheld in court, given ongoing litigation against the CFPB’s first rule under that statute (Forcht Bank v. CFPB). The CFPB could also regulate AI agents directly, as discussed above, though it is questionable whether its authority would reach a concern so far removed from consumer protection. Finally, the Financial Stability Oversight Council could designate AI agents as systemically important nonbank financial companies subject to supervision and regulation by the Federal Reserve (12 USC § 5323), but whether such designations are legally permissible remains open to question.
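Under the section 1033 option described above, the same hypothetical payload from the previous sketch would carry a declaration of what the connected application is authorized to do, letting banks infer whether an account’s deposits are effectively brokered. The field name and values below are invented for illustration.

```python
# The earlier hypothetical payload, extended with an authorization
# field of the sort a section 1033 rule might require.
transfer_request = {
    "access_token": "user-granted-token",
    "account_id": "acct_123",
    "amount": "25000.00",
    "network": "ach",
    "counterparty": {"routing": "123456789", "account": "987654321"},
    "authorized_activities": ["balance_read", "deposit_brokering"],  # invented field
}
```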
Perhaps the most plausible option for regulators is to change banks’ capital requirements to account for the heightened run risk associated with accounts connected to open finance APIs, requiring banks to pair such accounts with high-quality liquid assets like reserve bank balances and US Treasury debt. Of course, this regulatory change has downsides of its own. Because these assets offer low yields, a requirement that banks hold more of them would compress banks’ net interest margins, reducing their profitability and perhaps making some unprofitable. And because community banks rely more on traditional banking than their larger competitors (Sengupta and Xue, Reference Sengupta and Xue2022) and are therefore more likely to feel such a rule’s effects, lending to those reliant on community banks, such as small businesses, would be reduced (Beiseitov, Reference Beiseitov2023).
These options are all worse than the alternative: Congress providing the FDIC (or the three banking regulators more generally) with authority to regulate AI agent deposit brokers directly. If the FDIC becomes aware of an AI agent that is able to perform deposit brokering services, it is in the best position to ensure the software does not or cannot act in ways that destabilize individual banks or the banking system. Congress would, of course, have to decide whether to allow the FDIC to regulate only AI agent brokers or all deposit brokers, but either option is sufficient for the agency to address the problems this essay identifies.
If Congress does give the FDIC authority to regulate AI agent deposit brokers directly, the agency would have several options. It could, for example, prohibit AI agents from moving deposits to or from less-than-well-capitalized banks. Or it could require AI agents to identify to their users’ banks that the deposits are being managed by a deposit broker, allowing banks to decline those deposits. Congress could also simply prohibit AI agents from moving depositors’ cash between accounts; accountholders would be permitted to initiate those transactions themselves, just not with their AI agents’ help. More aggressively, the FDIC could address bank runs’ collective-action problem by slowing the pace at which AI agents may withdraw depositors’ cash. Regardless of the optimal solution – which may be any or none of these – what is clear is that the FDIC today lacks the authority to adequately address the concerns posed by AI agent deposit brokers.
Conclusion
This essay serves as a call for policymakers worldwide to consider the consequences of having AI agents with open finance APIs provide individualized financial services without human interaction. Real risks exist to consumers, firms, and the financial system when individuals and non-financial firms use AI agents to help manage their financial lives and corporate treasuries. In some instances, regulators have the legal authority to mitigate these risks, but in others, as with the above example of AI agents serving as deposit brokers, they do not. It is imperative that policymakers think about these risks and develop plans of action. Regulators should consider the extent to which AI agents used by the general public can pose risks and evaluate whether and how their authorities can be used to address them. Where authority does not exist, legislators should consider granting it to them.
Acknowledgments
The author thanks Dan Awrey, Emily DiVito, Tom Lin, and two anonymous peer reviewers for their helpful comments. The author also thanks Emma Kelley for her excellent research assistance.