SaaS vendors should assess whether their trust boundary includes customers' AI agents. Liability has pushed banks toward securing the customer's device four times, and the fifth wave is forming around AI agents.

[Illustration: Trust Boundary of SaaS Will Include Customers' AI Agents]

As SaaS vendors make their products usable by customers’ AI agents, they’ll face a trust-boundary decision. Is the vendor responsible for securing any aspect of the customer’s client system? The answer might seem like an easy “no,” but financial services have answered it four times, always with some form of “yes.”

Banks now fingerprint browsers, shield mobile apps, score typing rhythm, and bind credentials to device hardware. Each security measure followed a specific threat, loss, or legal action. This pattern will repeat for customers’ AI agents, and the last four rounds inform how we should prepare for the next one.

Agent infrastructure is shipping ahead of its defenses.

AI agents are a new endpoint for interacting with SaaS, but the threats against them lack strong defenses. For example, OpenAI flagged that prompt injection is unlikely to ever be fully “solved.” Simon Willison’s “lethal trifecta” of sensitive data access, untrusted content, and outbound connectivity describes the capabilities that enable exploitation.

Every SaaS product that interacts with a customer’s AI agent inherits that attack surface. The exposure is greatest for consumer-facing products because enterprise customers are subject to security controls from their organizations.

In the meantime, vendors are making increasingly powerful capabilities accessible natively to AI agents. In banking, for example, Meow lets customers open and run business accounts through AI agents with per-transaction limits. GoCardless targets bank-payment integration, introducing MCP as groundwork for agentic commerce.

Card networks are starting to write the rules for agent commerce before the defenses take shape. Visa Trusted Agent Protocol and Mastercard Agent Pay were announced in 2025. American Express followed in April 2026 with a network-level liability commitment that covers agent-initiated purchases.

How should vendors decide whether, when, and how to invest in securing customers’ AI agent systems? We can extrapolate from how the banking industry has answered versions of that question over recent decades.

Four drivers push providers toward the customer’s device.

Four drivers have shaped when and how banks extended security measures onto the customer’s device:

  • Liability: The US Regulation E in 1979 and the UK APP reimbursement rule in 2024 pushed fraud loss onto banks. Banks funded defensive controls in response.
  • Regulatory standard of care: Successive mandates, from the FFIEC's 2005 authentication guidance to the EBA's 2018 RTS on Strong Customer Authentication, each raised the minimum controls banks had to deploy.
  • Customer inability to self-protect: Banking trojans in the late 2000s and mobile malware in the early 2010s pushed banks toward device fingerprinting, transaction signing, and out-of-band confirmation.
  • Loss economics: Fraud losses grew costly enough to justify app shielding and behavioral biometrics at scale, since liability rules assigned those losses to banks.

These drivers produced four waves of customer-device controls. A fifth wave is forming around AI agents, and history predicts how it’ll play out.

Four waves pushed banks onto the customer’s device.

Four waves pushed banks to deploy new security measures on customers' devices, driven in each case by a mix of threats, research, court cases, and regulations.

Regulation and liability are the constants across all four waves. Regulators raised the standard of care, while courts and rules put liability on banks. Banks deployed different controls in different waves, but this pressure drove every round.

Liability will shape agent-era defenses.

Courts and regulators still need to decide who pays when a compromised AI agent authorizes or takes an action that looks intentional. Once they do, liability will drive the timing and scope of agent-era defenses.

For risky transactions, banks stopped trusting users’ devices and built defenses that operated outside them. Similarly, agent-era defenses will need to work outside the potentially compromised AI agent. Measures can include agent identity verification, agent behavior analytics, transaction-bound signing, and out-of-band human confirmation for high-risk actions.

Financial services implemented transaction-bound signing in the pre-agent era. Germany’s chipTAN binds the signing step to a separate device that confirms the recipient and amount before the bank accepts. An agent-era equivalent would bind signing to something the agent can’t observe or forge.
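The chipTAN idea can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's protocol: a separate confirmation device signs the exact transaction details after showing them to the human, and the provider recomputes the signature over the details the agent submitted. The key name, message format, and function names are assumptions for the sketch.

```python
# Hypothetical sketch of transaction-bound signing. A trusted
# confirmation device (outside the possibly compromised agent) signs
# the exact recipient and amount; the provider accepts the transaction
# only if the signature matches what the agent submitted.
import hashlib
import hmac

DEVICE_KEY = b"per-device secret provisioned at enrollment"  # assumption


def sign_on_device(recipient: str, amount_cents: int) -> str:
    """Runs on the separate device, after displaying the recipient and
    amount to the human for approval."""
    msg = f"{recipient}|{amount_cents}".encode()
    return hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()


def provider_accepts(recipient: str, amount_cents: int, signature: str) -> bool:
    """Runs provider-side. Recomputes the MAC over the submitted details,
    so a compromised agent cannot swap the recipient or amount unnoticed."""
    msg = f"{recipient}|{amount_cents}".encode()
    expected = hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

Because the signature binds the recipient and amount, an agent that tampers with either field after the human approves it produces a verification failure on the provider side.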

As SaaS vendors prepare for AI agents, four actions are worth considering:

  • Map your customer’s AI agent scenarios to the liability and reimbursement rules applicable to your product.
  • Inventory where customer-side agents reach your product, including direct API traffic, MCP servers, and browser automation. Commerce products should add payment protocols such as Stripe ACP, PayPal MCP, AP2 intents, and Visa Trusted Agent Protocol to that list.
  • Favor provider-side controls over any step that asks the agent or principal to act, since either can be compromised.
  • Require verifiable agent attestation, intent signing, and out-of-band confirmation for high-risk actions.
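The last two recommendations can be combined into a provider-side policy gate. The sketch below is an assumption-laden illustration, not a standard API: the risk threshold, field names, and the idea of representing verification results as booleans are all hypothetical, and in practice each flag would come from real attestation, signature, and confirmation checks.

```python
# Hypothetical provider-side policy gate. High-risk actions proceed only
# when the agent presents a valid attestation, a valid signature over the
# exact intent, and an out-of-band human confirmation. All names and the
# risk threshold are assumptions for this sketch.
from dataclasses import dataclass

HIGH_RISK_CENTS = 50_000  # assumed threshold for "high-risk"


@dataclass
class AgentRequest:
    action: str
    amount_cents: int
    attestation_valid: bool       # result of verifying the agent's identity credential
    intent_signature_valid: bool  # result of verifying a signature over the exact intent
    oob_confirmed: bool           # human approved via a separate channel


def allow(req: AgentRequest) -> bool:
    """Enforce controls on the provider side, outside the agent."""
    if req.amount_cents < HIGH_RISK_CENTS:
        # Low-risk: verified agent identity is sufficient.
        return req.attestation_valid
    # High-risk: require all three controls.
    return (req.attestation_valid
            and req.intent_signature_valid
            and req.oob_confirmed)
```

The point of the design is that every check runs provider-side: even a fully compromised agent cannot grant itself the out-of-band confirmation or forge the intent signature it never held.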

Customer-side AI agents trigger the fifth wave of pressure on providers to secure customers’ devices. Liability has shaped the previous four, and it’ll shape the current one too.

About the Author

Lenny Zeltser is a cybersecurity executive with deep technical roots, product management experience, and a business mindset. He has built security products and programs from early stage to enterprise scale. He is also a Faculty Fellow at SANS Institute and the creator of REMnux, a popular Linux toolkit for malware analysis. Lenny shares his perspectives on security leadership and technology at zeltser.com.