Welcome to the PCI Security Standards Council’s blog series, The AI Exchange: Innovators in Payment Security. This special, ongoing feature of our PCI Perspectives blog offers a resource for payment security industry stakeholders to exchange information about how they are adopting and implementing artificial intelligence (AI) into their organizations.
In this edition of The AI Exchange, Checkout.com’s Security Director, Jo Vane, offers insight into how her company is using AI, and how this rapidly growing technology is shaping the future of payment security.
How have you most recently incorporated artificial intelligence within your organization?
At Checkout.com, AI is deeply embedded across our fraud, risk, and payment optimization stack. Most recently, we’ve expanded our use of machine learning–driven risk models that operate in real time across the full payment lifecycle—from pre-authorization risk scoring to post-transaction monitoring.
These models leverage Checkout.com’s network intelligence at scale, combining transaction data, device and behavioral signals, and rich merchant context to make dynamic risk decisions in real time. As highlighted in Thrive 2025, the power comes not from any single signal, but from learning across the network in a privacy-preserving way—using aggregated signals and insights where permitted—allowing us to understand how customer behavior, payment patterns, and fraud tactics evolve across regions, verticals, and payment methods.
Rather than relying on static rules, our systems continuously adapt based on new data and emerging attack vectors, enabling us to respond to change at speed while preserving performance. This approach allows us to strengthen security without introducing unnecessary friction, supporting our broader goal of delivering trusted, high-performance payments on a global scale.
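The difference between a static rule and a continuously adapting model can be sketched in a few lines. The toy below is purely illustrative and not Checkout.com's actual system: a minimal online logistic model whose weights take one gradient step per labeled outcome, so its risk score shifts as observed fraud patterns change, where a hard-coded rule would not.

```python
import math

class OnlineRiskModel:
    """Toy online logistic model: weights update with each labeled outcome,
    so the score adapts as fraud patterns shift (unlike a static rule)."""

    def __init__(self, n_features: int, lr: float = 0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def score(self, x: list[float]) -> float:
        """Probability-like risk score in (0, 1) via the logistic function."""
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x: list[float], is_fraud: bool) -> None:
        # One stochastic-gradient step on log-loss for the observed outcome.
        err = self.score(x) - (1.0 if is_fraud else 0.0)
        self.b -= self.lr * err
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
```

Production systems differ enormously from this sketch (feature pipelines, retraining cadence, guardrails), but the core idea is the same: the decision boundary moves with the data rather than waiting for a rule change.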
Beyond fraud prevention, AI plays an important role in improving payment performance, resilience, and operational efficiency across Checkout.com. We use AI to optimize authorization rates and smart routing decisions in real time, while also enhancing merchant-facing and internal tools. This includes our AI-powered support chatbot, which helps merchants resolve issues more quickly and accurately, as well as emerging natural language capabilities within the Merchant Dashboard that make it easier for merchants to explore data, investigate anomalies, and act on insights. Together, these applications help reduce friction, improve reliability, and support better commercial and operational decisions.
What is the most significant change you’ve seen in your organization since AI use has become so much more prevalent?
The most significant change has been a shift from rules-based decisioning to adaptive, intelligence-led systems operating on a global scale. This has fundamentally changed how teams think about risk—from setting rule thresholds and exceptions to training, validating, and monitoring models that evolve continuously.
AI has also changed how product, risk, and engineering teams collaborate. Decision-making is now heavily grounded in model performance metrics, explainability outputs, and real-time feedback loops, rather than retrospective analysis. This has enabled faster iteration cycles and more confident decisions in high-volume, high-stakes environments.
Crucially, AI has allowed us to scale globally without scaling friction, maintaining strong fraud controls while improving conversion across diverse markets and payment methods.
How do you see AI evolving or impacting payment security in the future?
AI is fundamentally reshaping payment security by enabling a context-aware, predictive defense model that operates continuously across both individual transactions and the wider payment network. From a security standpoint, this represents a shift from static controls toward adaptive, intelligence-driven risk management—but it also materially changes the threat landscape.
Defensively, AI allows security teams to move beyond static identity checks and rules toward behavioral and anomaly-based detection. By analyzing device characteristics, interaction patterns, and transaction behavior in real time, AI can identify compromised accounts or malicious activity even when credentials and traditional identifiers appear legitimate. At the network level, AI enables cross-merchant and cross-region threat correlation, allowing emerging fraud campaigns or attack patterns to be detected and mitigated earlier, before they scale.
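A minimal sketch of the anomaly-based idea, using only Python's standard library: flag a transaction when it deviates sharply from an account's recent behavioral baseline. The feature (amount), window, and threshold are hypothetical; real systems combine many signals, not one.

```python
import statistics

def anomalous(history: list[float], value: float, z_thresh: float = 3.0) -> bool:
    """Flag a value as anomalous if it sits more than z_thresh standard
    deviations from the account's recent baseline (a simple z-score test)."""
    if len(history) < 2:
        return False  # not enough baseline to judge
    mu = statistics.fmean(history)
    sd = statistics.pstdev(history) or 1e-9  # guard against zero variance
    return abs(value - mu) / sd > z_thresh
```

The point of even this crude version is that it needs no knowledge of the credential: a stolen but valid login still produces out-of-pattern behavior, which is what the test catches.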
AI also enables more granular, risk-based security controls, where authentication challenges, step-ups, or declines are dynamically applied based on real-time risk signals. From a security perspective, this reduces unnecessary exposure while preserving user experience, ensuring friction is introduced only when it meaningfully reduces risk.
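The risk-tiered control described above reduces, at its simplest, to a policy that maps a real-time score to an action. The thresholds and action names below are invented for illustration and are not Checkout.com's; in practice such cutoffs are tuned per market, payment method, and regulatory regime.

```python
# Illustrative sketch only: hypothetical thresholds for a risk-based policy.

def decide(risk_score: float, step_up_available: bool = True) -> str:
    """Map a real-time risk score in [0, 1] to an authentication action."""
    if risk_score < 0.2:
        return "approve"   # low risk: frictionless, no challenge
    if risk_score < 0.7 and step_up_available:
        return "step_up"   # medium risk: challenge (e.g. 3DS step-up)
    return "decline"       # high risk, or no challenge path available
```

This is where "friction only when it meaningfully reduces risk" becomes concrete: the challenge is reserved for the middle band, and the two tails are decided without user interaction.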
However, the same capabilities that strengthen defenses are increasingly being leveraged by attackers. Recent examples of AI-orchestrated cyber operations demonstrate how adversaries can use AI to automate reconnaissance, generate highly targeted social engineering, and rapidly adapt attack strategies. In the payments ecosystem, this raises concerns around automated fraud testing, faster exploit discovery, and more scalable abuse of payment flows.
As a result, AI will not simply improve payment security—it will drive an arms-race dynamic. Success will depend on treating AI systems themselves as security-critical assets, with robust governance, explainability, continuous monitoring, and human oversight. The goal is not only to prevent known fraud types, but to anticipate and contain novel, AI-enabled threats, while ensuring resilience against adversarial manipulation and model degradation.
In this environment, effective payment security is no longer just a technical challenge—it is a strategic capability, requiring adaptive defenses, threat intelligence at scale, and disciplined operations, supported by robust AI governance, executive oversight, and alignment with the firm’s broader enterprise risk framework in an increasingly AI-driven ecosystem.
What potential risks should organizations consider as AI becomes more integrated into payment security?
As AI becomes more embedded in core payment decisioning, organizations must manage several critical risks:
- Model opacity and explainability: Payment decisions often have regulatory, merchant, and customer implications. Models must be explainable and auditable, not black boxes.
- Bias and data drift: Fraud patterns change rapidly across geographies and demographics. Without rigorous monitoring, models can degrade or introduce unintended bias.
- Adversarial adaptation: Fraudsters actively probe and adapt to AI-driven systems, requiring continuous retraining and layered defenses.
- Over-automation risk: AI should augment human expertise, not eliminate it. Human oversight remains essential for edge cases, investigations, and strategic decisions.
Strong governance, continuous monitoring, and human-in-the-loop controls provide the foundation for managing AI risk—supporting transparent, auditable, and accountable decision-making across regulated and high-impact environments.
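One common way to make the data-drift monitoring above concrete is a population stability index (PSI) check comparing live model scores against a training-time baseline. The implementation below and the 0.2 rule of thumb are generic industry practice, not details from the interview.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample.
    A common rule of thumb: PSI > 0.2 suggests significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against all-identical samples

    def hist(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Small floor avoids log(0) / division by zero for empty bins.
        return [max(c / len(xs), 1e-6) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Each term of the sum is non-negative, so PSI is zero only when the two binned distributions match; a scheduled check like this is one of the simpler human-in-the-loop triggers for retraining or review.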
What advice would you provide for an organization just starting their journey into using AI?
Start by grounding AI initiatives in clear, high-impact business problems—whether that’s reducing fraud, improving authorization rates, or increasing operational efficiency. AI is only as strong as the foundations beneath it, so early investment in data quality, governance, and monitoring is critical to ensuring models are reliable, explainable, and safe to operate at scale.
It’s equally important to invest in education and enablement. Organizations should put structured programs in place to help employees understand how AI works, how to use it effectively, and where its limitations lie. Clear organizational frameworks for secure and responsible AI use provide essential guardrails, helping teams innovate without exposing the business to unnecessary risk.
From a delivery perspective, focusing first on internal tools and workflows allows teams to build confidence and expertise in a lower-risk environment. This makes it easier to identify and correct errors or hallucinations before exposing AI capabilities to merchants or customers. At the same time, success depends on strong collaboration between domain experts, data scientists, engineers, and security teams, particularly in payments where regulatory, risk, and customer experience considerations are tightly intertwined.
Finally, foster a culture of experimentation and learning. AI should move quickly from ideas to small, testable prototypes, with teams encouraged to iterate and even fail fast. Avoid letting AI initiatives stall at the strategy or presentation stage—building and learning from real systems, even imperfect ones, is far more valuable than over-planning projects that never reach production.
What AI trend (not limited to payments) are you most excited about?
One of the most compelling AI trends is the rise of agent-based systems and intelligent copilots that can operate across complex technical, commercial, and security environments. These systems go beyond surface-level automation by reasoning across multiple data sources, tools, and constraints—making them particularly powerful in high-stakes domains like payments and information security.
From an infosec and fraud perspective, AI agents have the potential to dramatically improve threat detection, incident response, and root-cause analysis. They can correlate signals across logs, transactions, infrastructure, and user behavior to identify anomalies earlier, prioritize risks more accurately, and reduce response times during security incidents. This capability is increasingly critical as attack surfaces expand and adversaries themselves become more sophisticated and automated.
From a commercial standpoint, these same capabilities translate directly into impact: faster fraud investigations, reduced operational costs, improved authorization rates, and stronger trust with merchants and partners. By intelligently reducing friction for legitimate users while maintaining strong security controls, AI can help drive revenue growth without compromising risk posture.
However, realizing this potential requires a strong emphasis on education and responsible adoption. As AI systems become more embedded in decision-making, it’s essential that teams across engineering, risk, security, and commercial functions understand not just how to use these tools, but how they work, where their limitations lie, and how to challenge or validate their outputs. Continuous learning and upskilling are critical to ensuring AI remains a force multiplier rather than a blind dependency.
Ultimately, the organizations that will realize the greatest and most sustainable value from AI are those that balance advanced technology with well-informed leadership, strong governance, and a deeply embedded culture of accountability. As agent-driven systems become more autonomous and pervasive, these organizations ensure that decision ownership remains clearly defined, human oversight and validation are built into critical workflows, and responsibility for outcomes always rests with people—not the technology itself.