Welcome to the PCI Security Standards Council’s blog series, The AI Exchange: Innovators in Payment Security. This special, ongoing feature of our PCI Perspectives blog offers a resource for payment security industry stakeholders to exchange information about how they are adopting and implementing artificial intelligence (AI) into their organizations.
In this edition of The AI Exchange, Troy Leach, Chief Strategy Officer of the Cloud Security Alliance, offers insight into how his organization is using AI, and how this rapidly growing technology is shaping the future of payment security.
How have you most recently incorporated artificial intelligence within your organization?
At the Cloud Security Alliance (CSA), our mission is to develop cybersecurity frameworks and best practices for the latest technology. That means we must not only study new technologies but also adopt them early to understand their implications.
For artificial intelligence, we’ve taken a deliberate approach to embed it across our organization. Since we publish several research papers and best practices each month, we made sure every department—from research to training to IT to marketing—was experimenting with large language models (LLMs) and other AI tools early and often.
To drive adoption, a colleague and I launched internal “AI Days,” where we partner with each department to identify opportunities where generative AI could streamline workflows or solve problems once considered too formidable. These workshops are hands-on: our IT team has used AI to accelerate coding and SQL query development; our marketing staff has built AI-driven personas to refine communications; we’ve even tasked AI agents to recommend updates to older research as technologies evolve, and we’ve developed internal bots that dynamically collect and surface CSA knowledge to all staff.
The success has been significant—so much so that staff have asked us to continue “AI Days” regularly to keep pace with change.
Externally, we’ve also launched our Valid-AI-ted model, which reviews compliance self-assessment questionnaires for accurate completion. Much like the PCI Self-Assessment Questionnaire (SAQ), our Cloud Controls Matrix and AI frameworks use a self-assessment questionnaire, the Consensus Assessments Initiative Questionnaire (CAIQ). By applying our AI validation, we help ensure that entries in the STAR Registry—a public repository of cloud security assessments—are as accurate and reliable as possible. This is the kind of practical compliance improvement we believe AI can bring to payments as well.
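As a simple illustration of the concept (not CSA’s actual Valid-AI-ted implementation), here is a minimal sketch of LLM-assisted questionnaire review, assuming the OpenAI Python SDK; the model name and prompt wording are placeholders:

```python
# Sketch of AI-assisted questionnaire review. NOT the Valid-AI-ted
# implementation; the model and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def review_answer(question: str, answer: str) -> str:
    """Ask an LLM whether a self-assessment response actually addresses
    the control in question, flagging vague or incomplete answers."""
    prompt = (
        "You are reviewing a cloud security self-assessment questionnaire.\n"
        f"Question: {question}\n"
        f"Submitted answer: {answer}\n"
        "Reply with COMPLETE or INCOMPLETE, then a one-sentence reason."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(review_answer(
    "Do you encrypt cardholder data at rest, and how are keys managed?",
    "Yes.",  # a good reviewer, human or AI, should flag this as too vague
))
```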
What is the most significant change you’ve seen in your organization since AI use has become so much more prevalent?
The most striking change is the pace of change itself. Tasks that once required weeks—such as developing code snippets or refining analytics—are now accelerated through AI-assisted “vibe coding.” The result may not be the final code that goes into production, but it gives our developers new ideas for how to address a challenge.
But speed cuts both ways. Just as organizations build familiarity, the models evolve again. In the early days of GPTs, for example, prompt engineering required precise structure, and training courses emerged on how to “engineer” prompts. Within a year, the models were asking clarifying questions themselves, dramatically improving outcomes without such expertise.
The same leapfrogging has occurred in capabilities like image recognition and OCR. Today, a developer can paste a snippet of code as an image into a prompt and have the model understand, troubleshoot, and even suggest solutions in real time. Similarly, AI is now embedding directly into productivity platforms—whether office applications, file repositories, or messaging channels—making the boundary between human workflow and AI assistance nearly invisible.
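To make that concrete, here is a hedged sketch of sending a screenshot of code to a vision-capable model for troubleshooting, assuming the OpenAI Python SDK; the file name and model choice are illustrative:

```python
# Sketch: troubleshooting code pasted as an image. The file and model
# are placeholders; any vision-capable model works the same way.
import base64
from openai import OpenAI

client = OpenAI()

with open("broken_snippet.png", "rb") as f:  # screenshot of the code
    image_b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "What is wrong with this code, and how would you fix it?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```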
As security professionals, we must view this with both appreciation and caution. The more seamless the integration, the greater the potential exposure if organizations fail to segment data properly or implement governance controls. For PCI DSS environments, where the confidentiality and integrity of payment data are paramount, transparency and oversight of these AI-enabled workflows are critical.
How do you see AI evolving or impacting payment security in the future?
When I meet with financial institutions, I hear equal parts optimism and concern. The promise of AI is personalization at scale—merchants and payment providers will deliver unique value for customers and partners, often autonomously. But personalization must not come at the expense of the core PCI DSS principles of data minimization and controlled scope.
One area of real benefit is training. Imagine security awareness training that adapts examples to each department’s exact context. EY recently reported that 94% of employees who had received training within the last year felt that security had become a personal goal. Now picture each scenario being tailored to show how that specific role affects overall payment security strategy. That level of relevance transforms compliance with PCI DSS requirements from a “check-the-box” activity into a meaningful defense mechanism.
AI is also poised to revolutionize compliance itself. Instead of manual reviews of data-flow diagrams or lengthy interviews to verify controls, AI could continuously validate documentation and flag changes in real time. That would help merchants and service providers maintain PCI DSS compliance throughout the year—not just during a QSA assessment.
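As an illustrative sketch of the idea (the paths and hashing approach are hypothetical; a real system would route flagged files into change management or to an AI reviewer), documentation drift could be detected against a known-good baseline like this:

```python
# Sketch: flagging compliance documentation that has drifted from a
# reviewed baseline. File layout and hashing scheme are illustrative.
import hashlib
import json
from pathlib import Path

BASELINE = Path("compliance_baseline.json")  # {"dataflow.md": "<sha256>", ...}
DOCS_DIR = Path("compliance_docs")

def fingerprint(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def check_for_drift() -> list[str]:
    baseline = json.loads(BASELINE.read_text())
    changed = []
    for rel_path, known_hash in baseline.items():
        current = DOCS_DIR / rel_path
        if not current.exists() or fingerprint(current) != known_hash:
            changed.append(rel_path)  # route to a reviewer, human or AI
    return changed

if __name__ == "__main__":
    for doc in check_for_drift():
        print(f"Flag for review: {doc}")
```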
Finally, payment security operations are already benefiting. We already hear of Security Operations Centers (SOCs) integrating AI into root-cause analysis to uncover subtle anomalies—the proverbial “needles in the haystack”—with far greater accuracy. In effect, AI becomes an always-on analyst, augmenting human staff with tireless pattern recognition and other intelligence features.
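As a minimal sketch of that kind of pattern recognition (the features and values are invented for illustration, not real telemetry), an isolation forest from scikit-learn can surface the outlier session automatically:

```python
# Sketch: an "always-on analyst" surfacing outliers in authentication
# activity. Features and values are illustrative, not real telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [failed_logins_last_hour, distinct_source_ips, bytes_transferred]
events = np.array([
    [1, 1, 3_200],
    [0, 1, 2_900],
    [2, 1, 3_500],
    [1, 2, 3_100],
    [40, 12, 250_000],  # the "needle": a burst of failures from many IPs
])

model = IsolationForest(contamination=0.2, random_state=0).fit(events)
labels = model.predict(events)  # -1 marks an anomaly

for row, label in zip(events, labels):
    if label == -1:
        print("Anomalous session features:", row)
```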
What potential risks should organizations consider as AI becomes more integrated into payment security?
The first risk is over-reliance on AI decisioning without human oversight. Generative AI is inherently probabilistic; unlike traditional deterministic software, it may produce unexpected outputs. In payments, we’ve already seen chatbot rollouts fail because customer creativity exposed edge cases that deviated from company policy.
The risk escalates with agentic AI use cases—where autonomous AI agents act on behalf of users. Too often, organizations secure the employee’s ID but neglect to account for what the AI agent itself can access or execute. This creates over-privileged agents with little governance, undermining PCI DSS controls for least privilege and access management.
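A minimal sketch of the remedy is to treat each agent as its own identity with a deny-by-default allowlist; every agent ID and tool name below is hypothetical:

```python
# Sketch: least privilege for AI agents. Agent IDs and tool names are
# hypothetical; the point is an explicit, auditable permission model.
AGENT_PERMISSIONS = {
    "reporting-agent": {"read_sales_summary", "generate_report"},
    "support-agent": {"read_ticket", "draft_reply"},
    # Note: neither agent may touch cardholder data or move money.
}

def execute_tool(agent_id: str, tool: str, **kwargs):
    allowed = AGENT_PERMISSIONS.get(agent_id, set())
    if tool not in allowed:
        # Deny by default, mirroring PCI DSS least-privilege controls
        raise PermissionError(f"{agent_id} is not authorized for {tool}")
    print(f"AUDIT: {agent_id} invoked {tool} with {kwargs}")
    # ... dispatch to the real tool implementation here ...

execute_tool("reporting-agent", "generate_report", period="Q3")
try:
    execute_tool("reporting-agent", "issue_refund", amount=100)
except PermissionError as exc:
    print("BLOCKED:", exc)
```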
Authentication is also evolving rapidly under AI pressure. Biometric data, once thought nearly impossible to spoof, can now be mimicked. While AI can sometimes detect these deepfake or “bio phishing” attempts today, it is unrealistic to assume that defensive AI will always stay ahead of malicious AI.
Ultimately, if organizations fail to document agent privileges, validate AI outputs, and maintain oversight, they risk eroding PCI DSS’s foundational requirements for access control, auditability, and secure authentication.
What advice would you provide for an organization just starting its journey into using AI?
Start with the basics and ask the “why” question. Too often, I hear financial institutions being told to “make this an AI process” without clarity on the value. The right entry point is identifying use cases that clearly enhance security, efficiency, or compliance outcomes (although adding “AI” to any budget request seems to help prioritize the item).
From there, pilot wisely. Choose a manageable scope, give it adequate duration, and designate internal champions responsible for monitoring AI behavior and outcomes. Treat it as you would a new vendor or system within your PCI DSS environment: prove it at small scale before scaling widely.
Next, test rigorously. Each new AI model or parameter update should be checked against a baseline set of questions to ensure it hasn’t altered results unexpectedly. Unlike traditional patches that can be applied automatically, AI updates may subtly change behavior in ways that matter deeply to your business and compliance.
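A minimal sketch of such a baseline check follows; the prompts, expected answers, and the query_model stub are placeholders for whatever model client your stack actually uses:

```python
# Sketch: regression-testing a model update against a fixed baseline.
# Replace query_model with your real client; the cases are illustrative.
BASELINE_CASES = [
    ("Is storing CVV after authorization permitted?", "no"),
    ("Which PCI DSS requirement covers security policies?", "12"),
]

def query_model(prompt: str) -> str:
    # Placeholder stub standing in for a real model call.
    canned = {
        "Is storing CVV after authorization permitted?": "No, that is prohibited.",
        "Which PCI DSS requirement covers security policies?": "Requirement 12.",
    }
    return canned[prompt]

def regression_check() -> bool:
    ok = True
    for prompt, expected_fragment in BASELINE_CASES:
        answer = query_model(prompt).lower()
        if expected_fragment not in answer:
            print(f"DRIFT on {prompt!r}: got {answer!r}")
            ok = False
    return ok  # gate rollout of the updated model on this result

print("Baseline passed:", regression_check())
```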
Finally, codify everything in an AI policy. About two-thirds of companies now report AI as a risk in their annual assessments. Embedding governance of AI into your existing information security policy—already required under PCI DSS Requirement 12—ensures alignment across the organization and prepares you for external scrutiny.
What AI trend (not limited to payments) are you most excited about?
I’m excited by the automation of repetitive, documentation-heavy tasks. For merchants and service providers, PCI DSS evidence collection has always been resource intensive. AI now promises to streamline control mapping, generate compliance narratives, and maintain document libraries dynamically. This makes ongoing assessment more efficient and less disruptive.
On a more technical level, I’m encouraged by the rapid adoption of the Model Context Protocol (MCP) and Agent-to-Agent (A2A) protocols. These open protocols act as translators between AI agents and applications (or other agents), standardizing how systems communicate. For organizations, that means reduced vendor lock-in, improved interoperability, and a clearer governance model for AI adoption.
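As a brief sketch of what that standardization looks like in code, here is a tool exposed over MCP, assuming the official MCP Python SDK (pip install mcp); the tool name and lookup logic are illustrative:

```python
# Sketch: exposing an internal capability to AI agents via the Model
# Context Protocol. Tool name and data are illustrative placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("compliance-tools")

@mcp.tool()
def control_status(control_id: str) -> str:
    """Report the current status of a named security control."""
    # Illustrative lookup; a real server would query your GRC system.
    statuses = {"encryption-at-rest": "compliant", "mfa-admin": "in review"}
    return statuses.get(control_id, "unknown control")

if __name__ == "__main__":
    mcp.run()  # any MCP-aware agent can now discover and call this tool
```

Because the protocol, rather than the vendor, defines how the tool is discovered and called, any compliant agent can use it, which is exactly the interoperability benefit described above.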
Standardization is often an unsung hero in security. Just as PCI DSS itself provides a common foundation for payment security, these AI communication protocols can create consistency that accelerates adoption while reducing systemic risk for everyone, wherever they may be on their AI journey.
Interested in learning more? Register now to see Troy Leach speak at the 2025 Europe Community Meeting where he will deliver his AI-themed presentation, It’s Not You, It’s Us – Strategy for Building a Shared Security Responsibility Model with your Service Providers.