Welcome to the PCI Security Standards Council’s blog series, The AI Exchange: Innovators in Payment Security. This special, ongoing feature of our PCI Perspectives blog offers a resource for payment security industry stakeholders to exchange information about how they are adopting and implementing artificial intelligence (AI) into their organizations.
In this edition of The AI Exchange, Elavon Inc. Director of Customer Data Security, Candice Pressinger, offers insight into how her company is using AI, and how this rapidly growing technology is shaping the future of payment security.
How have you most recently incorporated artificial intelligence within your organization?
At Elavon, AI now underpins our fraud orchestration layer. We have deployed adaptive scoring and dynamic rule applications across our merchant portfolio, enabling real-time decisioning based on thousands of contextual signals per transaction. Using this AI-driven capability, Elavon has also increased exemption-based authorization approvals by 3.6 percentage points in the European market.
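To make the idea concrete, here is a minimal Python sketch of what such an orchestration step can look like: an adaptive model score combined with dynamically configurable rules to reach a real-time decision. The signals, thresholds, and rule names are hypothetical illustrations, not Elavon's actual implementation.

```python
# Illustrative sketch (not Elavon's implementation): a fraud orchestration
# step that blends an adaptive model score with dynamically configured
# rules to reach a real-time approve/review/decline decision.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Transaction:
    amount: float
    country: str
    device_age_days: int
    velocity_1h: int  # transactions from this card in the last hour

# Hypothetical stand-in for an adaptive model trained on contextual signals.
def model_score(tx: Transaction) -> float:
    score = 0.0
    score += 0.3 if tx.velocity_1h > 5 else 0.0
    score += 0.2 if tx.device_age_days < 1 else 0.0
    score += 0.1 if tx.amount > 500 else 0.0
    return min(score, 1.0)

# Dynamic rules can be hot-swapped per merchant segment without redeploying.
Rule = Callable[[Transaction, float], str | None]

def high_risk_geo(tx: Transaction, score: float) -> str | None:
    return "review" if tx.country in {"XX"} and score > 0.4 else None

def decide(tx: Transaction, rules: list[Rule]) -> str:
    score = model_score(tx)
    for rule in rules:
        if (outcome := rule(tx, score)) is not None:
            return outcome
    if score >= 0.7:
        return "decline"
    return "review" if score >= 0.4 else "approve"

print(decide(Transaction(620.0, "XX", 0, 7), [high_risk_geo]))  # -> review
```

The design point is the separation: the model adapts through retraining, while the rule layer can be reconfigured per merchant segment in production without touching the model.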
One of the other main benefits so far is enabling engineers to complete their code with AI assistance using GitHub Copilot. Elavon's SVP of Software Engineering, Richie Walsh, is working towards AI agents being able to work from user story through to deploying to production and monitoring production applications. We will be enabling our Platform Engineering/DevOps teams with new tools to automate code-generation quality gates, deployment, and monitoring.
Finally, we plan to use AI to help generate API integrations for our merchants and partners through our developer portal.
What is the most significant change you’ve seen in your organization since AI use has become so much more prevalent?
AI has redefined how we approach both risk and optimization internally and for our merchants. What was once a linear fraud flow is now a dynamic orchestration model, layered with real-time machine learning insights. In our latest regional pilot, AI-driven decisioning improved exemption accuracy by 29% and contributed to a 9% reduction in customer friction at checkout.
From a purely operational perspective, one of the biggest changes is that people look at some of the tedious everyday work in a different light. AI provides both the opportunity and the drive to do something about these challenges. While generative AI might not always be the answer, there is a new openness and excitement about trying a different approach.
How do you see AI evolving or impacting payment security in the future?
We’ve spent years trying to stop fraud. And it’s worked, sort of. But it’s not going to be enough. In payments, the real opportunity now is not just protection, it’s precision. Enabling trust at scale, in real time, without breaking the customer journey. That’s what AI unlocks when it’s done right. We’re moving from old-school rules and reactive defense to something smarter: AI that predicts risk before it happens, adapts dynamically, and learns with every transaction. This is especially powerful for SMBs, who’ve long been priced out of enterprise-grade fraud tools. With the right AI, they get more than security; they get optimization.
More than 50% of declined transactions are false positives, a $600 billion industry problem created as merchants tighten security to block fraud. That’s not just friction; that’s lost revenue. At Elavon, our early-stage AI models have already cut false positives by up to 65%. Merchants keep the right customers, protect conversion, and reduce chargebacks without introducing friction. No more trade-offs.
There is also the problem of bots. Fraudsters are using automation to overwhelm checkout flows. But we’re not defenseless. Tools like Spec use AI honey traps, behavioral biometrics, and intent analytics to catch bots before they can act. These systems spot the difference between a human and a script not by what they do, but by how they do it. Spec’s approach has shown up to a 99% reduction in bot-driven attacks. And that matters because fraudsters don’t wait. Neither should we.
AI is no longer just fraud prevention. It’s fraud orchestration. It blends context, behavior, and business logic to make smarter decisions in milliseconds. For the industry, it’s a wake-up call.
Security should never come at the cost of conversion, and with the right AI, it doesn’t have to.
What potential risks should organizations consider as AI becomes more integrated into payment security?
With every leap forward in AI, we gain not just power, but responsibility. As AI becomes deeply embedded in payment security, it’s tempting to chase speed, scale, and optimization wins. But true resilience lies in recognizing and actively managing the risks that grow with scale.
The greatest threat isn’t always malicious. More often, it’s systemic. Unchecked, AI can introduce blind spots like misclassifying transactions, skewing risk scores, or failing to keep pace with evolving fraud tactics. And in a world where milliseconds make the difference between catching fraud and killing a legitimate sale (those false positives mentioned earlier), those blind spots matter.
According to the World Economic Forum and McKinsey, 51% of organizations have already experienced an AI-related failure, yet only 1 in 5 have implemented the governance needed to prevent recurrence. A recent IBM report found 75% of CEOs say they cannot trust AI without proper oversight, and only 39% report having governance in place for generative AI today. The message here is clear: trust doesn’t just happen. It is built.
At Elavon, we don’t just run models; we build them, and we have stringent model risk governance in place. Our fraud-specific AI is developed and maintained by dedicated data science teams, tuned in real time by merchant segment, and subject to continuous learning. That means our systems evolve faster than attackers can pivot. But we go further: every model we deploy is tested for explainability, and we embed drift detection, human-in-the-loop oversight, and transparent, auditable logic.
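As a rough illustration of what human-in-the-loop oversight with auditable logic can look like in practice, here is a hedged Python sketch. The score band, model version string, and feature attributions are invented for the example; a production system would use proper explainability tooling and an immutable audit store.

```python
# Hypothetical sketch (not Elavon's production code): route uncertain
# model decisions to human review and write an audit record so every
# outcome is explainable after the fact.
import json
import time

REVIEW_BAND = (0.4, 0.7)  # scores in this band go to a human analyst

def route(tx_id: str, score: float, top_features: dict[str, float]) -> str:
    if REVIEW_BAND[0] <= score < REVIEW_BAND[1]:
        decision = "human_review"
    else:
        decision = "decline" if score >= REVIEW_BAND[1] else "approve"
    # Append-only audit log: what decided, on which evidence, and when.
    audit_record = {
        "tx_id": tx_id,
        "score": score,
        "decision": decision,
        "top_features": top_features,   # e.g. from SHAP or similar tooling
        "model_version": "fraud-v12",   # hypothetical identifier
        "ts": time.time(),
    }
    with open("audit.log", "a") as f:
        f.write(json.dumps(audit_record) + "\n")
    return decision

print(route("tx-123", 0.55, {"velocity_1h": 0.21, "device_age": 0.14}))
```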
Because in payment security, it’s not just about being right; it’s about being accountable. We believe trust isn’t a by-product. It’s an output of design. Governance, transparency, and explainability aren’t “nice to haves”; they are the foundation of secure, scalable AI.

Regulation is catching up fast. On 1 August 2024, the EU AI Act (Regulation 2024/1689) came into force, the world’s first comprehensive law governing AI systems. It marks a global shift, and it’s changing the landscape for every business deploying AI in payments, fraud prevention, and security.

The AI Act doesn’t replace existing data protection laws like GDPR; it amplifies them. Under the AI Act, most fraud detection systems are likely to be classed as high-risk, triggering stringent obligations around human oversight, transparency, robustness, monitoring, and explainability. And the overlap with GDPR is no longer optional. If you’re using personal data in your AI, whether to train, infer, or log outcomes, you need a lawful basis, data minimization, purpose limitation, and clarity on automated decision-making rights under Article 22. Even pseudonymized or synthetic training datasets can fall under the AI Act’s traceability rules, and that means dual compliance: both AI accountability and GDPR compatibility.

It’s no longer enough to demonstrate performance. Regulators and customers will expect explainability. Organizations must conduct documented AI Impact Assessments, alongside Data Protection Impact Assessments (DPIAs), with clear governance models that can withstand both internal audits and external scrutiny.
My final thought here would be: build for the boardroom and the courtroom. The next frontier in payment security isn’t just smarter AI; it’s safer systems. To get there, organizations must embed governance from day one. Map your data flows. Align your AI and privacy teams.
What advice would you provide for an organization just starting their journey into using AI?
Start with governance, not glamour. It’s easy to be dazzled by what AI can do. But the real power, and the real risk, lie in how you govern it. Before chasing automation wins or optimization gains, stop and ask the hard questions. How will we govern this system? Who will monitor it? How will we explain its decisions, especially the ones that go wrong?
At Elavon, we learned early that responsible AI doesn’t start with models. It starts with boundaries. Every model we build begins with transparent, auditable logic, and explainability as a requirement, not a feature. We embed human-in-the-loop controls for high-stakes decisions, along with continuous tuning by use case and segment because fraud evolves, and so must the defenses.
Too many organizations wait for AI to fail before testing for drift. But AI models are only as good as the data they’re trained on, and when that data changes, models degrade quietly. That’s why we build early warning systems and make room for skepticism in every deployment. And let’s be clear: trust isn’t something you assume; it’s something you design for.
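One common way to build such an early warning system, offered here as a general-purpose sketch rather than a description of Elavon's tooling, is to monitor the Population Stability Index (PSI) of key features against their training-time baseline; the 0.25 alert threshold is a widely used rule of thumb.

```python
# Illustrative drift check: Population Stability Index (PSI) compares a
# feature's live distribution to its training baseline and raises an
# early warning before model accuracy visibly degrades.
import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    # Bin edges from baseline quantiles, widened to catch outliers.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    b_frac = np.histogram(baseline, edges)[0] / len(baseline) + 1e-6
    l_frac = np.histogram(live, edges)[0] / len(live) + 1e-6
    return float(np.sum((l_frac - b_frac) * np.log(l_frac / b_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 10_000)   # feature values at training time
live = rng.normal(0.5, 1.2, 10_000)   # the same feature in production
score = psi(baseline, live)
if score > 0.25:                      # common alert threshold
    print(f"PSI={score:.2f}: significant drift, retrain or investigate")
```

Running a check like this on every scoring feature, on a schedule, is what turns "models degrade quietly" into an alert someone actually sees.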
My advice for any organization starting out is to start with governance. Scale with care. Build systems you’d be proud to explain in a courtroom and not just a boardroom. If you’re not building AI in-house, partner with those who are already doing it responsibly. The future of secure AI doesn’t begin with ambition. It begins with accountability.
What AI trend (not limited to payments) are you most excited about?
I’m most excited about the shift from black-box AI to explainable, responsible AI across every industry. We’re entering an era where it’s not enough for AI to be powerful, it needs to be understandable, auditable, and aligned with human intent.
Whether it’s in healthcare, financial services, or fraud prevention, the trend I’m watching closely is the rise of agentic AI systems that can act autonomously and explain their reasoning in real time. This impacts both compliance and trust.
I’m also energized by the convergence of AI and cybersecurity. Security systems now anticipate and adapt to threats in milliseconds using real-time behavioral modelling, rather than just reacting to known risks. It’s like giving your fraud defenses a sixth sense.
But what excites me most isn’t just the tech. It’s the maturing mindset behind it. We’re finally having the right conversations about governance, fairness, risk, and accountability at the same pace we are innovating. And that, to me, is the most exciting shift of all.
Interested in learning more? Register now to see Candice Pressinger speak at the 2025 Europe Community Meeting, where she will deliver her AI-themed presentation, AI at the Gates: Stopping Payment Fraud Before it Starts.