Artificial intelligence (AI) systems are increasingly being used within businesses to help create, manage, and operate payment systems and environments. Their use is expanding beyond systems directly managed by humans to agentic AI systems, which have a level of agency to perform actions on their own behalf. The rapid pace of change in AI makes it difficult to understand the potential risk posed by these systems and how best to deploy them securely.
In collaboration with the payment security industry, the PCI Security Standards Council (PCI SSC) offers high-level principles to consider when developing and deploying AI systems. The principles are grouped by what AI systems should not be, what they should be, and what they may be. Although this content is guidance only, the list begins with a single item that must be.
Note that the following list is not intended to be exhaustive. As this industry is rapidly innovating, PCI SSC will continue to evaluate and adapt these principles on a regular basis to ensure industry alignment.
Overview of AI Principles
- AI Systems Must Be
- AI Systems Should Not Be
- AI Systems Should Be
- AI Systems May Be
AI Systems Must Be:
AI systems must be deployed and managed in compliance with applicable PCI SSC requirements
Use of AI does not remove or bypass the need to meet the requirements of any applicable PCI SSC standard. For example, if your implementation is in scope for PCI DSS, the AI systems need to be implemented in accordance with PCI DSS requirements. This includes how data is secured as it is stored, processed, and transmitted.
This can sometimes be complex, and AI systems can often operate in ways that are not easily decomposed and understood. However, this complexity does not remove the need to meet any applicable requirement.
The following principles are designed to help with understanding the security nuances of AI systems, and some may call out specific PCI DSS requirements. References to individual requirements should not be interpreted as indicating that only those requirements apply; they are examples of requirements that are likely to be impacted or require consideration in light of that principle.
AI Systems Should Not Be:
Trusted with high-impact secrets or unprotected sensitive data
AI systems have been known to provide output that contains sensitive information, either without direct prompting or through specifically and maliciously formatted prompt input. Additionally, the data processing and flow of AI systems may not be fully contained or constrained within an entity’s control.
Preventing this exposure of sensitive data starts by limiting the sensitive data provided to the AI system in the first place. This includes ensuring that data used to train the AI system is sanitised of sensitive information and secrets prior to use, as well as ensuring production data entrusted to these systems is appropriately secured.
Examples of sensitive data that should be kept out of AI datasets include API access tokens and user credentials (that are not specific to that AI), unsecured account data, and cryptographic keys or key material.
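As an illustration only, a simple pre-processing step can scan training and context data for common secret formats before the data reaches the AI system. The patterns below are simplified assumptions; a real deployment would use a maintained secret-scanning tool with a far broader rule set.

```python
import re

# Simplified patterns for common secret formats; illustrative only.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "bearer_token": re.compile(r"\bBearer\s+[A-Za-z0-9\-._~+/]{20,}=*", re.IGNORECASE),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "pan_candidate": re.compile(r"\b(?:\d[ -]?){13,19}\b"),  # possible card numbers
}

def sanitise(text: str) -> str:
    """Redact likely secrets before text is used for training or prompting."""
    for name, pattern in SECRET_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{name}]", text)
    return text

print(sanitise("key=AKIAABCDEFGHIJKLMNOP card 4111 1111 1111 1111"))
```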
Given agency to perform operations which require the formal acceptance of responsibility
It is important to remember that even with the increasing ability of AI, these systems are not human individuals, and they cannot accept or take on responsibility. Therefore, operations and roles which require the formal acceptance of responsibility are not suitable for AI systems. This includes roles such as key custodians (a role which is also prevented by the ‘no high-impact secrets’ principle above) and providing management-level authorization or approvals.
Used to generate security-sensitive random or secret values
One situation where AI can be said to be just as good as humans is in generating random numbers and other unique, secret values. That is, both AI and humans are just as bad at ‘thinking up’ random values! It is often a challenge to determine if a value is truly random or not, and the recommendation is always to use a known-good random number generator (RNG), in a way that will keep the values that are generated secret and safe.
It may seem reasonable to have an AI system interface to such a known-good RNG where random values are required, but if the purpose is generating high-impact secrets (cryptographic keys, passwords, etc.), that use violates the first 'should not' principle anyway. Of course, if the random values are not security sensitive, this principle may not apply.
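As a minimal sketch of that recommendation: where a random secret is genuinely needed (and is not a high-impact secret being handled by an AI system), call a known-good generator directly. Python's `secrets` module, which draws from the operating system's CSPRNG, is one such source.

```python
import secrets
import string

# Draw from the OS CSPRNG via the secrets module; never ask an AI system
# to 'think up' or relay values like these.
def generate_token(n_bytes: int = 32) -> str:
    return secrets.token_urlsafe(n_bytes)

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))
```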
Implemented with full agency over the creation and deployment process chain without a human-in-the-loop
An important concept in the use of AI systems is ensuring that there is a ‘human-in-the-loop’. This means that there should always be a human who is involved and responsible for the oversight of an end-to-end process. AI systems may be used to perform the individual actions to create, test, and deploy software – but the entirety of the creation and deployment pipeline should not be fully automated.
It is also important to understand that different AI systems may be trained or tuned to specific aspects of development. An AI system that is used to create software may not be specifically trained to create secure software.
PCI DSS Requirement 6 covers the security of software development and remains a good reference for use of AI systems.
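As a sketch of what a human-in-the-loop gate might look like in a deployment pipeline (the function and approval-record names here are hypothetical, not part of any PCI SSC requirement):

```python
from dataclasses import dataclass

@dataclass
class Approval:
    approver: str   # the named, responsible human
    change_id: str
    approved: bool

def deploy(change_id: str, artifact: str, approvals: dict[str, Approval]) -> None:
    """AI systems may build and test the artifact, but release is gated on
    an explicit, attributable human approval."""
    approval = approvals.get(change_id)
    if approval is None or not approval.approved:
        raise PermissionError(f"change {change_id}: no human approval on record")
    print(f"deploying {artifact}, approved by {approval.approver}")
```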
Provided with access to systems and information not required for their operation
PCI DSS Requirement 7 notes the importance of ‘least privilege’ (providing only the minimum level of privileges needed to perform a job) and ‘need to know’ (providing access to only the least amount of data needed to perform a job). These items hold true when considering the use of AI systems. It can be tempting to give an AI system access to as many systems and as much information as possible, but this increases the potential impact if things go wrong or work in a way that was not expected.
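One hedged sketch of these concepts applied to an agentic AI system is a deny-by-default allow-list of tools and data sources (all names below are hypothetical):

```python
# Deny-by-default: the agent may only invoke tools explicitly granted to it.
def read_transaction_summary(merchant_id: str) -> str:
    return f"summary for {merchant_id}"  # placeholder back-end call

TOOL_REGISTRY = {"read_transaction_summary": read_transaction_summary}
ALLOWED_TOOLS = {"read_transaction_summary"}  # least privilege / need to know

def invoke_tool(tool_name: str, **kwargs):
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool_name}' not permitted for this agent")
    return TOOL_REGISTRY[tool_name](**kwargs)
```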
AI Systems Should Be:
Provided with access to account data only when it is suitably protected
AI systems have long been used in payments to help detect fraud and manage risk. Relatively new, however, is the use of agentic AI to facilitate and even make payments on behalf of cardholders. These operations obviously require access to some form of payment card data; however, this does not mean that the data cannot be protected. In a later principle, the use of single-use PANs and payment tokens is discussed as a potential way to align the need for AI to use payment data whilst also keeping that payment data secure.
Requirement 3 of PCI DSS covers how to secure cardholder data at rest, while Requirement 4 covers securing this data during transmission; these requirements apply equally to AI-based systems.
Deployed so that the actions performed by the AI can be logged and monitored, and a (human) individual held responsible for those actions
A previous principle noted the importance of understanding that an AI cannot be held responsible. This principle is another aspect of that: there need to be ways to log and monitor what an AI system is doing, so that actions can be traced back to the system performing them, and a human individual can be held responsible for those actions. The role of the responsible individual is important here, as AI systems can often involve large teams, but ultimate responsibility must come down to a single individual.
Where possible, logging should be sufficient to audit the prompt inputs and reasoning process used by the AI system that led to the output provided.
PCI DSS Requirement 10 covers the need for logging and monitoring and should be considered in this context.
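A minimal sketch of such an audit record, assuming a JSON-structured log with a named responsible owner (the field names are illustrative):

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def log_ai_action(system_id: str, responsible_owner: str, prompt: str,
                  reasoning_summary: str, action: str) -> None:
    """Emit one structured audit record per AI action, naming the single
    accountable human as well as the system."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "responsible_owner": responsible_owner,
        "prompt": prompt,                  # input that led to the action
        "reasoning_summary": reasoning_summary,
        "action": action,
    }))
```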
Validated prior to, and throughout, deployment to confirm they continue to work as expected
The need for validation is important across the entire lifecycle of an AI system, from training, through initial deployment, to on-going validation of correct operation. This includes consideration of supply chain risk: where does the AI model come from, where does the training data come from, and so on.
When deploying and using AI systems, it is important to understand that they are often inherently non-deterministic: the output obtained from a system may change given the exact same input. Additionally, systems can 'drift' over time as they obtain and process more input (which may include malicious input from threat actors). This makes the need for on-going validation even more important.
PCI DSS Requirements 6 and 11 should be considered in this context.
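As one hedged illustration of on-going validation, a scheduled harness can re-run a fixed evaluation set and alert when results drift below a threshold. The `classify_transaction` function below is a stand-in for the AI system under test, and the cases and threshold are invented for the sketch.

```python
# Re-run a fixed evaluation set on a schedule; because AI output can be
# non-deterministic, score against expected labels rather than exact strings.
EVAL_CASES = [
    {"input": "card-present, small amount, regular merchant", "expected": "approve"},
    {"input": "card-not-present, high amount, new merchant", "expected": "review"},
]
DRIFT_THRESHOLD = 0.95

def classify_transaction(description: str) -> str:
    """Placeholder for the AI system under test."""
    return "review" if "high amount" in description else "approve"

def run_validation() -> None:
    correct = sum(classify_transaction(c["input"]) == c["expected"] for c in EVAL_CASES)
    accuracy = correct / len(EVAL_CASES)
    if accuracy < DRIFT_THRESHOLD:
        raise RuntimeError(f"model drift suspected: accuracy {accuracy:.0%}")

run_validation()
```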
Implemented so that they can be easily disabled if required
An important question to ask when implementing any system is, “what if something goes wrong?” This is equally true for AI systems, and so as part of planning any AI deployment, ensure there is a clear and well-understood process for disabling operation. This may seem obvious, but it can sometimes be complex if not designed in from the outset.
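A hedged sketch of one simple approach is a deny-by-default flag checked before every AI action (a central feature-flag service would usually be preferable to the environment variable used here):

```python
import os

def ai_enabled() -> bool:
    """Deny-by-default kill switch: the AI acts only while the flag is
    explicitly set, so clearing it halts operation without a redeploy."""
    return os.environ.get("AI_AGENT_ENABLED", "false").lower() == "true"

def perform_action(action: str) -> None:
    if not ai_enabled():
        raise RuntimeError("AI system disabled; refusing to act")
    print(f"performing: {action}")
```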
Protected against malicious input and malformed output
When securing software and data-repository interfaces, the need to protect against malicious input is well understood. A similar type of security is equally important for AI systems, which can be vulnerable to attacks like 'prompt injection' and 'data poisoning'. Fundamentally, these types of attacks exploit an unsecured interface to the AI system, with the goal of getting the AI to perform privileged actions, reveal sensitive data, or change its behaviour in a way that is useful to the malicious entity. In a payment environment, AI systems should be protected against attacks that aim to perform fraudulent payments.
Therefore, placing controls over the types of input or commands the AI system can receive and act upon, and filtering the output of the AI system, are important.
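A deliberately simplified sketch of that idea follows; real controls would be layered (allow-lists, structured tool calls, policy engines) rather than a pair of regular expressions.

```python
import re

INJECTION_MARKERS = re.compile(r"ignore (all|previous) instructions", re.IGNORECASE)
PAN_PATTERN = re.compile(r"\b(?:\d[ -]?){13,19}\b")  # candidate card numbers

def guarded_call(model, user_input: str) -> str:
    """Screen input for obvious injection markers, then filter the output
    for data that should never leave the system."""
    if INJECTION_MARKERS.search(user_input):
        raise ValueError("input rejected: possible prompt injection")
    output = model(user_input)
    return PAN_PATTERN.sub("[REDACTED PAN]", output)

print(guarded_call(str.upper, "please summarise account 4111 1111 1111 1111"))
```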
Provided with limited, use-case- and context-specific credentials for any required access
A previous principle outlined that AI systems should not be ‘provided with access to systems and information not required for their operation’. The other side of that is ensuring that any access that is provided is appropriately limited and specific to the needs and implementation of that AI system.
AI systems should be provided with their own credentials, which can be easily tracked and revoked if necessary (as per the principle for disabling systems if required). Where possible, these credentials should be tied to the AI system in some way that mitigates the risk of credential theft and reuse by a malicious party (for example, through use of bound credentials). Any credentials provided should meet the requirements of least privilege and need to know.
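A hedged sketch of issuing such a credential, assuming a short-lived, per-agent token with an explicit scope list (the data model is invented for illustration):

```python
import secrets
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentCredential:
    agent_id: str          # credential is specific to one AI system
    scopes: frozenset      # least privilege: only the scopes it needs
    expires_at: datetime   # short-lived, limiting the value of theft
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

def issue_credential(agent_id: str, scopes: set, ttl_minutes: int = 15) -> AgentCredential:
    """Per-agent credential that is easy to track, expire, and revoke."""
    return AgentCredential(
        agent_id=agent_id,
        scopes=frozenset(scopes),
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    )

cred = issue_credential("fraud-review-agent", {"transactions:read"})
```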
Another aspect of secure AI systems is considering which human users (or other automated systems) should be provided with access to the AI system.
Considered as a potential ‘malicious insider’ during threat analysis exercises and incident response walk-throughs
PCI DSS Requirement 12 covers information security policy, processes, and incident response. Specifically, PCI DSS Requirement 12.10 covers the need for an incident response plan. These plans should cover the potential for malicious AI use, whether by external parties subverting normal AI operation or by internal staff misusing AI systems. Where an AI is given some agency to perform actions within the network or environment, incident response plans should consider the potential for the AI to become a 'malicious insider' itself.
Implemented in a way that secures the operational environment and context- or user-specific data from other users and other AI systems
PCI DSS Requirement 1 covers the importance of implementing secure networks. Although a segmented network is not required, segmentation and isolation of AI systems can help reduce the risk of these systems being used in a malicious context, or exposing sensitive systems, data, or functionality through unexpected operation. Segmentation and security controls around AI systems may also be implemented at the system level: the physical machine, virtual machine, OS, container, etc.
AI Systems May Be:
Provided with access to protected payment data
When implementing AI systems that access cardholder data, consider the use of payment tokens or single-use PANs to limit the scope and impact of the AI system. Although beyond the scope of PCI requirements, implementing limits on the use of these values (limited total spend or frequency of spend, limited to use by a single merchant, limited usable lifetime, etc.) can further reduce risk. Where access does not strictly require the full PAN, truncated PANs, or PANs encrypted whilst still maintaining cleartext BIN data, may be suitable to mitigate any risk associated with AI use.
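As a simple illustration of truncation, the sketch below keeps the BIN and last four digits and masks the rest. Permitted truncation formats vary by card brand and PAN length, so any real format should be confirmed against current PCI SSC guidance.

```python
def truncate_pan(pan: str) -> str:
    """Illustrative truncation keeping the first six digits (BIN) and the
    last four; confirm the permitted format for your card brands before use."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    if len(digits) < 13:
        raise ValueError("not a plausible PAN")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(truncate_pan("4111 1111 1111 1111"))  # 411111******1111
```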
Used to provide input to an approval decision and perform actions after an approval decision has been made
Although an AI system cannot assume responsibility, it is very reasonable to use AI systems to provide input to an authorization decision. For example, AI systems may be useful in determining and implementing patches for networked systems. A responsible human would be expected to authorize such use; based on an appropriate risk analysis, this authorization may range from a blanket-level approval through to specific approval for each individual system to be patched.
PCI DSS Requirement 6 covers change management, including the need for auditability and approval. These requirements apply equally to AI-based systems.
Trusted to autonomously perform fail-secure actions
AI systems can monitor many different sources of information and perform some actions more quickly than a human. Although a previous principle noted the importance of authorization for AI actions, one area where action prior to authorization may be appropriate is in response to on-going attacks. Here, an AI system may be trusted to perform proactive isolation, network throttling, or other mitigations without direct human approval.
However, care must be taken if such control is provided to an AI system: Can this lead to more easily deployed denial-of-service attacks? What access and permissions does the AI need to perform these actions, and how can that access and permission structure be misused?
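A hedged sketch of such guardrails, with all limits and host names invented for illustration:

```python
# Bound the blast radius of autonomous containment: hard limits apply even
# when the AI (or an attacker steering it) requests more aggressive action.
PROTECTED_HOSTS = {"hsm-gateway", "payment-switch"}  # never auto-isolated
MAX_AUTO_ISOLATIONS = 3                              # per incident

isolated: set = set()

def auto_isolate(host: str) -> None:
    if host in PROTECTED_HOSTS:
        raise PermissionError(f"{host} requires human authorization to isolate")
    if len(isolated) >= MAX_AUTO_ISOLATIONS:
        raise RuntimeError("isolation budget exhausted; escalate to a human")
    isolated.add(host)
    print(f"isolated {host}; incident queued for human review")
```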
Used to gather, review, and summarize content
Probably one of the most common uses of AI currently is to parse, and provide output based on, large sets of data. Log reviews, as required by PCI DSS Requirement 10.4, are a good example of this.
However, as previous principles have noted, it remains important to continually check and validate that these systems are providing correct and expected output. A worst-case scenario for this type of implementation is an AI system that keeps providing summarised logs showing everything is OK, when there are in fact areas that need more attention.
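One hedged way to guard against that failure mode is to seed the logs under review with known 'canary' anomalies and confirm the AI summary surfaces them; the events and matching logic below are simplified for illustration.

```python
# If the seeded anomalies go unreported, 'all clear' summaries cannot be trusted.
CANARY_EVENTS = [
    "failed login for user canary-admin from 203.0.113.7",
    "audit log cleared on host canary-db-01",
]

def validate_summary(summarise, logs: list) -> None:
    summary = summarise(logs + CANARY_EVENTS)
    missed = [c for c in CANARY_EVENTS if c not in summary]
    if missed:
        raise RuntimeError(f"AI log review missed {len(missed)} seeded anomalies")
```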
Used to generate content, as part of a product development or deployment process
AI is now often used in the generation of content – written documents (such as policy documents) and software are the most common forms of content which may be produced in this way. Although the importance of a ‘human-in-the-loop’ was previously noted, this does not mean that AI systems cannot be used in content generation and system/software testing.
Used as part of user-interaction systems
Finally, the use of AI systems for user-interaction is both common and increasing. This is a valid use-case, if appropriate controls are put in place. It is reasonable to expect that any AI system which is exposed to public access will be subject to all types of attempted manipulation, from the silly (getting a helpdesk chatbot to help write code) to the damaging (manipulating a chatbot to expose sensitive data or details on other users).