CyberArk Responsible AI Policy
Scope and Purpose
This CyberArk Responsible AI Policy, including the Features FAQs, applies to use of those artificial intelligence services, features, and functionalities that we provide within CyberArk’s products and services (collectively, “AI Features”). This policy supplements CyberArk’s SaaS Terms of Service. References to “you” or “your” in this Policy mean the “Customer”, as defined in CyberArk’s SaaS Terms of Service.
Responsible Design and Governance of AI Features
CyberArk is committed to developing safe, fair, and accurate AI Features and providing you with tools designed to enhance your security and user experience. An AI Impact Assessment is conducted for AI Features in accordance with standard market practice, designed to identify biases or unethical use of AI and to address transparency, fairness, accountability, and privacy and security by design. These principles are applied throughout the lifecycle of AI Features, from design and development through deployment, with the objective of supporting secure, reliable, and appropriate use in enterprise environments. CyberArk periodically reviews AI Features and related safeguards in light of evolving technologies, customer expectations, and applicable legal and regulatory requirements.
CyberArk assesses the accuracy of its AI Features, for example by using a validated reference (“golden”) dataset, running the model against it, and comparing the results to the labeled data. CyberArk may also perform manual validation on a representative portion of the results to provide additional assurance. In addition, CyberArk collects user feedback to help measure the accuracy and efficiency of AI outputs and continuously improve the AI Features.
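As an illustration only (this is not CyberArk’s implementation; the function and data below are hypothetical), the golden-dataset validation described above amounts to running the model over labeled examples and measuring how often its output matches the reference labels:

```python
# Hypothetical sketch of a golden-dataset accuracy check.
# Not CyberArk's implementation; names and data are illustrative only.

def accuracy_against_golden(model, golden_dataset):
    """Run the model over labeled examples and compare outputs to labels."""
    correct = 0
    for example, expected_label in golden_dataset:
        if model(example) == expected_label:
            correct += 1
    return correct / len(golden_dataset)

# Toy "model": flags an event as anomalous above a threshold score.
toy_model = lambda score: "anomalous" if score > 0.8 else "normal"

golden = [(0.95, "anomalous"), (0.10, "normal"),
          (0.85, "anomalous"), (0.70, "anomalous")]
print(accuracy_against_golden(toy_model, golden))  # 3 of 4 labels match -> 0.75
```

Manual validation of a representative sample, as the policy notes, would then spot-check a subset of these comparisons by hand.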
CyberArk applies various layers of monitoring and alerting, to identify anomalies, data quality issues, or unintended model behavior, including potential data or model corruption, designed to support the reliability of AI Features.
Where applicable, CyberArk provides clear explanations within the user interface describing the reasons behind each recommendation. This includes transparent insight into the different factors and statistical categories that influence the recommendation. These explanations are continuously enhanced by incorporating additional data to strengthen the reasoning mechanisms.
CyberArk incorporates security-by-design principles, aligned with industry standards, into AI Features, including controls intended to reduce the risk of misuse, unintended behavior, or unauthorized access. These controls may include technical and organizational measures, as well as contractual safeguards, depending on the nature of the AI Feature and its intended use.
Use of Third-Party Generative AI Services
Certain AI Features use third-party generative AI services (currently through Azure OpenAI, AWS Bedrock, and GCP Vertex AI), and some AI Features may be provided from a region different from the one in which your SaaS Products are hosted. The third-party generative AI services that CyberArk uses do not use your customer data to train their models, nor do they store your customer data. For limited purposes, such as abuse monitoring, as further described in the applicable third-party generative AI service provider policies, these third parties may temporarily store data as required for those limited purposes. CyberArk or the third-party generative AI services may detect and mitigate instances of recurring content and/or behaviors that suggest use of the product in a manner that may violate this Policy or other applicable product terms, in accordance with industry standards.
Limitations and Human Oversight
AI Features utilize models to generate predictions or recommendations based on data patterns, producing probabilistic outputs. AI introduces the potential for inaccurate or inappropriate content. It is crucial to evaluate these outputs for accuracy and suitability in your specific use case. Both you and your end users are accountable for decisions, advice, actions, and failures to act resulting from the use of AI Features.
Please refer to the specific AI Feature page in CyberArk’s documentation for detailed information about each feature.
Acceptable Use of AI Features
You may not use, or facilitate or allow others to use, the AI Features:
- for any illegal or fraudulent activity;
- for intentional disinformation or deception;
- to violate the rights of others, including privacy rights, such as through unlawful tracking, monitoring, or identification;
- to harass, harm, or encourage the harm of individuals or specific groups;
- to intentionally circumvent safety filters or functionality, or to prompt models to act in a manner that violates law, regulation, or this Policy;
- to violate the security, integrity, or availability of any user, network, computer or communications system, software application, or network or computing device.
Opt-Out and Updates
If you do not wish to receive the AI Features, your authorized system admin may opt out of the AI Features on behalf of your organization as described in the AI Preference Center.
CyberArk may update this policy from time to time without prior notice.
CyberArk AI Features FAQs
Q: How does CyberArk ensure responsible use of AI and bias mitigation?
A: CyberArk conducts AI Impact Assessments for AI Features in accordance with standard market practice, designed to identify biases or unethical use of AI and to address transparency, fairness, accountability, and privacy and security by design.
Q: Can I opt out of using AI Features?
A: Yes, if you do not wish to receive the AI Features, your authorized system admin may opt out on behalf of your organization as described in the AI Preference Center.
Certain detections within Privileged Threat Analytics (PTA) use proprietary machine learning-based algorithms that are configured at the product level. These detections may be individually enabled or disabled through the applicable product user interface. For more information, see the PTA Security Configuration documentation page.
Q: Where can I find detailed information about each AI Feature?
A: Please refer to the specific AI Feature page in CyberArk’s documentation for detailed information about each feature.
Q: Will I know when AI technology is used within the CyberArk product?
A: CyberArk will inform users when AI is used to produce results or recommendations, by indicating this in-product, as well as in the supporting documentation.
Q: Do the AI Features use any third-party generative AI services?
A: Yes, certain AI Features use third-party generative AI services (currently through Azure OpenAI, AWS Bedrock, and GCP Vertex AI), as further detailed in each specific AI Feature documentation page. CyberArk’s use of these services within its products and services is subject to CyberArk’s contractual agreements with the respective vendors of these services.
Q: Does CyberArk or any third party monitor my use of the AI Feature?
A: CyberArk or the third-party generative AI services may detect and mitigate instances of recurring content and/or behaviors that suggest use of the product in a manner that may violate CyberArk’s responsible AI Policy or other applicable product terms.
Q: Will CyberArk process personal data for the purpose of providing AI Features?
A: CyberArk may process personal data for the purpose of providing AI Features to you in accordance with your Data Processing Agreement with CyberArk.
Q: Is human oversight required prior to applying the AI Feature’s recommendation?
A: Yes. AI Features utilize models to generate predictions or recommendations based on data patterns, producing probabilistic outputs. AI introduces the potential for inaccurate or inappropriate content. It is crucial to evaluate these outputs for accuracy and suitability in your specific use case. Both you and your end users are accountable for decisions, advice, actions, and failures to act resulting from the use of AI Features.
Q: What is CyberArk IGA AI Profiles capability?
A: AI Profiles is a machine learning-based capability within CyberArk’s Identity Governance and Administration (IGA) solution that analyzes identity‑related data from the IGA solution to generate assistive profile recommendations, helping streamline user provisioning, onboarding, and lifecycle governance. These recommendations are provided to support customer decision-making, and final access decisions remain under customer control. AI Profiles does not currently include an in-product notification of AI-generated content, but no generation of content will occur without the customer’s action. Further details of CyberArk’s ongoing assessment of the Responsible AI features of this capability are available on request.
Q: What is the Machine Identity AI Assistant and how may it be used?
A: The Machine Identity AI Assistant is a chatbot available on CyberArk Machine Identity Security documentation pages (e.g., https://docs.cyberark.com/mis-saas/). It is trained on Machine Identity Security documentation to assist users in navigating and understanding product information. The AI Assistant processes user inputs through third-party providers, OpenAI and Chatbase (chatbase.co). Users are not required to log in to any CyberArk account to use the AI Assistant, and CyberArk does not cross-reference account data with AI Assistant interactions.
Use of the AI Assistant is subject to CyberArk’s Responsible AI Policy and the acceptable use restrictions as set forth above. Users may not submit personal, proprietary, or sensitive data to the AI Assistant. User inputs are processed anonymously when the AI Assistant is used in accordance with these restrictions.
Users assume the risk of any errors, omissions, or misinformation generated by the AI Assistant. CyberArk shall not be liable for AI Assistant outputs.
Last updated February 9, 2026