[{"body":"In the earlier Pod Security Policy controller, it was possible to define a setting which would enable AppArmor for all the containers within a Pod so they may be assigned the desired profile. Assigning an AppArmor profile, accomplished via an annotation, is useful in that it allows secure defaults to be defined and may also result in passing other validation rules such as those in the Pod Security Standards. This policy mutates Pods to add an annotation for every container to enabled AppArmor at the runtime/default level.\n","category":"PSP Migration","filters":"mutate::PSP Migration::%!s(\u003cnil\u003e)::Pod,Annotation","link":"/policies/psp-migration/add-apparmor/add-apparmor/","policy":"mutate","subject":"Pod,Annotation","title":"Add AppArmor Annotations","version":null},{"body":"In the earlier Pod Security Policy controller, it was possible to configure a policy to add capabilities to containers within a Pod. This made it easier to assign some basic defaults rather than blocking Pods or to simply provide capabilities for certain workloads if not specified. This policy mutates Pods to add the capabilities SETFCAP and SETUID so long as they are not listed as dropped capabilities first.\n","category":"PSP Migration","filters":"mutate::PSP Migration::%!s(\u003cnil\u003e)::Pod","link":"/policies/psp-migration/add-capabilities/add-capabilities/","policy":"mutate","subject":"Pod","title":"Add Capabilities","version":null},{"body":"CAST AI will not downscale a node that includes a pod with the  autoscaling.cast.ai/removal-disabled=\"true\" label on it, this protects sensitive workloads from being evicted and can be attributed to any pod to protect against unwanted downscaling. This policy will mutate jobs and  cronjobs to add the removal-disabled label to protect against eviction. \n","category":"CAST AI","filters":"mutate::CAST AI::%!s(\u003cnil\u003e)::Job, CronJob","link":"/policies/castai/add-castai-removal-disabled/add-castai-removal-disabled/","policy":"mutate","subject":"Job, CronJob","title":"Add CAST AI Removal Disabled","version":null},{"body":"In some cases you would need to trust custom CA certificates for all the containers of a Pod. It makes sense to be in a ConfigMap so that you can automount them by only setting an annotation. This policy adds a volume to all containers in a Pod containing the certificate if the annotation called `inject-certs` with value `enabled` is found.\n","category":"Sample","filters":"mutate::Sample::1.5.0::Pod,Volume","link":"/policies/other/add-certificates-volume/add-certificates-volume/","policy":"mutate","subject":"Pod,Volume","title":"Add Certificates as a Volume","version":"1.5.0"},{"body":"Pods which don't specify at least resource requests are assigned a QoS class of BestEffort which can hog resources for other Pods on Nodes. At a minimum, all Pods should specify resource requests in order to be labeled as the QoS class Burstable. This sample mutates any container in a Pod which doesn't specify memory or cpu requests to apply some sane defaults.\n","category":"Other","filters":"mutate::Other::1.7.0::Pod","link":"/policies/other/add-default-resources/add-default-resources/","policy":"mutate","subject":"Pod","title":"Add Default Resources","version":"1.7.0"},{"body":"A Pod securityContext entry defines fields such as the user and group which should be used to run the Pod. Sometimes choosing default values for users rather than blocking is a better alternative to not impede such Pod definitions. This policy will mutate a Pod to set `runAsNonRoot`, `runAsUser`, `runAsGroup`, and  `fsGroup` fields within the Pod securityContext if they are not already set.\n","category":"Sample","filters":"mutate::Sample::1.6.0::Pod","link":"/policies/other/add-default-securitycontext/add-default-securitycontext/","policy":"mutate","subject":"Pod","title":"Add Default securityContext","version":"1.6.0"},{"body":"When a Pod requests an emptyDir, by default it does not have a size limit which may allow it to consume excess or all of the space in the medium backing the volume. This can quickly overrun a Node and may result in a denial of service for other workloads. This policy adds a sizeLimit field to all Pods mounting emptyDir volumes, if not present, and sets it to 100Mi.\n","category":"Other","filters":"mutate::Other::1.6.0::Pod","link":"/policies/other/add-emptydir-sizelimit/add-emptydir-sizelimit/","policy":"mutate","subject":"Pod","title":"Add emptyDir sizeLimit","version":"1.6.0"},{"body":"Instead of defining a common set of environment variables multiple times either in manifests or separate policies, Pods can reference entire collections stored in a ConfigMap. This policy mutates all initContainers (if present) and containers in a Pod with environment variables defined in a ConfigMap named `nsenvvars` that must exist in the destination Namespace.     \n","category":"Other","filters":"mutate::Other::1.6.0::Pod","link":"/policies/other/add-env-vars-from-cm/add-env-vars-from-cm/","policy":"mutate","subject":"Pod","title":"Add Environment Variables from ConfigMap","version":"1.6.0"},{"body":"The Kubernetes downward API only has the ability to express so many options as environment variables. The image consumed in a Pod is commonly needed to make the application aware of some logic it must take. This policy takes the value of the `image` field and adds it as an environment variable to Pods.\n","category":"Other","filters":"mutate::Other::%!s(\u003cnil\u003e)::Pod","link":"/policies/other/add-image-as-env-var/add-image-as-env-var/","policy":"mutate","subject":"Pod","title":"Add Image as Environment Variable","version":null},{"body":"Images coming from certain registries require authentication in order to pull them, and the kubelet uses this information in the form of an imagePullSecret to pull those images on behalf of your Pod. This policy searches for images coming from a registry called `corp.reg.com` and, if found, will mutate the Pod to add an imagePullSecret called `my-secret`.\n","category":"Sample","filters":"mutate::Sample::1.6.0::Pod","link":"/policies/other/add-imagepullsecrets/add-imagepullsecrets/","policy":"mutate","subject":"Pod","title":"Add imagePullSecrets","version":"1.6.0"},{"body":"Images coming from certain registries require authentication in order to pull them, and the kubelet uses this information in the form of an imagePullSecret to pull those images on behalf of your Pod. This policy searches for images coming from a registry called `corp.reg.com` referenced by either one of the containers or one  of the init containers and, if found, will mutate the Pod to add an imagePullSecret called `my-secret`.\n","category":"Sample","filters":"mutate::Sample::1.6.0::Pod","link":"/policies/other/add-imagepullsecrets-for-containers-and-initcontainers/add-imagepullsecrets-for-containers-and-initcontainers/","policy":"mutate","subject":"Pod","title":"Add imagePullSecrets for Containers and InitContainers","version":"1.6.0"},{"body":"In order for Istio to include namespaces in ambient mode, the label `istio.io/dataplane-mode`  must be set to `ambient`. As an alternative to rejecting Namespace definitions which don't already  contain this label, it can be added automatically. This policy adds the label `istio.io/dataplane-mode` set to `ambient` for all new Namespaces.\n","category":"Istio","filters":"mutate::Istio::1.6.0::Namespace","link":"/policies/istio/add-ambient-mode-namespace/add-ambient-mode-namespace/","policy":"mutate","subject":"Namespace","title":"Add Istio Ambient Mode","version":"1.6.0"},{"body":"In order for Istio to inject sidecars to workloads deployed into Namespaces, the label `istio-injection` must be set to `enabled`. As an alternative to rejecting Namespace definitions which don't already contain this label, it can be added automatically. This policy adds the label `istio-inject` set to `enabled` for all new Namespaces.\n","category":"Istio","filters":"mutate::Istio::1.6.0::Namespace","link":"/policies/istio/add-sidecar-injection-namespace/add-sidecar-injection-namespace/","policy":"mutate","subject":"Namespace","title":"Add Istio Sidecar Injection","version":"1.6.0"},{"body":"If a Pod exists with the annotation `karpenter.sh/do-not-evict: true` on a Node, and a request is made to delete the Node, Karpenter will not drain any Pods from that Node or otherwise try to delete the Node. This is useful for Pods that should run uninterrupted to completion. This policy mutates Jobs and CronJobs so that Pods spawned by them will contain the `karpenter.sh/do-not-evict: true` annotation.\n","category":"Karpenter, EKS Best Practices","filters":"mutate::Karpenter, EKS Best Practices::1.6.0::Pod","link":"/policies/karpenter/add-karpenter-donot-evict/add-karpenter-donot-evict/","policy":"mutate","subject":"Pod","title":"Add Karpenter Do Not Evict","version":"1.6.0"},{"body":"Selecting the correct Node(s) provisioned by Karpenter is a way to specify the appropriate resource landing zone for a workload. This policy injects a nodeSelector map into the Pod based on the Namespace type where it is deployed.\n","category":"Karpenter, EKS Best Practices","filters":"mutate::Karpenter, EKS Best Practices::1.6.0::Pod","link":"/policies/karpenter/add-karpenter-nodeselector/add-karpenter-nodeselector/","policy":"mutate","subject":"Pod","title":"Add Karpenter nodeSelector","version":"1.6.0"},{"body":"Labels are used as an important source of metadata describing objects in various ways or triggering other functionality. Labels are also a very basic concept and should be used throughout Kubernetes. This policy performs a simple mutation which adds a label `foo=bar` to Pods, Services, ConfigMaps, and Secrets.\n","category":"Sample","filters":"mutate::Sample::1.6.0::Label","link":"/policies/other/add-labels/add-labels/","policy":"mutate","subject":"Label","title":"Add Labels","version":"1.6.0"},{"body":"Sidecar proxy injection in Linkerd may be handled at the Namespace level by setting the annotation `linkerd.io/inject` to `enabled`. In addition, a second annotation may be applied which controls the Pod startup behavior. This policy sets the annotations, if not present, `linkerd.io/inject` and `config.linkerd.io/proxy-await` to `enabled` on all new Namespaces.\n","category":"Linkerd","filters":"mutate::Linkerd::%!s(\u003cnil\u003e)::Namespace, Annotation","link":"/policies/linkerd/add-linkerd-mesh-injection/add-linkerd-mesh-injection/","policy":"mutate","subject":"Namespace, Annotation","title":"Add Linkerd Mesh Injection","version":null},{"body":"Linkerd will, by default, allow all incoming traffic to Pods in the mesh including that from outside the cluster network. In many cases, this default needs to be changed to deny all traffic so it may be selectively opened using Linkerd policy objects. This policy sets the annotation `config.linkerd.io/default-inbound-policy` to `deny`, if not present, for new Namespaces. It can be customized with exclusions to more tightly control its application.\n","category":"Linkerd","filters":"mutate::Linkerd::%!s(\u003cnil\u003e)::Namespace,Annotation","link":"/policies/linkerd/add-linkerd-policy-annotation/add-linkerd-policy-annotation/","policy":"mutate","subject":"Namespace,Annotation","title":"Add Linkerd Policy Annotation","version":null},{"body":"The ndots value controls where DNS lookups are first performed in a cluster and needs to be set to a lower value than the default of 5 in some cases. This policy mutates all Pods to add the ndots option with a value of 1.\n","category":"Sample","filters":"mutate::Sample::1.6.0::Pod","link":"/policies/other/add-ndots/add-ndots/","policy":"mutate","subject":"Pod","title":"Add ndots","version":"1.6.0"},{"body":"By default, Kubernetes allows communications across all Pods within a cluster. The NetworkPolicy resource and a CNI plug-in that supports NetworkPolicy must be used to restrict communications. A default NetworkPolicy should be configured for each Namespace to default deny all ingress and egress traffic to the Pods in the Namespace. Application teams can then configure additional NetworkPolicy resources to allow desired traffic to application Pods from select sources. This policy will create a new NetworkPolicy resource named `default-deny` which will deny all traffic anytime a new Namespace is created.\n","category":"Multi-Tenancy, EKS Best Practices","filters":"generate::Multi-Tenancy, EKS Best Practices::1.6.0::NetworkPolicy","link":"/policies/best-practices/add-network-policy/add-network-policy/","policy":"generate","subject":"NetworkPolicy","title":"Add Network Policy","version":"1.6.0"},{"body":"By default, Kubernetes allows communication across all Pods within a cluster. The NetworkPolicy resource and a CNI plug-in that supports NetworkPolicy must be used to restrict communication. A default NetworkPolicy should be configured for each Namespace to deny all egress traffic from the Pods while still allowing DNS resolution. Application teams can then configure additional NetworkPolicy resources to allow desired traffic to application Pods from select sources. This policy will create a new NetworkPolicy resource named `allow-dns` when a new Namespace is created,  which will deny all egress traffic while still allowing DNS queries to the kube-system Namespace.\n","category":"Multi-Tenancy, EKS Best Practices","filters":"generate::Multi-Tenancy, EKS Best Practices::1.6.0::NetworkPolicy","link":"/policies/best-practices/add-networkpolicy-dns/add-networkpolicy-dns/","policy":"generate","subject":"NetworkPolicy","title":"Add Network Policy for DNS","version":"1.6.0"},{"body":"Node affinity, similar to node selection, is a way to specify which node(s) on which Pods will be scheduled but based on more complex conditions. This policy will add node affinity to a Deployment and if one already exists an expression will be added to it.\n","category":"Other","filters":"mutate::Other::%!s(\u003cnil\u003e)::Deployment","link":"/policies/other/add-node-affinity/add-node-affinity/","policy":"mutate","subject":"Deployment","title":"Add Node Affinity","version":null},{"body":"The nodeSelector field uses labels to select the node on which a Pod can be scheduled. This can be useful when Pods have specific needs that only certain nodes in a cluster can provide. This policy adds the nodeSelector field to a Pod spec and configures it with labels `foo` and `color`.\n","category":"Sample","filters":"mutate::Sample::1.6.0::Pod","link":"/policies/other/add-nodeselector/add-nodeselector/","policy":"mutate","subject":"Pod","title":"Add nodeSelector","version":"1.6.0"},{"body":"Applications may involve multiple replicas of the same Pod for availability as well as scale purposes, yet Kubernetes does not by default provide a solution for availability. This policy sets a Pod anti-affinity configuration on Deployments which contain an `app` label if it is not already present.\n","category":"Sample","filters":"mutate::Sample::1.6.0::Deployment, Pod","link":"/policies/other/create-pod-antiaffinity/create-pod-antiaffinity/","policy":"mutate","subject":"Deployment, Pod","title":"Add Pod Anti-Affinity","version":"1.6.0"},{"body":"A PodDisruptionBudget limits the number of Pods of a replicated application that are down simultaneously from voluntary disruptions. For example, a quorum-based application would like to ensure that the number of replicas running is never brought below the number needed for a quorum. As an application owner, you can create a PodDisruptionBudget (PDB) for each application. This policy will create a PDB resource whenever a new Deployment is created.\n","category":"Sample","filters":"generate::Sample::1.6.0::Deployment","link":"/policies/other/create-default-pdb/create-default-pdb/","policy":"generate","subject":"Deployment","title":"Add Pod Disruption Budget","version":"1.6.0"},{"body":"A Pod PriorityClass is used to provide a guarantee on the scheduling of a Pod relative to others. This policy adds the priorityClassName of `non-production` to any Pod controller deployed into a Namespace that does not have the label env=production.\n","category":"Other","filters":"mutate::Other::1.6.0::Pod","link":"/policies/other/add-pod-priorityclassname/add-pod-priorityclassname/","policy":"mutate","subject":"Pod","title":"Add Pod priorityClassName","version":"1.6.0"},{"body":"In restricted environments, Pods may not be allowed to egress directly to all destinations and some overrides to specific addresses may need to go through a corporate proxy. This policy adds proxy information to Pods in the form of environment variables. It will add the `env` array if not present. If any Pods have any of these env vars, they will be overwritten with the value(s) in this policy.\n","category":"Sample","filters":"mutate::Sample::1.6.0::Pod","link":"/policies/other/add-pod-proxies/add-pod-proxies/","policy":"mutate","subject":"Pod","title":"Add Pod Proxies","version":"1.6.0"},{"body":"This Policy mutates secretRef key to add a prefix. External Secret Operator proposes to use kyverno to force ExternalSecrets to have namespace prefix so that kubernetes administrators do not need to define permissions and users per namespace. Doing this developers are abstracted by administrators naming convention and will not  be able to access secrets from other namespaces. In this example, in the JSON patch change \"prefix-\" to your preferred prefix. For example: {{ request.namespace }}\n","category":"ExternalSecretOperator","filters":"mutate::ExternalSecretOperator::1.6.0::ExternalSecret","link":"/policies/external-secret-operator/add-external-secret-prefix/add-external-secret-prefix/","policy":"mutate","subject":"ExternalSecret","title":"Add prefix to external secret","version":"1.6.0"},{"body":"When a DaemonSet is added to a cluster every node will get a new pod. There may not be  enough room for this on every node. Karpenter cannot provision extra nodes just for the  DaemonSet because the new pods are not scheduled the way regular pods are. It would require parallel scheduling logic that is not proper to Kubernetes. Therefore, eviction of regular  pods should happen instead. This can be achieved with the priority class system-node-critical.\n","category":"Karpenter","filters":"mutate::Karpenter::1.6.0::DaemonSet","link":"/policies/karpenter/add-karpenter-daemonset-priority-class/add-karpenter-daemonset-priority-class/","policy":"mutate","subject":"DaemonSet","title":"Add priority class for DaemonSets to help Karpenter.","version":"1.6.0"},{"body":"When Pod Security Admission is configured with a cluster-wide AdmissionConfiguration file which sets either baseline or restricted, for example in many PaaS CIS profiles, it may be necessary to relax this to privileged on a per-Namespace basis so that more granular control can be provided. This policy labels new and existing Namespaces, except that of kube-system, with the `pod-security.kubernetes.io/enforce: privileged` label.\n","category":"Pod Security Admission","filters":"mutate::Pod Security Admission::1.7.0::Namespace","link":"/policies/psa/add-privileged-existing-namespaces/add-privileged-existing-namespaces/","policy":"mutate","subject":"Namespace","title":"Add Privileged Label to Existing Namespaces","version":"1.7.0"},{"body":"Pod Security Admission (PSA) can be controlled via the assignment of labels at the Namespace level which define the Pod Security Standard (PSS) profile in use and the action to take. If not using a cluster-wide configuration via an AdmissionConfiguration file, Namespaces must be explicitly labeled. This policy assigns the labels `pod-security.kubernetes.io/enforce=baseline` and `pod-security.kubernetes.io/warn=restricted` to all new Namespaces if those labels are not included.\n","category":"Pod Security Admission, EKS Best Practices","filters":"mutate::Pod Security Admission, EKS Best Practices::1.6.0::Namespace","link":"/policies/psa/add-psa-labels/add-psa-labels/","policy":"mutate","subject":"Namespace","title":"Add PSA Labels","version":"1.6.0"},{"body":"This policy is valuable as it ensures that all namespaces within a Kubernetes  cluster are labeled with Pod Security Admission (PSA) labels, which are crucial for defining security levels and ensuring that pods within a namespace operate  under the defined Pod Security Standard (PSS). By enforcing namespace labeling, This policy audits namespaces to verify the presence of PSA labels.  If a namespace is found without the required labels, it generates and maintain  and ClusterPolicy Report in default namespace.  This helps administrators identify namespaces that do not comply with the  organization's security practices and take appropriate action to rectify the  situation.\n","category":"Pod Security Admission, EKS Best Practices","filters":"validate::Pod Security Admission, EKS Best Practices::1.6.0::Namespace","link":"/policies/psa/add-psa-namespace-reporting/add-psa-namespace-reporting/","policy":"validate","subject":"Namespace","title":"Add PSA Namespace Reporting","version":"1.6.0"},{"body":"This policy is valuable as it ensures that all namespaces within a Kubernetes  cluster are labeled with Pod Security Admission (PSA) labels, which are crucial for defining security levels and ensuring that pods within a namespace operate  under the defined Pod Security Standard (PSS). By enforcing namespace labeling, This policy audits namespaces to verify the presence of PSA labels.  If a namespace is found without the required labels, it generates and maintain  and ClusterPolicy Report in default namespace.  This helps administrators identify namespaces that do not comply with the  organization's security practices and take appropriate action to rectify the  situation.\n","category":"Pod Security Admission, EKS Best Practices in CEL","filters":"validate::Pod Security Admission, EKS Best Practices in CEL::1.11.0::Namespace","link":"/policies/psa-cel/add-psa-namespace-reporting/add-psa-namespace-reporting/","policy":"validate","subject":"Namespace","title":"Add PSA Namespace Reporting in CEL expressions","version":"1.11.0"},{"body":"To better control the number of resources that can be created in a given Namespace and provide default resource consumption limits for Pods, ResourceQuota and LimitRange resources are recommended. This policy will generate ResourceQuota and LimitRange resources when a new Namespace is created.\n","category":"Multi-Tenancy, EKS Best Practices","filters":"generate::Multi-Tenancy, EKS Best Practices::1.6.0::ResourceQuota, LimitRange","link":"/policies/best-practices/add-ns-quota/add-ns-quota/","policy":"generate","subject":"ResourceQuota, LimitRange","title":"Add Quota","version":"1.6.0"},{"body":"Typically in multi-tenancy and other use cases, when a new Namespace is created, users and other principals must be given some permissions to create and interact with resources in the Namespace. Very commonly, Roles and RoleBindings are used to grant permissions at the Namespace level. This policy generates a RoleBinding called `\u003cuserName\u003e-admin-binding` in the new Namespace which binds to the ClusterRole `admin` as long as a `cluster-admin` did not create the Namespace. Additionally, an annotation named `kyverno.io/user` is added to the RoleBinding recording the name of the user responsible for the Namespace's creation.\n","category":"Multi-Tenancy","filters":"generate::Multi-Tenancy::1.6.0::RoleBinding","link":"/policies/best-practices/add-rolebinding/add-rolebinding/","policy":"generate","subject":"RoleBinding","title":"Add RoleBinding","version":"1.6.0"},{"body":"In the earlier Pod Security Policy controller, it was possible to configure a policy to add a Pod's runtimeClassName. This was beneficial in that various container runtimes could be specified according to a policy. This Kyverno policies mutates Pods to add a runtimeClassName of `prodclass`.\n","category":"PSP Migration","filters":"mutate::PSP Migration::%!s(\u003cnil\u003e)::Pod","link":"/policies/psp-migration/add-runtimeclassname/add-runtimeclassname/","policy":"mutate","subject":"Pod","title":"Add runtimeClassName","version":null},{"body":"The Kubernetes cluster autoscaler does not evict pods that  use hostPath or emptyDir volumes. To allow eviction of these pods, the annotation  cluster-autoscaler.kubernetes.io/safe-to-evict=true must be added to the pods. \n","category":"Other","filters":"mutate::Other::1.6.0::Pod,Annotation","link":"/policies/best-practices/add-safe-to-evict/add-safe-to-evict/","policy":"mutate","subject":"Pod,Annotation","title":"Add Safe To Evict","version":"1.6.0"},{"body":"Containers running in Pods may sometimes need access to node-specific information on  which the Pod has been scheduled. A common use case is node topology labels to ensure  pods are spread across failure zones in racks or in the cloud. The mutate-pod-binding policy already does this for annotations, but it does not handle labels. A useful use case is for passing metric label information to ServiceMonitors and then into Prometheus. This policy watches for Pod binding events when the pod is scheduled and then  asynchronously mutates the existing Pod to add the labels. This policy requires the following changes to common default configurations: - The kyverno resourceFilter should not filter Pod/binding resources. - The kyverno backgroundController service account requires Update permission on pods.  It is recommended to use https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles \n","category":"Other","filters":"mutate::Other::1.10.0::Pod","link":"/policies/other/add-node-labels-pod/add-node-labels-pod/","policy":"mutate","subject":"Pod","title":"Add scheduled Node's labels to a Pod","version":"1.10.0"},{"body":"Add an SSH Service to every VirtualMachineInstance which is getting created. This Service will use a ClusterIP, thus the admin has to ensure that the IP space is large enough and ClusterIP type can be met.\n","category":"KubeVirt","filters":"generate::KubeVirt::%!s(\u003cnil\u003e)::VirtualMachineInstance","link":"/policies/kubevirt/add-services/add-services/","policy":"generate","subject":"VirtualMachineInstance","title":"Add Services","version":null},{"body":"Pod tolerations are used to schedule on Nodes which have a matching taint. This policy adds the toleration `org.com/role=service:NoSchedule` if existing tolerations do not contain the key `org.com/role`.\n","category":"Other","filters":"mutate::Other::1.6.0::Pod","link":"/policies/other/add-tolerations/add-tolerations/","policy":"mutate","subject":"Pod","title":"Add Tolerations","version":"1.6.0"},{"body":"Jobs which are user created can often pile up and consume excess space in the cluster. In Kubernetes 1.23, the TTL-after-finished controller is stable and will automatically clean up these Jobs if the ttlSecondsAfterFinished is specified. This policy adds the ttlSecondsAfterFinished field to an Job that does not have an ownerReference set if not already specified.\n","category":"Other","filters":"mutate::Other::1.6.0::Job","link":"/policies/other/add-ttl-jobs/add-ttl-jobs/","policy":"mutate","subject":"Job","title":"Add TTL to Jobs","version":"1.6.0"},{"body":"Some Kubernetes applications like HashiCorp Vault must perform some modifications to resources in order to invoke their specific functionality. Often times, that functionality is controlled by the presence of a label or specific annotation. This policy, based on HashiCorp Vault, adds a volume and volumeMount to a Deployment if there is an annotation called \"vault.k8s.corp.net/inject=enabled\" present.\n","category":"Sample","filters":"mutate::Sample::1.6.0::Deployment, Volume","link":"/policies/other/add-volume-deployment/add-volume-deployment/","policy":"mutate","subject":"Deployment, Volume","title":"Add Volume to Deployment","version":"1.6.0"},{"body":"In instances where a ClusterPolicy defines all the approved image registries is insufficient, more granular control may be needed to set permitted registries, especially in multi-tenant use cases where some registries may be based on the Namespace. This policy shows an advanced version of the Restrict Image Registries policy which gets a global approved registry from a ConfigMap and, based upon an annotation at the Namespace level, gets the registry approved for that Namespace.\n","category":"Other","filters":"validate::Other::1.6.0::Pod","link":"/policies/other/advanced-restrict-image-registries/advanced-restrict-image-registries/","policy":"validate","subject":"Pod","title":"Advanced Restrict Image Registries","version":"1.6.0"},{"body":"In instances where a ClusterPolicy defines all the approved image registries is insufficient, more granular control may be needed to set permitted registries, especially in multi-tenant use cases where some registries may be based on the Namespace. This policy shows an advanced version of the Restrict Image Registries policy which gets a global approved registry from a ConfigMap and, based upon an annotation at the Namespace level, gets the registry approved for that Namespace.\n","category":"Other in CEL","filters":"validate::Other in CEL::1.11.0::Pod","link":"/policies/other-cel/advanced-restrict-image-registries/advanced-restrict-image-registries/","policy":"validate","subject":"Pod","title":"Advanced Restrict Image Registries in CEL expressions","version":"1.11.0"},{"body":"Kubernetes Nodes, in addition to standard compute resources like CPU and memory, may offer extended resources such as FPGAs and GPUs, both of which can be defined per custom design. These extended resources are advertised in the `status` object of a Node. This policy, functional only starting in Kyverno 1.9, adds the extended resource `example.com/dongle` with a value/capacity of `2` to Kubernetes Nodes.\n","category":"Other","filters":"mutate::Other::1.9.0::Node","link":"/policies/other/advertise-node-extended-resources/advertise-node-extended-resources/","policy":"mutate","subject":"Node","title":"Advertise Node Extended Resources","version":"1.9.0"},{"body":"Rather than creating a deny list of annotations, it may be more useful to invert that list and create an allow list which then denies any others. This policy demonstrates how to allow two annotations with a specific key name of fluxcd.io/ while denying others that do not meet the pattern.\n","category":"Other","filters":"validate::Other::1.6.0::Pod, Annotation","link":"/policies/other/allowed-annotations/allowed-annotations/","policy":"validate","subject":"Pod, Annotation","title":"Allowed Annotations","version":"1.6.0"},{"body":"Rather than creating a deny list of annotations, it may be more useful to invert that list and create an allow list which then denies any others. This policy demonstrates how to allow two annotations with a specific key name of fluxcd.io/ while denying others that do not meet the pattern.\n","category":"Other in CEL","filters":"validate::Other in CEL::1.11.0::Pod, Annotation","link":"/policies/other-cel/allowed-annotations/allowed-annotations/","policy":"validate","subject":"Pod, Annotation","title":"Allowed Annotations in CEL expressions","version":"1.11.0"},{"body":"Building images which specify a base as their origin is a good start to improving supply chain security, but over time organizations may want to build an allow list of specific base images which are allowed to be used when constructing containers. This policy ensures that a container's base, found in an OCI annotation, is in a cluster-wide allow list.\n","category":"Other","filters":"validate::Other::1.7.0::Pod","link":"/policies/other/allowed-base-images/allowed-base-images/","policy":"validate","subject":"Pod","title":"Allowed Base Images","version":"1.7.0"},{"body":"In addition to restricting the image registry from which images are pulled, in some cases and environments it may be required to also restrict which image repositories are used,  for example in some restricted Namespaces. This policy ensures that the only allowed image repositories present in a given Pod, across any container type, come from the designated list.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::Pod","link":"/policies/other/allowed-image-repos/allowed-image-repos/","policy":"validate","subject":"Pod","title":"Allowed Image Repositories","version":null},{"body":"In some cases, operations teams need a type of limited access to change resources during troubleshooting or outage mitigation. This policy demonstrates how to prevent modification to labels except one with the key `breakglass`. Changing, adding, or deleting any other labels is denied.\n","category":"Other","filters":"validate::Other::1.6.0::Pod,Label","link":"/policies/other/allowed-label-changes/allowed-label-changes/","policy":"validate","subject":"Pod,Label","title":"Allowed Label Changes","version":"1.6.0"},{"body":"A Pod PriorityClass is used to provide a guarantee on the scheduling of a Pod relative to others. In certain cases where not all users in a cluster are trusted, a malicious user could create Pods at the highest possible priorities, causing other Pods to be evicted/not get scheduled. This policy checks the defined `priorityClassName` in a Pod spec to a dictionary of allowable PriorityClasses for the given Namespace stored in a ConfigMap. If the `priorityClassName` is not among them, the Pod is blocked.\n","category":"Sample","filters":"validate::Sample::1.6.0::Pod","link":"/policies/other/allowed-pod-priorities/allowed-pod-priorities/","policy":"validate","subject":"Pod","title":"Allowed Pod Priorities","version":"1.6.0"},{"body":"A Pod PriorityClass is used to provide a guarantee on the scheduling of a Pod relative to others. In certain cases where not all users in a cluster are trusted, a malicious user could create Pods at the highest possible priorities, causing other Pods to be evicted/not get scheduled. This policy checks the defined `priorityClassName` in a Pod spec to a dictionary of allowable PriorityClasses for the given Namespace stored in a ConfigMap. If the `priorityClassName` is not among them, the Pod is blocked.\n","category":"Sample in CEL","filters":"validate::Sample in CEL::1.11.0::Pod","link":"/policies/other-cel/allowed-pod-priorities/allowed-pod-priorities/","policy":"validate","subject":"Pod","title":"Allowed Pod Priorities in CEL expressions","version":"1.11.0"},{"body":"By default, images that have already been pulled can be accessed by other Pods without re-pulling them if the name and tag are known. In multi-tenant scenarios, this may be undesirable. This policy mutates all incoming Pods to set their imagePullPolicy to Always. An alternative to the Kubernetes admission controller AlwaysPullImages.\n","category":"Sample","filters":"mutate::Sample::1.6.0::Pod","link":"/policies/other/always-pull-images/always-pull-images/","policy":"mutate","subject":"Pod","title":"Always Pull Images","version":"1.6.0"},{"body":"A base image used to construct a container image is not accessible by any Kubernetes component and not a field in a Pod spec as it must be fetched from a registry. Having this information available in the resource referencing the containers helps to provide a clearer understanding of its contents. This policy adds an annotation to a Pod or its controllers with the base image used for each container if present in an OCI annotation.\n","category":"Other","filters":"mutate::Other::1.7.0::Pod","link":"/policies/other/annotate-base-images/annotate-base-images/","policy":"mutate","subject":"Pod","title":"Annotate Base Images","version":"1.7.0"},{"body":"This policy performs some best practices validation on Application fields. Path or chart must be specified but never both. And destination.name or destination.server must be specified but never both.\n","category":"Argo","filters":"validate::Argo::1.6.0::Application","link":"/policies/argo/application-field-validation/application-field-validation/","policy":"validate","subject":"Application","title":"Application Field Validation","version":"1.6.0"},{"body":"This policy performs some best practices validation on Application fields. Path or chart must be specified but never both. And destination.name or destination.server must be specified but never both.\n","category":"Argo in CEL","filters":"validate::Argo in CEL::1.11.0::Application","link":"/policies/argo-cel/application-field-validation/application-field-validation/","policy":"validate","subject":"Application","title":"Application Field Validation in CEL expressions","version":"1.11.0"},{"body":"Pod Security Standards define the fields and their options which are allowable for Pods to achieve certain security best practices. While these are typically validation policies, workloads will either be accepted or rejected based upon what has already been defined. It is also possible to mutate incoming Pods to achieve the desired PSS level rather than reject. This policy sets all the fields necessary to pass the PSS Restricted profile. Note that it does not attempt to remove non-compliant volumes and volumeMounts. Additional policies may be employed for this purpose.\n","category":"Other, PSP Migration","filters":"mutate::Other, PSP Migration::%!s(\u003cnil\u003e)::Pod","link":"/policies/other/apply-pss-restricted-profile/apply-pss-restricted-profile/","policy":"mutate","subject":"Pod","title":"Apply PSS Restricted Profile","version":null},{"body":"This policy generates and synchronizes Argo CD cluster secrets from Rancher  managed cluster.provisioning.cattle.io/v1 resources and their corresponding CAPI secrets. In this solution, Argo CD integrates with Rancher managed clusters via the central Rancher authentication proxy which shares the network endpoint of the Rancher API/GUI. The policy implements work-arounds for Argo CD issue https://github.com/argoproj/argo-cd/issues/9033 \"Cluster-API cluster auto-registration\" and Rancher issue https://github.com/rancher/rancher/issues/38053 \"Fix type and labels Rancher v2 provisioner specifies when creating CAPI Cluster Secret\".\n","category":"Argo","filters":"generate::Argo::1.7.0::Secret","link":"/policies/argo/argo-cluster-generation-from-rancher-capi/argo-cluster-generation-from-rancher-capi/","policy":"generate","subject":"Secret","title":"Argo Cluster Secret Generation From Rancher CAPI Secret","version":"1.7.0"},{"body":"Kubernetes Events are limited in that the circumstances under which they are created cannot be changed and with what they are associated is fixed. It may be advantageous in many cases to augment these out-of-the-box Events with custom Events which can be custom designed to your needs. This policy generates an Event when a Secret has been deleted. It lists the userInfo of the actor performing the deletion.\n","category":"Other","filters":"generate::Other::1.10.0::Secret","link":"/policies/other/audit-event-on-delete/audit-event-on-delete/","policy":"generate","subject":"Secret","title":"Audit Event on Delete","version":"1.10.0"},{"body":"Kubernetes Events are limited in that the circumstances under which they are created cannot be changed and with what they are associated is fixed. It may be advantageous in many cases to augment these out-of-the-box Events with custom Events which can be custom designed to your needs. This policy generates an Event on a Pod when an exec has been made to it. It lists the userInfo of the actor performing the exec along with the command used in the exec.\n","category":"Other","filters":"generate::Other::1.10.0::Pod","link":"/policies/other/audit-event-on-exec/audit-event-on-exec/","policy":"generate","subject":"Pod","title":"Audit Event on Pod Exec","version":"1.10.0"},{"body":"In order for Velero to backup volumes in a Pod using an opt-in approach, it requires an annotation on the Pod called `backup.velero.io/backup-volumes` with the value being a comma-separated list of the volumes mounted to that Pod. This policy automatically annotates Pods (and Pod controllers) which refer to a PVC so that all volumes are listed in the aforementioned annotation if a Namespace with the label `velero-backup-pvc=true`.\n","category":"Velero","filters":"mutate::Velero::%!s(\u003cnil\u003e)::Pod, Annotation","link":"/policies/velero/backup-all-volumes/backup-all-volumes/","policy":"mutate","subject":"Pod, Annotation","title":"Backup All Volumes","version":null},{"body":"The baseline profile of the Pod Security Standards is a collection of the most basic and important steps that can be taken to secure Pods. Beginning with Kyverno 1.8, an entire profile may be assigned to the cluster through a single rule. This policy configures the baseline profile through the latest version of the Pod Security Standards cluster wide.\n","category":"Pod Security, EKS Best Practices","filters":"validate::Pod Security, EKS Best Practices::1.8.0::Pod","link":"/policies/pod-security/subrule/podsecurity-subrule-baseline/podsecurity-subrule-baseline/","policy":"validate","subject":"Pod","title":"Baseline Pod Security Standards","version":"1.8.0"},{"body":"In some cases, it may be desirable to block operations of certain privileged users (i.e. cluster-admins) in a specific namespace. In this policy, Kyverno will look for all user operations (CREATE, UPDATE, DELETE), on every object kind, in the testnamespace namespace, and for the ClusterRole cluster-admin. The user testuser is also mentioned so it won't include all the cluster-admins in the cluster, but will be flexible enough to apply only for a sub-group of the cluster-admins in the cluster.\n","category":"Other","filters":"validate::Other::1.9.0::Namespace, ClusterRole, User","link":"/policies/other/block-cluster-admin-from-ns/block-cluster-admin-from-ns/","policy":"validate","subject":"Namespace, ClusterRole, User","title":"Block cluster-admin from modifying any object in a Namespace","version":"1.9.0"},{"body":"Ephemeral containers, enabled by default in Kubernetes 1.23, allow users to use the `kubectl debug` functionality and attach a temporary container to an existing Pod. This may potentially be used to gain access to unauthorized information executing inside one or more containers in that Pod. This policy blocks the use of ephemeral containers.\n","category":"Other","filters":"validate::Other::1.6.0::Pod","link":"/policies/other/block-ephemeral-containers/block-ephemeral-containers/","policy":"validate","subject":"Pod","title":"Block Ephemeral Containers","version":"1.6.0"},{"body":"Ephemeral containers, enabled by default in Kubernetes 1.23, allow users to use the `kubectl debug` functionality and attach a temporary container to an existing Pod. This may potentially be used to gain access to unauthorized information executing inside one or more containers in that Pod. This policy blocks the use of ephemeral containers.\n","category":"Other in CEL","filters":"validate::Other in CEL::1.11.0::Pod","link":"/policies/other-cel/block-ephemeral-containers/block-ephemeral-containers/","policy":"validate","subject":"Pod","title":"Block Ephemeral Containers in CEL expressions","version":"1.11.0"},{"body":"OCI images may optionally be built with VOLUME statements which, if run in read-only mode, would still result in write access to the specified location. This may be unexpected and undesirable. This policy checks the contents of every container image and inspects them for such VOLUME statements, then blocks if found.\n","category":"Other","filters":"validate::Other::1.6.0::Pod","link":"/policies/other/block-images-with-volumes/block-images-with-volumes/","policy":"validate","subject":"Pod","title":"Block Images with Volumes","version":"1.6.0"},{"body":"Pods which run containers of very large image size take longer to pull and require more space to store. A user may either inadvertently or purposefully name an image which is unusually large to disrupt operations. This policy checks the size of every container image and blocks if it is over 2 Gibibytes.\n","category":"Other","filters":"validate::Other::1.6.0::Pod","link":"/policies/other/block-large-images/block-large-images/","policy":"validate","subject":"Pod","title":"Block Large Images","version":"1.6.0"},{"body":"The `exec` command may be used to gain shell access, or run other commands, in a Pod's container. While this can be useful for troubleshooting purposes, it could represent an attack vector and is discouraged. This policy blocks Pod exec commands based upon a Namespace label `exec=false`.\n","category":"Sample","filters":"validate::Sample::1.6.0::Pod","link":"/policies/other/block-pod-exec-by-namespace-label/block-pod-exec-by-namespace-label/","policy":"validate","subject":"Pod","title":"Block Pod Exec by Namespace Label","version":"1.6.0"},{"body":"The `exec` command may be used to gain shell access, or run other commands, in a Pod's container. While this can be useful for troubleshooting purposes, it could represent an attack vector and is discouraged. This policy blocks Pod exec commands to Pods in a Namespace called `pci`.\n","category":"Sample","filters":"validate::Sample::1.6.0::Pod","link":"/policies/other/block-pod-exec-by-namespace/block-pod-exec-by-namespace/","policy":"validate","subject":"Pod","title":"Block Pod Exec by Namespace Name","version":"1.6.0"},{"body":"The `exec` command may be used to gain shell access, or run other commands, in a Pod's container. While this can be useful for troubleshooting purposes, it could represent an attack vector and is discouraged. This policy blocks Pod exec commands to containers named `nginx` in Pods starting with name `myapp-maintenance`.\n","category":"Sample","filters":"validate::Sample::1.6.0::Pod","link":"/policies/other/block-pod-exec-by-pod-and-container/block-pod-exec-by-pod-and-container/","policy":"validate","subject":"Pod","title":"Block Pod Exec by Pod and Container","version":"1.6.0"},{"body":"The `exec` command may be used to gain shell access, or run other commands, in a Pod's container. While this can be useful for troubleshooting purposes, it could represent an attack vector and is discouraged. This policy blocks Pod exec commands to Pods having the label `exec=false`.\n","category":"Sample","filters":"validate::Sample::1.6.0::Pod","link":"/policies/other/block-pod-exec-by-pod-label/block-pod-exec-by-pod-label/","policy":"validate","subject":"Pod","title":"Block Pod Exec by Pod Label","version":"1.6.0"},{"body":"The `exec` command may be used to gain shell access, or run other commands, in a Pod's container. While this can be useful for troubleshooting purposes, it could represent an attack vector and is discouraged. This policy blocks Pod exec commands to Pods beginning with the name `myapp-maintenance-`.\n","category":"Sample","filters":"validate::Sample::1.6.0::Pod","link":"/policies/other/block-pod-exec-by-pod-name/block-pod-exec-by-pod-name/","policy":"validate","subject":"Pod","title":"Block Pod Exec by Pod Name","version":"1.6.0"},{"body":"Images that are old usually have some open security vulnerabilities which are not patched. This policy checks the contents of every container image and inspects them for the create time. If it finds any image which was built more than 6 months ago this policy blocks the deployment.\n","category":"Other","filters":"validate::Other::1.6.0::Pod","link":"/policies/other/block-stale-images/block-stale-images/","policy":"validate","subject":"Pod","title":"Block Stale Images","version":"1.6.0"},{"body":"Restrict creation of TaskRun resources to the Tekton pipelines controller.\n","category":"Tekton","filters":"validate::Tekton::1.6.0::TaskRun","link":"/policies/tekton/block-tekton-task-runs/block-tekton-task-runs/","policy":"validate","subject":"TaskRun","title":"Block Tekton TaskRun","version":"1.6.0"},{"body":"Restrict creation of TaskRun resources to the Tekton pipelines controller.\n","category":"Tekton in CEL","filters":"validate::Tekton in CEL::1.11.0::TaskRun","link":"/policies/tekton-cel/block-tekton-task-runs/block-tekton-task-runs/","policy":"validate","subject":"TaskRun","title":"Block Tekton TaskRun in CEL expressions","version":"1.11.0"},{"body":"Kubernetes RBAC allows for controls on kinds of resources or those with specific names. But it does not have the type of granularity often required in more complex environments. This policy restricts updates and deletes to any Service resource that contains the label `protected=true` unless by a cluster-admin.\n","category":"Sample","filters":"validate::Sample::%!s(\u003cnil\u003e)::RBAC","link":"/policies/other/block-updates-deletes/block-updates-deletes/","policy":"validate","subject":"RBAC","title":"Block Updates and Deletes","version":null},{"body":"Velero allows on backup and restore operations and is designed to be run with full cluster admin permissions. It allows on cross namespace restore operations, which means you can restore backup of namespace A to namespace B. This policy protect restore operation into system or any protected namespaces, listed in deny condition section.  It checks the Restore CRD object and its namespaceMapping field. If destination match protected namespace then operation fails and warning message is throw.\n","category":"Velero","filters":"validate::Velero::%!s(\u003cnil\u003e)::Restore","link":"/policies/velero/block-velero-restore/block-velero-restore/","policy":"validate","subject":"Restore","title":"Block Velero Restore to Protected Namespace","version":null},{"body":"Velero allows on backup and restore operations and is designed to be run with full cluster admin permissions. It allows on cross namespace restore operations, which means you can restore backup of namespace A to namespace B. This policy protect restore operation into system or any protected namespaces, listed in deny condition section.  It checks the Restore CRD object and its namespaceMapping field. If destination match protected namespace then operation fails and warning message is throw.\n","category":"Velero in CEL","filters":"validate::Velero in CEL::%!s(\u003cnil\u003e)::Restore","link":"/policies/velero-cel/block-velero-restore/block-velero-restore/","policy":"validate","subject":"Restore","title":"Block Velero Restore to Protected Namespace in CEL expressions","version":null},{"body":"Kubernetes managed non-letsencrypt certificates have to be renewed in every 100 days.\n","category":"Cert-Manager","filters":"validate::Cert-Manager::1.6.0::Certificate","link":"/policies/cert-manager/limit-duration/limit-duration/","policy":"validate","subject":"Certificate","title":"Certificate max duration 100 days","version":"1.6.0"},{"body":"The Default DNS policy in Kubernetes gives the flexibility of service  access; however, it costs some latency on a high scale, and it needs to  be optimized. This policy helps us to optimize the performance of DNS  queries by setting DNS Options, nodelocalDNS IP, and search Domains. This policy can be applied for the clusters provisioned by kubeadm.\n","category":"Other","filters":"mutate::Other::%!s(\u003cnil\u003e)::Pod","link":"/policies/other/dns-policy-and-dns-config/dns-policy-and-dns-config/","policy":"mutate","subject":"Pod","title":"Change DNS Config and Policy","version":null},{"body":"Check the 'dataprotection' label for production Deployments and StatefulSet workloads. Use in combination with 'kasten-generate-example-backup-policy' policy to generate a Kasten policy for the workload namespace, if it doesn't already exist.\n","category":"Veeam Kasten","filters":"validate::Veeam Kasten::1.6.2::Deployment, StatefulSet","link":"/policies/kasten/kasten-data-protection-by-label/kasten-data-protection-by-label/","policy":"validate","subject":"Deployment, StatefulSet","title":"Check Data Protection By Label","version":"1.6.2"},{"body":"Check the 'dataprotection' label that production Deployments and StatefulSet have a named K10 Policy. Use in combination with 'generate' ClusterPolicy to 'generate' a specific K10 Policy by name.\n","category":"Kasten K10 by Veeam in CEL","filters":"validate::Kasten K10 by Veeam in CEL::1.11.0::Deployment, StatefulSet","link":"/policies/kasten-cel/k10-data-protection-by-label/k10-data-protection-by-label/","policy":"validate","subject":"Deployment, StatefulSet","title":"Check Data Protection By Label in CEL expressions","version":"1.11.0"},{"body":"Kubernetes APIs are sometimes deprecated and removed after a few releases. As a best practice, older API versions should be replaced with newer versions. This policy validates for APIs that are deprecated or scheduled for removal. Note that checking for some of these resources may require modifying the Kyverno ConfigMap to remove filters. In the validate-v1-22-removals rule, the Lease kind has been commented out due to a check for this kind having a performance penalty on Kubernetes clusters with many leases. Its enabling should be attended carefully and is not recommended on large clusters. PodSecurityPolicy is removed in v1.25 so therefore the validate-v1-25-removals rule may not completely work on 1.25+. This policy requires Kyverno v1.7.4+ to function properly.\n","category":"Best Practices","filters":"validate::Best Practices::1.7.4::Kubernetes APIs","link":"/policies/best-practices/check-deprecated-apis/check-deprecated-apis/","policy":"validate","subject":"Kubernetes APIs","title":"Check deprecated APIs","version":"1.7.4"},{"body":"Kubernetes APIs are sometimes deprecated and removed after a few releases. As a best practice, older API versions should be replaced with newer versions. This policy validates for APIs that are deprecated or scheduled for removal. Note that checking for some of these resources may require modifying the Kyverno ConfigMap to remove filters. PodSecurityPolicy is removed in v1.25 so therefore the validate-v1-25-removals rule may not completely work on 1.25+.\n","category":"Best Practices in CEL","filters":"validate::Best Practices in CEL::%!s(\u003cnil\u003e)::Kubernetes APIs","link":"/policies/best-practices-cel/check-deprecated-apis/check-deprecated-apis/","policy":"validate","subject":"Kubernetes APIs","title":"Check deprecated APIs in CEL expressions","version":null},{"body":"Environment variables control many aspects of a container's execution and are often the source of many different configuration settings. Being able to ensure that the value of a specific environment variable either is or is not set to a specific string is useful to maintain such controls. This policy checks every container to ensure that if the `DISABLE_OPA` environment variable is defined, it must not be set to a value of `\"true\"`.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::Pod","link":"/policies/other/check-env-vars/check-env-vars/","policy":"validate","subject":"Pod","title":"Check Environment Variables","version":null},{"body":"Environment variables control many aspects of a container's execution and are often the source of many different configuration settings. Being able to ensure that the value of a specific environment variable either is or is not set to a specific string is useful to maintain such controls. This policy checks every container to ensure that if the `DISABLE_OPA` environment variable is defined, it must not be set to a value of `\"true\"`.\n","category":"Other in CEL","filters":"validate::Other in CEL::%!s(\u003cnil\u003e)::Pod","link":"/policies/other-cel/check-env-vars/check-env-vars/","policy":"validate","subject":"Pod","title":"Check Environment Variables in CEL expressions","version":null},{"body":"VerticalPodAutoscaler (VPA) is useful to automatically adjust the resources assigned to Pods.  It requires defining a specific target resource by kind and name. There are no built-in  validation checks by the VPA controller to ensure that the target resource is associated with it.  This policy ensures that the matching kind has a matching VPA. \n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::Deployment, StatefulSet, ReplicaSet, DaemonSet, VerticalPodAutoscaler","link":"/policies/other/check-vpa-configuration/check-vpa-configuration/","policy":"validate","subject":"Deployment, StatefulSet, ReplicaSet, DaemonSet, VerticalPodAutoscaler","title":"Check for matching VerticalPodAutoscaler (VPA)","version":null},{"body":"K10 Policy resources can be educated to adhere to common Recovery Point Objective (RPO) best practices.  This policy is advising to use an RPO frequency that with hourly granularity if it has the appPriority: Mission Critical\n","category":"Kasten K10 by Veeam in CEL","filters":"validate::Kasten K10 by Veeam in CEL::1.11.0::Policy","link":"/policies/kasten-cel/k10-hourly-rpo/k10-hourly-rpo/","policy":"validate","subject":"Policy","title":"Check Hourly RPO in CEL expressions","version":"1.11.0"},{"body":"Container images can be built from a variety of sources, including other preexisting images. Ensuring images that are allowed to run are built from known, trusted images where their provenance is guaranteed can be an important step in ensuring overall cluster security. This policy ensures that any container image specifies some base image in its metadata from four possible sources: Docker BuildKit, OCI annotations (in manifest or config), or Buildpacks. Note that the ability to detect the presence of a base image is not implicit and requires the author to specify it using metadata or build directives of some sort (ex., Dockerfile FROM statements do not automatically expose this information).\n","category":"Other, EKS Best Practices","filters":"validate::Other, EKS Best Practices::1.7.0::Pod","link":"/policies/other/require-base-image/require-base-image/","policy":"validate","subject":"Pod","title":"Check Image Base","version":"1.7.0"},{"body":"The 3-2-1 rule of data protection recommends that you have at least 3 copies of data, on 2 different storage targets, with 1 being offsite. This approach ensures a health mix of redundancy options for data recovery of the application for localized \u0026 multi-region cloud failures or compromise. In Kubernetes, this translates to the original running resources, a local snapshot, and a copy of all application resources and volume data exported to an external repository. This policy accomplishes 3-2-1 validation by ensuring each policy contains both 'action: backup' and 'action: export'.\n","category":"Veeam Kasten","filters":"validate::Veeam Kasten::1.12.0::Policy","link":"/policies/kasten/kasten-3-2-1-backup/kasten-3-2-1-backup/","policy":"validate","subject":"Policy","title":"Check Kasten 3-2-1 Backup Policy","version":"1.12.0"},{"body":"Ensure Kasten Location Profiles have enabled immutability to prevent unintentional or malicious changes to backup data.\n","category":"Veeam Kasten","filters":"validate::Veeam Kasten::1.6.0::config.kio.kasten.io/v1alpha1/Profile","link":"/policies/kasten/kasten-immutable-location-profile/kasten-immutable-location-profile/","policy":"validate","subject":"config.kio.kasten.io/v1alpha1/Profile","title":"Check Kasten Location Profile is Immutable","version":"1.6.0"},{"body":"Kasten Policy resources can be required to adhere to common Recovery Point Objective (RPO) best practices.  This example policy validates that the Policy is set to run hourly if it explicitly protects any namespaces containing the `appPriority=critical` label. This policy can be adapted to enforce any Kasten Policy requirements based on a namespace label.\n","category":"Veeam Kasten","filters":"validate::Veeam Kasten::1.12.0::Policy","link":"/policies/kasten/kasten-hourly-rpo/kasten-hourly-rpo/","policy":"validate","subject":"Policy","title":"Check Kasten Policy RPO based on Namespace Label","version":"1.12.0"},{"body":"As of Linkerd 2.12, an AuthorizationPolicy is a resource used to selectively allow traffic to either a Server or HTTPRoute resource. Creating AuthorizationPolicies is needed when a Server exists in order to control what traffic is permitted within the mesh to the Pods selected by the Server or HTTPRoute. This policy, requiring Linkerd 2.12+, checks incoming AuthorizationPolicy resources to ensure that either a matching Server or HTTPRoute exists first.\n","category":"Linkerd","filters":"validate::Linkerd::%!s(\u003cnil\u003e)::AuthorizationPolicy","link":"/policies/linkerd/check-linkerd-authorizationpolicy/check-linkerd-authorizationpolicy/","policy":"validate","subject":"AuthorizationPolicy","title":"Check Linkerd AuthorizationPolicy","version":null},{"body":"Before version 1.24, Kubernetes automatically generated Secret-based tokens  for ServiceAccounts. To distinguish between automatically generated tokens  and manually created ones, Kubernetes checks for a reference from the  ServiceAccount's secrets field. If the Secret is referenced in the secrets  field, it is considered an auto-generated legacy token. These legacy Tokens can be of security concern and should be audited.\n","category":"Security","filters":"validate::Security::%!s(\u003cnil\u003e)::Secret,ServiceAccount","link":"/policies/other/check-serviceaccount-secrets/check-serviceaccount-secrets/","policy":"validate","subject":"Secret,ServiceAccount","title":"Check Long-Lived Secrets in ServiceAccounts","version":null},{"body":"Before version 1.24, Kubernetes automatically generated Secret-based tokens  for ServiceAccounts. To distinguish between automatically generated tokens  and manually created ones, Kubernetes checks for a reference from the  ServiceAccount's secrets field. If the Secret is referenced in the secrets  field, it is considered an auto-generated legacy token. These legacy Tokens can be of security concern and should be audited.\n","category":"Security in CEL","filters":"validate::Security in CEL::%!s(\u003cnil\u003e)::Secret,ServiceAccount","link":"/policies/other-cel/check-serviceaccount-secrets/check-serviceaccount-secrets/","policy":"validate","subject":"Secret,ServiceAccount","title":"Check Long-Lived Secrets in ServiceAccounts in CEL expressions","version":null},{"body":"Linux CVE-2022-0185 can allow a container escape in Kubernetes if left unpatched. The affected Linux kernel versions, at this time, are 5.10.84-1 and 5.15.5-2. For more information, refer to https://security-tracker.debian.org/tracker/CVE-2022-0185. This policy runs in background mode and flags an entry in the ClusterPolicyReport if any Node is reporting one of the affected kernel versions.\n","category":"Other","filters":"validate::Other::1.6.0::Node","link":"/policies/other/check-node-for-cve-2022-0185/check-node-for-cve-2022-0185/","policy":"validate","subject":"Node","title":"Check Node for CVE-2022-0185","version":"1.6.0"},{"body":"Linux CVE-2022-0185 can allow a container escape in Kubernetes if left unpatched. The affected Linux kernel versions, at this time, are 5.10.84-1 and 5.15.5-2. For more information, refer to https://security-tracker.debian.org/tracker/CVE-2022-0185. This policy runs in background mode and flags an entry in the ClusterPolicyReport if any Node is reporting one of the affected kernel versions.\n","category":"Other in CEL","filters":"validate::Other in CEL::1.11.0::Node","link":"/policies/other-cel/check-node-for-cve-2022-0185/check-node-for-cve-2022-0185/","policy":"validate","subject":"Node","title":"Check Node for CVE-2022-0185 in CEL expressions","version":"1.11.0"},{"body":"Containers which request use of an NVIDIA GPU often need to be authored to consume them via a CUDA environment variable called NVIDIA_VISIBLE_DEVICES. This policy checks the containers which request a GPU to ensure they have been authored with this environment variable.\n","category":"Other","filters":"validate::Other::1.6.0::Pod","link":"/policies/other/check-nvidia-gpu/check-nvidia-gpu/","policy":"validate","subject":"Pod","title":"Check NVIDIA GPUs","version":"1.6.0"},{"body":"When a Pod controller which can run multiple replicas is subject to an active PodDisruptionBudget, if the replicas field has a value equal to the minAvailable value of the PodDisruptionBudget it may prevent voluntary disruptions including Node drains which may impact routine maintenance tasks and disrupt operations. This policy checks incoming Deployments and StatefulSets which have a matching PodDisruptionBudget to ensure these two values do not match.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::PodDisruptionBudget, Deployment, StatefulSet","link":"/policies/other/pdb-minavailable/pdb-minavailable/","policy":"validate","subject":"PodDisruptionBudget, Deployment, StatefulSet","title":"Check PodDisruptionBudget minAvailable","version":null},{"body":"ServiceAccounts with privileges to create Pods may be able to do so and name a ServiceAccount other than the one used to create it. This policy checks the Pod, if created by a ServiceAccount, and ensures the `serviceAccountName` field matches the actual ServiceAccount.\n","category":"Sample","filters":"validate::Sample::1.6.0::Pod,ServiceAccount","link":"/policies/other/check-serviceaccount/check-serviceaccount/","policy":"validate","subject":"Pod,ServiceAccount","title":"Check ServiceAccount","version":"1.6.0"},{"body":"In some cases a validation check for one type of resource may need to take into consideration the requesting user's permissions on a different type of resource. Rather than parsing through all Roles and/or ClusterRoles to check if these permissions are held, Kyverno can perform a SubjectAccessReview request to the Kubernetes API server and have it figure out those permissions. This policy illustrates how to perform a POST request to the API server to subject a SubjectAccessReview for a user creating/updating a ConfigMap. It is intended to be used as a component in a more functional rule.\n","category":"Other","filters":"validate::Other::1.10.0::SubjectAccessReview","link":"/policies/other/check-subjectaccessreview/check-subjectaccessreview/","policy":"validate","subject":"SubjectAccessReview","title":"Check SubjectAccessReview","version":"1.10.0"},{"body":"Supplemental groups control which group IDs containers add and can coincide with restricted groups on the host. Pod Security Policies (PSP) allowed a range of these group IDs to be specified which were allowed. This policy ensures any Pod may only specify supplementalGroup IDs between 100-200 or 500-600.\n","category":"PSP Migration","filters":"validate::PSP Migration::1.6.0::Pod","link":"/policies/psp-migration/check-supplemental-groups/check-supplemental-groups/","policy":"validate","subject":"Pod","title":"Check supplementalGroups","version":"1.6.0"},{"body":"Supplemental groups control which group IDs containers add and can coincide with restricted groups on the host. Pod Security Policies (PSP) allowed a range of these group IDs to be specified which were allowed. This policy ensures any Pod may only specify supplementalGroup IDs between 100-200 or 500-600.\n","category":"PSP Migration in CEL","filters":"validate::PSP Migration in CEL::1.11.0::Pod","link":"/policies/psp-migration-cel/check-supplemental-groups/check-supplemental-groups/","policy":"validate","subject":"Pod","title":"Check supplementalGroups in CEL expressions","version":"1.11.0"},{"body":"A signed bundle is required and a vulnerability scan made by Grype must return no vulnerabilities greater than 8.0.\n","category":"Tekton","filters":"verifyImages::Tekton::1.7.0::TaskRun","link":"/policies/tekton/verify-tekton-taskrun-vuln-scan/verify-tekton-taskrun-vuln-scan/","policy":"verifyImages","subject":"TaskRun","title":"Check Tekton TaskRun Vulnerability Scan","version":"1.7.0"},{"body":"A bare Pod is any Pod created directly and not owned by a controller such as a Deployment or Job. Bare Pods are often create manually by users in an attempt to troubleshoot an issue. If left in the cluster, they create clutter, increase cost, and can be a security risk. Bare Pods can be cleaned up periodically through use of a policy. This policy finds and removes all bare Pods across the cluster.\n","category":"Other","filters":"cleanUp::Other::1.10.0::Pod","link":"/policies/cleanup/cleanup-bare-pods/cleanup-bare-pods/","policy":"cleanUp","subject":"Pod","title":"Cleanup Bare Pods","version":"1.10.0"},{"body":"ReplicaSets serve as an intermediate controller for various Pod controllers like Deployments. When a new version of a Deployment is initiated, it generates a new ReplicaSet with the specified number of replicas and scales down the current one to zero. Consequently, numerous empty ReplicaSets may accumulate in the cluster, leading to clutter and potential false positives in policy reports if enabled. This cleanup policy is designed to remove empty ReplicaSets across the cluster within a specified timeframe, for instance, ReplicaSets created one day ago, ensuring the ability to rollback to previous ReplicaSets in case of deployment issues\n","category":"Other","filters":"cleanUp::Other::1.9.0::ReplicaSet","link":"/policies/cleanup/cleanup-empty-replicasets/cleanup-empty-replicasets/","policy":"cleanUp","subject":"ReplicaSet","title":"Cleanup Empty ReplicaSets","version":"1.9.0"},{"body":"This policy generates a job which gathers troubleshooting data (including logs, kubectl describe output and events from the namespace) from pods that are in CrashLoopBackOff and have 3 restarts. This data can further be used to automatically create a Jira issue using some kind of automation or another Kyverno policy. For more information on the image used in this policy in addition to the necessary RBAC resources required in order for this policy to operate, see the documentation at https://github.com/nirmata/SRE-Operational-Usecases/tree/main/get-troubleshooting-data/get-debug-data. \n","category":"Other","filters":"generate::Other::%!s(\u003cnil\u003e)::Pod","link":"/policies/other/get-debug-information/get-debug-information/","policy":"generate","subject":"Pod","title":"Collect Debug Information for Pods in CrashLoopBackOff","version":null},{"body":"In some cases, an update to an existing resource should have downstream effects on a different resource in another Namespace. Rather than overwriting the target, the current state of the source can be concatenated to the target. This policy, triggered by an update to a source ConfigMap, concatenates that value of a target ConfigMap in a different Namespace.\n","category":"Other","filters":"mutate::Other::1.7.0::ConfigMap","link":"/policies/other/concatenate-configmaps/concatenate-configmaps/","policy":"mutate","subject":"ConfigMap","title":"Concatenate ConfigMaps","version":"1.7.0"},{"body":"It is common for Namespaced resources to need access to labels which have been assigned to the Namespace in which they reside. This policy demonstrates two different ways of assigning Namespace labels to a Deployment. The first method copies only the `owner` label while the second copies all labels except for `kubernetes.io/metadata.name`.\n","category":"Other","filters":"mutate::Other::%!s(\u003cnil\u003e)::Deployment, Label, Namespace","link":"/policies/other/copy-namespace-labels/copy-namespace-labels/","policy":"mutate","subject":"Deployment, Label, Namespace","title":"Copy Namespace Labels","version":null},{"body":"There are cases where either an operations or security incident may occur and Nodes should be evacuated and placed in an unused state for further analysis. For example, a Node is found to be running a vulnerable version of a CRI engine or kernel and to minimize chances of a compromise may need to be decommissioned so another can be built. This policy shows how to use Kyverno to both cordon and drain a given Node and uses a hypothetical label being written to it called `testing=drain` to illustrate the point. For production use, the match block should be modified to trigger on the appropriate condition.\n","category":"Other","filters":"mutate::Other::1.10.0::Node","link":"/policies/other/cordon-and-drain-node/cordon-and-drain-node/","policy":"mutate","subject":"Node","title":"Cordon and Drain Node","version":"1.10.0"},{"body":"An AuthorizationPolicy enables access controls on workloads in the mesh. It supports per-Namespace controls which can be a union of different behaviors. This policy creates a default deny AuthorizationPolicy for all new Namespaces. Further AuthorizationPolicies should be created to more granularly allow traffic as permitted. Use of this policy will likely require granting the Kyverno ServiceAccount additional privileges required to generate AuthorizationPolicy resources.\n","category":"Istio","filters":"generate::Istio::1.6.0::AuthorizationPolicy","link":"/policies/istio/create-authorizationpolicy/create-authorizationpolicy/","policy":"generate","subject":"AuthorizationPolicy","title":"Create Istio Deny AuthorizationPolicy","version":"1.6.0"},{"body":"Developers may feel compelled to use simple shell commands as a workaround to creating \"proper\" liveness or readiness probes for a Pod. Such a practice can be discouraged via detection of those commands. This policy prevents the use of certain commands `jcmd`, `ps`, or `ls` if found in a Pod's liveness exec probe.\n","category":"Other","filters":"validate::Other::1.9.0::Pod","link":"/policies/other/deny-commands-in-exec-probe/deny-commands-in-exec-probe/","policy":"validate","subject":"Pod","title":"Deny Commands in Exec Probe","version":"1.9.0"},{"body":"Developers may feel compelled to use simple shell commands as a workaround to creating \"proper\" liveness or readiness probes for a Pod. Such a practice can be discouraged via detection of those commands. This policy prevents the use of certain commands `jcmd`, `ps`, or `ls` if found in a Pod's liveness exec probe.\n","category":"Other in CEL","filters":"validate::Other in CEL::1.11.0::Pod","link":"/policies/other-cel/deny-commands-in-exec-probe/deny-commands-in-exec-probe/","policy":"validate","subject":"Pod","title":"Deny Commands in Exec Probe in CEL expressions","version":"1.11.0"},{"body":"This policy denies the creation and updating of resources specifically for Deployment  and Pod kinds during a specified time window. The policy is designed to enhance control  over resource modifications during critical periods, ensuring stability and consistency  within the Kubernetes environment.\n","category":"Other","filters":"validate::Other::1.9.0::Pod","link":"/policies/other/resource-creation-updating-denied/resource-creation-updating-denied/","policy":"validate","subject":"Pod","title":"Deny Creation and Updating of Resources","version":"1.9.0"},{"body":"When Pod Security Admission (PSA) is enforced at the cluster level via an AdmissionConfiguration file which defines a default level at baseline or restricted, setting of a label at the `privileged` profile will effectively cause unrestricted workloads in that Namespace, overriding the cluster default. This may effectively represent a circumvention attempt and should be closely controlled. This policy ensures that only those holding the cluster-admin ClusterRole may create Namespaces which assign the label `pod-security.kubernetes.io/enforce=privileged`.\n","category":"Pod Security Admission","filters":"validate::Pod Security Admission::1.6.0::Namespace","link":"/policies/psa/deny-privileged-profile/deny-privileged-profile/","policy":"validate","subject":"Namespace","title":"Deny Privileged Profile","version":"1.6.0"},{"body":"When Pod Security Admission (PSA) is enforced at the cluster level via an AdmissionConfiguration file which defines a default level at baseline or restricted, setting of a label at the `privileged` profile will effectively cause unrestricted workloads in that Namespace, overriding the cluster default. This may effectively represent a circumvention attempt and should be closely controlled. This policy ensures that only those holding the cluster-admin ClusterRole may create Namespaces which assign the label `pod-security.kubernetes.io/enforce=privileged`.\n","category":"Pod Security Admission in CEL expressions","filters":"validate::Pod Security Admission in CEL expressions::1.11.0::Namespace","link":"/policies/psa-cel/deny-privileged-profile/deny-privileged-profile/","policy":"validate","subject":"Namespace","title":"Deny Privileged Profile in CEL expressions","version":"1.11.0"},{"body":"Before version 1.24, Kubernetes automatically generated Secret-based tokens  for ServiceAccounts. When creating a Secret, you can specify its type using the  type field of the Secret resource . The type kubernetes.io/service-account-token is used for legacy ServiceAccount tokens . These legacy Tokens can be of security concern and should be audited.\n","category":"Security","filters":"validate::Security::%!s(\u003cnil\u003e)::Secret, ServiceAccount","link":"/policies/other/deny-secret-service-account-token-type/deny-secret-service-account-token-type/","policy":"validate","subject":"Secret, ServiceAccount","title":"Deny Secret Service Account Token Type","version":null},{"body":"Before version 1.24, Kubernetes automatically generated Secret-based tokens  for ServiceAccounts. When creating a Secret, you can specify its type using the  type field of the Secret resource . The type kubernetes.io/service-account-token is used for legacy ServiceAccount tokens . These legacy Tokens can be of security concern and should be audited.\n","category":"Security in CEL","filters":"validate::Security in CEL::%!s(\u003cnil\u003e)::Secret, ServiceAccount","link":"/policies/other-cel/deny-secret-service-account-token-type/deny-secret-service-account-token-type/","policy":"validate","subject":"Secret, ServiceAccount","title":"Deny Secret Service Account Token Type in CEL expressions","version":null},{"body":"A new ServiceAccount called `default` is created whenever a new Namespace is created. Pods spawned in that Namespace, unless otherwise set, will be assigned this ServiceAccount. This policy mutates any new `default` ServiceAccounts to disable auto-mounting of the token into Pods obviating the need to do so individually.\n","category":"Other, EKS Best Practices","filters":"mutate::Other, EKS Best Practices::1.6.0::ServiceAccount","link":"/policies/other/disable-automountserviceaccounttoken/disable-automountserviceaccounttoken/","policy":"mutate","subject":"ServiceAccount","title":"Disable automountServiceAccountToken","version":"1.6.0"},{"body":"Not all Pods require communicating with other Pods or resolving in-cluster Services. For those, disabling service discovery can increase security as the Pods are limited to what they can see. This policy mutates Pods to set dnsPolicy to `Default` and enableServiceLinks to `false`.\n","category":"Other, EKS Best Practices","filters":"mutate::Other, EKS Best Practices::1.6.0::Pod","link":"/policies/other/disable-service-discovery/disable-service-discovery/","policy":"mutate","subject":"Pod","title":"Disable Service Discovery","version":"1.6.0"},{"body":"Secrets often contain sensitive information which not all Pods need consume. This policy disables the use of all Secrets in a Pod definition. In order to work effectively, this Policy needs a separate Policy or rule to require `automountServiceAccountToken=false` at the Pod level or ServiceAccount level since this would otherwise result in a Secret being mounted.\n","category":"Other","filters":"validate::Other::1.6.0::Pod, Secret","link":"/policies/other/disallow-all-secrets/disallow-all-secrets/","policy":"validate","subject":"Pod, Secret","title":"Disallow all Secrets","version":"1.6.0"},{"body":"Secrets often contain sensitive information which not all Pods need consume. This policy disables the use of all Secrets in a Pod definition. In order to work effectively, this Policy needs a separate Policy or rule to require `automountServiceAccountToken=false` at the Pod level or ServiceAccount level since this would otherwise result in a Secret being mounted.\n","category":"Other in CEL","filters":"validate::Other in CEL::1.11.0::Pod, Secret","link":"/policies/other-cel/disallow-all-secrets/disallow-all-secrets/","policy":"validate","subject":"Pod, Secret","title":"Disallow all Secrets in CEL expressions","version":"1.11.0"},{"body":"This policy prevents binding to the self-provisioners role for strict control of OpenShift project creation.\n","category":"OpenShift","filters":"validate::OpenShift::1.6.0::ClusterRoleBinding, RBAC","link":"/policies/openshift/disallow-self-provisioner-binding/disallow-self-provisioner-binding/","policy":"validate","subject":"ClusterRoleBinding, RBAC","title":"Disallow binding to self-provisioner cluster role in OpenShift","version":"1.6.0"},{"body":"Adding capabilities beyond those listed in the policy must be disallowed.\n","category":"Pod Security Standards (Baseline)","filters":"validate::Pod Security Standards (Baseline)::1.6.0::Pod","link":"/policies/pod-security/baseline/disallow-capabilities/disallow-capabilities/","policy":"validate","subject":"Pod","title":"Disallow Capabilities","version":"1.6.0"},{"body":"Adding capabilities other than `NET_BIND_SERVICE` is disallowed. In addition, all containers must explicitly drop `ALL` capabilities.\n","category":"Pod Security Standards (Restricted)","filters":"validate::Pod Security Standards (Restricted)::1.6.0::Pod","link":"/policies/pod-security/restricted/disallow-capabilities-strict/disallow-capabilities-strict/","policy":"validate","subject":"Pod","title":"Disallow Capabilities (Strict)","version":"1.6.0"},{"body":"Adding capabilities other than `NET_BIND_SERVICE` is disallowed. In addition, all containers must explicitly drop `ALL` capabilities.\n","category":"Pod Security Standards (Restricted) in CEL","filters":"validate::Pod Security Standards (Restricted) in CEL::1.11.0::Pod","link":"/policies/pod-security-cel/restricted/disallow-capabilities-strict/disallow-capabilities-strict/","policy":"validate","subject":"Pod","title":"Disallow Capabilities (Strict) in CEL expressions","version":"1.11.0"},{"body":"Adding capabilities other than `NET_BIND_SERVICE` is disallowed. In addition, all containers must explicitly drop `ALL` capabilities.\n","category":"Pod Security Standards (Restricted) in ValidatingPolicy","filters":"validate::Pod Security Standards (Restricted) in ValidatingPolicy::1.14.0::Pod","link":"/policies/pod-security-vpol/restricted/disallow-capabilities-strict/disallow-capabilities-strict/","policy":"validate","subject":"Pod","title":"Disallow Capabilities (Strict) in ValidatingPolicy","version":"1.14.0"},{"body":"Adding capabilities beyond those listed in the policy must be disallowed.\n","category":"Pod Security Standards (Baseline) in CEL","filters":"validate::Pod Security Standards (Baseline) in CEL::1.11.0::Pod","link":"/policies/pod-security-cel/baseline/disallow-capabilities/disallow-capabilities/","policy":"validate","subject":"Pod","title":"Disallow Capabilities in CEL expressions","version":"1.11.0"},{"body":"Adding capabilities beyond those listed in the policy must be disallowed.\n","category":"Pod Security Standards (Baseline) in ValidatingPolicy","filters":"validate::Pod Security Standards (Baseline) in ValidatingPolicy::1.14.0::Pod","link":"/policies/pod-security-vpol/baseline/disallow-capabilities/disallow-capabilities/","policy":"validate","subject":"Pod","title":"Disallow Capabilities in ValidatingPolicy","version":"1.14.0"},{"body":"Container daemon socket bind mounts allows access to the container engine on the node. This access can be used for privilege escalation and to manage containers outside of Kubernetes, and hence should not be allowed. This policy validates that the sockets used for CRI engines Docker, Containerd, and CRI-O are not used. In addition to or replacement of this policy, preventing users from mounting the parent directories (/var/run and /var) may be necessary to completely prevent socket bind mounts.\n","category":"Best Practices, EKS Best Practices","filters":"validate::Best Practices, EKS Best Practices::1.6.0::Pod","link":"/policies/best-practices/disallow-cri-sock-mount/disallow-cri-sock-mount/","policy":"validate","subject":"Pod","title":"Disallow CRI socket mounts","version":"1.6.0"},{"body":"Container daemon socket bind mounts allows access to the container engine on the node. This access can be used for privilege escalation and to manage containers outside of Kubernetes, and hence should not be allowed. This policy validates that the sockets used for CRI engines Docker, Containerd, and CRI-O are not used. In addition to or replacement of this policy, preventing users from mounting the parent directories (/var/run and /var) may be necessary to completely prevent socket bind mounts.\n","category":"Best Practices, EKS Best Practices in CEL","filters":"validate::Best Practices, EKS Best Practices in CEL::1.11.0::Pod","link":"/policies/best-practices-cel/disallow-cri-sock-mount/disallow-cri-sock-mount/","policy":"validate","subject":"Pod","title":"Disallow CRI socket mounts in CEL expressions","version":"1.11.0"},{"body":"Users that can create or update ingress objects can use the custom snippets  feature to obtain all secrets in the cluster (CVE-2021-25742). This policy  disables allow-snippet-annotations in the ingress-nginx configuration and  blocks *-snippet annotations on an Ingress. See: https://github.com/kubernetes/ingress-nginx/issues/7837\n","category":"Security, NGINX Ingress","filters":"validate::Security, NGINX Ingress::1.6.0::ConfigMap, Ingress","link":"/policies/nginx-ingress/disallow-ingress-nginx-custom-snippets/disallow-ingress-nginx-custom-snippets/","policy":"validate","subject":"ConfigMap, Ingress","title":"Disallow Custom Snippets","version":"1.6.0"},{"body":"Users that can create or update ingress objects can use the custom snippets  feature to obtain all secrets in the cluster (CVE-2021-25742). This policy  disables allow-snippet-annotations in the ingress-nginx configuration and  blocks *-snippet annotations on an Ingress. See: https://github.com/kubernetes/ingress-nginx/issues/7837\n","category":"Security, NGINX Ingress in CEL","filters":"validate::Security, NGINX Ingress in CEL::1.11.0::ConfigMap, Ingress","link":"/policies/nginx-ingress-cel/disallow-ingress-nginx-custom-snippets/disallow-ingress-nginx-custom-snippets/","policy":"validate","subject":"ConfigMap, Ingress","title":"Disallow Custom Snippets in CEL expressions","version":"1.11.0"},{"body":"Kubernetes Namespaces are an optional feature that provide a way to segment and isolate cluster resources across multiple applications and users. As a best practice, workloads should be isolated with Namespaces. Namespaces should be required and the default (empty) Namespace should not be used. This policy validates that Pods specify a Namespace name other than `default`. Rule auto-generation is disabled here due to Pod controllers need to specify the `namespace` field under the top-level `metadata` object and not at the Pod template level.\n","category":"Multi-Tenancy","filters":"validate::Multi-Tenancy::1.6.0::Pod","link":"/policies/best-practices/disallow-default-namespace/disallow-default-namespace/","policy":"validate","subject":"Pod","title":"Disallow Default Namespace","version":"1.6.0"},{"body":"Kubernetes Namespaces are an optional feature that provide a way to segment and isolate cluster resources across multiple applications and users. As a best practice, workloads should be isolated with Namespaces. Namespaces should be required and the default (empty) Namespace should not be used. This policy validates that Pods specify a Namespace name other than `default`. Rule auto-generation is disabled here due to Pod controllers need to specify the `namespace` field under the top-level `metadata` object and not at the Pod template level.\n","category":"Multi-Tenancy in CEL","filters":"validate::Multi-Tenancy in CEL::1.11.0::Pod","link":"/policies/best-practices-cel/disallow-default-namespace/disallow-default-namespace/","policy":"validate","subject":"Pod","title":"Disallow Default Namespace in CEL expressions","version":"1.11.0"},{"body":"The TLSOption CustomResource sets cluster-wide TLS configuration options for Traefik when  none are specified in a TLS router. Since this can take effect for all Ingress resources, creating the `default` TLSOption is a restricted operation. This policy ensures that only a cluster-admin can create the `default` TLSOption resource.\n","category":"Traefik","filters":"validate::Traefik::%!s(\u003cnil\u003e)::TLSOption","link":"/policies/traefik/disallow-default-tlsoptions/disallow-default-tlsoptions/","policy":"validate","subject":"TLSOption","title":"Disallow Default TLSOptions","version":null},{"body":"The TLSOption CustomResource sets cluster-wide TLS configuration options for Traefik when  none are specified in a TLS router. Since this can take effect for all Ingress resources, creating the `default` TLSOption is a restricted operation. This policy ensures that only a cluster-admin can create the `default` TLSOption resource.\n","category":"Traefik in CEL","filters":"validate::Traefik in CEL::%!s(\u003cnil\u003e)::TLSOption","link":"/policies/traefik-cel/disallow-default-tlsoptions/disallow-default-tlsoptions/","policy":"validate","subject":"TLSOption","title":"Disallow Default TLSOptions in CEL expressions","version":null},{"body":"OpenShift APIs are sometimes deprecated and removed after a few releases. As a best practice, older API versions should be replaced with newer versions. This policy validates for APIs that are deprecated or scheduled for removal. Note that checking for some of these resources may require modifying the Kyverno ConfigMap to remove filters.      \n","category":"OpenShift","filters":"validate::OpenShift::1.6.0::ClusterRole,ClusterRoleBinding,Role,RoleBinding,RBAC","link":"/policies/openshift/disallow-deprecated-apis/disallow-deprecated-apis/","policy":"validate","subject":"ClusterRole,ClusterRoleBinding,Role,RoleBinding,RBAC","title":"Disallow deprecated APIs","version":"1.6.0"},{"body":"OpenShift APIs are sometimes deprecated and removed after a few releases. As a best practice, older API versions should be replaced with newer versions. This policy validates for APIs that are deprecated or scheduled for removal. Note that checking for some of these resources may require modifying the Kyverno ConfigMap to remove filters.      \n","category":"OpenShift in CEL","filters":"validate::OpenShift in CEL::1.11.0::ClusterRole,ClusterRoleBinding,Role,RoleBinding,RBAC","link":"/policies/openshift-cel/disallow-deprecated-apis/disallow-deprecated-apis/","policy":"validate","subject":"ClusterRole,ClusterRoleBinding,Role,RoleBinding,RBAC","title":"Disallow deprecated APIs in CEL expressions","version":"1.11.0"},{"body":"An ingress resource needs to define an actual host name in order to be valid. This policy ensures that there is a hostname for each rule defined.\n","category":"Best Practices","filters":"validate::Best Practices::1.6.0::Ingress","link":"/policies/best-practices/disallow-empty-ingress-host/disallow-empty-ingress-host/","policy":"validate","subject":"Ingress","title":"Disallow empty Ingress host","version":"1.6.0"},{"body":"An ingress resource needs to define an actual host name in order to be valid. This policy ensures that there is a hostname for each rule defined.\n","category":"Best Practices in CEL","filters":"validate::Best Practices in CEL::1.11.0::Ingress","link":"/policies/best-practices-cel/disallow-empty-ingress-host/disallow-empty-ingress-host/","policy":"validate","subject":"Ingress","title":"Disallow empty Ingress host in CEL expressions","version":"1.11.0"},{"body":"Tiller, found in Helm v2, has known security challenges. It requires administrative privileges and acts as a shared resource accessible to any authenticated user. Tiller can lead to privilege escalation as restricted users can impact other users. It is recommended to use Helm v3+ which does not contain Tiller for these reasons. This policy validates that there is not an image containing the name `tiller`.\n","category":"Sample","filters":"validate::Sample::1.6.0::Pod","link":"/policies/best-practices/disallow-helm-tiller/disallow-helm-tiller/","policy":"validate","subject":"Pod","title":"Disallow Helm Tiller","version":"1.6.0"},{"body":"Tiller, found in Helm v2, has known security challenges. It requires administrative privileges and acts as a shared resource accessible to any authenticated user. Tiller can lead to privilege escalation as restricted users can impact other users. It is recommend to use Helm v3+ which does not contain Tiller for these reasons. This policy validates that there is not an image containing the name `tiller`.\n","category":"Sample in CEL","filters":"validate::Sample in CEL::1.11.0::Pod","link":"/policies/best-practices-cel/disallow-helm-tiller/disallow-helm-tiller/","policy":"validate","subject":"Pod","title":"Disallow Helm Tiller in CEL expressions","version":"1.11.0"},{"body":"Host namespaces (Process ID namespace, Inter-Process Communication namespace, and network namespace) allow access to shared information and can be used to elevate privileges. Pods should not be allowed access to host namespaces. This policy ensures fields which make use of these host namespaces are unset or set to `false`.\n","category":"Pod Security Standards (Baseline)","filters":"validate::Pod Security Standards (Baseline)::%!s(\u003cnil\u003e)::Pod","link":"/policies/pod-security/baseline/disallow-host-namespaces/disallow-host-namespaces/","policy":"validate","subject":"Pod","title":"Disallow Host Namespaces","version":null},{"body":"Host namespaces (Process ID namespace, Inter-Process Communication namespace, and network namespace) allow access to shared information and can be used to elevate privileges. Pods should not be allowed access to host namespaces. This policy ensures fields which make use of these host namespaces are unset or set to `false`.\n","category":"Pod Security Standards (Baseline) in CEL","filters":"validate::Pod Security Standards (Baseline) in CEL::1.11.0::Pod","link":"/policies/pod-security-cel/baseline/disallow-host-namespaces/disallow-host-namespaces/","policy":"validate","subject":"Pod","title":"Disallow Host Namespaces in CEL expressions","version":"1.11.0"},{"body":"Host namespaces (Process ID namespace, Inter-Process Communication namespace, and network namespace) allow access to shared information and can be used to elevate privileges. Pods should not be allowed access to host namespaces. This policy ensures fields which make use of these host namespaces are unset or set to `false`.\n","category":"Pod Security Standards (Baseline) in ValidatingPolicy","filters":"validate::Pod Security Standards (Baseline) in ValidatingPolicy::1.14.0::Pod","link":"/policies/pod-security-vpol/baseline/disallow-host-namespaces/disallow-host-namespaces/","policy":"validate","subject":"Pod","title":"Disallow Host Namespaces in ValidatingPolicy","version":"1.14.0"},{"body":"HostPath volumes let Pods use host directories and volumes in containers. Using host resources can be used to access shared data or escalate privileges and should not be allowed. This policy ensures no hostPath volumes are in use.\n","category":"Pod Security Standards (Baseline)","filters":"validate::Pod Security Standards (Baseline)::%!s(\u003cnil\u003e)::Pod,Volume","link":"/policies/pod-security/baseline/disallow-host-path/disallow-host-path/","policy":"validate","subject":"Pod,Volume","title":"Disallow hostPath","version":null},{"body":"HostPath volumes let Pods use host directories and volumes in containers. Using host resources can be used to access shared data or escalate privileges and should not be allowed. This policy ensures no hostPath volumes are in use.\n","category":"Pod Security Standards (Baseline) in CEL","filters":"validate::Pod Security Standards (Baseline) in CEL::1.11.0::Pod,Volume","link":"/policies/pod-security-cel/baseline/disallow-host-path/disallow-host-path/","policy":"validate","subject":"Pod,Volume","title":"Disallow hostPath in CEL expressions","version":"1.11.0"},{"body":"HostPath volumes let Pods use host directories and volumes in containers. Using host resources can be used to access shared data or escalate privileges and should not be allowed. This policy ensures no hostPath volumes are in use.\n","category":"Pod Security Standards (Baseline) in ValidatingPolicy","filters":"validate::Pod Security Standards (Baseline) in ValidatingPolicy::1.14.0::Pod,Volume","link":"/policies/pod-security-vpol/baseline/disallow-host-path/disallow-host-path/","policy":"validate","subject":"Pod,Volume","title":"Disallow hostPath in ValidatingPolicy","version":"1.14.0"},{"body":"Access to host ports allows potential snooping of network traffic and should not be allowed, or at minimum restricted to a known list. This policy ensures the `hostPort` field is unset or set to `0`. \n","category":"Pod Security Standards (Baseline)","filters":"validate::Pod Security Standards (Baseline)::%!s(\u003cnil\u003e)::Pod","link":"/policies/pod-security/baseline/disallow-host-ports/disallow-host-ports/","policy":"validate","subject":"Pod","title":"Disallow hostPorts","version":null},{"body":"Access to host ports allows potential snooping of network traffic and should not be allowed, or at minimum restricted to a known list. This policy ensures the `hostPort` field is unset or set to `0`.\n","category":"Pod Security Standards (Baseline) in CEL","filters":"validate::Pod Security Standards (Baseline) in CEL::1.11.0::Pod","link":"/policies/pod-security-cel/baseline/disallow-host-ports/disallow-host-ports/","policy":"validate","subject":"Pod","title":"Disallow hostPorts in CEL expressions","version":"1.11.0"},{"body":"Access to host ports allows potential snooping of network traffic and should not be allowed, or at minimum restricted to a known list. This policy ensures the `hostPort` field is unset or set to `0`. \n","category":"Pod Security Standards (Baseline) in ValidatingPolicy","filters":"validate::Pod Security Standards (Baseline) in ValidatingPolicy::1.14.0::Pod","link":"/policies/pod-security-vpol/baseline/disallow-host-ports/disallow-host-ports/","policy":"validate","subject":"Pod","title":"Disallow hostPorts in ValidatingPolicy","version":"1.14.0"},{"body":"Access to host ports allows potential snooping of network traffic and should not be allowed by requiring host ports be undefined (recommended) or at minimum restricted to a known list. This policy ensures the `hostPort` field, if defined, is set to either a port in the specified range or to a value of zero. This policy is mutually exclusive of the disallow-host-ports policy. Note that Kubernetes Pod Security Admission does not support the host port range rule.\n","category":"Pod Security Standards (Baseline)","filters":"validate::Pod Security Standards (Baseline)::1.6.0::Pod","link":"/policies/pod-security/baseline/disallow-host-ports-range/disallow-host-ports-range/","policy":"validate","subject":"Pod","title":"Disallow hostPorts Range (Alternate)","version":"1.6.0"},{"body":"Access to host ports allows potential snooping of network traffic and should not be allowed, or at minimum restricted to a known list. This policy ensures the `hostPort` field is set to one in the designated list. Note that Kubernetes Pod Security Admission does not support this rule.\n","category":"Pod Security Standards (Baseline) in CEL","filters":"validate::Pod Security Standards (Baseline) in CEL::1.11.0::Pod","link":"/policies/pod-security-cel/baseline/disallow-host-ports-range/disallow-host-ports-range/","policy":"validate","subject":"Pod","title":"Disallow hostPorts Range (Alternate) in CEL expressions","version":"1.11.0"},{"body":"Windows pods offer the ability to run HostProcess containers which enables privileged access to the Windows node. Privileged access to the host is disallowed in the baseline policy. HostProcess pods are an alpha feature as of Kubernetes v1.22. This policy ensures the `hostProcess` field, if present, is set to `false`.\n","category":"Pod Security Standards (Baseline)","filters":"validate::Pod Security Standards (Baseline)::%!s(\u003cnil\u003e)::Pod","link":"/policies/pod-security/baseline/disallow-host-process/disallow-host-process/","policy":"validate","subject":"Pod","title":"Disallow hostProcess","version":null},{"body":"Windows pods offer the ability to run HostProcess containers which enables privileged access to the Windows node. Privileged access to the host is disallowed in the baseline policy. HostProcess pods are an alpha feature as of Kubernetes v1.22. This policy ensures the `hostProcess` field, if present, is set to `false`.\n","category":"Pod Security Standards (Baseline) in CEL","filters":"validate::Pod Security Standards (Baseline) in CEL::1.11.0::Pod","link":"/policies/pod-security-cel/baseline/disallow-host-process/disallow-host-process/","policy":"validate","subject":"Pod","title":"Disallow hostProcess in CEL expressions","version":"1.11.0"},{"body":"Windows pods offer the ability to run HostProcess containers which enables privileged access to the Windows node. Privileged access to the host is disallowed in the baseline policy. HostProcess pods are an alpha feature as of Kubernetes v1.22. This policy ensures the `hostProcess` field, if present, is set to `false`.\n","category":"Pod Security Standards (Baseline) in ValidatingPolicy","filters":"validate::Pod Security Standards (Baseline) in ValidatingPolicy::1.14.0::Pod","link":"/policies/pod-security-vpol/baseline/disallow-host-process/disallow-host-process/","policy":"validate","subject":"Pod","title":"Disallow hostProcess in ValidatingPolicy","version":"1.14.0"},{"body":"The ':latest' tag is mutable and can lead to unexpected errors if the image changes. A best practice is to use an immutable tag that maps to a specific version of an application Pod. This policy validates that the image specifies a tag and that it is not called `latest`.\n","category":"Best Practices","filters":"validate::Best Practices::1.6.0::Pod","link":"/policies/best-practices/disallow-latest-tag/disallow-latest-tag/","policy":"validate","subject":"Pod","title":"Disallow Latest Tag","version":"1.6.0"},{"body":"The ':latest' tag is mutable and can lead to unexpected errors if the image changes. A best practice is to use an immutable tag that maps to a specific version of an application Pod. This policy validates that the image specifies a tag and that it is not called `latest`.\n","category":"Best Practices in CEL","filters":"validate::Best Practices in CEL::1.11.0::Pod","link":"/policies/best-practices-cel/disallow-latest-tag/disallow-latest-tag/","policy":"validate","subject":"Pod","title":"Disallow Latest Tag in CEL expressions","version":"1.11.0"},{"body":"A Service of type ExternalName which points back to localhost can potentially be used to exploit vulnerabilities in some Ingress controllers. This policy audits Services of type ExternalName if the externalName field refers to localhost.\n","category":"Sample","filters":"validate::Sample::1.6.0::Service","link":"/policies/other/disallow-localhost-services/disallow-localhost-services/","policy":"validate","subject":"Service","title":"Disallow Localhost ExternalName Services","version":"1.6.0"},{"body":"A Service of type ExternalName which points back to localhost can potentially be used to exploit vulnerabilities in some Ingress controllers. This policy audits Services of type ExternalName if the externalName field refers to localhost.\n","category":"Sample in CEL","filters":"validate::Sample in CEL::1.11.0::Service","link":"/policies/other-cel/disallow-localhost-services/disallow-localhost-services/","policy":"validate","subject":"Service","title":"Disallow Localhost ExternalName Services in CEL expressions","version":"1.11.0"},{"body":"A Kubernetes Service of type NodePort uses a host port to receive traffic from any source. A NetworkPolicy cannot be used to control traffic to host ports. Although NodePort Services can be useful, their use must be limited to Services with additional upstream security checks. This policy validates that any new Services do not use the `NodePort` type.\n","category":"Best Practices","filters":"validate::Best Practices::1.6.0::Service","link":"/policies/best-practices/restrict-node-port/restrict-node-port/","policy":"validate","subject":"Service","title":"Disallow NodePort","version":"1.6.0"},{"body":"A Kubernetes Service of type NodePort uses a host port to receive traffic from any source. A NetworkPolicy cannot be used to control traffic to host ports. Although NodePort Services can be useful, their use must be limited to Services with additional upstream security checks. This policy validates that any new Services do not use the `NodePort` type.\n","category":"Best Practices in CEL","filters":"validate::Best Practices in CEL::1.11.0::Service","link":"/policies/best-practices-cel/restrict-node-port/restrict-node-port/","policy":"validate","subject":"Service","title":"Disallow NodePort in CEL expressions","version":"1.11.0"},{"body":"The Jenkins Pipeline Build Strategy has been deprecated. This policy prevents its use. Use OpenShift Pipelines instead.\n","category":"OpenShift","filters":"validate::OpenShift::1.6.0::BuildConfig","link":"/policies/openshift/disallow-jenkins-pipeline-strategy/disallow-jenkins-pipeline-strategy/","policy":"validate","subject":"BuildConfig","title":"Disallow OpenShift Jenkins Pipeline Build Strategy","version":"1.6.0"},{"body":"The Jenkins Pipeline Build Strategy has been deprecated. This policy prevents its use. Use OpenShift Pipelines instead.\n","category":"OpenShift in CEL","filters":"validate::OpenShift in CEL::1.11.0::BuildConfig","link":"/policies/openshift-cel/disallow-jenkins-pipeline-strategy/disallow-jenkins-pipeline-strategy/","policy":"validate","subject":"BuildConfig","title":"Disallow OpenShift Jenkins Pipeline Build Strategy in CEL expressions","version":"1.11.0"},{"body":"Privilege escalation, such as via set-user-ID or set-group-ID file mode, should not be allowed. This policy ensures the `allowPrivilegeEscalation` field is set to `false`.\n","category":"Pod Security Standards (Restricted)","filters":"validate::Pod Security Standards (Restricted)::%!s(\u003cnil\u003e)::Pod","link":"/policies/pod-security/restricted/disallow-privilege-escalation/disallow-privilege-escalation/","policy":"validate","subject":"Pod","title":"Disallow Privilege Escalation","version":null},{"body":"Privilege escalation, such as via set-user-ID or set-group-ID file mode, should not be allowed. This policy ensures the `allowPrivilegeEscalation` field is set to `false`.\n","category":"Pod Security Standards (Restricted) in CEL","filters":"validate::Pod Security Standards (Restricted) in CEL::1.11.0::Pod","link":"/policies/pod-security-cel/restricted/disallow-privilege-escalation/disallow-privilege-escalation/","policy":"validate","subject":"Pod","title":"Disallow Privilege Escalation in CEL","version":"1.11.0"},{"body":"Privilege escalation, such as via set-user-ID or set-group-ID file mode, should not be allowed. This policy ensures the `allowPrivilegeEscalation` field is set to `false`.\n","category":"Pod Security Standards (Restricted) in ValidatingPolicy","filters":"validate::Pod Security Standards (Restricted) in ValidatingPolicy::1.14.0::Pod","link":"/policies/pod-security-vpol/restricted/disallow-privilege-escalation/disallow-privilege-escalation/","policy":"validate","subject":"Pod","title":"Disallow Privilege Escalation in ValidatingPolicy","version":"1.14.0"},{"body":"Privileged mode disables most security mechanisms and must not be allowed. This policy ensures Pods do not call for privileged mode.\n","category":"Pod Security Standards (Baseline)","filters":"validate::Pod Security Standards (Baseline)::%!s(\u003cnil\u003e)::Pod","link":"/policies/pod-security/baseline/disallow-privileged-containers/disallow-privileged-containers/","policy":"validate","subject":"Pod","title":"Disallow Privileged Containers","version":null},{"body":"Privileged mode disables most security mechanisms and must not be allowed. This policy ensures Pods do not call for privileged mode.\n","category":"Pod Security Standards (Baseline) in CEL","filters":"validate::Pod Security Standards (Baseline) in CEL::1.11.0::Pod","link":"/policies/pod-security-cel/baseline/disallow-privileged-containers/disallow-privileged-containers/","policy":"validate","subject":"Pod","title":"Disallow Privileged Containers in CEL expressions","version":"1.11.0"},{"body":"Privileged mode disables most security mechanisms and must not be allowed. This policy ensures Pods do not call for privileged mode.\n","category":"Pod Security Standards (Baseline) in ValidatingPolicy","filters":"validate::Pod Security Standards (Baseline) in ValidatingPolicy::1.14.0::Pod","link":"/policies/pod-security-vpol/baseline/disallow-privileged-containers/disallow-privileged-containers/","policy":"validate","subject":"Pod","title":"Disallow Privileged Containers in ValidatingPolicy","version":"1.14.0"},{"body":"The default /proc masks are set up to reduce attack surface and should be required. This policy ensures nothing but the default procMount can be specified. Note that in order for users to deviate from the `Default` procMount requires setting a feature gate at the API server.\n","category":"Pod Security Standards (Baseline)","filters":"validate::Pod Security Standards (Baseline)::%!s(\u003cnil\u003e)::Pod","link":"/policies/pod-security/baseline/disallow-proc-mount/disallow-proc-mount/","policy":"validate","subject":"Pod","title":"Disallow procMount","version":null},{"body":"The default /proc masks are set up to reduce attack surface and should be required. This policy ensures nothing but the default procMount can be specified. Note that in order for users to deviate from the `Default` procMount requires setting a feature gate at the API server.\n","category":"Pod Security Standards (Baseline) in CEL","filters":"validate::Pod Security Standards (Baseline) in CEL::1.11.0::Pod","link":"/policies/pod-security-cel/baseline/disallow-proc-mount/disallow-proc-mount/","policy":"validate","subject":"Pod","title":"Disallow procMount in CEL expressions","version":"1.11.0"},{"body":"The default /proc masks are set up to reduce attack surface and should be required. This policy ensures nothing but the default procMount can be specified. Note that in order for users to deviate from the `Default` procMount requires setting a feature gate at the API server.\n","category":"Pod Security Standards (Baseline) in ValidatingPolicy","filters":"validate::Pod Security Standards (Baseline) in ValidatingPolicy::1.14.0::Pod","link":"/policies/pod-security-vpol/baseline/disallow-proc-mount/disallow-proc-mount/","policy":"validate","subject":"Pod","title":"Disallow procMount in ValidatingPolicy","version":"1.14.0"},{"body":"Secrets used as environment variables containing sensitive information may, if not carefully controlled,  be printed in log output which could be visible to unauthorized people and captured in forwarding applications. This policy disallows using Secrets as environment variables.\n","category":"Sample, EKS Best Practices","filters":"validate::Sample, EKS Best Practices::%!s(\u003cnil\u003e)::Pod, Secret","link":"/policies/other/disallow-secrets-from-env-vars/disallow-secrets-from-env-vars/","policy":"validate","subject":"Pod, Secret","title":"Disallow Secrets from Env Vars","version":null},{"body":"Secrets used as environment variables containing sensitive information may, if not carefully controlled,  be printed in log output which could be visible to unauthorized people and captured in forwarding applications. This policy disallows using Secrets as environment variables.\n","category":"Sample, EKS Best Practices in CEL","filters":"validate::Sample, EKS Best Practices in CEL::%!s(\u003cnil\u003e)::Pod, Secret","link":"/policies/other-cel/disallow-secrets-from-env-vars/disallow-secrets-from-env-vars/","policy":"validate","subject":"Pod, Secret","title":"Disallow Secrets from Env Vars in CEL expressions","version":null},{"body":"SELinux options can be used to escalate privileges and should not be allowed. This policy ensures that the `seLinuxOptions` field is undefined.\n","category":"Pod Security Standards (Baseline)","filters":"validate::Pod Security Standards (Baseline)::%!s(\u003cnil\u003e)::Pod","link":"/policies/pod-security/baseline/disallow-selinux/disallow-selinux/","policy":"validate","subject":"Pod","title":"Disallow SELinux","version":null},{"body":"SELinux options can be used to escalate privileges and should not be allowed. This policy ensures that the `seLinuxOptions` field is undefined.\n","category":"Pod Security Standards (Baseline) in CEL","filters":"validate::Pod Security Standards (Baseline) in CEL::1.11.0::Pod","link":"/policies/pod-security-cel/baseline/disallow-selinux/disallow-selinux/","policy":"validate","subject":"Pod","title":"Disallow SELinux in CEL expressions","version":"1.11.0"},{"body":"SELinux options can be used to escalate privileges and should not be allowed. This policy ensures that the `seLinuxOptions` field is undefined.\n","category":"Pod Security Standards (Baseline) in ValidatingPolicy","filters":"validate::Pod Security Standards (Baseline) in ValidatingPolicy::1.14.0::Pod","link":"/policies/pod-security-vpol/baseline/disallow-selinux/disallow-selinux/","policy":"validate","subject":"Pod","title":"Disallow SELinux in ValidatingPolicy","version":"1.14.0"},{"body":"Especially in cloud provider environments, a Service having type LoadBalancer will cause the provider to respond by creating a load balancer somewhere in the customer account. This adds cost and complexity to a deployment. Without restricting this ability, users may easily overrun established budgets and security practices set by the organization. This policy restricts use of the Service type LoadBalancer.\n","category":"Sample","filters":"validate::Sample::1.6.0::Service","link":"/policies/other/restrict-loadbalancer/restrict-loadbalancer/","policy":"validate","subject":"Service","title":"Disallow Service Type LoadBalancer","version":"1.6.0"},{"body":"Especially in cloud provider environments, a Service having type LoadBalancer will cause the provider to respond by creating a load balancer somewhere in the customer account. This adds cost and complexity to a deployment. Without restricting this ability, users may easily overrun established budgets and security practices set by the organization. This policy restricts use of the Service type LoadBalancer.\n","category":"Sample in CEL","filters":"validate::Sample in CEL::1.11.0::Service","link":"/policies/other-cel/restrict-loadbalancer/restrict-loadbalancer/","policy":"validate","subject":"Service","title":"Disallow Service Type LoadBalancer in CEL expressions","version":"1.11.0"},{"body":"Disallow the use of the SecurityContextConstraint (SCC) anyuid which allows a pod to run with the UID as declared in the image instead of a random UID\n","category":"Security","filters":"validate::Security::1.6.0::Role,ClusterRole,RBAC","link":"/policies/openshift/disallow-security-context-constraint-anyuid/disallow-security-context-constraint-anyuid/","policy":"validate","subject":"Role,ClusterRole,RBAC","title":"Disallow use of the SecurityContextConstraint (SCC) anyuid","version":"1.6.0"},{"body":"Disallow the use of the SecurityContextConstraint (SCC) anyuid which allows a pod to run with the UID as declared in the image instead of a random UID\n","category":"Security in CEL","filters":"validate::Security in CEL::1.11.0::Role,ClusterRole,RBAC","link":"/policies/openshift-cel/disallow-security-context-constraint-anyuid/disallow-security-context-constraint-anyuid/","policy":"validate","subject":"Role,ClusterRole,RBAC","title":"Disallow use of the SecurityContextConstraint (SCC) anyuid in CEL expressions","version":"1.11.0"},{"body":"Accessing a container engine's socket is for highly specialized use cases and should generally be disabled. If access must be granted, it should be done on an explicit basis. This policy requires that, for any Pod mounting the Docker socket, it must have the label `allow-docker` set to `true`.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::Pod","link":"/policies/other/docker-socket-requires-label/docker-socket-requires-label/","policy":"validate","subject":"Pod","title":"Docker Socket Requires Label","version":null},{"body":"Accessing a container engine's socket is for highly specialized use cases and should generally be disabled. If access must be granted, it should be done on an explicit basis. This policy requires that, for any Pod mounting the Docker socket, it must have the label `allow-docker` set to `true`.\n","category":"Other in CEL","filters":"validate::Other in CEL::%!s(\u003cnil\u003e)::Pod","link":"/policies/other-cel/docker-socket-requires-label/docker-socket-requires-label/","policy":"validate","subject":"Pod","title":"Docker Socket Requires Label in CEL expressions","version":null},{"body":"Capabilities permit privileged actions without giving full root access. All capabilities should be dropped from a Pod, with only those required added back. This policy ensures that all containers explicitly specify the `drop: [\"ALL\"]` ability. Note that this policy also illustrates how to cover drop entries in any case although this may not strictly conform to the Pod Security Standards.\n","category":"Best Practices","filters":"validate::Best Practices::1.6.0::Pod","link":"/policies/best-practices/require-drop-all/require-drop-all/","policy":"validate","subject":"Pod","title":"Drop All Capabilities","version":"1.6.0"},{"body":"Capabilities permit privileged actions without giving full root access. All capabilities should be dropped from a Pod, with only those required added back. This policy ensures that all containers explicitly specify the `drop: [\"ALL\"]` ability. Note that this policy also illustrates how to cover drop entries in any case although this may not strictly conform to the Pod Security Standards.\n","category":"Best Practices in CEL","filters":"validate::Best Practices in CEL::1.11.0::Pod","link":"/policies/best-practices-cel/require-drop-all/require-drop-all/","policy":"validate","subject":"Pod","title":"Drop All Capabilities in CEL expressions","version":"1.11.0"},{"body":"Capabilities permit privileged actions without giving full root access. The CAP_NET_RAW capability, enabled by default, allows processes in a container to forge packets and bind to any interface potentially leading to MitM attacks. This policy ensures that all containers explicitly drop the CAP_NET_RAW ability. Note that this policy also illustrates how to cover drop entries in any case although this may not strictly conform to the Pod Security Standards.\n","category":"Best Practices","filters":"validate::Best Practices::1.6.0::Pod","link":"/policies/best-practices/require-drop-cap-net-raw/require-drop-cap-net-raw/","policy":"validate","subject":"Pod","title":"Drop CAP_NET_RAW","version":"1.6.0"},{"body":"Capabilities permit privileged actions without giving full root access. The CAP_NET_RAW capability, enabled by default, allows processes in a container to forge packets and bind to any interface potentially leading to MitM attacks. This policy ensures that all containers explicitly drop the CAP_NET_RAW ability. Note that this policy also illustrates how to cover drop entries in any case although this may not strictly conform to the Pod Security Standards.\n","category":"Best Practices in CEL","filters":"validate::Best Practices in CEL::1.11.0::Pod","link":"/policies/best-practices-cel/require-drop-cap-net-raw/require-drop-cap-net-raw/","policy":"validate","subject":"Pod","title":"Drop CAP_NET_RAW in CEL expressions","version":"1.11.0"},{"body":"Kubecost is able to modify container resource requests and limits dynamically based upon observed utilization patterns and recommendations. This provides an easy way to automatically improve allocation of cluster resources by increasing efficiency. This policy will annotate all Deployments which have the label `env=test` with `request.autoscaling.kubecost.com/enabled=\"true\"` if the annotation is not already present. Other annotations may be added according to need and users should see the documentation for a complete list.\n","category":"Kubecost","filters":"mutate::Kubecost::%!s(\u003cnil\u003e)::Deployment, Annotation","link":"/policies/kubecost/enable-kubecost-continuous-rightsizing/enable-kubecost-continuous-rightsizing/","policy":"mutate","subject":"Deployment, Annotation","title":"Enable Kubecost Continuous Rightsizing","version":null},{"body":"An AppProject may optionally specify clusterResourceBlacklist which is a blacklisted group of cluster resources. This is often a good practice to ensure AppProjects do not allow more access than needed. This policy is a combination of two rules which enforce that all AppProjects specify clusterResourceBlacklist and that their group and kind have wildcards as values.\n","category":"Argo","filters":"validate::Argo::1.6.0::AppProject","link":"/policies/argo/appproject-clusterresourceblacklist/appproject-clusterresourceblacklist/","policy":"validate","subject":"AppProject","title":"Enforce AppProject with clusterResourceBlacklist","version":"1.6.0"},{"body":"An AppProject may optionally specify clusterResourceBlacklist which is a blacklisted group of cluster resources. This is often a good practice to ensure AppProjects do not allow more access than needed. This policy is a combination of two rules which enforce that all AppProjects specify clusterResourceBlacklist and that their group and kind have wildcards as values.\n","category":"Argo in CEL","filters":"validate::Argo in CEL::1.11.0::AppProject","link":"/policies/argo-cel/appproject-clusterresourceblacklist/appproject-clusterresourceblacklist/","policy":"validate","subject":"AppProject","title":"Enforce AppProject with clusterResourceBlacklist in CEL expressions","version":"1.11.0"},{"body":"This policy will check the TLS Min version to ensure that whenever the mesh is set, there is a minimum version of TLS set for all the service mesh proxies and this enforces that service mesh mTLS traffic uses TLS v1.2 or newer.\n","category":"Consul","filters":"validate::Consul::1.6.0::Mesh","link":"/policies/consul/enforce-min-tls-version/enforce-min-tls-version/","policy":"validate","subject":"Mesh","title":"Enforce Consul min TLS version","version":"1.6.0"},{"body":"This policy will check the TLS Min version to ensure that whenever the mesh is set, there is a minimum version of TLS set for all the service mesh proxies and this enforces that service mesh mTLS traffic uses TLS v1.2 or newer.\n","category":"Consul in CEL","filters":"validate::Consul in CEL::1.11.0::Mesh","link":"/policies/consul-cel/enforce-min-tls-version/enforce-min-tls-version/","policy":"validate","subject":"Mesh","title":"Enforce Consul min TLS version  in CEL expressions","version":"1.11.0"},{"body":"Encryption at rest is a security best practice. This policy ensures encryption is enabled for etcd in OpenShift clusters.\n","category":"OpenShift","filters":"validate::OpenShift::1.6.0::APIServer","link":"/policies/openshift/enforce-etcd-encryption/enforce-etcd-encryption/","policy":"validate","subject":"APIServer","title":"Enforce etcd encryption in OpenShift","version":"1.6.0"},{"body":"Encryption at rest is a security best practice. This policy ensures encryption is enabled for etcd in OpenShift clusters.\n","category":"OpenShift","filters":"validate::OpenShift::1.11.0::APIServer","link":"/policies/openshift-cel/enforce-etcd-encryption/enforce-etcd-encryption/","policy":"validate","subject":"APIServer","title":"Enforce etcd encryption in OpenShift in CEL expressions","version":"1.11.0"},{"body":"Check VirtualMachines and validate that they are using an instance type and preference.\n","category":"KubeVirt","filters":"validate::KubeVirt::%!s(\u003cnil\u003e)::VirtualMachine","link":"/policies/kubevirt/enforce-instancetype/enforce-instancetype/","policy":"validate","subject":"VirtualMachine","title":"Enforce instanceTypes","version":null},{"body":"In order for Istio to include namespaces in ambient mode, the label `istio.io/dataplane-mode` must be set to `ambient`. This policy ensures that all new Namespaces set `istio.io/dataplane-mode` to `ambient`.\n","category":"Istio","filters":"validate::Istio::1.6.0::Namespace","link":"/policies/istio/enforce-ambient-mode-namespace/enforce-ambient-mode-namespace/","policy":"validate","subject":"Namespace","title":"Enforce Istio Ambient Mode","version":"1.6.0"},{"body":"In order for Istio to inject sidecars to workloads deployed into Namespaces, the label `istio-injection` must be set to `enabled`. This policy ensures that all new Namespaces set `istio-inject` to `enabled`.\n","category":"Istio","filters":"validate::Istio::1.6.0::Namespace","link":"/policies/istio/enforce-sidecar-injection-namespace/enforce-sidecar-injection-namespace/","policy":"validate","subject":"Namespace","title":"Enforce Istio Sidecar Injection","version":"1.6.0"},{"body":"In order for Istio to inject sidecars to workloads deployed into Namespaces, the label `istio-injection` must be set to `enabled`. This policy ensures that all new Namespaces set `istio-inject` to `enabled`.\n","category":"Istio in CEL","filters":"validate::Istio in CEL::1.11.0::Namespace","link":"/policies/istio-cel/enforce-sidecar-injection-namespace/enforce-sidecar-injection-namespace/","policy":"validate","subject":"Namespace","title":"Enforce Istio Sidecar Injection in CEL expressions","version":"1.11.0"},{"body":"Strict mTLS requires that mutual TLS be enabled across the entire service mesh, which can be set using a PeerAuthentication resource on a per-Namespace basis and, if set on the `istio-system` Namespace could disable it across the entire mesh. Disabling mTLS can reduce the security for traffic within that portion of the mesh and should be controlled. This policy prevents disabling strict mTLS in a PeerAuthentication resource by requiring the `mode` be set to either `UNSET` or `STRICT`.\n","category":"Istio","filters":"validate::Istio::1.6.0::PeerAuthentication","link":"/policies/istio/enforce-strict-mtls/enforce-strict-mtls/","policy":"validate","subject":"PeerAuthentication","title":"Enforce Istio Strict mTLS","version":"1.6.0"},{"body":"Strict mTLS requires that mutual TLS be enabled across the entire service mesh, which can be set using a PeerAuthentication resource on a per-Namespace basis and, if set on the `istio-system` Namespace could disable it across the entire mesh. Disabling mTLS can reduce the security for traffic within that portion of the mesh and should be controlled. This policy prevents disabling strict mTLS in a PeerAuthentication resource by requiring the `mode` be set to either `UNSET` or `STRICT`.\n","category":"Istio in CEL","filters":"validate::Istio in CEL::1.11.0::PeerAuthentication","link":"/policies/istio-cel/enforce-strict-mtls/enforce-strict-mtls/","policy":"validate","subject":"PeerAuthentication","title":"Enforce Istio Strict mTLS in CEL expressions","version":"1.11.0"},{"body":"Once a routing decision has been made, a DestinationRule can be used to define how traffic should be sent to another service. The trafficPolicy object can control how TLS is handled to the destination host. This policy enforces that the TLS mode cannot be set to a value of `DISABLE`.\n","category":"Istio","filters":"validate::Istio::1.6.0::DestinationRule","link":"/policies/istio/enforce-tls-hosts-host-subnets/enforce-tls-hosts-host-subnets/","policy":"validate","subject":"DestinationRule","title":"Enforce Istio TLS on Hosts and Host Subnets","version":"1.6.0"},{"body":"This validation is valuable when annotations are used to define durations, such as to ensure a Pod lifetime annotation does not exceed some site specific max threshold. Pod lifetime annotation can be no greater than 8 hours.\n","category":"Sample","filters":"validate::Sample::1.6.0::Pod","link":"/policies/other/enforce-pod-duration/enforce-pod-duration/","policy":"validate","subject":"Pod","title":"Enforce pod duration","version":"1.6.0"},{"body":"This validation is valuable when annotations are used to define durations, such as to ensure a Pod lifetime annotation does not exceed some site specific max threshold. Pod lifetime annotation can be no greater than 8 hours.\n","category":"Sample in CEL","filters":"validate::Sample in CEL::1.11.0::Pod","link":"/policies/other-cel/enforce-pod-duration/enforce-pod-duration/","policy":"validate","subject":"Pod","title":"Enforce pod duration in CEL expressions","version":"1.11.0"},{"body":"Some stateful workloads with multiple replicas only allow a single Pod to write to a given volume at a time. Beginning in Kubernetes 1.22 and enabled by default in 1.27, a new setting called ReadWriteOncePod, available for CSI volumes only, allows volumes to be writable from only a single Pod. For more information see the blog https://kubernetes.io/blog/2023/04/20/read-write-once-pod-access-mode-beta/. This policy enforces that the accessModes for a PersistentVolumeClaim be set to ReadWriteOncePod.\n","category":"Sample","filters":"validate::Sample::%!s(\u003cnil\u003e)::PersistentVolumeClaim","link":"/policies/other/enforce-readwriteonce-pod/enforce-readwriteonce-pod/","policy":"validate","subject":"PersistentVolumeClaim","title":"Enforce ReadWriteOncePod","version":null},{"body":"Some stateful workloads with multiple replicas only allow a single Pod to write to a given volume at a time. Beginning in Kubernetes 1.22 and enabled by default in 1.27, a new setting called ReadWriteOncePod, available for CSI volumes only, allows volumes to be writable from only a single Pod. For more information see the blog https://kubernetes.io/blog/2023/04/20/read-write-once-pod-access-mode-beta/. This policy enforces that the accessModes for a PersistentVolumeClaim be set to ReadWriteOncePod.\n","category":"Sample in CEL","filters":"validate::Sample in CEL::1.11.0::PersistentVolumeClaim","link":"/policies/other-cel/enforce-readwriteonce-pod/enforce-readwriteonce-pod/","policy":"validate","subject":"PersistentVolumeClaim","title":"Enforce ReadWriteOncePod in CEL expressions","version":"1.11.0"},{"body":"Resource requests often need to be tailored to the type of workload in the container/Pod. With many different types of applications in a cluster, enforcing hard limits on requests or limits may not work and a ratio may be better suited instead. This policy checks every container in a Pod and ensures that memory limits are no more than 2.5x its requests.\n","category":"Other","filters":"validate::Other::1.6.0::Pod","link":"/policies/other/enforce-resources-as-ratio/enforce-resources-as-ratio/","policy":"validate","subject":"Pod","title":"Enforce Resources as Ratio","version":"1.6.0"},{"body":"This policy ensures that the name of the ApplicationSet is the same value provided in the project.\n","category":"Argo","filters":"validate::Argo::1.6.0::ApplicationSet","link":"/policies/argo/applicationset-name-matches-project/applicationset-name-matches-project/","policy":"validate","subject":"ApplicationSet","title":"Ensure ApplicationSet Name Matches Project","version":"1.6.0"},{"body":"This policy ensures that the name of the ApplicationSet is the same value provided in the project.\n","category":"Argo in CEL","filters":"validate::Argo in CEL::1.11.0::ApplicationSet","link":"/policies/argo-cel/applicationset-name-matches-project/applicationset-name-matches-project/","policy":"validate","subject":"ApplicationSet","title":"Ensure ApplicationSet Name Matches Project in CEL expressions","version":"1.11.0"},{"body":"PodDisruptionBudget resources are useful to ensuring minimum availability is maintained at all times. Introducing a PDB where there are already matching Pod controllers may pose a problem if the author is unaware of the existing replica count. This policy ensures that the minAvailable value is not greater not equal to the replica count of any matching existing Deployment. If other Pod controllers should also be included in this check, additional rules may be added to the policy which match those controllers.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::PodDisruptionBudget, Deployment","link":"/policies/other/deployment-replicas-higher-than-pdb/deployment-replicas-higher-than-pdb/","policy":"validate","subject":"PodDisruptionBudget, Deployment","title":"Ensure Deployment Replicas Higher Than PodDisruptionBudget","version":null},{"body":"This policy ensures that Deployments, ReplicaSets, StatefulSets, and DaemonSets are only allowed if they have a corresponding Horizontal Pod Autoscaler (HPA) configured in the same namespace. The policy checks for the presence of an HPA that targets the resource and denies the creation or update of the resource if no such HPA exists. This policy helps enforce scaling practices and ensures that resources are managed efficiently.\n","category":"Other","filters":"validate::Other::1.9.0::Deployment,ReplicaSet,StatefulSet,DaemonSet","link":"/policies/other/check-hpa-exists/check-hpa-exists/","policy":"validate","subject":"Deployment,ReplicaSet,StatefulSet,DaemonSet","title":"Ensure HPA for Deployments","version":"1.9.0"},{"body":"It is common to have two separate Namespaces such as staging and production in order to test and promote app deployments in a controlled manner. In order to ensure that level of control, certain guardrails must be present so as to minimize regressions or unintended behavior. This policy has a set of three rules to try and provide some sane defaults for app promotion across these two environments (Namespaces) called staging and production. First, it makes sure that every Deployment in production has a corresponding Deployment in staging. Second, that a production Deployment uses same image name as its staging counterpart. Third, that a production Deployment uses an older or equal image version as its staging counterpart.\n","category":"Other","filters":"validate::Other::1.6.0::Deployment","link":"/policies/other/ensure-production-matches-staging/ensure-production-matches-staging/","policy":"validate","subject":"Deployment","title":"Ensure Production Matches staging","version":"1.6.0"},{"body":"Pods which are allowed to mount hostPath volumes in read/write mode pose a security risk even if confined to a \"safe\" file system on the host and may escape those confines (see https://blog.aquasec.com/kubernetes-security-pod-escape-log-mounts). The only true way to ensure safety is to enforce that all Pods mounting hostPath volumes do so in read only mode. This policy checks all containers for any hostPath volumes and ensures they are explicitly mounted in readOnly mode.\n","category":"Other","filters":"validate::Other::1.6.0::Pod","link":"/policies/other/ensure-readonly-hostpath/ensure-readonly-hostpath/","policy":"validate","subject":"Pod","title":"Ensure Read Only hostPath","version":"1.6.0"},{"body":"Pods which are allowed to mount hostPath volumes in read/write mode pose a security risk even if confined to a \"safe\" file system on the host and may escape those confines (see https://blog.aquasec.com/kubernetes-security-pod-escape-log-mounts). The only true way to ensure safety is to enforce that all Pods mounting hostPath volumes do so in read only mode. This policy checks all containers for any hostPath volumes and ensures they are explicitly mounted in readOnly mode.\n","category":"Other in CEL","filters":"validate::Other in CEL::1.11.0::Pod","link":"/policies/other-cel/ensure-readonly-hostpath/ensure-readonly-hostpath/","policy":"validate","subject":"Pod","title":"Ensure Read Only hostPath in CEL expressions","version":"1.11.0"},{"body":"This policy ensures that Ingress resources do not have certain disallowed annotations and that the ingress-nginx controller Pod is running an appropriate version of the image. It checks for the presence of the  `nginx.ingress.kubernetes.io/server-snippet` annotation and disallows its usage, enforces specific values  for `auth-tls-verify-client`, and ensures that the ingress-nginx controller image is of the required version.\n","category":"Ingress, Security","filters":"validate::Ingress, Security::1.9.0::Ingress, Pod","link":"/policies/other/check-ingress-nginx-controller-version-and-annotation-policy/check-ingress-nginx-controller-version-and-annotation-policy/","policy":"validate","subject":"Ingress, Pod","title":"Ensure Valid Ingress NGINX Controller and Annotations","version":"1.9.0"},{"body":"It's common where policy lookups need to consider a mapping to many possible values rather than a static mapping. This is a sample which demonstrates how to dynamically look up an allow list of Namespaces from a ConfigMap where the ConfigMap stores an array of strings. This policy validates that any Pods created outside of the list of Namespaces have the label `foo` applied.\n","category":"Sample","filters":"validate::Sample::1.6.0::Namespace, Pod","link":"/policies/other/exclude-namespaces-dynamically/exclude-namespaces-dynamically/","policy":"validate","subject":"Namespace, Pod","title":"Exclude Namespaces Dynamically","version":"1.6.0"},{"body":"It's common where policy lookups need to consider a mapping to many possible values rather than a static mapping. This is a sample which demonstrates how to dynamically look up an allow list of Namespaces from a ConfigMap where the ConfigMap stores an array of strings. This policy validates that any Pods created outside of the list of Namespaces have the label `foo` applied.\n","category":"Sample in CEL","filters":"validate::Sample in CEL::1.11.0::Namespace, Pod","link":"/policies/other-cel/exclude-namespaces-dynamically/exclude-namespaces-dynamically/","policy":"validate","subject":"Namespace, Pod","title":"Exclude Namespaces Dynamically in CEL expressions","version":"1.11.0"},{"body":"In situations where Ops/Platform teams want to allow exceptions on a temporary basis, there must be a way to remove the PolicyException once the expiration time has been reached. After the exception is removed, the rule(s) for which the exception is granted go back into full effect. This policy generates a ClusterCleanupPolicy with a four hour expiration time after which the PolicyException is deleted. It may be necessary to grant both the Kyverno as well as cleanup controller ServiceAccounts additional permissions to operate this policy.\n","category":"Other","filters":"generate::Other::1.9.0::PolicyException","link":"/policies/other/expiration-for-policyexceptions/expiration-for-policyexceptions/","policy":"generate","subject":"PolicyException","title":"Expiration for PolicyExceptions","version":"1.9.0"},{"body":"Setting of CPU limits is a debatable poor practice as it can result, when defined, in potentially starving applications of much-needed CPU cycles even when they are available. Ensuring that CPU limits are not set may ensure apps run more effectively. This policy forbids any container in a Pod from defining CPU limits.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::Pod","link":"/policies/other/forbid-cpu-limits/forbid-cpu-limits/","policy":"validate","subject":"Pod","title":"Forbid CPU Limits","version":null},{"body":"Setting of CPU limits is a debatable poor practice as it can result, when defined, in potentially starving applications of much-needed CPU cycles even when they are available. Ensuring that CPU limits are not set may ensure apps run more effectively. This policy forbids any container in a Pod from defining CPU limits.\n","category":"Other in CEL","filters":"validate::Other in CEL::%!s(\u003cnil\u003e)::Pod","link":"/policies/other-cel/forbid-cpu-limits/forbid-cpu-limits/","policy":"validate","subject":"Pod","title":"Forbid CPU Limits in CEL expressions","version":null},{"body":"As part of the tenant provisioning process, Flux needs to generate RBAC resources. This policy will create a ServiceAccount and RoleBinding when a new or existing Namespace is labeled with `toolkit.fluxcd.io/tenant`. Use of this rule may require an additional binding for the Kyverno ServiceAccount so it has permissions to properly create the RoleBinding.\n","category":"Flux","filters":"generate::Flux::1.6.0::ServiceAccount, RoleBinding","link":"/policies/flux/generate-flux-multi-tenant-resources/generate-flux-multi-tenant-resources/","policy":"generate","subject":"ServiceAccount, RoleBinding","title":"Generate Flux Multi-Tenant Resources","version":"1.6.0"},{"body":"Generates a Kasten policy for a namespace that includes any Deployment or StatefulSet with a \"dataprotection=kasten-example\" label, if the policy does not already exist. This Kyverno policy can be used in combination with the \"kasten-data-protection-by-label\" policy to require \"dataprotection\" labeling on workloads.\n","category":"Veeam Kasten","filters":"generate::Veeam Kasten::1.12.0::Policy","link":"/policies/kasten/kasten-generate-example-backup-policy/kasten-generate-example-backup-policy/","policy":"generate","subject":"Policy","title":"Generate Kasten Backup Policy Based on Resource Label","version":"1.12.0"},{"body":"Generates a Kasten policy for a new namespace that includes a valid \"dataprotection\" label, if the policy does not already exist. Use with \"kasten-validate-ns-by-preset-label\" policy to require \"dataprotection\" labeling on new namespaces.\n","category":"Veeam Kasten","filters":"generate::Veeam Kasten::1.12.0::Policy","link":"/policies/kasten/kasten-generate-policy-by-preset-label/kasten-generate-policy-by-preset-label/","policy":"generate","subject":"Policy","title":"Generate Kasten Policy from Preset","version":"1.12.0"},{"body":"A NetworkPolicy is often a critical piece when provisioning new Namespaces, but there may be existing Namespaces which also need the same resource. Creating each one individually or manipulating each Namespace in order to trigger creation is additional overhead. This policy creates a new NetworkPolicy for existing Namespaces which results in a default deny behavior and labels it with created-by=kyverno.\n","category":"Other","filters":"generate::Other::1.7.0::Namespace, NetworkPolicy","link":"/policies/other/generate-networkpolicy-existing/generate-networkpolicy-existing/","policy":"generate","subject":"Namespace, NetworkPolicy","title":"Generate NetworkPolicy to Existing Namespaces","version":"1.7.0"},{"body":"Ingress resources which name a host name that is not present in the TLS section can produce ingress routing failures as a TLS certificate may not correspond to the destination host. This policy ensures that the host name in an Ingress rule is also found in the list of TLS hosts.\n","category":"Other","filters":"validate::Other::1.6.0::Ingress","link":"/policies/other/ingress-host-match-tls/ingress-host-match-tls/","policy":"validate","subject":"Ingress","title":"Ingress Host Match TLS","version":"1.6.0"},{"body":"Ingress resources which name a host name that is not present in the TLS section can produce ingress routing failures as a TLS certificate may not correspond to the destination host. This policy ensures that the host name in an Ingress rule is also found in the list of TLS hosts.\n","category":"Other in CEL","filters":"validate::Other in CEL::1.11.0::Ingress","link":"/policies/other-cel/ingress-host-match-tls/ingress-host-match-tls/","policy":"validate","subject":"Ingress","title":"Ingress Host Match TLS in CEL expressions","version":"1.11.0"},{"body":"Container images which use metadata such as the LABEL directive in a Dockerfile do not surface this information to apps running within. In some cases, running the image as a container may need access to this information. This policy injects the value of a label set in a Dockerfile named `maintainer` as an environment variable to the corresponding container in the Pod.\n","category":"Other","filters":"mutate::Other::1.7.0::Pod","link":"/policies/other/inject-env-var-from-image-label/inject-env-var-from-image-label/","policy":"mutate","subject":"Pod","title":"Inject Env Var from Image Label","version":"1.7.0"},{"body":"A required component of a MachineSet is the infrastructure name which is a random string created in a separate resource. It can be tedious or impossible to know this for each MachineSet created. This policy fetches the value of the infrastructure name from the Cluster resource and replaces all instances of TEMPLATE in a MachineSet with that name.\n","category":"OpenShift","filters":"mutate::OpenShift::1.10.0::MachineSet","link":"/policies/openshift/inject-infrastructurename/inject-infrastructurename/","policy":"mutate","subject":"MachineSet","title":"Inject Infrastructure Name","version":"1.10.0"},{"body":"The sidecar pattern is very common in Kubernetes whereby other applications can insert components via tacit modification of a submitted resource. This is, for example, often how service meshes and secrets applications are able to function transparently. This policy injects a sidecar container, initContainer, and volume into Pods that match an annotation called `vault.hashicorp.com/agent-inject: true`.\n","category":"Sample","filters":"mutate::Sample::1.6.0::Deployment,Volume","link":"/policies/other/inject-sidecar-deployment/inject-sidecar-deployment/","policy":"mutate","subject":"Deployment,Volume","title":"Inject Sidecar Container","version":"1.6.0"},{"body":"The Kubernetes API includes a CertificateSigningRequest resource which can be used to generate a certificate for an entity. Because this API can be abused to create a long-lived credential, it is important to be able to audit this API to understand who/what is creating these CSRs and for what actors they are being created. This policy, intended to always be run in Audit mode and produce failure results in a Policy Report, inspects all incoming CertificateSigningRequests and writes out into the Policy Report information on who/what requested it and parsing the CSR to show the Subject information of that CSR resource.\n","category":"Other","filters":"validate::Other::1.10.0::CertificateSigningRequest","link":"/policies/other/inspect-csr/inspect-csr/","policy":"validate","subject":"CertificateSigningRequest","title":"Inspect CertificateSigningRequest","version":"1.10.0"},{"body":"Kubecost Enterprise allows users to define budgets for Namespaces and clusters as well as predict the cost of new Deployments based on historical cost data. By combining these abilities, users can achieve proactive cost controls for clusters with Kubecost installed by denying Deployments which would exceed the remaining configured monthly budget, if applicable. This policy checks for the creation of Deployments and compares the predicted cost of the Deployment to the remaining amount in the monthly budget, if one is found. If the predicted cost is greater than the remaining budget, the Deployment is denied. This policy requires Kubecost Enterprise at a version of 1.108 or greater.\n","category":"Kubecost","filters":"validate::Kubecost::1.11.0::Deployment","link":"/policies/kubecost/kubecost-proactive-cost-control/kubecost-proactive-cost-control/","policy":"validate","subject":"Deployment","title":"Kubecost Proactive Cost Control","version":"1.11.0"},{"body":"This policy generates and synchronizes a Kubeops Config Syncer merged kubeconfig Secret from Rancher managed cluster CAPI secrets. This kubeconfig Secret is required by the Kubeops Config Syncer for it to sync ConfigMaps/Secrets from the Rancher management cluster to downstream clusters.\n","category":"Kubeops","filters":"generate::Kubeops::1.7.1::Secret","link":"/policies/kubeops/config-syncer-secret-generation-from-rancher-capi/config-syncer-secret-generation-from-rancher-capi/","policy":"generate","subject":"Secret","title":"Kubeops Config Syncer Secret Generation From Rancher CAPI Secret","version":"1.7.1"},{"body":"It is often needed to make decisions for resources based upon the version of the Kubernetes API server in the cluster. This policy serves as an example for how to retrieve the minor version of the Kubernetes API server and subsequently use in a policy behavior. It will mutate a Secret upon its creation with a label called `apiminorversion` the value of which is the minor version of the API server.\n","category":"Other","filters":"mutate::Other::1.8.0::Secret","link":"/policies/other/kubernetes-version-check/kubernetes-version-check/","policy":"mutate","subject":"Secret","title":"Kubernetes Version Check","version":"1.8.0"},{"body":"Namespaces which preexist may need to be labeled after the fact and it is time consuming to identify which ones should be labeled and either doing so manually or with a scripted approach. This policy, which triggers on any AdmissionReview request to any Namespace, will result in applying the label `mykey=myvalue` to all existing Namespaces. If this policy is updated to change the desired label key or value, it will cause another mutation which updates all Namespaces.\n","category":"Other","filters":"mutate::Other::1.7.0::Namespace","link":"/policies/other/label-existing-namespaces/label-existing-namespaces/","policy":"mutate","subject":"Namespace","title":"Label Existing Namespaces","version":"1.7.0"},{"body":"CRI engines log in different formats. Loggers deployed as DaemonSets don't know which format to apply because they can't see this information. By Kyverno writing a label to each node with its runtime, loggers can use node label selectors to know which parsing logic to use. This policy detects the CRI engine in use and writes a label to the Node called `runtime` with it. The Node resource filter should be removed and users may need to grant the Kyverno ServiceAccount permission to update Nodes.\n","category":"Other","filters":"mutate::Other::1.7.0::Node, Label","link":"/policies/other/label-nodes-cri/label-nodes-cri/","policy":"mutate","subject":"Node, Label","title":"Label Nodes with CRI Runtime","version":"1.7.0"},{"body":"This policy shows how to restrict certain operations on specific ConfigMaps by ServiceAccounts.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::ConfigMap, ServiceAccount","link":"/policies/other/limit-configmap-for-sa/limit-configmap-for-sa/","policy":"validate","subject":"ConfigMap, ServiceAccount","title":"Limit ConfigMap to ServiceAccounts for a User","version":null},{"body":"Pods can have many different containers which are tightly coupled. It may be desirable to limit the amount of containers that can be in a single Pod to control best practice application or so policy can be applied consistently. This policy checks all Pods to ensure they have no more than four containers.\n","category":"Sample","filters":"validate::Sample::1.6.0::Pod","link":"/policies/other/limit-containers-per-pod/limit-containers-per-pod/","policy":"validate","subject":"Pod","title":"Limit Containers per Pod","version":"1.6.0"},{"body":"Pods can have many different containers which are tightly coupled. It may be desirable to limit the amount of containers that can be in a single Pod to control best practice application or so policy can be applied consistently. This policy checks all Pods to ensure they have no more than four containers.\n","category":"Sample in CEL","filters":"validate::Sample in CEL::1.11.0::Pod","link":"/policies/other-cel/limit-containers-per-pod/limit-containers-per-pod/","policy":"validate","subject":"Pod","title":"Limit Containers per Pod in CEL expressions","version":"1.11.0"},{"body":"Some applications will not accept certificates containing more than a single name. This policy ensures that each certificate request contains only one DNS name entry.\n","category":"Cert-Manager","filters":"validate::Cert-Manager::1.6.0::Certificate","link":"/policies/cert-manager/limit-dnsnames/limit-dnsnames/","policy":"validate","subject":"Certificate","title":"Limit dnsNames","version":"1.6.0"},{"body":"hostPath persistentvolumes consume the underlying node's file system. If hostPath volumes are not to be universally disabled, they should be restricted to only certain host paths so as not to allow access to sensitive information. This policy ensures the only directory that can be mounted as a hostPath volume is /data.\n","category":"Other","filters":"validate::Other::1.6.0::PersistentVolume","link":"/policies/other/limit-hostpath-type-pv/limit-hostpath-type-pv/","policy":"validate","subject":"PersistentVolume","title":"Limit hostPath PersistentVolumes to Specific Directories","version":"1.6.0"},{"body":"hostPath persistentvolumes consume the underlying node's file system. If hostPath volumes are not to be universally disabled, they should be restricted to only certain host paths so as not to allow access to sensitive information. This policy ensures the only directory that can be mounted as a hostPath volume is /data.\n","category":"Other in CEL","filters":"validate::Other in CEL::1.11.0::PersistentVolume","link":"/policies/other-cel/limit-hostpath-type-pv/limit-hostpath-type-pv/","policy":"validate","subject":"PersistentVolume","title":"Limit hostPath PersistentVolumes to Specific Directories in CEL expressions","version":"1.11.0"},{"body":"hostPath volumes consume the underlying node's file system. If hostPath volumes are not to be universally disabled, they should be restricted to only certain host paths so as not to allow access to sensitive information. This policy ensures the only directory that can be mounted as a hostPath volume is /data. It is strongly recommended to pair this policy with a second to ensure readOnly access is enforced preventing directory escape.\n","category":"Other","filters":"validate::Other::1.6.0::Pod","link":"/policies/other/limit-hostpath-vols/limit-hostpath-vols/","policy":"validate","subject":"Pod","title":"Limit hostPath Volumes to Specific Directories","version":"1.6.0"},{"body":"hostPath volumes consume the underlying node's file system. If hostPath volumes are not to be universally disabled, they should be restricted to only certain host paths so as not to allow access to sensitive information. This policy ensures the only directory that can be mounted as a hostPath volume is /data. It is strongly recommended to pair this policy with a second to ensure readOnly access is enforced preventing directory escape.\n","category":"Other in CEL","filters":"validate::Other in CEL::1.11.0::Pod","link":"/policies/other-cel/limit-hostpath-vols/limit-hostpath-vols/","policy":"validate","subject":"Pod","title":"Limit hostPath Volumes to Specific Directories in CEL expressions","version":"1.11.0"},{"body":"In response to CVE-2021-44228 referred to as Log4Shell, a RCE vulnerability in the Log4j library, a partial yet incomplete workaround for versions 2.10 to 2.14.1 of the library is to set the environment variable LOG4J_FORMAT_MSG_NO_LOOKUPS to \"true\". While this does provide some benefit by limiting exposure, there are still code paths which can exploit this vulnerability. It is highly recommended to upgrade log4j as soon as possible. See https://logging.apache.org/log4j/2.x/security.html for more details. This policy will mutate all initContainers and containers in an incoming Pod to add this environment variable automatically.\n","category":"Sample","filters":"mutate::Sample::1.6.0::Pod","link":"/policies/other/mitigate-log4shell/mitigate-log4shell/","policy":"mutate","subject":"Pod","title":"Log4Shell Mitigation","version":"1.6.0"},{"body":"Pods which have memory limits equal to requests could be given a QoS class of Guaranteed if they also set CPU limits equal to requests. Guaranteed is the highest schedulable class.  This policy checks that all containers in a given Pod have memory requests equal to limits.\n","category":"Sample","filters":"validate::Sample::1.6.0::Pod","link":"/policies/other/memory-requests-equal-limits/memory-requests-equal-limits/","policy":"validate","subject":"Pod","title":"Memory Requests Equal Limits","version":"1.6.0"},{"body":"Pods which have memory limits equal to requests could be given a QoS class of Guaranteed if they also set CPU limits equal to requests. Guaranteed is the highest schedulable class.  This policy checks that all containers in a given Pod have memory requests equal to limits.\n","category":"Sample in CEL","filters":"validate::Sample in CEL::1.11.0::Pod","link":"/policies/other-cel/memory-requests-equal-limits/memory-requests-equal-limits/","policy":"validate","subject":"Pod","title":"Memory Requests Equal Limits in CEL expressions","version":"1.11.0"},{"body":"Rather than a simple check to see if given metadata such as labels and annotations are present, in some cases they need to be present and the values match a specified regular expression. This policy illustrates how to ensure a label with key `corp.org/version` is both present and matches a given regex, in this case ensuring semver is met.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::Pod, Label","link":"/policies/other/metadata-match-regex/metadata-match-regex/","policy":"validate","subject":"Pod, Label","title":"Metadata Matches Regex","version":null},{"body":"Rather than a simple check to see if given metadata such as labels and annotations are present, in some cases they need to be present and the values match a specified regular expression. This policy illustrates how to ensure a label with key `corp.org/version` is both present and matches a given regex, in this case ensuring semver is met.\n","category":"Other in CEL","filters":"validate::Other in CEL::1.11.0::Pod, Label","link":"/policies/other-cel/metadata-match-regex/metadata-match-regex/","policy":"validate","subject":"Pod, Label","title":"Metadata Matches Regex in CEL expressions","version":"1.11.0"},{"body":"Containers running in Pods may sometimes need access to node-specific information on which the Pod has been scheduled. Scheduling decisions are made by kube-scheduler after the Pod has been persisted and so only at that time may the Node to which the Pod is bound can be fetched. The Kubernetes API allows specifically the projection of annotations from these Binding resources to the Pods which are their subject. This policy watches for then mutates the /binding subresource of a Pod to add an annotation named `foo` the value of which comes from the bound Node's label also called `foo`. Use of this policy may require removal of the Binding resourceFilter in Kyverno's ConfigMap.\n","category":"Other","filters":"mutate::Other::1.10.0::Pod","link":"/policies/other/mutate-pod-binding/mutate-pod-binding/","policy":"mutate","subject":"Pod","title":"Mutate Pod Binding","version":"1.10.0"},{"body":"Pods with large terminationGracePeriodSeconds (tGPS) might prevent cluster nodes from getting drained, ultimately making the whole cluster unstable. This policy mutates all incoming Pods to set their tGPS under 50s. If the user creates a pod without specifying tGPS, then the Kubernetes default of 30s is maintained.\n","category":"Sample","filters":"mutate::Sample::1.6.0::Pod","link":"/policies/other/mutate-large-termination-gps/mutate-large-termination-gps/","policy":"mutate","subject":"Pod","title":"Mutate termination Grace Periods Seconds","version":"1.6.0"},{"body":"In cases such as multi-tenancy where new Namespaces must be fully provisioned before they can be used, it may not be easy to declare and understand if/when the Namespace is ready. Having a policy which defines all the resources which are required for each Namespace can assist in determining compliance. This policy, expected to be run in background mode only, performs a Namespace inventory check to ensure that all Namespaces have a ResourceQuota and NetworkPolicy. Additional rules may be written to extend the check for your needs. By default, background scans occur every one hour which may be changed with an additional container flag. Please see the installation documentation for details.\n","category":"Other","filters":"validate::Other::1.9.0::Namespace","link":"/policies/other/namespace-inventory-check/namespace-inventory-check/","policy":"validate","subject":"Namespace","title":"Namespace Inventory Check","version":"1.9.0"},{"body":"Cases where RBAC may be applied at a higher level and where Namespace-level protections may be necessary can be accomplished with a separate policy. For example, one may want to protect creates, updates, and deletes on only a single Namespace. This policy will block creates, updates, and deletes to any Namespace labeled with `freeze=true`. Caution should be exercised when using rules which match on all kinds (`\"*\"`) as this will involve, for larger clusters, a substantial amount of processing on Kyverno's part. Additional resource requests and/or limits may be required.\n","category":"Other","filters":"validate::Other::1.9.0::Namespace","link":"/policies/other/namespace-protection/namespace-protection/","policy":"validate","subject":"Namespace","title":"Namespace Protection","version":"1.9.0"},{"body":"The NFS subdir external provisioner project allows defining a StorageClass with a pathPattern, a template used to provision subdirectories on NFS exports. This can be controlled with an annotation on a PVC called `nfs.io/storage-path`. This policy ensures that if the StorageClass name `nfs-client` is used by a PVC, corresponding to the NFS subdir external provisioner, and if it sets the nfs.io/storage-path annotation that it cannot be empty, which may otherwise result in it consuming the root of the designated path.\n","category":"Other","filters":"validate::Other::1.6.0::PersistentVolumeClaim","link":"/policies/other/nfs-subdir-external-provisioner-storage-path/nfs-subdir-external-provisioner-storage-path/","policy":"validate","subject":"PersistentVolumeClaim","title":"NFS Subdirectory External Provisioner Enforce Storage Path","version":"1.6.0"},{"body":"Some containers must be built to run as root in order to function properly, but use of those images should be carefully restricted to prevent unneeded privileges. This policy blocks any image that runs as root if it does not come from a trustworthy registry, `ghcr.io` in this case.\n","category":"Other, EKS Best Practices","filters":"validate::Other, EKS Best Practices::1.6.0::Pod","link":"/policies/other/only-trustworthy-registries-set-root/only-trustworthy-registries-set-root/","policy":"validate","subject":"Pod","title":"Only Trustworthy Registries Set Root","version":"1.6.0"},{"body":"A PodDisruptionBudget which sets its maxUnavailable value to zero prevents all voluntary evictions including Node drains which may impact maintenance tasks. This policy enforces that if a PodDisruptionBudget specifies the maxUnavailable field it must be greater than zero.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::PodDisruptionBudget","link":"/policies/other/pdb-maxunavailable/pdb-maxunavailable/","policy":"validate","subject":"PodDisruptionBudget","title":"PodDisruptionBudget maxUnavailable Non-Zero","version":null},{"body":"A PodDisruptionBudget which sets its maxUnavailable value to zero prevents all voluntary evictions including Node drains which may impact maintenance tasks. This policy enforces that if a PodDisruptionBudget specifies the maxUnavailable field it must be greater than zero.\n","category":"Other in CEL","filters":"validate::Other in CEL::%!s(\u003cnil\u003e)::PodDisruptionBudget","link":"/policies/other-cel/pdb-maxunavailable/pdb-maxunavailable/","policy":"validate","subject":"PodDisruptionBudget","title":"PodDisruptionBudget maxUnavailable Non-Zero in CEL expressions","version":null},{"body":"A PodDisruptionBudget which sets its maxUnavailable value to zero prevents all voluntary evictions including Node drains which may impact maintenance tasks. This may be acceptable if there are no matching controllers, but if there are then creation of such a PDB could allow unintended disruption. This policy enforces that a PodDisruptionBudget may not specify the maxUnavailable field as zero if there are any existing matching Deployments having greater than zero replicas.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::PodDisruptionBudget, Deployment","link":"/policies/other/pdb-maxunavailable-with-deployments/pdb-maxunavailable-with-deployments/","policy":"validate","subject":"PodDisruptionBudget, Deployment","title":"PodDisruptionBudget maxUnavailable Non-Zero with Deployments","version":null},{"body":"A PolicyException grants the applicable resource(s) or subject(s) the ability to bypass an existing Kyverno policy. Care should be taken to ensure that the allowed PolicyExceptions are scoped fine enough and according to your organization's operation. This is a Kyverno policy intended to provide guardrails for Kyverno PolicyExceptions and contains a number of rules which may help with these scoping best practices. These rules may be changed/removed depending on the exception practices to be implemented.\n","category":"Sample","filters":"validate::Sample::1.9.0::PolicyException","link":"/policies/other/policy-for-exceptions/policy-for-exceptions/","policy":"validate","subject":"PolicyException","title":"Policy for PolicyExceptions","version":"1.9.0"},{"body":"Pulling images from outside registries may be undesirable due to untrustworthiness or simply because the traffic results in an excess of bandwidth usage. Instead of blocking them, they can be mutated to divert to an internal registry which may already contain them or function as a pull-through proxy. This policy prepends all images in both containers and initContainers to come from `registry.io`.\n","category":"Other","filters":"mutate::Other::1.6.0::Pod","link":"/policies/other/prepend-image-registry/prepend-image-registry/","policy":"mutate","subject":"Pod","title":"Prepend Image Registry","version":"1.6.0"},{"body":"Pods not created by workload controllers such as Deployments have no self-healing or scaling abilities and are unsuitable for production. This policy prevents such \"bare\" Pods from being created unless they originate from a higher-level workload controller of some sort.\n","category":"Other, EKS Best Practices","filters":"validate::Other, EKS Best Practices::1.6.0::Pod","link":"/policies/other/prevent-bare-pods/prevent-bare-pods/","policy":"validate","subject":"Pod","title":"Prevent Bare Pods","version":"1.6.0"},{"body":"Pods not created by workload controllers such as Deployments have no self-healing or scaling abilities and are unsuitable for production. This policy prevents such \"bare\" Pods from being created unless they originate from a higher-level workload controller of some sort.\n","category":"Other, EKS Best Practices in CEL","filters":"validate::Other, EKS Best Practices in CEL::1.11.0::Pod","link":"/policies/other-cel/prevent-bare-pods/prevent-bare-pods/","policy":"validate","subject":"Pod","title":"Prevent Bare Pods in CEL expressions","version":"1.11.0"},{"body":"A vulnerability \"cr8escape\" (CVE-2022-0811) in CRI-O the container runtime engine underpinning Kubernetes allows attackers to escape from a Kubernetes container and gain root access to the host. The recommended remediation is to disallow sysctl settings with + or = in their value.\n","category":"Other","filters":"validate::Other::1.6.0::Pod","link":"/policies/other/prevent-cr8escape/prevent-cr8escape/","policy":"validate","subject":"Pod","title":"Prevent cr8escape (CVE-2022-0811)","version":"1.6.0"},{"body":"A vulnerability \"cr8escape\" (CVE-2022-0811) in CRI-O the container runtime engine underpinning Kubernetes allows attackers to escape from a Kubernetes container and gain root access to the host. The recommended remediation is to disallow sysctl settings with + or = in their value.\n","category":"Other in CEL","filters":"validate::Other in CEL::1.11.0::Pod","link":"/policies/other-cel/prevent-cr8escape/prevent-cr8escape/","policy":"validate","subject":"Pod","title":"Prevent cr8escape (CVE-2022-0811) in CEL expressions","version":"1.11.0"},{"body":"One way sidecar injection in an Istio service mesh may be accomplished is by defining an annotation at the Pod level. Pods not receiving a sidecar cannot participate in the mesh thereby reducing visibility. This policy ensures that Pods cannot set the annotation `sidecar.istio.io/inject` to a value of `false`.\n","category":"Istio","filters":"validate::Istio::1.6.0::Pod","link":"/policies/istio/prevent-disabling-injection-pods/prevent-disabling-injection-pods/","policy":"validate","subject":"Pod","title":"Prevent Disabling Istio Sidecar Injection","version":"1.6.0"},{"body":"One way sidecar injection in an Istio service mesh may be accomplished is by defining an annotation at the Pod level. Pods not receiving a sidecar cannot participate in the mesh thereby reducing visibility. This policy ensures that Pods cannot set the annotation `sidecar.istio.io/inject` to a value of `false`.\n","category":"Istio in CEL","filters":"validate::Istio in CEL::1.11.0::Pod","link":"/policies/istio-cel/prevent-disabling-injection-pods/prevent-disabling-injection-pods/","policy":"validate","subject":"Pod","title":"Prevent Disabling Istio Sidecar Injection in CEL expressions","version":"1.11.0"},{"body":"HorizontalPodAutoscaler (HPA) is useful to automatically adjust the number of pods in a deployment or replication controller. It requires defining a specific target resource by kind and name. There are no built-in validation checks by the HPA controller to prevent the creation of multiple HPAs which target the same resource. This policy has two rules, the first of which ensures that the only targetRef kinds accepted are one of either Deployment, StatefulSet, ReplicaSet, or DaemonSet. The second prevents the creation of duplicate HPAs by validating that any new HPA targets a unique resource.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::HorizontalPodAutoscaler","link":"/policies/other/prevent-duplicate-hpa/prevent-duplicate-hpa/","policy":"validate","subject":"HorizontalPodAutoscaler","title":"Prevent Duplicate HorizontalPodAutoscalers","version":null},{"body":"VerticalPodAutoscaler (VPA) is useful to automatically adjust the resources assigned to Pods. It requires defining a specific target resource by kind and name. There are no built-in validation checks by the VPA controller to prevent the creation of multiple VPAs which target the same resource. This policy has two rules, the first of which ensures that the only targetRef kinds accepted are one of either Deployment, StatefulSet, ReplicaSet, or DaemonSet. The second prevents the creation of duplicate VPAs by validating that any new VPA targets a unique resource.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::VerticalPodAutoscaler","link":"/policies/other/prevent-duplicate-vpa/prevent-duplicate-vpa/","policy":"validate","subject":"VerticalPodAutoscaler","title":"Prevent Duplicate VerticalPodAutoscalers","version":null},{"body":"Setting the annotation on a Pod (or its controller) `linkerd.io/inject` to `disabled` may effectively disable mesh participation for that workload reducing security and visibility. This policy prevents setting the annotation `linkerd.io/inject` to `disabled` for Pods.\n","category":"Linkerd","filters":"validate::Linkerd::%!s(\u003cnil\u003e)::Pod","link":"/policies/linkerd/prevent-linkerd-pod-injection-override/prevent-linkerd-pod-injection-override/","policy":"validate","subject":"Pod","title":"Prevent Linkerd Pod Injection Override","version":null},{"body":"Setting the annotation on a Pod (or its controller) `linkerd.io/inject` to `disabled` may effectively disable mesh participation for that workload reducing security and visibility. This policy prevents setting the annotation `linkerd.io/inject` to `disabled` for Pods.\n","category":"Linkerd in CEL","filters":"validate::Linkerd in CEL::%!s(\u003cnil\u003e)::Pod","link":"/policies/linkerd-cel/prevent-linkerd-pod-injection-override/prevent-linkerd-pod-injection-override/","policy":"validate","subject":"Pod","title":"Prevent Linkerd Pod Injection Override in CEL expressions","version":null},{"body":"Linkerd has the ability to skip inbound and outbound ports assigned to Pods, exempting them from mTLS. This can be important in some narrow use cases but generally should be avoided. This policy prevents Pods from setting the annotations `config.linkerd.io/skip-inbound-ports` or `config.linkerd.io/skip-outbound-ports`.\n","category":"Linkerd","filters":"validate::Linkerd::%!s(\u003cnil\u003e)::Pod","link":"/policies/linkerd/prevent-linkerd-port-skipping/prevent-linkerd-port-skipping/","policy":"validate","subject":"Pod","title":"Prevent Linkerd Port Skipping","version":null},{"body":"Linkerd has the ability to skip inbound and outbound ports assigned to Pods, exempting them from mTLS. This can be important in some narrow use cases but generally should be avoided. This policy prevents Pods from setting the annotations `config.linkerd.io/skip-inbound-ports` or `config.linkerd.io/skip-outbound-ports`.\n","category":"Linkerd in CEL","filters":"validate::Linkerd in CEL::1.11.0::Pod","link":"/policies/linkerd-cel/prevent-linkerd-port-skipping/prevent-linkerd-port-skipping/","policy":"validate","subject":"Pod","title":"Prevent Linkerd Port Skipping in CEL expressions","version":"1.11.0"},{"body":"This policy prevents updates to the project field after an Application is created.\n","category":"Argo","filters":"validate::Argo::1.6.0::Application","link":"/policies/argo/application-prevent-updates-project/application-prevent-updates-project/","policy":"validate","subject":"Application","title":"Prevent Updates to Project","version":"1.6.0"},{"body":"This policy prevents updates to the project field after an Application is created.\n","category":"Argo in CEL","filters":"validate::Argo in CEL::%!s(\u003cnil\u003e)::Application","link":"/policies/argo-cel/application-prevent-updates-project/application-prevent-updates-project/","policy":"validate","subject":"Application","title":"Prevent Updates to Project in CEL expressions","version":null},{"body":"This policy prevents the use of the default project in an Application.\n","category":"Argo","filters":"validate::Argo::1.6.0::Application","link":"/policies/argo/application-prevent-default-project/application-prevent-default-project/","policy":"validate","subject":"Application","title":"Prevent Use of Default Project","version":"1.6.0"},{"body":"This policy prevents the use of the default project in an Application.\n","category":"Argo in CEL","filters":"validate::Argo in CEL::1.11.0::Application","link":"/policies/argo-cel/application-prevent-default-project/application-prevent-default-project/","policy":"validate","subject":"Application","title":"Prevent Use of Default Project in CEL expressions","version":"1.11.0"},{"body":"Node taints are often used as a control in multi-tenant use cases. If users can alter them, they may be able to affect scheduling of Pods which may impact other workloads. This sample prohibits altering of node taints unless by a user holding the `cluster-admin` ClusterRole. Use of this policy requires removal of the Node resource filter in the Kyverno ConfigMap ([Node,*,*]). Due to Kubernetes CVE-2021-25735, this policy requires, at minimum, one of the following versions of Kubernetes: v1.18.18, v1.19.10, v1.20.6, or v1.21.0.\n","category":"Other","filters":"validate::Other::1.6.0::Node","link":"/policies/other/protect-node-taints/protect-node-taints/","policy":"validate","subject":"Node","title":"Protect Node Taints","version":"1.6.0"},{"body":"Kubernetes by default does not make a record of who or what created a resource in that resource itself. It must be retrieved from an audit log, if enabled, which can make it difficult for cluster operators to know who was responsible for an object's creation. This policy writes an annotation with the key `kyverno.io/created-by` having all the userInfo fields present in the AdmissionReview request for any object being created. It then protects this annotation from tampering or removal making it immutable. Although this policy matches on all kinds (\"*\") it is highly recommend to more narrowly scope it to only the resources which should be labeled.\n","category":"Other","filters":"mutate::Other::1.6.0::Annotation","link":"/policies/other/record-creation-details/record-creation-details/","policy":"mutate","subject":"Annotation","title":"Record Creation Details","version":"1.6.0"},{"body":"When Pods consume Secrets or ConfigMaps through environment variables, should the contents of those source resources change, the downstream Pods are normally not aware of them. In order for the changes to be reflected, Pods must either restart or be respawned. This policy watches for changes to Secrets which have been marked for this refreshing process which contain the label `kyverno.io/watch=true` and will write an annotation to any Deployment Pod template which consume them as env vars. This will result in a new rollout of Pods which will pick up the changed values. See the related policy entitled \"Refresh Volumes in Pods\" for a similar reloading process when ConfigMaps and Secrets are consumed as volumes instead. Use of this policy may require providing the Kyverno ServiceAccount with permission to update Deployments.\n","category":"Other","filters":"mutate::Other::1.9.0::Pod,Deployment,Secret","link":"/policies/other/refresh-env-var-in-pod/refresh-env-var-in-pod/","policy":"mutate","subject":"Pod,Deployment,Secret","title":"Refresh Environment Variables in Pods","version":"1.9.0"},{"body":"Although ConfigMaps and Secrets mounted as volumes to a Pod, when the contents change, will eventually propagate to the Pods mounting them, this process may take between 60-90 seconds. In order to reduce that time, a modification made to downstream Pods will cause the changes to take effect almost instantly. This policy watches for changes to ConfigMaps which have been marked for this quick reloading process which contain the label `kyverno.io/watch=true` and will write an annotation to any Pods which mount them as volumes causing a fast refresh in their contents. See the related policy entitled \"Refresh Environment Variables in Pods\" for a similar reloading process when ConfigMaps and Secrets are consumed as environment variables instead. Use of this policy may require providing the Kyverno ServiceAccount with permission to update Pods.\n","category":"Other","filters":"mutate::Other::1.9.0::Pod,ConfigMap","link":"/policies/other/refresh-volumes-in-pods/refresh-volumes-in-pods/","policy":"mutate","subject":"Pod,ConfigMap","title":"Refresh Volumes in Pods","version":"1.9.0"},{"body":"Pods which mount hostPath volumes are provided access to the underlying filesystem of the Node on which they run. In most scenarios, this should be forbidden. In others, it may be useful to silently remove those hostPath volumes rather than blocking the Pod. This policy removes all hostPath volumes and their volumeMount references from all containers within a Pod.\n","category":"Other","filters":"mutate::Other::1.10.0::Pod,Volume","link":"/policies/other/remove-hostpath-volumes/remove-hostpath-volumes/","policy":"mutate","subject":"Pod,Volume","title":"Remove hostPath Volumes","version":"1.10.0"},{"body":"Pods running with a ServiceAccount are presented with a volume, containing the token, and volume mounts for all containers in the Pod. Applications that do not need to communicate with the Kubernetes API do not need a ServiceAccount and therefore limiting which Pods have access rights is important. Rather than, or in addition to, requiring that certain Pods disable mounting of a ServiceAccount, it is possible to silently remove this token if it has been presented. This policy ensures that Pods which do not have the label `corp.org/can-use-serviceaccount` and are consuming a ServiceAccount have that stripped away. It should be customized to restrict the scope of its operation as it will not distinguish between an explicitly-defined ServiceAccount or one provided by default.\n","category":"Other","filters":"mutate::Other::1.10.0::Pod,ServiceAccount,Volume","link":"/policies/other/remove-serviceaccount-token/remove-serviceaccount-token/","policy":"mutate","subject":"Pod,ServiceAccount,Volume","title":"Remove ServiceAccount Token","version":"1.10.0"},{"body":"Rather than blocking Pods which come from outside registries, it is also possible to mutate them so the pulls are directed to approved registries. In some cases, those registries may function as pull-through proxies and can fetch the image if not cached. This policy mutates all images either in the form 'image:tag' or 'registry.corp.com/image:tag' to be `myregistry.corp.com/`. Any path in the image name will be preserved. Note that this mutates Pods directly and not their controllers. It can be changed if desired but if so may need to not match on Pods.      \n","category":"Sample","filters":"mutate::Sample::1.6.0::Pod","link":"/policies/other/replace-image-registry/replace-image-registry/","policy":"mutate","subject":"Pod","title":"Replace Image Registry","version":"1.6.0"},{"body":"Some registries like Harbor offer pull-through caches for images from certain registries. Images can be re-written to be pulled from the redirected registry instead of the original and the registry will proxy pull the image, adding it to its internal cache. The imageData context variable in this policy provides a normalized view of the container image, allowing the policy to make decisions based on various  \"live\" image details. As a result, it requires access to the source registry and the existence of the target image to verify those details.\n","category":"Sample","filters":"mutate::Sample::%!s(\u003cnil\u003e)::Pod","link":"/policies/other/replace-image-registry-with-harbor/replace-image-registry-with-harbor/","policy":"mutate","subject":"Pod","title":"Replace Image Registry With Harbor","version":null},{"body":"An Ingress may specify host names at a variety of locations in the same resource. In some cases, those host names should be modified to, for example, update domain names silently. The replacement must be done in all the fields where a host name can be specified. This policy, illustrating the use of nested foreach loops and operable in Kyverno 1.9+, replaces host names that end with `old.com` with `new.com`.\n","category":"Other","filters":"mutate::Other::1.9.0::Ingress","link":"/policies/other/replace-ingress-hosts/replace-ingress-hosts/","policy":"mutate","subject":"Ingress","title":"Replace Ingress Hosts","version":"1.9.0"},{"body":"Define and use annotations that identify semantic attributes of your application or Deployment. A common set of annotations allows tools to work collaboratively, describing objects in a common manner that all tools can understand. The recommended annotations describe applications in a way that can be queried. This policy validates that the annotation `corp.org/department` is specified with some value.      \n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::Pod, Annotation","link":"/policies/other/require-annotations/require-annotations/","policy":"validate","subject":"Pod, Annotation","title":"Require Annotations","version":null},{"body":"Define and use annotations that identify semantic attributes of your application or Deployment. A common set of annotations allows tools to work collaboratively, describing objects in a common manner that all tools can understand. The recommended annotations describe applications in a way that can be queried. This policy validates that the annotation `corp.org/department` is specified with some value.      \n","category":"Other in CEL","filters":"validate::Other in CEL::%!s(\u003cnil\u003e)::Pod, Annotation","link":"/policies/other-cel/require-annotations/require-annotations/","policy":"validate","subject":"Pod, Annotation","title":"Require Annotations in CEL expressions","version":null},{"body":"According to EKS best practices, the `aws-node` DaemonSet is configured to use a role assigned to the EC2 instances to assign IPs to Pods. This role includes several AWS managed policies that effectively allow all Pods running on a Node to attach/detach ENIs, assign/unassign IP addresses, or pull images from ECR. Since this presents a risk to your cluster, it is recommended that you update the `aws-node` DaemonSet to use IRSA. This policy ensures that the `aws-node` DaemonSet running in the `kube-system` Namespace is not still using the `aws-node` ServiceAccount.\n","category":"AWS, EKS Best Practices","filters":"validate::AWS, EKS Best Practices::1.6.0::DaemonSet","link":"/policies/aws/require-aws-node-irsa/require-aws-node-irsa/","policy":"validate","subject":"DaemonSet","title":"Require aws-node DaemonSet use IRSA","version":"1.6.0"},{"body":"Containers may define ports on which they listen. In addition to a port number, a name field may optionally be used. Including a name makes it easier when defining Service resource definitions and others since the name may be referenced allowing the port number to change. This policy requires that for every containerPort defined there is also a name specified.      \n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::Pod","link":"/policies/other/require-container-port-names/require-container-port-names/","policy":"validate","subject":"Pod","title":"Require Container Port Names","version":null},{"body":"Containers may define ports on which they listen. In addition to a port number, a name field may optionally be used. Including a name makes it easier when defining Service resource definitions and others since the name may be referenced allowing the port number to change. This policy requires that for every containerPort defined there is also a name specified.      \n","category":"Other in CEL","filters":"validate::Other in CEL::%!s(\u003cnil\u003e)::Pod","link":"/policies/other-cel/require-container-port-names/require-container-port-names/","policy":"validate","subject":"Pod","title":"Require Container Port Names in CEL expressions","version":null},{"body":"Setting CPU limits on containers ensures fair distribution of resources, preventing any single container from monopolizing CPU and impacting the performance of other containers. This practice enhances stability, predictability, and cost control, while also mitigating the noisy neighbor problem and facilitating efficient scaling in Kubernetes clusters. This policy ensures that cpu limits are set on every container.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::Pod","link":"/policies/other/require-cpu-limits/require-cpu-limits/","policy":"validate","subject":"Pod","title":"Require CPU Limits","version":null},{"body":"Services of type LoadBalancer when deployed inside AWS have support for transport encryption if it is enabled via an annotation. This policy requires that Services of type LoadBalancer contain the annotation service.beta.kubernetes.io/aws-load-balancer-ssl-cert with some value.\n","category":"AWS, EKS Best Practices","filters":"validate::AWS, EKS Best Practices::1.6.0::Service","link":"/policies/aws/require-encryption-aws-loadbalancers/require-encryption-aws-loadbalancers/","policy":"validate","subject":"Service","title":"Require Encryption with AWS LoadBalancers","version":"1.6.0"},{"body":"Services of type LoadBalancer when deployed inside AWS have support for transport encryption if it is enabled via an annotation. This policy requires that Services of type LoadBalancer contain the annotation service.beta.kubernetes.io/aws-load-balancer-ssl-cert with some value.\n","category":"AWS, EKS Best Practices in CEL","filters":"validate::AWS, EKS Best Practices in CEL::%!s(\u003cnil\u003e)::Service","link":"/policies/aws-cel/require-encryption-aws-loadbalancers/require-encryption-aws-loadbalancers/","policy":"validate","subject":"Service","title":"Require Encryption with AWS LoadBalancers in CEL expressions","version":null},{"body":"Images can be built from a variety of source control locations and the name does not necessarily indicate this mapping. Ensuring that known good repositories are the source of images helps ensure supply chain security. This policy checks the container images and ensures that they specify the source in either a label `org.opencontainers.image.source` or a newer annotation in the manifest of the same name.\n","category":"Other","filters":"validate::Other::1.6.0::Pod","link":"/policies/other/require-image-source/require-image-source/","policy":"validate","subject":"Pod","title":"Require Image Source","version":"1.6.0"},{"body":"An important part of ensuring software supply chain integrity is performing periodic vulnerability scans on images. Initial scans as part of the build process is necessary, but as new vulnerabilities are discovered the scans must be refreshed. This policy ensures that images, signed with Cosign's keyless ability during a GitHub Actions workflow, have attested vulnerability scans not older than one week. This policy is expected to be customized based upon your signing strategy and applicable to the images you designate.\n","category":"Software Supply Chain Security","filters":"verifyImages::Software Supply Chain Security::1.8.3::Pod","link":"/policies/other/require-vulnerability-scan/require-vulnerability-scan/","policy":"verifyImages","subject":"Pod","title":"Require Image Vulnerability Scans","version":"1.8.3"},{"body":"If the `latest` tag is allowed for images, it is a good idea to have the imagePullPolicy field set to `Always` to ensure should that tag be overwritten that future pulls will get the updated image. This policy validates the imagePullPolicy is set to `Always` when the `latest` tag is specified explicitly or where a tag is not defined at all.\n","category":"Sample","filters":"validate::Sample::1.6.0::Pod","link":"/policies/other/imagepullpolicy-always/imagepullpolicy-always/","policy":"validate","subject":"Pod","title":"Require imagePullPolicy Always","version":"1.6.0"},{"body":"If the `latest` tag is allowed for images, it is a good idea to have the imagePullPolicy field set to `Always` to ensure should that tag be overwritten that future pulls will get the updated image. This policy validates the imagePullPolicy is set to `Always` when the `latest` tag is specified explicitly or where a tag is not defined at all.\n","category":"Sample in CEL","filters":"validate::Sample in CEL::1.11.0::Pod","link":"/policies/other-cel/imagepullpolicy-always/imagepullpolicy-always/","policy":"validate","subject":"Pod","title":"Require imagePullPolicy Always in CEL expressions","version":"1.11.0"},{"body":"Some registries, both public and private, require credentials in order to pull images from them. This policy checks those images and if they come from a registry other than ghcr.io or quay.io an `imagePullSecret` is required.\n","category":"Sample","filters":"validate::Sample::1.6.0::Pod","link":"/policies/other/require-imagepullsecrets/require-imagepullsecrets/","policy":"validate","subject":"Pod","title":"Require imagePullSecrets","version":"1.6.0"},{"body":"Use of a SHA checksum when pulling an image is often preferable because tags are mutable and can be overwritten. This policy checks to ensure that all images use SHA checksums rather than tags.\n","category":"Sample","filters":"validate::Sample::1.6.0::Pod","link":"/policies/other/require-image-checksum/require-image-checksum/","policy":"validate","subject":"Pod","title":"Require Images Use Checksums","version":"1.6.0"},{"body":"Use of a SHA checksum when pulling an image is often preferable because tags are mutable and can be overwritten. This policy checks to ensure that all images use SHA checksums rather than tags.\n","category":"Sample in CEL","filters":"validate::Sample in CEL::1.11.0::Pod","link":"/policies/other-cel/require-image-checksum/require-image-checksum/","policy":"validate","subject":"Pod","title":"Require Images Use Checksums in CEL expressions","version":"1.11.0"},{"body":"Ingress resources should only allow secure traffic by disabling HTTP and therefore only allowing HTTPS. This policy requires that all Ingress resources set the annotation `kubernetes.io/ingress.allow-http` to `\"false\"` and specify TLS in the spec.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::Ingress","link":"/policies/other/require-ingress-https/require-ingress-https/","policy":"validate","subject":"Ingress","title":"Require Ingress HTTPS","version":null},{"body":"Ingress resources should only allow secure traffic by disabling HTTP and therefore only allowing HTTPS. This policy requires that all Ingress resources set the annotation `kubernetes.io/ingress.allow-http` to `\"false\"` and specify TLS in the spec.\n","category":"Other in CEL","filters":"validate::Other in CEL::%!s(\u003cnil\u003e)::Ingress","link":"/policies/other-cel/require-ingress-https/require-ingress-https/","policy":"validate","subject":"Ingress","title":"Require Ingress HTTPS in CEL expressions","version":null},{"body":"An AuthorizationPolicy is used to provide access controls for traffic in the mesh and can be defined at multiple levels. For the Namespace level, all Namespaces should have at least one AuthorizationPolicy. This policy, designed to run in background mode for reporting purposes, ensures every Namespace has at least one AuthorizationPolicy.\n","category":"Istio","filters":"validate::Istio::1.6.0::AuthorizationPolicy","link":"/policies/istio/require-authorizationpolicy/require-authorizationpolicy/","policy":"validate","subject":"AuthorizationPolicy","title":"Require Istio AuthorizationPolicies","version":"1.6.0"},{"body":"Kubecost can use labels assigned to Pods in order to track and display cost allocation in a granular way. These labels, which can be customized, can be used to organize and group workloads in different ways. This policy requires that the labels `owner`, `team`, `department`, `app`, and `env` are all defined on Pods. With Kyverno autogen enabled (absence of the annotation `pod-policies.kyverno.io/autogen-controllers=none`), these labels will also be required for all Pod controllers.\n","category":"Kubecost","filters":"validate::Kubecost::%!s(\u003cnil\u003e)::Pod, Label","link":"/policies/kubecost/require-kubecost-labels/require-kubecost-labels/","policy":"validate","subject":"Pod, Label","title":"Require Kubecost Labels","version":null},{"body":"Kubecost can use labels assigned to Pods in order to track and display cost allocation in a granular way. These labels, which can be customized, can be used to organize and group workloads in different ways. This policy requires that the labels `owner`, `team`, `department`, `app`, and `env` are all defined on Pods. With Kyverno autogen enabled (absence of the annotation `pod-policies.kyverno.io/autogen-controllers=none`), these labels will also be required for all Pod controllers.\n","category":"Kubecost in CEL","filters":"validate::Kubecost in CEL::%!s(\u003cnil\u003e)::Pod, Label","link":"/policies/kubecost-cel/require-kubecost-labels/require-kubecost-labels/","policy":"validate","subject":"Pod, Label","title":"Require Kubecost Labels in CEL expressions","version":null},{"body":"Define and use labels that identify semantic attributes of your application or Deployment. A common set of labels allows tools to work collaboratively, describing objects in a common manner that all tools can understand. The recommended labels describe applications in a way that can be queried. This policy validates that the label `app.kubernetes.io/name` is specified with some value.\n","category":"Best Practices","filters":"validate::Best Practices::1.6.0::Pod, Label","link":"/policies/best-practices/require-labels/require-labels/","policy":"validate","subject":"Pod, Label","title":"Require Labels","version":"1.6.0"},{"body":"Define and use labels that identify semantic attributes of your application or Deployment. A common set of labels allows tools to work collaboratively, describing objects in a common manner that all tools can understand. The recommended labels describe applications in a way that can be queried. This policy validates that the label `app.kubernetes.io/name` is specified with some value.\n","category":"Best Practices in CEL","filters":"validate::Best Practices in CEL::1.11.0::Pod, Label","link":"/policies/best-practices-cel/require-labels/require-labels/","policy":"validate","subject":"Pod, Label","title":"Require Labels in CEL expressions","version":"1.11.0"},{"body":"As application workloads share cluster resources, it is important to limit resources requested and consumed by each Pod. It is recommended to require resource requests and limits per Pod, especially for memory and CPU. If a Namespace level request or limit is specified, defaults will automatically be applied to each Pod based on the LimitRange configuration. This policy validates that all containers have something specified for memory and CPU requests and memory limits.\n","category":"Best Practices, EKS Best Practices","filters":"validate::Best Practices, EKS Best Practices::1.6.0::Pod","link":"/policies/best-practices/require-pod-requests-limits/require-pod-requests-limits/","policy":"validate","subject":"Pod","title":"Require Limits and Requests","version":"1.6.0"},{"body":"As application workloads share cluster resources, it is important to limit resources requested and consumed by each Pod. It is recommended to require resource requests and limits per Pod, especially for memory and CPU. If a Namespace level request or limit is specified, defaults will automatically be applied to each Pod based on the LimitRange configuration. This policy validates that all containers have something specified for memory and CPU requests and memory limits.\n","category":"Best Practices, EKS Best Practices in CEL","filters":"validate::Best Practices, EKS Best Practices in CEL::1.11.0::Pod","link":"/policies/best-practices-cel/require-pod-requests-limits/require-pod-requests-limits/","policy":"validate","subject":"Pod","title":"Require Limits and Requests in CEL expressions","version":"1.11.0"},{"body":"Sidecar proxy injection in Linkerd may be handled at the Namespace level by setting the annotation `linkerd.io/inject` to `enabled`. This policy enforces that all Namespaces contain the annotation `linkerd.io/inject` set to `enabled`.\n","category":"Linkerd","filters":"validate::Linkerd::%!s(\u003cnil\u003e)::Namespace, Annotation","link":"/policies/linkerd/require-linkerd-mesh-injection/require-linkerd-mesh-injection/","policy":"validate","subject":"Namespace, Annotation","title":"Require Linkerd Mesh Injection","version":null},{"body":"Sidecar proxy injection in Linkerd may be handled at the Namespace level by setting the annotation `linkerd.io/inject` to `enabled`. This policy enforces that all Namespaces contain the annotation `linkerd.io/inject` set to `enabled`.\n","category":"Linkerd in CEL","filters":"validate::Linkerd in CEL::1.11.0::Namespace, Annotation","link":"/policies/linkerd-cel/require-linkerd-mesh-injection/require-linkerd-mesh-injection/","policy":"validate","subject":"Namespace, Annotation","title":"Require Linkerd Mesh Injection in CEL expressions","version":"1.11.0"},{"body":"In Linkerd 2.11, a Server resource selects ports on a set of Pods in the same Namespace and is used to deny traffic which then must be authorized later. Ensuring that Linkerd policy is enforced on Pods in the mesh is important to maintaining a secure environment. This policy, requiring Linkerd 2.11+, has two rules designed to check Deployments (exposing ports) and Services to ensure a corresponding Server resource exists first.\n","category":"Linkerd","filters":"validate::Linkerd::%!s(\u003cnil\u003e)::Deployment, Server","link":"/policies/linkerd/require-linkerd-server/require-linkerd-server/","policy":"validate","subject":"Deployment, Server","title":"Require Linkerd Server","version":null},{"body":"Deployments with a single replica cannot be highly available and thus the application may suffer downtime if that one replica goes down. This policy validates that Deployments have more than one replica.\n","category":"Sample","filters":"validate::Sample::1.6.0::Deployment","link":"/policies/other/require-deployments-have-multiple-replicas/require-deployments-have-multiple-replicas/","policy":"validate","subject":"Deployment","title":"Require Multiple Replicas","version":"1.6.0"},{"body":"Deployments with a single replica cannot be highly available and thus the application may suffer downtime if that one replica goes down. This policy validates that Deployments have more than one replica.\n","category":"Sample in CEL","filters":"validate::Sample in CEL::1.11.0::Deployment","link":"/policies/other-cel/require-deployments-have-multiple-replicas/require-deployments-have-multiple-replicas/","policy":"validate","subject":"Deployment","title":"Require Multiple Replicas in CEL expressions","version":"1.11.0"},{"body":"A Namespace is required for a PipelineRun and may not be set to `default`.\n","category":"Tekton","filters":"validate::Tekton::1.7.0::PipelineRun","link":"/policies/tekton/require-tekton-namespace-pipelinerun/require-tekton-namespace-pipelinerun/","policy":"validate","subject":"PipelineRun","title":"Require Namespace for Tekton PipelineRun","version":"1.7.0"},{"body":"NetworkPolicy is used to control Pod-to-Pod communication and is a good practice to ensure only authorized Pods can send/receive traffic. This policy checks incoming Deployments to ensure they have a matching, preexisting NetworkPolicy.\n","category":"Sample","filters":"validate::Sample::1.6.0::Deployment, NetworkPolicy","link":"/policies/other/require-netpol/require-netpol/","policy":"validate","subject":"Deployment, NetworkPolicy","title":"Require NetworkPolicy","version":"1.6.0"},{"body":"Containers should be forbidden from running with a root primary or supplementary GID. This policy ensures the `runAsGroup`, `supplementalGroups`, and `fsGroup` fields are set to a number greater than zero (i.e., non root). A known issue prevents a policy such as this using `anyPattern` from being persisted properly in Kubernetes 1.23.0-1.23.2.\n","category":"Sample, EKS Best Practices","filters":"validate::Sample, EKS Best Practices::1.3.6::Pod","link":"/policies/other/require-non-root-groups/require-non-root-groups/","policy":"validate","subject":"Pod","title":"Require Non-Root Groups","version":"1.3.6"},{"body":"Containers should be forbidden from running with a root primary or supplementary GID. This policy ensures the `runAsGroup`, `supplementalGroups`, and `fsGroup` fields are set to a number greater than zero (i.e., non root). A known issue prevents a policy such as this using `anyPattern` from being persisted properly in Kubernetes 1.23.0-1.23.2.\n","category":"Sample, EKS Best Practices in CEL","filters":"validate::Sample, EKS Best Practices in CEL::1.11.0::Pod","link":"/policies/other-cel/require-non-root-groups/require-non-root-groups/","policy":"validate","subject":"Pod","title":"Require Non-Root Groups in CEL expressions","version":"1.11.0"},{"body":"A Pod may optionally specify a priorityClassName which indicates the scheduling priority relative to others. This requires creation of a PriorityClass object in advance. With this created, a Pod may set this field to that value. In a multi-tenant environment, it is often desired to require this priorityClassName be set to make certain tenant scheduling guarantees. This policy requires that a Pod defines the priorityClassName field with some value.\n","category":"Multi-Tenancy, EKS Best Practices","filters":"validate::Multi-Tenancy, EKS Best Practices::%!s(\u003cnil\u003e)::Pod","link":"/policies/other/require-pod-priorityclassname/require-pod-priorityclassname/","policy":"validate","subject":"Pod","title":"Require Pod priorityClassName","version":null},{"body":"A Pod may optionally specify a priorityClassName which indicates the scheduling priority relative to others. This requires creation of a PriorityClass object in advance. With this created, a Pod may set this field to that value. In a multi-tenant environment, it is often desired to require this priorityClassName be set to make certain tenant scheduling guarantees. This policy requires that a Pod defines the priorityClassName field with some value.\n","category":"Multi-Tenancy, EKS Best Practices in CEL","filters":"validate::Multi-Tenancy, EKS Best Practices in CEL::%!s(\u003cnil\u003e)::Pod","link":"/policies/other-cel/require-pod-priorityclassname/require-pod-priorityclassname/","policy":"validate","subject":"Pod","title":"Require Pod priorityClassName in CEL expressions","version":null},{"body":"Liveness and readiness probes need to be configured to correctly manage a Pod's lifecycle during deployments, restarts, and upgrades. For each Pod, a periodic `livenessProbe` is performed by the kubelet to determine if the Pod's containers are running or need to be restarted. A `readinessProbe` is used by Services and Deployments to determine if the Pod is ready to receive network traffic. This policy validates that all containers have one of livenessProbe, readinessProbe, or startupProbe defined.\n","category":"Best Practices, EKS Best Practices","filters":"validate::Best Practices, EKS Best Practices::%!s(\u003cnil\u003e)::Pod","link":"/policies/best-practices/require-probes/require-probes/","policy":"validate","subject":"Pod","title":"Require Pod Probes","version":null},{"body":"Liveness and readiness probes need to be configured to correctly manage a Pod's lifecycle during deployments, restarts, and upgrades. For each Pod, a periodic `livenessProbe` is performed by the kubelet to determine if the Pod's containers are running or need to be restarted. A `readinessProbe` is used by Services and Deployments to determine if the Pod is ready to receive network traffic. This policy validates that all containers have one of livenessProbe, readinessProbe, or startupProbe defined.\n","category":"Best Practices, EKS Best Practices in CEL","filters":"validate::Best Practices, EKS Best Practices in CEL::1.11.0::Pod","link":"/policies/best-practices-cel/require-probes/require-probes/","policy":"validate","subject":"Pod","title":"Require Pod Probes in CEL expressions","version":"1.11.0"},{"body":"PodDisruptionBudget resources are useful to ensuring minimum availability is maintained at all times. This policy checks all incoming Deployments and StatefulSets to ensure they have a matching, preexisting PodDisruptionBudget. Note: This policy must be run in `enforce` mode to ensure accuracy.\n","category":"Sample, EKS Best Practices","filters":"validate::Sample, EKS Best Practices::1.6.0::Deployment, PodDisruptionBudget","link":"/policies/other/require-pdb/require-pdb/","policy":"validate","subject":"Deployment, PodDisruptionBudget","title":"Require PodDisruptionBudget","version":"1.6.0"},{"body":"Pod Quality of Service (QoS) is a mechanism to ensure Pods receive certain priority guarantees based upon the resources they define. When a Pod has at least one container which defines either requests or limits for either memory or CPU, Kubernetes grants the QoS class as burstable if it does not otherwise qualify for a QoS class of guaranteed. This policy requires that a Pod meet the criteria qualify for a QoS of burstable. This policy is provided with the intention that users will need to control its scope by using exclusions, preconditions, and other policy language mechanisms.\n","category":"Other, Multi-Tenancy","filters":"validate::Other, Multi-Tenancy::%!s(\u003cnil\u003e)::Pod","link":"/policies/other/require-qos-burstable/require-qos-burstable/","policy":"validate","subject":"Pod","title":"Require QoS Burstable","version":null},{"body":"Pod Quality of Service (QoS) is a mechanism to ensure Pods receive certain priority guarantees based upon the resources they define. When a Pod has at least one container which defines either requests or limits for either memory or CPU, Kubernetes grants the QoS class as burstable if it does not otherwise qualify for a QoS class of guaranteed. This policy requires that a Pod meet the criteria qualify for a QoS of burstable. This policy is provided with the intention that users will need to control its scope by using exclusions, preconditions, and other policy language mechanisms.\n","category":"Other, Multi-Tenancy in CEL","filters":"validate::Other, Multi-Tenancy in CEL::%!s(\u003cnil\u003e)::Pod","link":"/policies/other-cel/require-qos-burstable/require-qos-burstable/","policy":"validate","subject":"Pod","title":"Require QoS Burstable in CEL expressions","version":null},{"body":"Pod Quality of Service (QoS) is a mechanism to ensure Pods receive certain priority guarantees based upon the resources they define. When Pods define both requests and limits for both memory and CPU, and the requests and limits are equal to each other, Kubernetes grants the QoS class as guaranteed which allows them to run at a higher priority than others. This policy requires that all containers within a Pod run with this definition resulting in a guaranteed QoS. This policy is provided with the intention that users will need to control its scope by using exclusions, preconditions, and other policy language mechanisms.\n","category":"Other, Multi-Tenancy","filters":"validate::Other, Multi-Tenancy::%!s(\u003cnil\u003e)::Pod","link":"/policies/other/require-qos-guaranteed/require-qos-guaranteed/","policy":"validate","subject":"Pod","title":"Require QoS Guaranteed","version":null},{"body":"Pod Quality of Service (QoS) is a mechanism to ensure Pods receive certain priority guarantees based upon the resources they define. When Pods define both requests and limits for both memory and CPU, and the requests and limits are equal to each other, Kubernetes grants the QoS class as guaranteed which allows them to run at a higher priority than others. This policy requires that all containers within a Pod run with this definition resulting in a guaranteed QoS. This policy is provided with the intention that users will need to control its scope by using exclusions, preconditions, and other policy language mechanisms.\n","category":"Other, Multi-Tenancy in CEL","filters":"validate::Other, Multi-Tenancy in CEL::%!s(\u003cnil\u003e)::Pod","link":"/policies/other-cel/require-qos-guaranteed/require-qos-guaranteed/","policy":"validate","subject":"Pod","title":"Require QoS Guaranteed in CEL expressions","version":null},{"body":"A read-only root file system helps to enforce an immutable infrastructure strategy; the container only needs to write on the mounted volume that persists the state. An immutable root filesystem can also prevent malicious binaries from writing to the host system. This policy validates that containers define a securityContext with `readOnlyRootFilesystem: true`.\n","category":"Best Practices, EKS Best Practices, PSP Migration","filters":"validate::Best Practices, EKS Best Practices, PSP Migration::1.6.0::Pod","link":"/policies/best-practices/require-ro-rootfs/require-ro-rootfs/","policy":"validate","subject":"Pod","title":"Require Read-Only Root Filesystem","version":"1.6.0"},{"body":"A read-only root file system helps to enforce an immutable infrastructure strategy; the container only needs to write on the mounted volume that persists the state. An immutable root filesystem can also prevent malicious binaries from writing to the host system. This policy validates that containers define a securityContext with `readOnlyRootFilesystem: true`.\n","category":"Best Practices, EKS Best Practices, PSP Migration in CEL","filters":"validate::Best Practices, EKS Best Practices, PSP Migration in CEL::1.11.0::Pod","link":"/policies/best-practices-cel/require-ro-rootfs/require-ro-rootfs/","policy":"validate","subject":"Pod","title":"Require Read-Only Root Filesystem in CEL expressions","version":"1.11.0"},{"body":"PodDisruptionBudget resources are useful to ensuring minimum availability is maintained at all times. Achieving a balance between availability and maintainability is important. This policy validates that a PodDisruptionBudget, specified as percentages, allows 50% of the replicas to be out of service in that minAvailable should be no higher than 50% and maxUnavailable should be no lower than 50%.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::PodDisruptionBudget","link":"/policies/other/require-reasonable-pdbs/require-reasonable-pdbs/","policy":"validate","subject":"PodDisruptionBudget","title":"Require Reasonable PodDisruptionBudgets","version":null},{"body":"Existing PodDisruptionBudgets can apply to all future matching Pod controllers. If the minAvailable field is defined for such matching PDBs and the replica count of a new Deployment or StatefulSet is lower than that, then availability could be negatively impacted. This policy specifies that Deployment/StatefulSet replicas exceed the minAvailable value of all matching PodDisruptionBudgets which specify minAvailable as a number and not percentage.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::PodDisruptionBudget, Deployment, StatefulSet","link":"/policies/other/require-replicas-allow-disruption/require-replicas-allow-disruption/","policy":"validate","subject":"PodDisruptionBudget, Deployment, StatefulSet","title":"Require Replicas Allow Disruption","version":null},{"body":"Pods which mount emptyDir volumes may be allowed to potentially overrun the medium backing the emptyDir volume. This sample ensures that any initContainers or containers mounting an emptyDir volume have ephemeral-storage requests and limits set. Policy will be skipped if the volume has already a sizeLimit set.\n","category":"Other","filters":"validate::Other::1.9.0::Pod","link":"/policies/other/require-emptydir-requests-limits/require-emptydir-requests-limits/","policy":"validate","subject":"Pod","title":"Require Requests and Limits for emptyDir","version":"1.9.0"},{"body":"Pods which mount emptyDir volumes may be allowed to potentially overrun the medium backing the emptyDir volume. This sample ensures that any initContainers or containers mounting an emptyDir volume have ephemeral-storage requests and limits set. Policy will be skipped if the volume has already a sizeLimit set.\n","category":"Other in CEL","filters":"validate::Other in CEL::%!s(\u003cnil\u003e)::Pod","link":"/policies/other-cel/require-emptydir-requests-limits/require-emptydir-requests-limits/","policy":"validate","subject":"Pod","title":"Require Requests and Limits for emptyDir in CEL expressions","version":null},{"body":"Containers must be required to run as ContainerUser. This policy ensures that the fields  spec.securityContext.windowsOptions.runAsUserName, spec.containers[*].securityContext.windowsOptions.runAsUserName,  spec.initContainers[*].securityContext.windowsOptions.runAsUserName, and  is either unset or set to ContainerUser.\n","category":"Windows Security","filters":"validate::Windows Security::%!s(\u003cnil\u003e)::Pod","link":"/policies/windows-security/require-run-as-containeruser/require-run-as-containeruser/","policy":"validate","subject":"Pod","title":"Require Run As ContainerUser (Windows)","version":null},{"body":"Containers must be required to run as non-root users. This policy ensures `runAsUser` is either unset or set to a number greater than zero.\n","category":"Pod Security Standards (Restricted)","filters":"validate::Pod Security Standards (Restricted)::%!s(\u003cnil\u003e)::Pod","link":"/policies/pod-security/restricted/require-run-as-non-root-user/require-run-as-non-root-user/","policy":"validate","subject":"Pod","title":"Require Run As Non-Root User","version":null},{"body":"Containers must be required to run as non-root users. This policy ensures `runAsUser` is either unset or set to a number greater than zero.\n","category":"Pod Security Standards (Restricted) in CEL","filters":"validate::Pod Security Standards (Restricted) in CEL::1.11.0::Pod","link":"/policies/pod-security-cel/restricted/require-run-as-non-root-user/require-run-as-non-root-user/","policy":"validate","subject":"Pod","title":"Require Run As Non-Root User in CEL","version":"1.11.0"},{"body":"Containers must be required to run as non-root users. This policy ensures `runAsUser` is either unset or set to a number greater than zero.\n","category":"Pod Security Standards (Restricted) in ValidatingPolicy","filters":"validate::Pod Security Standards (Restricted) in ValidatingPolicy::1.14.0::Pod","link":"/policies/pod-security-vpol/restricted/require-run-as-non-root-user/require-run-as-non-root-user/","policy":"validate","subject":"Pod","title":"Require Run As Non-Root User in ValidatingPolicy","version":"1.14.0"},{"body":"Containers must be required to run as non-root users. This policy ensures `runAsNonRoot` is set to `true`. A known issue prevents a policy such as this using `anyPattern` from being persisted properly in Kubernetes 1.23.0-1.23.2.\n","category":"Pod Security Standards (Restricted)","filters":"validate::Pod Security Standards (Restricted)::%!s(\u003cnil\u003e)::Pod","link":"/policies/pod-security/restricted/require-run-as-nonroot/require-run-as-nonroot/","policy":"validate","subject":"Pod","title":"Require runAsNonRoot","version":null},{"body":"Containers must be required to run as non-root. This policy ensures `runAsNonRoot` is set to true.\n","category":"Pod Security Standards (Restricted) in CEL","filters":"validate::Pod Security Standards (Restricted) in CEL::1.11.0::Pod","link":"/policies/pod-security-cel/restricted/require-run-as-nonroot/require-run-as-nonroot/","policy":"validate","subject":"Pod","title":"Require runAsNonRoot in CEL","version":"1.11.0"},{"body":"Containers must be required to run as non-root. This policy ensures `runAsNonRoot` is set to true.\n","category":"Pod Security Standards (Restricted) in ValidatingPolicy","filters":"validate::Pod Security Standards (Restricted) in ValidatingPolicy::1.14.0::Pod","link":"/policies/pod-security-vpol/restricted/require-run-as-nonroot/require-run-as-nonroot/","policy":"validate","subject":"Pod","title":"Require runAsNonRoot in ValidatingPolicy","version":"1.14.0"},{"body":"A securityContext is required for each TaskRun step.\n","category":"Tekton","filters":"validate::Tekton::1.7.0::TaskRun","link":"/policies/tekton/require-tekton-securitycontext/require-tekton-securitycontext/","policy":"validate","subject":"TaskRun","title":"Require securityContext for Tekton TaskRun","version":"1.7.0"},{"body":"A signed bundle is required\n","category":"Tekton","filters":"verifyImages::Tekton::1.7.0::PipelineRun","link":"/policies/tekton/verify-tekton-pipeline-bundle-signatures/verify-tekton-pipeline-bundle-signatures/","policy":"verifyImages","subject":"PipelineRun","title":"Require Signed Tekton Pipeline","version":"1.7.0"},{"body":"A signed bundle is required.\n","category":"Tekton","filters":"verifyImages::Tekton::1.7.0::TaskRun","link":"/policies/tekton/verify-tekton-taskrun-signatures/verify-tekton-taskrun-signatures/","policy":"verifyImages","subject":"TaskRun","title":"Require Signed Tekton Task","version":"1.7.0"},{"body":"PersistentVolumeClaims (PVCs) and StatefulSets may optionally define a StorageClass to dynamically provision storage. In a multi-tenancy environment where StorageClasses are far more common, it is often better to require storage only be provisioned from these StorageClasses. This policy requires that PVCs and StatefulSets containing volumeClaimTemplates define the storageClassName field with some value.\n","category":"Other, Multi-Tenancy","filters":"validate::Other, Multi-Tenancy::%!s(\u003cnil\u003e)::PersistentVolumeClaim, StatefulSet","link":"/policies/other/require-storageclass/require-storageclass/","policy":"validate","subject":"PersistentVolumeClaim, StatefulSet","title":"Require StorageClass","version":null},{"body":"PersistentVolumeClaims (PVCs) and StatefulSets may optionally define a StorageClass to dynamically provision storage. In a multi-tenancy environment where StorageClasses are far more common, it is often better to require storage only be provisioned from these StorageClasses. This policy requires that PVCs and StatefulSets containing volumeClaimTemplates define the storageClassName field with some value.\n","category":"Other, Multi-Tenancy in CEL","filters":"validate::Other, Multi-Tenancy in CEL::%!s(\u003cnil\u003e)::PersistentVolumeClaim, StatefulSet","link":"/policies/other-cel/require-storageclass/require-storageclass/","policy":"validate","subject":"PersistentVolumeClaim, StatefulSet","title":"Require StorageClass in CEL expressions","version":null},{"body":"PipelineRun and TaskRun resources must be executed from a bundle\n","category":"Tekton","filters":"validate::Tekton::1.6.0::TaskRun, PipelineRun","link":"/policies/tekton/require-tekton-bundle/require-tekton-bundle/","policy":"validate","subject":"TaskRun, PipelineRun","title":"Require Tekton Bundle","version":"1.6.0"},{"body":"PipelineRun and TaskRun resources must be executed from a bundle\n","category":"Tekton in CEL","filters":"validate::Tekton in CEL::1.11.0::TaskRun, PipelineRun","link":"/policies/tekton-cel/require-tekton-bundle/require-tekton-bundle/","policy":"validate","subject":"TaskRun, PipelineRun","title":"Require Tekton Bundle in CEL expressions","version":"1.11.0"},{"body":"HTTP traffic is not encrypted and hence insecure. This policy prevents configuration of OpenShift HTTP routes.\n","category":"OpenShift","filters":"validate::OpenShift::1.6.0::Route","link":"/policies/openshift/check-routes/check-routes/","policy":"validate","subject":"Route","title":"Require TLS routes in OpenShift","version":"1.6.0"},{"body":"HTTP traffic is not encrypted and hence insecure. This policy prevents configuration of OpenShift HTTP routes.\n","category":"OpenShift in CEL expressions","filters":"validate::OpenShift in CEL expressions::1.11.0::Route","link":"/policies/openshift-cel/check-routes/check-routes/","policy":"validate","subject":"Route","title":"Require TLS routes in OpenShift in CEL expressions","version":"1.11.0"},{"body":"ExternalDNS, part of Kubernetes SIGs, triggers the creation of external DNS records in supported providers when the annotation`external-dns.alpha.kubernetes.io/hostname` is present. Like with internal DNS, duplicates must be avoided. This policy requires every such Service have a cluster-unique hostname present in the value of the annotation.\n","category":"Other","filters":"validate::Other::1.6.0::Service","link":"/policies/other/require-unique-external-dns/require-unique-external-dns/","policy":"validate","subject":"Service","title":"Require Unique External DNS Services","version":"1.6.0"},{"body":"An Route host is a URL at which services may be made available externally. In most cases, these hosts should be unique across the cluster to ensure no routing conflicts occur. This policy checks an incoming Route resource to ensure its hosts are unique to the cluster.\n","category":"OpenShift","filters":"validate::OpenShift::1.6.0::Route","link":"/policies/openshift/unique-routes/unique-routes/","policy":"validate","subject":"Route","title":"Require unique host names in OpenShift routes","version":"1.6.0"},{"body":"Services select eligible Pods by way of label matches. Having multiple Service apply based on same labels can cause conflicts and have unintended consequences. This policy ensures that within the same Namespace a Service has a unique set of labels as a selector.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::Service","link":"/policies/other/require-unique-service-selector/require-unique-service-selector/","policy":"validate","subject":"Service","title":"Require Unique Service Selector","version":null},{"body":"Two distinct workloads should not share a UID so that in a multitenant environment, applications  from different projects never run as the same user ID. When using persistent storage,  any files created by applications will also have different ownership in the file system. Running processes for applications as different user IDs means that if a security  vulnerability were ever discovered in the underlying container runtime, and an application  were able to break out of the container to the host, they would not be able to interact  with processes owned by other users, or from other applications, in other projects.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::Pod","link":"/policies/other/require-unique-uid-per-workload/require-unique-uid-per-workload/","policy":"validate","subject":"Pod","title":"Require Unique UID per Workload","version":null},{"body":"Image tags are mutable and the change of an image can result in the same tag. This policy resolves the image digest of each image in a container and replaces the image with the fully resolved reference which includes the digest rather than tag.\n","category":"Other","filters":"mutate::Other::1.6.0::Pod","link":"/policies/other/resolve-image-to-digest/resolve-image-to-digest/","policy":"mutate","subject":"Pod","title":"Resolve Image to Digest","version":"1.6.0"},{"body":"If Secrets are mounted in ways which do not naturally allow updates to be live refreshed it may be necessary to modify a Deployment. This policy watches a Secret and if it changes will write an annotation to one or more target Deployments thus triggering a new rollout and thereby refreshing the referred Secret. It may be necessary to grant additional privileges to the Kyverno ServiceAccount, via one of the existing ClusterRoleBindings or a new one, so it can modify Deployments.\n","category":"Other","filters":"mutate::Other::1.7.0::Deployment","link":"/policies/other/restart-deployment-on-secret-change/restart-deployment-on-secret-change/","policy":"mutate","subject":"Deployment","title":"Restart Deployment On Secret Change","version":"1.7.0"},{"body":"Adding capabilities is a way for containers in a Pod to request higher levels of ability than those with which they may be provisioned. Many capabilities allow system-level control and should be prevented. Pod Security Policies (PSP) allowed a list of \"good\" capabilities to be added. This policy checks ephemeralContainers, initContainers, and containers to ensure the only capabilities that can be added are either NET_BIND_SERVICE or CAP_CHOWN.\n","category":"PSP Migration","filters":"validate::PSP Migration::1.6.0::Pod","link":"/policies/psp-migration/restrict-adding-capabilities/restrict-adding-capabilities/","policy":"validate","subject":"Pod","title":"Restrict Adding Capabilities","version":"1.6.0"},{"body":"Adding capabilities is a way for containers in a Pod to request higher levels of ability than those with which they may be provisioned. Many capabilities allow system-level control and should be prevented. Pod Security Policies (PSP) allowed a list of \"good\" capabilities to be added. This policy checks ephemeralContainers, initContainers, and containers to ensure the only capabilities that can be added are either NET_BIND_SERVICE or CAP_CHOWN.\n","category":"PSP Migration in CEL","filters":"validate::PSP Migration in CEL::1.11.0::Pod","link":"/policies/psp-migration-cel/restrict-adding-capabilities/restrict-adding-capabilities/","policy":"validate","subject":"Pod","title":"Restrict Adding Capabilities in CEL expressions","version":"1.11.0"},{"body":"Some annotations control functionality driven by other cluster-wide tools and are not normally set by some class of users. This policy prevents the use of an annotation beginning with `fluxcd.io/`. This can be useful to ensure users either don't set reserved annotations or to force them to use a newer version of an annotation.\n","category":"Sample","filters":"validate::Sample::1.6.0::Pod, Annotation","link":"/policies/other/restrict-annotations/restrict-annotations/","policy":"validate","subject":"Pod, Annotation","title":"Restrict Annotations","version":"1.6.0"},{"body":"Some annotations control functionality driven by other cluster-wide tools and are not normally set by some class of users. This policy prevents the use of an annotation beginning with `fluxcd.io/`. This can be useful to ensure users either don't set reserved annotations or to force them to use a newer version of an annotation.\n","category":"Sample in CEL","filters":"validate::Sample in CEL::1.11.0::Pod, Annotation","link":"/policies/other-cel/restrict-annotations/restrict-annotations/","policy":"validate","subject":"Pod, Annotation","title":"Restrict Annotations in CEL expressions","version":"1.11.0"},{"body":"On supported hosts, the 'runtime/default' AppArmor profile is applied by default. The default policy should prevent overriding or disabling the policy, or restrict overrides to an allowed set of profiles. This policy ensures Pods do not specify any other AppArmor profiles than `runtime/default` or `localhost/*`.\n","category":"Pod Security Standards (Baseline)","filters":"validate::Pod Security Standards (Baseline)::1.3.0::Pod, Annotation","link":"/policies/pod-security/baseline/restrict-apparmor-profiles/restrict-apparmor-profiles/","policy":"validate","subject":"Pod, Annotation","title":"Restrict AppArmor","version":"1.3.0"},{"body":"Kubernetes automatically mounts ServiceAccount credentials in each Pod. The ServiceAccount may be assigned roles allowing Pods to access API resources. Blocking this ability is an extension of the least privilege best practice and should be followed if Pods do not need to speak to the API server to function. This policy ensures that mounting of these ServiceAccount tokens is blocked.\n","category":"Sample, EKS Best Practices","filters":"validate::Sample, EKS Best Practices::1.6.0::Pod,ServiceAccount","link":"/policies/other/restrict-automount-sa-token/restrict-automount-sa-token/","policy":"validate","subject":"Pod,ServiceAccount","title":"Restrict Auto-Mount of Service Account Tokens","version":"1.6.0"},{"body":"Kubernetes automatically mounts ServiceAccount credentials in each ServiceAccount. The ServiceAccount may be assigned roles allowing Pods to access API resources. Blocking this ability is an extension of the least privilege best practice and should be followed if Pods do not need to speak to the API server to function. This policy ensures that mounting of these ServiceAccount tokens is blocked.      \n","category":"Security","filters":"validate::Security::%!s(\u003cnil\u003e)::Secret,ServiceAccount","link":"/policies/other/restrict-sa-automount-sa-token/restrict-sa-automount-sa-token/","policy":"validate","subject":"Secret,ServiceAccount","title":"Restrict Auto-Mount of Service Account Tokens in Service Account","version":null},{"body":"Kubernetes automatically mounts ServiceAccount credentials in each ServiceAccount. The ServiceAccount may be assigned roles allowing Pods to access API resources. Blocking this ability is an extension of the least privilege best practice and should be followed if Pods do not need to speak to the API server to function. This policy ensures that mounting of these ServiceAccount tokens is blocked.      \n","category":"Security in CEL","filters":"validate::Security in CEL::%!s(\u003cnil\u003e)::Secret,ServiceAccount","link":"/policies/other-cel/restrict-sa-automount-sa-token/restrict-sa-automount-sa-token/","policy":"validate","subject":"Secret,ServiceAccount","title":"Restrict Auto-Mount of Service Account Tokens in Service Account in CEL expressions","version":null},{"body":"Certain system groups exist in Kubernetes which grant permissions that are used for certain system-level functions yet typically never appropriate for other users. This policy prevents creating bindings to some of these groups including system:anonymous, system:unauthenticated, and system:masters.\n","category":"Security, EKS Best Practices","filters":"validate::Security, EKS Best Practices::1.6.0::RoleBinding, ClusterRoleBinding, RBAC","link":"/policies/other/restrict-binding-system-groups/restrict-binding-system-groups/","policy":"validate","subject":"RoleBinding, ClusterRoleBinding, RBAC","title":"Restrict Binding System Groups","version":"1.6.0"},{"body":"Certain system groups exist in Kubernetes which grant permissions that are used for certain system-level functions yet typically never appropriate for other users. This policy prevents creating bindings to some of these groups including system:anonymous, system:unauthenticated, and system:masters.\n","category":"Security, EKS Best Practices in CEL","filters":"validate::Security, EKS Best Practices in CEL::1.11.0::RoleBinding, ClusterRoleBinding, RBAC","link":"/policies/other-cel/restrict-binding-system-groups/restrict-binding-system-groups/","policy":"validate","subject":"RoleBinding, ClusterRoleBinding, RBAC","title":"Restrict Binding System Groups in CEL expressions","version":"1.11.0"},{"body":"The cluster-admin ClusterRole allows any action to be performed on any resource in the cluster and its granting should be heavily restricted. This policy prevents binding to the cluster-admin ClusterRole in RoleBinding or ClusterRoleBinding resources.\n","category":"Security","filters":"validate::Security::1.6.0::RoleBinding, ClusterRoleBinding, RBAC","link":"/policies/other/restrict-binding-clusteradmin/restrict-binding-clusteradmin/","policy":"validate","subject":"RoleBinding, ClusterRoleBinding, RBAC","title":"Restrict Binding to Cluster-Admin","version":"1.6.0"},{"body":"The cluster-admin ClusterRole allows any action to be performed on any resource in the cluster and its granting should be heavily restricted. This policy prevents binding to the cluster-admin ClusterRole in RoleBinding or ClusterRoleBinding resources.\n","category":"Security in CEL","filters":"validate::Security in CEL::1.11.0::RoleBinding, ClusterRoleBinding, RBAC","link":"/policies/other-cel/restrict-binding-clusteradmin/restrict-binding-clusteradmin/","policy":"validate","subject":"RoleBinding, ClusterRoleBinding, RBAC","title":"Restrict Binding to Cluster-Admin in CEL expressions","version":"1.11.0"},{"body":"ClusterRoles that grant permissions to approve CertificateSigningRequests should be minimized to reduce powerful identities in the cluster. Approving CertificateSigningRequests allows one to issue new credentials for any user or group. As such, ClusterRoles that grant permissions to approve CertificateSigningRequests are granting cluster admin privileges. Minimize such ClusterRoles to limit the number of powerful credentials that if compromised could take over the entire cluster. For more information, refer to https://docs.prismacloud.io/en/enterprise-edition/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-clusterroles-that-grant-permissions-to-approve-certificatesigningrequests-are-minimized.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::ClusterRole","link":"/policies/other/restrict-clusterrole-csr/restrict-clusterrole-csr/","policy":"validate","subject":"ClusterRole","title":"Restrict Cluster Role CSR","version":null},{"body":"ClusterRoles that grant write permissions over admission webhook should be minimized to reduce powerful identities in the cluster. This policy checks to ensure write permissions are not provided to admission webhooks.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::ClusterRole","link":"/policies/other/restrict-clusterrole-mutating-validating-admission-webhooks/restrict-clusterrole-mutating-validating-admission-webhooks/","policy":"validate","subject":"ClusterRole","title":"Restrict Clusterrole for Mutating and Validating Admission Webhooks","version":null},{"body":"A ClusterRole with nodes/proxy resource access allows a user to perform anything the kubelet API allows. It also allows users to bypass the API server and talk directly to the kubelet potentially circumventing audits and admission controllers. See https://blog.aquasec.com/privilege-escalation-kubernetes-rbac for more info. This policy prevents the creation of a ClusterRole if it contains the nodes/proxy resource. \n","category":"Sample","filters":"validate::Sample::1.6.0::ClusterRole, RBAC","link":"/policies/other/restrict-clusterrole-nodesproxy/restrict-clusterrole-nodesproxy/","policy":"validate","subject":"ClusterRole, RBAC","title":"Restrict ClusterRole with Nodes Proxy","version":"1.6.0"},{"body":"A ClusterRole with nodes/proxy resource access allows a user to perform anything the kubelet API allows. It also allows users to bypass the API server and talk directly to the kubelet potentially circumventing audits and admission controllers. See https://blog.aquasec.com/privilege-escalation-kubernetes-rbac for more info. This policy prevents the creation of a ClusterRole if it contains the nodes/proxy resource. \n","category":"Sample in CEL","filters":"validate::Sample in CEL::1.11.0::ClusterRole, RBAC","link":"/policies/other-cel/restrict-clusterrole-nodesproxy/restrict-clusterrole-nodesproxy/","policy":"validate","subject":"ClusterRole, RBAC","title":"Restrict ClusterRole with Nodes Proxy in CEL expressions","version":"1.11.0"},{"body":"Scheduling non-system Pods to control plane nodes (which run kubelet) is often undesirable because it takes away resources from the control plane components and can represent a possible security threat vector. This policy prevents users from setting a toleration in a Pod spec which allows running on control plane nodes with the taint key `node-role.kubernetes.io/master`.\n","category":"Sample","filters":"validate::Sample::1.6.0::Pod","link":"/policies/other/restrict-controlplane-scheduling/restrict-controlplane-scheduling/","policy":"validate","subject":"Pod","title":"Restrict control plane scheduling","version":"1.6.0"},{"body":"Scheduling non-system Pods to control plane nodes (which run kubelet) is often undesirable because it takes away resources from the control plane components and can represent a possible security threat vector. This policy prevents users from setting a toleration in a Pod spec which allows running on control plane nodes with the taint key `node-role.kubernetes.io/master`.\n","category":"Sample in CEL","filters":"validate::Sample in CEL::1.11.0::Pod","link":"/policies/other-cel/restrict-controlplane-scheduling/restrict-controlplane-scheduling/","policy":"validate","subject":"Pod","title":"Restrict control plane scheduling in CEL expressions","version":"1.11.0"},{"body":"Legacy k8s.gcr.io container image registry will be frozen in early April 2023 k8s.gcr.io image registry will be frozen from the 3rd of April 2023.   Images for Kubernetes 1.27 will not be available in the k8s.gcr.io image registry. Please read our announcement for more details. https://kubernetes.io/blog/2023/02/06/k8s-gcr-io-freeze-announcement/     \n","category":"Best Practices, EKS Best Practices","filters":"validate::Best Practices, EKS Best Practices::1.9.0::Pod","link":"/policies/other/restrict-deprecated-registry/restrict-deprecated-registry/","policy":"validate","subject":"Pod","title":"Restrict Deprecated Registry","version":"1.9.0"},{"body":"Legacy k8s.gcr.io container image registry will be frozen in early April 2023 k8s.gcr.io image registry will be frozen from the 3rd of April 2023.   Images for Kubernetes 1.27 will not be available in the k8s.gcr.io image registry. Please read our announcement for more details. https://kubernetes.io/blog/2023/02/06/k8s-gcr-io-freeze-announcement/     \n","category":"Best Practices, EKS Best Practices in CEL","filters":"validate::Best Practices, EKS Best Practices in CEL::1.11.0::Pod","link":"/policies/other-cel/restrict-deprecated-registry/restrict-deprecated-registry/","policy":"validate","subject":"Pod","title":"Restrict Deprecated Registry in CEL expressions","version":"1.11.0"},{"body":"Clusters not initially installed with Kubernetes 1.22 may be vulnerable to an issue defined in CVE-2021-25740 which could enable users to send network traffic to locations they would otherwise not have access to via a confused deputy attack. This was due to the system:aggregate-to-edit ClusterRole having edit permission of Endpoints. This policy, intended to run in background mode, checks if your cluster is vulnerable to CVE-2021-25740 by ensuring the system:aggregate-to-edit ClusterRole does not have the edit permission of Endpoints.\n","category":"Security","filters":"validate::Security::%!s(\u003cnil\u003e)::ClusterRole","link":"/policies/other/restrict-edit-for-endpoints/restrict-edit-for-endpoints/","policy":"validate","subject":"ClusterRole","title":"Restrict Edit for Endpoints CVE-2021-25740","version":null},{"body":"Clusters not initially installed with Kubernetes 1.22 may be vulnerable to an issue defined in CVE-2021-25740 which could enable users to send network traffic to locations they would otherwise not have access to via a confused deputy attack. This was due to the system:aggregate-to-edit ClusterRole having edit permission of Endpoints. This policy, intended to run in background mode, checks if your cluster is vulnerable to CVE-2021-25740 by ensuring the system:aggregate-to-edit ClusterRole does not have the edit permission of Endpoints.\n","category":"Security in CEL","filters":"validate::Security in CEL::%!s(\u003cnil\u003e)::ClusterRole","link":"/policies/other-cel/restrict-edit-for-endpoints/restrict-edit-for-endpoints/","policy":"validate","subject":"ClusterRole","title":"Restrict Edit for Endpoints CVE-2021-25740 in CEL expressions","version":null},{"body":"The verbs `impersonate`, `bind`, and `escalate` may all potentially lead to privilege escalation and should be tightly controlled. This policy prevents use of these verbs in Role or ClusterRole resources.\n","category":"Security","filters":"validate::Security::1.6.0::Role, ClusterRole, RBAC","link":"/policies/other/restrict-escalation-verbs-roles/restrict-escalation-verbs-roles/","policy":"validate","subject":"Role, ClusterRole, RBAC","title":"Restrict Escalation Verbs in Roles","version":"1.6.0"},{"body":"The verbs `impersonate`, `bind`, and `escalate` may all potentially lead to privilege escalation and should be tightly controlled. This policy prevents use of these verbs in Role or ClusterRole resources.\n","category":"Security in CEL","filters":"validate::Security in CEL::1.11.0::Role, ClusterRole, RBAC","link":"/policies/other-cel/restrict-escalation-verbs-roles/restrict-escalation-verbs-roles/","policy":"validate","subject":"Role, ClusterRole, RBAC","title":"Restrict Escalation Verbs in Roles in CEL expressions","version":"1.11.0"},{"body":"Service externalIPs can be used for a MITM attack (CVE-2020-8554). Restrict externalIPs or limit to a known set of addresses. See: https://github.com/kyverno/kyverno/issues/1367. This policy validates that the `externalIPs` field is not set on a Service.\n","category":"Best Practices","filters":"validate::Best Practices::1.6.0::Service","link":"/policies/best-practices/restrict-service-external-ips/restrict-service-external-ips/","policy":"validate","subject":"Service","title":"Restrict External IPs","version":"1.6.0"},{"body":"Service externalIPs can be used for a MITM attack (CVE-2020-8554). Restrict externalIPs or limit to a known set of addresses. See: https://github.com/kyverno/kyverno/issues/1367. This policy validates that the `externalIPs` field is not set on a Service.\n","category":"Best Practices in CEL","filters":"validate::Best Practices in CEL::1.11.0::Service","link":"/policies/best-practices-cel/restrict-service-external-ips/restrict-service-external-ips/","policy":"validate","subject":"Service","title":"Restrict External IPs in CEL expressions","version":"1.11.0"},{"body":"Images from unknown, public registries can be of dubious quality and may not be scanned and secured, representing a high degree of risk. Requiring use of known, approved registries helps reduce threat exposure by ensuring image pulls only come from them. This policy validates that container images only originate from the registry `eu.foo.io` or `bar.io`. Use of this policy requires customization to define your allowable registries.\n","category":"Best Practices, EKS Best Practices","filters":"validate::Best Practices, EKS Best Practices::1.6.0::Pod","link":"/policies/best-practices/restrict-image-registries/restrict-image-registries/","policy":"validate","subject":"Pod","title":"Restrict Image Registries","version":"1.6.0"},{"body":"Images from unknown, public registries can be of dubious quality and may not be scanned and secured, representing a high degree of risk. Requiring use of known, approved registries helps reduce threat exposure by ensuring image pulls only come from them. This policy validates that container images only originate from the registry `eu.foo.io` or `bar.io`. Use of this policy requires customization to define your allowable registries.\n","category":"Best Practices, EKS Best Practices in CEL","filters":"validate::Best Practices, EKS Best Practices in CEL::1.11.0::Pod","link":"/policies/best-practices-cel/restrict-image-registries/restrict-image-registries/","policy":"validate","subject":"Pod","title":"Restrict Image Registries in CEL expressions","version":"1.11.0"},{"body":"Ingress classes should only be allowed which match up to deployed Ingress controllers in the cluster. Allowing users to define classes which cannot be satisfied by a deployed Ingress controller can result in either no or undesired functionality. This policy checks Ingress resources and only allows those which define `HAProxy` or `nginx` in the respective annotation. This annotation has largely been replaced as of Kubernetes 1.18 with the IngressClass resource.\n","category":"Sample","filters":"validate::Sample::1.6.0::Ingress","link":"/policies/other/restrict-ingress-classes/restrict-ingress-classes/","policy":"validate","subject":"Ingress","title":"Restrict Ingress Classes","version":"1.6.0"},{"body":"Ingress classes should only be allowed which match up to deployed Ingress controllers in the cluster. Allowing users to define classes which cannot be satisfied by a deployed Ingress controller can result in either no or undesired functionality. This policy checks Ingress resources and only allows those which define `HAProxy` or `nginx` in the respective annotation. This annotation has largely been replaced as of Kubernetes 1.18 with the IngressClass resource.\n","category":"Sample in CEL","filters":"validate::Sample in CEL::1.11.0::Ingress","link":"/policies/other-cel/restrict-ingress-classes/restrict-ingress-classes/","policy":"validate","subject":"Ingress","title":"Restrict Ingress Classes in CEL expressions","version":"1.11.0"},{"body":"An Ingress with no rules sends all traffic to a single default backend. The defaultBackend is conventionally a configuration option of the Ingress controller and is not specified in your Ingress resources. If none of the hosts or paths match the HTTP request in the Ingress objects, the traffic is routed to your default backend. In a multi-tenant environment, you want users to use explicit hosts, they should not be able to overwrite the global default backend service. This policy prohibits the use of the defaultBackend field.\n","category":"Best Practices","filters":"validate::Best Practices::1.6.0::Ingress","link":"/policies/other/restrict-ingress-defaultbackend/restrict-ingress-defaultbackend/","policy":"validate","subject":"Ingress","title":"Restrict Ingress defaultBackend","version":"1.6.0"},{"body":"An Ingress with no rules sends all traffic to a single default backend. The defaultBackend is conventionally a configuration option of the Ingress controller and is not specified in your Ingress resources. If none of the hosts or paths match the HTTP request in the Ingress objects, the traffic is routed to your default backend. In a multi-tenant environment, you want users to use explicit hosts, they should not be able to overwrite the global default backend service. This policy prohibits the use of the defaultBackend field.\n","category":"Best Practices in CEL","filters":"validate::Best Practices in CEL::1.11.0::Ingress","link":"/policies/other-cel/restrict-ingress-defaultbackend/restrict-ingress-defaultbackend/","policy":"validate","subject":"Ingress","title":"Restrict Ingress defaultBackend in CEL expressions","version":"1.11.0"},{"body":"Ingress hosts optionally accept a wildcard as an alternative to precise matching. In some cases, this may be too permissive as it would direct unintended traffic to the given Ingress resource. This policy enforces that any Ingress host does not contain a wildcard character.\n","category":"Other","filters":"validate::Other::1.6.0::Ingress","link":"/policies/other/restrict-ingress-wildcard/restrict-ingress-wildcard/","policy":"validate","subject":"Ingress","title":"Restrict Ingress Host with Wildcards","version":"1.6.0"},{"body":"Ingress hosts optionally accept a wildcard as an alternative to precise matching. In some cases, this may be too permissive as it would direct unintended traffic to the given Ingress resource. This policy enforces that any Ingress host does not contain a wildcard character.\n","category":"Other in CEL","filters":"validate::Other in CEL::1.11.0::Ingress","link":"/policies/other-cel/restrict-ingress-wildcard/restrict-ingress-wildcard/","policy":"validate","subject":"Ingress","title":"Restrict Ingress Host with Wildcards in CEL expressions","version":"1.11.0"},{"body":"Certificates for trusted domains should always be steered to a controlled issuer to ensure the chain of trust is appropriate for that application. Users may otherwise be able to create their own issuers and sign certificates for other domains. This policy ensures that a certificate request for a specific domain uses a designated ClusterIssuer.\n","category":"Cert-Manager","filters":"validate::Cert-Manager::%!s(\u003cnil\u003e)::Certificate","link":"/policies/cert-manager/restrict-issuer/restrict-issuer/","policy":"validate","subject":"Certificate","title":"Restrict issuer","version":null},{"body":"Jobs can be created directly and indirectly via a CronJob controller. In some cases, users may want to only allow Jobs if they are created via a CronJob. This policy restricts Jobs so they may only be created by a CronJob.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::Job","link":"/policies/other/restrict-jobs/restrict-jobs/","policy":"validate","subject":"Job","title":"Restrict Jobs","version":null},{"body":"Jobs can be created directly and indirectly via a CronJob controller. In some cases, users may want to only allow Jobs if they are created via a CronJob. This policy restricts Jobs so they may only be created by a CronJob.\n","category":"Other in CEL","filters":"validate::Other in CEL::%!s(\u003cnil\u003e)::Job","link":"/policies/other-cel/restrict-jobs/restrict-jobs/","policy":"validate","subject":"Job","title":"Restrict Jobs in CEL expressions","version":null},{"body":"By default, all pods in a Kubernetes cluster are allowed to communicate with each other, and all network traffic is unencrypted. It is recommended to not use an empty podSelector in order to more closely control the necessary traffic flows. This policy requires that all NetworkPolicies other than that of `default-deny` not use an empty podSelector.\n","category":"Other, Multi-Tenancy","filters":"validate::Other, Multi-Tenancy::%!s(\u003cnil\u003e)::NetworkPolicy","link":"/policies/other/restrict-networkpolicy-empty-podselector/restrict-networkpolicy-empty-podselector/","policy":"validate","subject":"NetworkPolicy","title":"Restrict NetworkPolicy with Empty podSelector","version":null},{"body":"By default, all pods in a Kubernetes cluster are allowed to communicate with each other, and all network traffic is unencrypted. It is recommended to not use an empty podSelector in order to more closely control the necessary traffic flows. This policy requires that all NetworkPolicies other than that of `default-deny` not use an empty podSelector.\n","category":"Other, Multi-Tenancy in CEL","filters":"validate::Other, Multi-Tenancy in CEL::1.11.0::NetworkPolicy","link":"/policies/other-cel/restrict-networkpolicy-empty-podselector/restrict-networkpolicy-empty-podselector/","policy":"validate","subject":"NetworkPolicy","title":"Restrict NetworkPolicy with Empty podSelector in CEL expressions","version":"1.11.0"},{"body":"This policy mitigates CVE-2021-25746 by restricting `metadata.annotations` to safe values. See: https://github.com/kubernetes/ingress-nginx/blame/main/internal/ingress/inspector/rules.go. This issue has been fixed in NGINX Ingress v1.2.0. For NGINX Ingress version 1.0.5+ the  \"annotation-value-word-blocklist\" configuration setting is also recommended.  Please refer to the CVE for details. \n","category":"Security, NGINX Ingress","filters":"validate::Security, NGINX Ingress::1.6.0::Ingress","link":"/policies/nginx-ingress/restrict-annotations/restrict-annotations/","policy":"validate","subject":"Ingress","title":"Restrict NGINX Ingress annotation values","version":"1.6.0"},{"body":"This policy mitigates CVE-2021-25746 by restricting `metadata.annotations` to safe values. See: https://github.com/kubernetes/ingress-nginx/blame/main/internal/ingress/inspector/rules.go. This issue has been fixed in NGINX Ingress v1.2.0. For NGINX Ingress version 1.0.5+ the  \"annotation-value-word-blocklist\" configuration setting is also recommended.  Please refer to the CVE for details. \n","category":"Security, NGINX Ingress in CEL","filters":"validate::Security, NGINX Ingress in CEL::1.11.0::Ingress","link":"/policies/nginx-ingress-cel/restrict-annotations/restrict-annotations/","policy":"validate","subject":"Ingress","title":"Restrict NGINX Ingress annotation values in CEL expressions","version":"1.11.0"},{"body":"This policy mitigates CVE-2021-25745 by restricting `spec.rules[].http.paths[].path` to safe values. Additional paths can be added as required. This issue has been fixed in NGINX Ingress v1.2.0.  Please refer to the CVE for details.\n","category":"Security, NGINX Ingress","filters":"validate::Security, NGINX Ingress::1.6.0::Ingress","link":"/policies/nginx-ingress/restrict-ingress-paths/restrict-ingress-paths/","policy":"validate","subject":"Ingress","title":"Restrict NGINX Ingress path values","version":"1.6.0"},{"body":"This policy mitigates CVE-2021-25745 by restricting `spec.rules[].http.paths[].path` to safe values. Additional paths can be added as required. This issue has been fixed in NGINX Ingress v1.2.0.  Please refer to the CVE for details.\n","category":"Security, NGINX Ingress in CEL","filters":"validate::Security, NGINX Ingress in CEL::1.11.0::Ingress","link":"/policies/nginx-ingress-cel/restrict-ingress-paths/restrict-ingress-paths/","policy":"validate","subject":"Ingress","title":"Restrict NGINX Ingress path values in CEL expressions","version":"1.11.0"},{"body":"Pods may use several mechanisms to prefer scheduling on a set of nodes, and nodeAffinity is one of them. nodeAffinity uses expressions to select eligible nodes for scheduling decisions and may override intended placement options by cluster administrators. This policy ensures that nodeAffinity is not used in a Pod spec.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::Pod","link":"/policies/other/restrict-node-affinity/restrict-node-affinity/","policy":"validate","subject":"Pod","title":"Restrict Node Affinity","version":null},{"body":"Pods may use several mechanisms to prefer scheduling on a set of nodes, and nodeAffinity is one of them. nodeAffinity uses expressions to select eligible nodes for scheduling decisions and may override intended placement options by cluster administrators. This policy ensures that nodeAffinity is not used in a Pod spec.\n","category":"Other in CEL","filters":"validate::Other in CEL::%!s(\u003cnil\u003e)::Pod","link":"/policies/other-cel/restrict-node-affinity/restrict-node-affinity/","policy":"validate","subject":"Pod","title":"Restrict Node Affinity in CEL expressions","version":null},{"body":"Node labels are critical pieces of metadata upon which many other applications and logic may depend and should not be altered or removed by regular users. This policy prevents changes or deletions to a label called `foo` on cluster Nodes. Use of this policy requires removal of the Node resource filter in the Kyverno ConfigMap ([Node,*,*]). Due to Kubernetes CVE-2021-25735, this policy requires, at minimum, one of the following versions of Kubernetes: v1.18.18, v1.19.10, v1.20.6, or v1.21.0.\n","category":"Sample","filters":"validate::Sample::1.6.0::Node, Label","link":"/policies/other/restrict-node-label-changes/restrict-node-label-changes/","policy":"validate","subject":"Node, Label","title":"Restrict node label changes","version":"1.6.0"},{"body":"Node labels are critical pieces of metadata upon which many other applications and logic may depend and should not be altered or removed by regular users. Many cloud providers also use Node labels to signal specific functions to applications. This policy prevents setting of a new label called `foo` on cluster Nodes. Use of this policy requires removal of the Node resource filter in the Kyverno ConfigMap ([Node,*,*]). Due to Kubernetes CVE-2021-25735, this policy requires, at minimum, one of the following versions of Kubernetes: v1.18.18, v1.19.10, v1.20.6, or v1.21.0.\n","category":"Sample","filters":"validate::Sample::1.6.0::Node, Label","link":"/policies/other/restrict-node-label-creation/restrict-node-label-creation/","policy":"validate","subject":"Node, Label","title":"Restrict node label creation","version":"1.6.0"},{"body":"Node labels are critical pieces of metadata upon which many other applications and logic may depend and should not be altered or removed by regular users. Many cloud providers also use Node labels to signal specific functions to applications. This policy prevents setting of a new label called `foo` on cluster Nodes. Use of this policy requires removal of the Node resource filter in the Kyverno ConfigMap ([Node,*,*]). Due to Kubernetes CVE-2021-25735, this policy requires, at minimum, one of the following versions of Kubernetes: v1.18.18, v1.19.10, v1.20.6, or v1.21.0.\n","category":"Sample in CEL","filters":"validate::Sample in CEL::%!s(\u003cnil\u003e)::Node, Label","link":"/policies/other-cel/restrict-node-label-creation/restrict-node-label-creation/","policy":"validate","subject":"Node, Label","title":"Restrict node label creation in CEL expressions","version":null},{"body":"The Kubernetes scheduler uses complex logic to determine the optimal placement for new Pods. Users who have access to set certain fields in a Pod spec may sidestep this logic which in many cases is undesirable. This policy prevents users from targeting specific Nodes for scheduling of Pods by prohibiting the use of the `nodeSelector` and `nodeName` fields. Note that this policy is only designed to work on initial creation and not in background mode.\n","category":"Sample","filters":"validate::Sample::1.6.0::Pod","link":"/policies/other/restrict-node-selection/restrict-node-selection/","policy":"validate","subject":"Pod","title":"Restrict node selection","version":"1.6.0"},{"body":"ServiceAccounts which have the ability to edit/patch workloads which they created may potentially use that privilege to update to a different ServiceAccount with higher privileges. This policy, intended to be run in `enforce` mode, blocks updates to Pod controllers if those updates modify the serviceAccountName field. Updates to Pods directly for this field are not possible as it is immutable once set.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::Pod","link":"/policies/other/restrict-pod-controller-serviceaccount-updates/restrict-pod-controller-serviceaccount-updates/","policy":"validate","subject":"Pod","title":"Restrict Pod Controller ServiceAccount Updates","version":null},{"body":"ServiceAccounts which have the ability to edit/patch workloads which they created may potentially use that privilege to update to a different ServiceAccount with higher privileges. This policy, intended to be run in `enforce` mode, blocks updates to Pod controllers if those updates modify the serviceAccountName field. Updates to Pods directly for this field are not possible as it is immutable once set.\n","category":"Other in CEL","filters":"validate::Other in CEL::%!s(\u003cnil\u003e)::Pod","link":"/policies/other-cel/restrict-pod-controller-serviceaccount-updates/restrict-pod-controller-serviceaccount-updates/","policy":"validate","subject":"Pod","title":"Restrict Pod Controller ServiceAccount Updates in CEL Expressions","version":null},{"body":"Sometimes Kubernetes Nodes may have a maximum number of Pods they can accommodate due to resources outside CPU and memory such as licensing, or in some development cases. This policy restricts Pod count on a Node named `minikube` to be no more than 10.\n","category":"Sample","filters":"validate::Sample::1.6.0::Pod","link":"/policies/other/restrict-pod-count-per-node/restrict-pod-count-per-node/","policy":"validate","subject":"Pod","title":"Restrict Pod Count per Node","version":"1.6.0"},{"body":"The runtimeClass field of a Pod spec defines which container engine runtime should be used. In the previous Pod Security Policy controller, defining restrictions on which classes were allowed was permitted. Limiting runtime classes to only those which have been defined can prevent unintended running states or Pods which may not come online. This policy restricts the runtimeClass field to the values `prodclass` or `expclass`.\n","category":"PSP Migration","filters":"validate::PSP Migration::%!s(\u003cnil\u003e)::Pod","link":"/policies/psp-migration/restrict-runtimeclassname/restrict-runtimeclassname/","policy":"validate","subject":"Pod","title":"Restrict runtimeClass","version":null},{"body":"The runtimeClass field of a Pod spec defines which container engine runtime should be used. In the previous Pod Security Policy controller, defining restrictions on which classes were allowed was permitted. Limiting runtime classes to only those which have been defined can prevent unintended running states or Pods which may not come online. This policy restricts the runtimeClass field to the values `prodclass` or `expclass`.\n","category":"PSP Migration in CEL","filters":"validate::PSP Migration in CEL::%!s(\u003cnil\u003e)::Pod","link":"/policies/psp-migration-cel/restrict-runtimeclassname/restrict-runtimeclassname/","policy":"validate","subject":"Pod","title":"Restrict runtimeClass in CEL expressions","version":null},{"body":"Pod controllers such as Deployments which implement replicas and permit the scale action use a `/scale` subresource to control this behavior. In addition to checks for creations of such controllers that their replica is in a certain shape, the scale operation and subresource needs to be accounted for as well. This policy, operable beginning in Kyverno 1.9, is a collection of rules which can be used to limit the replica count both upon creation of a Deployment and when a scale operation is performed.\n","category":"Other","filters":"validate::Other::1.9.0::Deployment","link":"/policies/other/restrict-scale/restrict-scale/","policy":"validate","subject":"Deployment","title":"Restrict Scale","version":"1.9.0"},{"body":"The seccomp profile must not be explicitly set to Unconfined. This policy,  requiring Kubernetes v1.19 or later, ensures that seccomp is unset or  set to `RuntimeDefault` or `Localhost`.\n","category":"Pod Security Standards (Baseline)","filters":"validate::Pod Security Standards (Baseline)::%!s(\u003cnil\u003e)::Pod","link":"/policies/pod-security/baseline/restrict-seccomp/restrict-seccomp/","policy":"validate","subject":"Pod","title":"Restrict Seccomp","version":null},{"body":"The seccomp profile in the Restricted group must not be explicitly set to Unconfined but additionally must also not allow an unset value. This policy,  requiring Kubernetes v1.19 or later, ensures that seccomp is  set to `RuntimeDefault` or `Localhost`. A known issue prevents a policy such as this using `anyPattern` from being persisted properly in Kubernetes 1.23.0-1.23.2.\n","category":"Pod Security Standards (Restricted)","filters":"validate::Pod Security Standards (Restricted)::%!s(\u003cnil\u003e)::Pod","link":"/policies/pod-security/restricted/restrict-seccomp-strict/restrict-seccomp-strict/","policy":"validate","subject":"Pod","title":"Restrict Seccomp (Strict)","version":null},{"body":"The seccomp profile in the Restricted group must not be explicitly set to Unconfined but additionally must also not allow an unset value. This policy,  requiring Kubernetes v1.19 or later, ensures that seccomp is  set to `RuntimeDefault` or `Localhost`. A known issue prevents a policy such as this using `anyPattern` from being persisted properly in Kubernetes 1.23.0-1.23.2.\n","category":"Pod Security Standards (Restricted) in CEL","filters":"validate::Pod Security Standards (Restricted) in CEL::1.11.0::Pod","link":"/policies/pod-security-cel/restricted/restrict-seccomp-strict/restrict-seccomp-strict/","policy":"validate","subject":"Pod","title":"Restrict Seccomp (Strict) in CEL","version":"1.11.0"},{"body":"The seccomp profile in the Restricted group must not be explicitly set to Unconfined but additionally must also not allow an unset value. This policy,  requiring Kubernetes v1.30 or later, ensures that seccomp is  set to `RuntimeDefault` or `Localhost`. A known issue prevents a policy such as this using `anyPattern` from being persisted properly in Kubernetes 1.23.0-1.23.2.\n","category":"Pod Security Standards (Restricted) in ValidatingPolicy","filters":"validate::Pod Security Standards (Restricted) in ValidatingPolicy::1.14.0::Pod","link":"/policies/pod-security-vpol/restricted/restrict-seccomp-strict/restrict-seccomp-strict/","policy":"validate","subject":"Pod","title":"Restrict Seccomp (Strict) in ValidatingPolicy","version":"1.14.0"},{"body":"The seccomp profile must not be explicitly set to Unconfined. This policy,  requiring Kubernetes v1.19 or later, ensures that seccomp is unset or  set to `RuntimeDefault` or `Localhost`.\n","category":"Pod Security Standards (Baseline) in CEL","filters":"validate::Pod Security Standards (Baseline) in CEL::1.11.0::Pod","link":"/policies/pod-security-cel/baseline/restrict-seccomp/restrict-seccomp/","policy":"validate","subject":"Pod","title":"Restrict Seccomp in CEL expressions","version":"1.11.0"},{"body":"The seccomp profile must not be explicitly set to Unconfined. This policy,  requiring Kubernetes v1.30 or later, ensures that seccomp is unset or  set to `RuntimeDefault` or `Localhost`.\n","category":"Pod Security Standards (Baseline) in ValidatingPolicy","filters":"validate::Pod Security Standards (Baseline) in ValidatingPolicy::1.14.0::Pod","link":"/policies/pod-security-vpol/baseline/restrict-seccomp/restrict-seccomp/","policy":"validate","subject":"Pod","title":"Restrict Seccomp in ValidatingPolicy","version":"1.14.0"},{"body":"The verbs `get`, `list`, and `watch` in a Role or ClusterRole, when paired with the Secrets resource, effectively allows Secrets to be read which may expose sensitive information. This policy prevents a Role or ClusterRole from using these verbs in tandem with Secret resources. In order to fully implement this control, it is recommended to pair this policy with another which also prevents use of the wildcard ('*') in the verbs list either when explicitly naming Secrets or when also using a wildcard in the base API group.\n","category":"Security","filters":"validate::Security::1.6.0::Role, ClusterRole, RBAC","link":"/policies/other/restrict-secret-role-verbs/restrict-secret-role-verbs/","policy":"validate","subject":"Role, ClusterRole, RBAC","title":"Restrict Secret Verbs in Roles","version":"1.6.0"},{"body":"The verbs `get`, `list`, and `watch` in a Role or ClusterRole, when paired with the Secrets resource, effectively allows Secrets to be read which may expose sensitive information. This policy prevents a Role or ClusterRole from using these verbs in tandem with Secret resources. In order to fully implement this control, it is recommended to pair this policy with another which also prevents use of the wildcard ('*') in the verbs list either when explicitly naming Secrets or when also using a wildcard in the base API group.\n","category":"Security in CEL","filters":"validate::Security in CEL::1.11.0::Role, ClusterRole, RBAC","link":"/policies/other-cel/restrict-secret-role-verbs/restrict-secret-role-verbs/","policy":"validate","subject":"Role, ClusterRole, RBAC","title":"Restrict Secret Verbs in Roles in CEL expressions","version":"1.11.0"},{"body":"Secrets often contain sensitive information and their access should be carefully controlled. Although Kubernetes RBAC can be effective at restricting them in several ways, it lacks the ability to use labels on referenced entities. This policy ensures that only Secrets not labeled with `status=protected` can be consumed by Pods.\n","category":"Other","filters":"validate::Other::1.6.0::Pod, Secret","link":"/policies/other/restrict-secrets-by-label/restrict-secrets-by-label/","policy":"validate","subject":"Pod, Secret","title":"Restrict Secrets by Label","version":"1.6.0"},{"body":"Secrets often contain sensitive information and their access should be carefully controlled. Although Kubernetes RBAC can be effective at restricting them in several ways, it lacks the ability to use wildcards in resource names. This policy ensures that only Secrets beginning with the name `safe-` can be consumed by Pods. In order to work effectively, this policy needs to be paired with a separate policy or rule to require `automountServiceAccountToken=false` since this would otherwise result in a Secret being mounted.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::Pod, Secret","link":"/policies/other/restrict-secrets-by-name/restrict-secrets-by-name/","policy":"validate","subject":"Pod, Secret","title":"Restrict Secrets by Name","version":null},{"body":"Secrets often contain sensitive information and their access should be carefully controlled. Although Kubernetes RBAC can be effective at restricting them in several ways, it lacks the ability to use wildcards in resource names. This policy ensures that only Secrets beginning with the name `safe-` can be consumed by Pods. In order to work effectively, this policy needs to be paired with a separate policy or rule to require `automountServiceAccountToken=false` since this would otherwise result in a Secret being mounted.\n","category":"Other in CEL","filters":"validate::Other in CEL::%!s(\u003cnil\u003e)::Pod, Secret","link":"/policies/other-cel/restrict-secrets-by-name/restrict-secrets-by-name/","policy":"validate","subject":"Pod, Secret","title":"Restrict Secrets by Name in CEL expressions","version":null},{"body":"Users may be able to specify any ServiceAccount which exists in their Namespace without restrictions. Confining Pods to a list of authorized ServiceAccounts can be useful to ensure applications in those Pods do not have more privileges than they should. This policy verifies that in the `staging` Namespace the ServiceAccount being specified is matched based on the image and name of the container. For example: 'sa-name: [\"registry/image-name\"]'\n","category":"Sample","filters":"validate::Sample::1.6.0::Pod,ServiceAccount","link":"/policies/other/restrict-service-account/restrict-service-account/","policy":"validate","subject":"Pod,ServiceAccount","title":"Restrict Service Account","version":"1.6.0"},{"body":"Services which are allowed to expose any port number may be able to impact other applications running on the Node which require them, or may make specifying security policy externally more challenging. This policy enforces that only the port range 32000 to 33000 may be used for Service resources.\n","category":"Other","filters":"validate::Other::1.6.0::Service","link":"/policies/other/restrict-service-port-range/restrict-service-port-range/","policy":"validate","subject":"Service","title":"Restrict Service Port Range","version":"1.6.0"},{"body":"Services which are allowed to expose any port number may be able to impact other applications running on the Node which require them, or may make specifying security policy externally more challenging. This policy enforces that only the port range 32000 to 33000 may be used for Service resources.\n","category":"Other in CEL","filters":"validate::Other in CEL::1.11.0::Service","link":"/policies/other-cel/restrict-service-port-range/restrict-service-port-range/","policy":"validate","subject":"Service","title":"Restrict Service Port Range in CEL expressions","version":"1.11.0"},{"body":"StorageClasses allow description of custom \"classes\" of storage offered by the cluster, based on quality-of-service levels, backup policies, or custom policies determined by the cluster administrators. For shared StorageClasses in a multi-tenancy environment, a reclaimPolicy of `Delete` should be used to ensure a PersistentVolume cannot be reused across Namespaces. This policy requires StorageClasses set a reclaimPolicy of `Delete`.\n","category":"Other, Multi-Tenancy","filters":"validate::Other, Multi-Tenancy::%!s(\u003cnil\u003e)::StorageClass","link":"/policies/other/restrict-storageclass/restrict-storageclass/","policy":"validate","subject":"StorageClass","title":"Restrict StorageClass","version":null},{"body":"StorageClasses allow description of custom \"classes\" of storage offered by the cluster, based on quality-of-service levels, backup policies, or custom policies determined by the cluster administrators. For shared StorageClasses in a multi-tenancy environment, a reclaimPolicy of `Delete` should be used to ensure a PersistentVolume cannot be reused across Namespaces. This policy requires StorageClasses set a reclaimPolicy of `Delete`.\n","category":"Other, Multi-Tenancy in CEL","filters":"validate::Other, Multi-Tenancy in CEL::%!s(\u003cnil\u003e)::StorageClass","link":"/policies/other-cel/restrict-storageclass/restrict-storageclass/","policy":"validate","subject":"StorageClass","title":"Restrict StorageClass in CEL expressions","version":null},{"body":"Sysctls can disable security mechanisms or affect all containers on a host, and should be disallowed except for an allowed \"safe\" subset. A sysctl is considered safe if it is namespaced in the container or the Pod, and it is isolated from other Pods or processes on the same Node. This policy ensures that only those \"safe\" subsets can be specified in a Pod.\n","category":"Pod Security Standards (Baseline)","filters":"validate::Pod Security Standards (Baseline)::%!s(\u003cnil\u003e)::Pod","link":"/policies/pod-security/baseline/restrict-sysctls/restrict-sysctls/","policy":"validate","subject":"Pod","title":"Restrict sysctls","version":null},{"body":"Sysctls can disable security mechanisms or affect all containers on a host, and should be disallowed except for an allowed \"safe\" subset. A sysctl is considered safe if it is namespaced in the container or the Pod, and it is isolated from other Pods or processes on the same Node. This policy ensures that only those \"safe\" subsets can be specified in a Pod.\n","category":"Pod Security Standards (Baseline) in CEL","filters":"validate::Pod Security Standards (Baseline) in CEL::1.11.0::Pod","link":"/policies/pod-security-cel/baseline/restrict-sysctls/restrict-sysctls/","policy":"validate","subject":"Pod","title":"Restrict sysctls in CEL expressions","version":"1.11.0"},{"body":"Sysctls can disable security mechanisms or affect all containers on a host, and should be disallowed except for an allowed \"safe\" subset. A sysctl is considered safe if it is namespaced in the container or the Pod, and it is isolated from other Pods or processes on the same Node. This policy ensures that only those \"safe\" subsets can be specified in a Pod.\n","category":"Pod Security Standards (Baseline) in ValidatingPolicy","filters":"validate::Pod Security Standards (Baseline) in ValidatingPolicy::1.14.0::Pod","link":"/policies/pod-security-vpol/baseline/restrict-sysctls/restrict-sysctls/","policy":"validate","subject":"Pod","title":"Restrict sysctls in ValidatingPolicy","version":"1.14.0"},{"body":"Virtual Services optionally accept a wildcard as an alternative to precise matching. In some cases, this may be too permissive as it would direct unintended traffic to the given resource. This policy enforces that any Virtual Service host does not contain a wildcard character and allows for more governance when a single mesh deployment  model is used.\n","category":"Istio","filters":"validate::Istio::1.6.0::VirtualService","link":"/policies/istio/restrict-virtual-service-wildcard/restrict-virtual-service-wildcard/","policy":"validate","subject":"VirtualService","title":"Restrict Virtual Service Host with Wildcards","version":"1.6.0"},{"body":"In addition to restricting HostPath volumes, the restricted pod security profile limits usage of non-core volume types to those defined through PersistentVolumes. This policy blocks any other type of volume other than those in the allow list.\n","category":"Pod Security Standards (Restricted)","filters":"validate::Pod Security Standards (Restricted)::1.6.0::Pod,Volume","link":"/policies/pod-security/restricted/restrict-volume-types/restrict-volume-types/","policy":"validate","subject":"Pod,Volume","title":"Restrict Volume Types","version":"1.6.0"},{"body":"In addition to restricting HostPath volumes, the restricted pod security profile limits usage of non-core volume types to those defined through PersistentVolumes. This policy blocks any other type of volume other than those in the allow list.\n","category":"Pod Security Standards (Restricted) in CEL","filters":"validate::Pod Security Standards (Restricted) in CEL::1.11.0::Pod,Volume","link":"/policies/pod-security-cel/restricted/restrict-volume-types/restrict-volume-types/","policy":"validate","subject":"Pod,Volume","title":"Restrict Volume Types in CEL","version":"1.11.0"},{"body":"In addition to restricting HostPath volumes, the restricted pod security profile limits usage of non-core volume types to those defined through PersistentVolumes. This policy blocks any other type of volume other than those in the allow list.\n","category":"Pod Security Standards (Restricted) in ValidatingPolicy","filters":"validate::Pod Security Standards (Restricted) in ValidatingPolicy::1.14.0::Pod,Volume","link":"/policies/pod-security-vpol/restricted/restrict-volume-types/restrict-volume-types/","policy":"validate","subject":"Pod,Volume","title":"Restrict Volume Types in ValidatingPolicy","version":"1.14.0"},{"body":"Wildcards ('*') in verbs grants all access to the resources referenced by it and does not follow the principal of least privilege. As much as possible, avoid such open verbs unless scoped to perhaps a custom API group. This policy blocks any Role or ClusterRole that contains a wildcard entry in the verbs list found in any rule.\n","category":"Security, EKS Best Practices","filters":"validate::Security, EKS Best Practices::1.6.0::Role, ClusterRole, RBAC","link":"/policies/other/restrict-wildcard-verbs/restrict-wildcard-verbs/","policy":"validate","subject":"Role, ClusterRole, RBAC","title":"Restrict Wildcard in Verbs","version":"1.6.0"},{"body":"Wildcards ('*') in verbs grants all access to the resources referenced by it and does not follow the principal of least privilege. As much as possible, avoid such open verbs unless scoped to perhaps a custom API group. This policy blocks any Role or ClusterRole that contains a wildcard entry in the verbs list found in any rule.\n","category":"Security, EKS Best Practices in CEL","filters":"validate::Security, EKS Best Practices in CEL::1.11.0::Role, ClusterRole, RBAC","link":"/policies/other-cel/restrict-wildcard-verbs/restrict-wildcard-verbs/","policy":"validate","subject":"Role, ClusterRole, RBAC","title":"Restrict Wildcard in Verbs in CEL expressions","version":"1.11.0"},{"body":"Wildcards ('*') in resources grants access to all of the resources referenced by the given API group and does not follow the principal of least privilege. As much as possible, avoid such open resources unless scoped to perhaps a custom API group. This policy blocks any Role or ClusterRole that contains a wildcard entry in the resources list found in any rule.\n","category":"Security, EKS Best Practices","filters":"validate::Security, EKS Best Practices::1.6.0::ClusterRole, Role, RBAC","link":"/policies/other/restrict-wildcard-resources/restrict-wildcard-resources/","policy":"validate","subject":"ClusterRole, Role, RBAC","title":"Restrict Wildcards in Resources","version":"1.6.0"},{"body":"Wildcards ('*') in resources grants access to all of the resources referenced by the given API group and does not follow the principal of least privilege. As much as possible, avoid such open resources unless scoped to perhaps a custom API group. This policy blocks any Role or ClusterRole that contains a wildcard entry in the resources list found in any rule.\n","category":"Security, EKS Best Practices in CEL","filters":"validate::Security, EKS Best Practices in CEL::1.11.0::ClusterRole, Role, RBAC","link":"/policies/other-cel/restrict-wildcard-resources/restrict-wildcard-resources/","policy":"validate","subject":"ClusterRole, Role, RBAC","title":"Restrict Wildcards in Resources in CEL expressions","version":"1.11.0"},{"body":"The restricted profile of the Pod Security Standards, which is inclusive of the baseline profile, is a collection of all the most common configurations that can be taken to secure Pods. Beginning with Kyverno 1.8, an entire profile may be assigned to the cluster through a single rule. This policy configures the restricted profile through the latest version of the Pod Security Standards cluster wide.\n","category":"Pod Security, EKS Best Practices","filters":"validate::Pod Security, EKS Best Practices::1.8.0::Pod","link":"/policies/pod-security/subrule/restricted/restricted-latest/restricted-latest/","policy":"validate","subject":"Pod","title":"Restricted Pod Security Standards","version":"1.8.0"},{"body":"The restricted profile of the Pod Security Standards, which is inclusive of the baseline profile, is a collection of all the most common configurations that can be taken to secure Pods. Beginning with Kyverno 1.8, an entire profile may be assigned to the cluster through a single rule. In some cases, specific exemptions must be made on a per-control basis. This policy configures the restricted profile through the latest version of the Pod Security Standards cluster wide while exempting `nginx` and `redis` container images from the Capabilities control check.\n","category":"Pod Security","filters":"validate::Pod Security::1.8.0::Pod","link":"/policies/pod-security/subrule/restricted/restricted-exclude-capabilities/restricted-exclude-capabilities/","policy":"validate","subject":"Pod","title":"Restricted Pod Security Standards with Container-Level Control Exemption","version":"1.8.0"},{"body":"The restricted profile of the Pod Security Standards, which is inclusive of the baseline profile, is a collection of all the most common configurations that can be taken to secure Pods. Beginning with Kyverno 1.8, an entire profile may be assigned to the cluster through a single rule. In some cases, specific exemptions must be made on a per-control basis. This policy configures the restricted profile through the latest version of the Pod Security Standards cluster wide while completely exempting Seccomp control check.\n","category":"Pod Security","filters":"validate::Pod Security::1.8.0::Pod","link":"/policies/pod-security/subrule/restricted/restricted-exclude-seccomp/restricted-exclude-seccomp/","policy":"validate","subject":"Pod","title":"Restricted Pod Security Standards with Spec and Container-Level Control Exemption","version":"1.8.0"},{"body":"If a Deployment's Pods are seen crashing multiple times it usually indicates there is an issue that must be manually resolved. Removing the failing Pods and marking the Deployment is often a useful troubleshooting step. This policy watches existing Pods and if any are observed to have restarted more than once, indicating a potential crashloop, Kyverno scales its parent deployment to zero and writes an annotation signaling to an SRE team that troubleshooting is needed. It may be necessary to grant additional privileges to the Kyverno ServiceAccount, via one of the existing ClusterRoleBindings or a new one, so it can modify Deployments. This policy scales down deployments with frequently restarting pods by monitoring `Pod.status`  for `restartCount`updates, which are performed by the kubelet. No `resourceFilter` modifications are needed if matching on `Pod`and `Pod.status`. Note: For this policy to work, you must modify Kyverno's ConfigMap to remove or change the line  `excludeGroups: system:nodes` since version 1.10.\n","category":"Other","filters":"mutate::Other::1.7.0::Deployment","link":"/policies/other/scale-deployment-zero/scale-deployment-zero/","policy":"mutate","subject":"Deployment","title":"Scale Deployment to Zero","version":"1.7.0"},{"body":"This policy is a variation of the disallow-capabilities policy that is a part of the Pod Security Standards (Baseline) category. It enforces the same control but with provisions for common service mesh initContainers from Istio and Linkerd which need the additional capabilities, NET_ADMIN and NET_RAW. For more information and context, see the Kyverno blog post at https://kyverno.io/blog/2024/02/04/securing-services-meshes-easier-with-kyverno/.\n","category":"Istio, Linkerd, Pod Security Standards (Baseline)","filters":"validate::Istio, Linkerd, Pod Security Standards (Baseline)::%!s(\u003cnil\u003e)::Pod","link":"/policies/istio/service-mesh-disallow-capabilities/service-mesh-disallow-capabilities/","policy":"validate","subject":"Pod","title":"Service Mesh Disallow Capabilities","version":null},{"body":"This policy is a variation of the Require runAsNonRoot policy that is a part of the Pod Security Standards (Restricted) category. It enforces the same control but with provisions for Istio's initContainer. For more information and context, see the Kyverno blog post at https://kyverno.io/blog/2024/02/04/securing-services-meshes-easier-with-kyverno/.\n","category":"Istio, Pod Security Standards (Restricted)","filters":"validate::Istio, Pod Security Standards (Restricted)::%!s(\u003cnil\u003e)::Pod","link":"/policies/istio/service-mesh-require-run-as-nonroot/service-mesh-require-run-as-nonroot/","policy":"validate","subject":"Pod","title":"Service Mesh Require runAsNonRoot","version":null},{"body":"Example Kyverno policy to enforce common compliance retention standards by modifying Kasten Policy backup retention settings. Based on regulation/compliance standard requirements, uncomment (1) of the desired GFS retention schedules to mutate existing and future Kasten Policies. Alternatively, this policy can be used to reduce retention lengths to enforce cost optimization. NOTE: This example only applies to Kasten Policies with an '@hourly' frequency. Refer to Kasten documentation for Policy API specification if modifications are necessary: https://docs.kasten.io/latest/api/policies.html#policy-api-type\n","category":"Veeam Kasten","filters":"mutate::Veeam Kasten::1.6.2::Policy","link":"/policies/kasten/kasten-minimum-retention/kasten-minimum-retention/","policy":"mutate","subject":"Policy","title":"Set Kasten Policy Minimum Backup Retention","version":"1.6.2"},{"body":"For correct node provisioning Karpenter should know exactly what the non-CPU resources are  that the pods will need. Otherwise Karpenter will put as many pods on a node as possible,  which may lead to memory pressure on nodes. This is especially important in consolidation  mode.\n","category":"Karpenter, EKS Best Practices","filters":"mutate::Karpenter, EKS Best Practices::1.6.0::Pod","link":"/policies/karpenter/set-karpenter-non-cpu-limits/set-karpenter-non-cpu-limits/","policy":"mutate","subject":"Pod","title":"Set non-CPU limits for pods to work well with Karpenter.","version":"1.6.0"},{"body":"Deployments to a Kubernetes cluster with multiple availability zones often need to distribute those replicas to align with those zones to ensure site-level failures do not impact availability. This policy matches Deployments with the label `distributed=required` and mutates them to spread Pods across zones.\n","category":"Sample","filters":"mutate::Sample::1.6.0::Deployment, Pod","link":"/policies/other/spread-pods-across-topology/spread-pods-across-topology/","policy":"mutate","subject":"Deployment, Pod","title":"Spread Pods Across Nodes","version":"1.6.0"},{"body":"Deployments to a Kubernetes cluster with multiple availability zones often need to distribute those replicas to align with those zones to ensure site-level failures do not impact availability. This policy ensures topologySpreadConstraints are defined,  to spread pods over nodes and zones. Deployments or Statefulsets with leass than 3  replicas are skipped.\n","category":"Sample","filters":"validate::Sample::1.8.0::Deployment, StatefulSet","link":"/policies/other/topologyspreadconstraints-policy/topologyspreadconstraints-policy/","policy":"validate","subject":"Deployment, StatefulSet","title":"Spread Pods Across Nodes \u0026 Zones","version":"1.8.0"},{"body":"Deployments to a Kubernetes cluster with multiple availability zones often need to distribute those replicas to align with those zones to ensure site-level failures do not impact availability. This policy ensures topologySpreadConstraints are defined,  to spread pods over nodes and zones. Deployments or Statefulsets with less than 3  replicas are skipped.\n","category":"Sample in CEL","filters":"validate::Sample in CEL::1.11.0::Deployment, StatefulSet","link":"/policies/other-cel/topologyspreadconstraints-policy/topologyspreadconstraints-policy/","policy":"validate","subject":"Deployment, StatefulSet","title":"Spread Pods Across Nodes \u0026 Zones in CEL expressions","version":"1.11.0"},{"body":"Secrets like registry credentials often need to exist in multiple Namespaces so Pods there have access. Manually duplicating those Secrets is time consuming and error prone. This policy will copy a Secret called `regcred` which exists in the `default` Namespace to new Namespaces when they are created. It will also push updates to the copied Secrets should the source Secret be changed.      \n","category":"Sample","filters":"generate::Sample::1.6.0::Secret","link":"/policies/other/sync-secrets/sync-secrets/","policy":"generate","subject":"Secret","title":"Sync Secrets","version":"1.6.0"},{"body":"Sometimes a policy should be active or inactive based on a time window determined as part of the policy. Whether the policy should come into play should be dependent on that time. This policy illustrates how to time-bound any policy by using preconditions with JMESPath time filters. In this case, the policy enforces that label `foo` be required on all ConfigMaps during the hours of 8am-5pm EST (expressed in UTC). Additional, similar preconditions may be added to perform other time checks, for example a range of days.\n","category":"Other","filters":"validate::Other::1.9.0::ConfigMap","link":"/policies/other/time-bound-policy/time-bound-policy/","policy":"validate","subject":"ConfigMap","title":"Time-Bound Policy","version":"1.9.0"},{"body":"An Ingress host is a URL at which services may be made available externally. In most cases, these hosts should be unique across the cluster to ensure no routing conflicts occur. This policy checks an incoming Ingress resource to ensure its hosts are unique to the cluster. It also ensures that only a single host may be specified in a given manifest.      \n","category":"Sample","filters":"validate::Sample::1.6.0::Ingress","link":"/policies/other/restrict-ingress-host/restrict-ingress-host/","policy":"validate","subject":"Ingress","title":"Unique Ingress Host","version":"1.6.0"},{"body":"Similar to the ability to check the uniqueness of hosts and paths independently, it is possible to check for uniqueness of them both together across a cluster. This policy ensures that no Ingress can be created or updated unless it is globally unique with respect to host plus path combination.\n","category":"Sample","filters":"validate::Sample::1.6.0::Ingress","link":"/policies/other/unique-ingress-host-and-path/unique-ingress-host-and-path/","policy":"validate","subject":"Ingress","title":"Unique Ingress Host and Path","version":"1.6.0"},{"body":"Just like the need to ensure uniqueness among Ingress hosts, there is a need to have the paths be unique as well. This policy checks an incoming Ingress to ensure its root path does not conflict with another root path in a different Namespace. It requires that incoming Ingress resources have a single rule with a single path only and assumes the root path is specified explicitly in an existing Ingress rule (ex., when blocking /foo/bar /foo must exist by itself and not part of /foo/baz).\n","category":"Sample","filters":"validate::Sample::1.6.0::Ingress","link":"/policies/other/unique-ingress-paths/unique-ingress-paths/","policy":"validate","subject":"Ingress","title":"Unique Ingress Path","version":"1.6.0"},{"body":"For use cases like sidecar injection, it is often the case where existing Deployments need the sidecar image updated without destroying the whole Deployment or Pods. This policy updates the image tag on containers named vault-agent for existing Deployments which have the annotation vault.hashicorp.com/agent-inject=\"true\". It may be necessary to grant additional privileges to the Kyverno ServiceAccount, via one of the existing ClusterRoleBindings or a new one, so it can modify Deployments.\n","category":"Other","filters":"mutate::Other::1.7.0::Deployment","link":"/policies/other/update-image-tag/update-image-tag/","policy":"mutate","subject":"Deployment","title":"Update Image Tag","version":"1.7.0"},{"body":"Kubernetes applications are typically deployed into a single, logical namespace.  Kasten K10 policies will discover and protect all resources within the selected namespace(s).  This policy ensures all new namespaces include a label referencing a valid K10 SLA  (Policy Preset) for data protection.This policy can be used in combination with generate  ClusterPolicy to automatically create a K10 policy based on the specified SLA.  The combination ensures that new applications are not inadvertently left unprotected.\n","category":"Kasten K10 by Veeam in CEL","filters":"validate::Kasten K10 by Veeam in CEL::1.11.0::Namespace","link":"/policies/kasten-cel/k10-validate-ns-by-preset-label/k10-validate-ns-by-preset-label/","policy":"validate","subject":"Namespace","title":"Validate Data Protection by Preset Label in CEL expressions","version":"1.11.0"},{"body":"Kubernetes applications are typically deployed into a single, logical namespace.  Veeam Kasten policies will discover and protect all resources within the selected namespace(s).  This policy ensures all new namespaces include a label referencing a valid Kasten SLA  (Policy Preset) for data protection.This policy can be used in combination with /Users/the `kasten-generate-policy-by-preset-label` ClusterPolicy to automatically create a Kasten policy based on the specified SLA.  The combination ensures that new applications are not inadvertently left unprotected.\n","category":"Veeam Kasten","filters":"validate::Veeam Kasten::1.9.0::Namespace","link":"/policies/kasten/kasten-validate-ns-by-preset-label/kasten-validate-ns-by-preset-label/","policy":"validate","subject":"Namespace","title":"Validate Data Protection with Kasten Preset Label","version":"1.9.0"},{"body":"Liveness and readiness probes accomplish different goals, and setting both to the same is an anti-pattern and often results in app problems in the future. This policy checks that liveness and readiness probes are not equal. Keep in mind that if both the  probes are not set, they are considered to be equal and hence fails the check.\n","category":"Sample","filters":"validate::Sample::1.6.0::Pod","link":"/policies/other/ensure-probes-different/ensure-probes-different/","policy":"validate","subject":"Pod","title":"Validate Probes","version":"1.6.0"},{"body":"Liveness and readiness probes accomplish different goals, and setting both to the same is an anti-pattern and often results in app problems in the future. This policy checks that liveness and readiness probes are not equal. Keep in mind that if both the  probes are not set, they are considered to be equal and hence fails the check.\n","category":"Sample in CEL","filters":"validate::Sample in CEL::1.11.0::Pod","link":"/policies/other-cel/ensure-probes-different/ensure-probes-different/","policy":"validate","subject":"Pod","title":"Validate Probes in CEL expressions","version":"1.11.0"},{"body":"A Velero Schedule is given in Cron format and must be accurate to ensure operation. This policy validates that the schedule is a valid Cron format.\n","category":"Velero","filters":"validate::Velero::%!s(\u003cnil\u003e)::Schedule","link":"/policies/velero/validate-cron-schedule/validate-cron-schedule/","policy":"validate","subject":"Schedule","title":"Validate Schedule","version":null},{"body":"A Velero Schedule is given in Cron format and must be accurate to ensure operation. This policy validates that the schedule is a valid Cron format.\n","category":"Velero in CEL","filters":"validate::Velero in CEL::%!s(\u003cnil\u003e)::Schedule","link":"/policies/velero-cel/validate-cron-schedule/validate-cron-schedule/","policy":"validate","subject":"Schedule","title":"Validate Schedule in CEL expressions","version":null},{"body":"Naming patterns are commonplace in clusters where creation activities are granted to other users. In order to maintain organization, it is often such that patterns should be established for organizational consistency. This policy denies the creation of a Namespace if the name of the Namespace does not follow a specific naming defined by the cluster admins.\n","category":"OpenShift","filters":"validate::OpenShift::1.6.0::Namespace","link":"/policies/openshift/team-validate-ns-name/team-validate-ns-name/","policy":"validate","subject":"Namespace","title":"Validate Team Namespace Schema","version":"1.6.0"},{"body":"All processes inside a Pod can be made to run with specific user and groupID by setting `runAsUser` and `runAsGroup` respectively. `fsGroup` can be specified to make sure any file created in the volume will have the specified groupID. This policy validates that these fields are set to the defined values.\n","category":"Sample","filters":"validate::Sample::1.6.0::Pod","link":"/policies/other/restrict-usergroup-fsgroup-id/restrict-usergroup-fsgroup-id/","policy":"validate","subject":"Pod","title":"Validate User ID, Group ID, and FS Group","version":"1.6.0"},{"body":"All processes inside a Pod can be made to run with specific user and groupID by setting `runAsUser` and `runAsGroup` respectively. `fsGroup` can be specified to make sure any file created in the volume will have the specified groupID. This policy validates that these fields are set to the defined values.\n","category":"Sample in CEL","filters":"validate::Sample in CEL::1.11.0::Pod","link":"/policies/other-cel/restrict-usergroup-fsgroup-id/restrict-usergroup-fsgroup-id/","policy":"validate","subject":"Pod","title":"Validate User ID, Group ID, and FS Group in CEL expressions","version":"1.11.0"},{"body":"Software Bill of Materials (SBOM) provide details on the composition of a given container image and may be represented in a couple different standards. Having an SBOM can be important to ensuring images are built using verified processes. This policy verifies that an image has an SBOM in CycloneDX format and was signed by the expected subject and issuer when produced through GitHub Actions and using Cosign's keyless signing. It requires configuration based upon your own values.\n","category":"Software Supply Chain Security","filters":"verifyImages::Software Supply Chain Security::1.8.3::Pod","link":"/policies/other/verify-sbom-cyclonedx/verify-sbom-cyclonedx/","policy":"verifyImages","subject":"Pod","title":"Verify CycloneDX SBOM (Keyless)","version":"1.8.3"},{"body":"Ensures that container images used to run Flux controllers in the cluster are signed with valid Cosign signatures. Prevents the deployment of untrusted or potentially compromised Flux images. Protects the integrity and security  of the Flux deployment process.\n","category":"Flux","filters":"verifyImages::Flux::1.6.0::GitRepository","link":"/policies/flux/verify-flux-images/verify-flux-images/","policy":"verifyImages","subject":"GitRepository","title":"Verify Flux Images","version":"1.6.0"},{"body":"Flux source APIs include a number of different sources such as GitRepository, Bucket, HelmRepository, and ImageRepository resources. Each of these by default can be pointed to any location. In a production environment, it may be desired to restrict these to only known sources to prevent accessing outside sources. This policy verifies that each of the Flux sources comes from a trusted location.\n","category":"Flux","filters":"validate::Flux::1.6.0::GitRepository, Bucket, HelmRepository, ImageRepository","link":"/policies/flux/verify-flux-sources/verify-flux-sources/","policy":"validate","subject":"GitRepository, Bucket, HelmRepository, ImageRepository","title":"Verify Flux Sources","version":"1.6.0"},{"body":"Flux source APIs include a number of different sources such as GitRepository, Bucket, HelmRepository, and ImageRepository resources. Each of these by default can be pointed to any location. In a production environment, it may be desired to restrict these to only known sources to prevent accessing outside sources. This policy verifies that each of the Flux sources comes from a trusted location.\n","category":"Flux in CEL","filters":"validate::Flux in CEL::1.11.0::GitRepository, Bucket, HelmRepository, ImageRepository","link":"/policies/flux-cel/verify-flux-sources/verify-flux-sources/","policy":"validate","subject":"GitRepository, Bucket, HelmRepository, ImageRepository","title":"Verify Flux Sources in CEL expressions","version":"1.11.0"},{"body":"Ensures that Git repositories used for Flux deployments in a cluster originate from a specific, trusted organization. Prevents the use of untrusted or potentially risky Git repositories. Protects the integrity and security of Flux deployments.\n","category":"Flux","filters":"validate::Flux::%!s(\u003cnil\u003e)::GitRepository","link":"/policies/flux/verify-git-repositories/verify-git-repositories/","policy":"validate","subject":"GitRepository","title":"Verify Git Repositories","version":null},{"body":"Ensures that Git repositories used for Flux deployments in a cluster originate from a specific, trusted organization. Prevents the use of untrusted or potentially risky Git repositories. Protects the integrity and security of Flux deployments.\n","category":"Flux in CEL","filters":"validate::Flux in CEL::1.11.0::GitRepository","link":"/policies/flux-cel/verify-git-repositories/verify-git-repositories/","policy":"validate","subject":"GitRepository","title":"Verify Git Repositories in CEL expressions","version":"1.11.0"},{"body":"Using the Cosign project, OCI images may be signed to ensure supply chain security is maintained. Those signatures can be verified before pulling into a cluster. This policy checks the signature of an image repo called ghcr.io/kyverno/test-verify-image to ensure it has been signed by verifying its signature against the provided public key. This policy serves as an illustration for how to configure a similar rule and will require replacing with your image(s) and keys.\n","category":"Software Supply Chain Security, EKS Best Practices","filters":"verifyImages::Software Supply Chain Security, EKS Best Practices::1.14.0::Pod","link":"/policies/other/verify-image-ivpol/verify-image-ivpol/","policy":"verifyImages","subject":"Pod","title":"Verify Image","version":"1.14.0"},{"body":"Using the Cosign project, OCI images may be signed to ensure supply chain security is maintained. Those signatures can be verified before pulling into a cluster. This policy checks the signature of an image repo called ghcr.io/kyverno/test-verify-image to ensure it has been signed by verifying its signature against the provided public key. This policy serves as an illustration for how to configure a similar rule and will require replacing with your image(s) and keys.\n","category":"Software Supply Chain Security, EKS Best Practices","filters":"verifyImages::Software Supply Chain Security, EKS Best Practices::1.7.0::Pod","link":"/policies/other/verify-image/verify-image/","policy":"verifyImages","subject":"Pod","title":"Verify Image","version":"1.7.0"},{"body":"CVE-2022-42889 is a critical vulnerability in the Apache Commons Text library which could lead to arbitrary code executions and occurs in versions 1.5 through 1.9. Detecting the affected package may be done in an SBOM by identifying the \"commons-text\" package with one of the affected versions. This policy checks attested SBOMs in CycloneDX format of an image specified under `imageReferences` and denies it if it contains versions 1.5-1.9 of the commons-text package. Using this for your own purposes will require customizing the `imageReferences`, `subject`, and `issuer` fields based on your image signatures and attestations.\n","category":"Software Supply Chain Security","filters":"verifyImages::Software Supply Chain Security::1.8.3::Pod","link":"/policies/other/verify-image-cve-2022-42889/verify-image-cve-2022-42889/","policy":"verifyImages","subject":"Pod","title":"Verify Image Check CVE-2022-42889","version":"1.8.3"},{"body":"Using the Cosign project, OCI images may be signed to ensure supply chain security is maintained. Those signatures can be verified before pulling into a cluster. This policy checks the signature of an image repo called ghcr.io/kyverno/test-verify-image to ensure it has been signed by verifying its signature against the provided public key. This policy serves as an illustration for how to configure a similar rule and will require replacing with your image(s) and keys.\n","category":"Software Supply Chain Security","filters":"verifyImages::Software Supply Chain Security::1.8.1::Pod","link":"/policies/other/verify-image-gcpkms/verify-image-gcpkms/","policy":"verifyImages","subject":"Pod","title":"Verify Image GCP KMS","version":"1.8.1"},{"body":"There may be multiple keys used to sign images based on the parties involved in the creation process. This image verification policy requires the named image be signed by two separate keys. It will search for a global \"production\" key in a ConfigMap called `keys` in the `default` Namespace and also a Namespace key in the same ConfigMap.\n","category":"Software Supply Chain Security","filters":"verifyImages::Software Supply Chain Security::1.7.0::Pod","link":"/policies/other/verify-image-with-multi-keys/verify-image-with-multi-keys/","policy":"verifyImages","subject":"Pod","title":"Verify Image with Multiple Keys","version":"1.7.0"},{"body":"Verifying the integrity of resources is important to ensure no tampering has occurred, and in some cases this may need to be extended to certain YAML manifests deployed to Kubernetes. Starting in Kyverno 1.8, these manifests may be signed with Sigstore and the signature(s) validated to prevent this tampering while still allowing some exceptions on a per-field basis. This policy verifies Deployments are signed with the expected key but ignores the `spec.replicas` field allowing other teams to change just this value.\n","category":"Other","filters":"validate::Other::1.8.0::Deployment","link":"/policies/other/verify-manifest-integrity/verify-manifest-integrity/","policy":"validate","subject":"Deployment","title":"Verify Manifest Integrity","version":"1.8.0"},{"body":"Provenance is used to identify how an artifact was produced and from where it originated. SLSA provenance is an industry-standard method of representing that provenance. This policy verifies that an image has SLSA provenance and was signed by the expected subject and issuer when produced through GitHub Actions. It requires configuration based upon your own values.\n","category":"Software Supply Chain Security","filters":"verifyImages::Software Supply Chain Security::1.8.3::Pod","link":"/policies/other/verify-image-slsa/verify-image-slsa/","policy":"verifyImages","subject":"Pod","title":"Verify SLSA Provenance (Keyless)","version":"1.8.3"},{"body":"VerticalPodAutoscaler (VPA) is useful to automatically adjust the resources assigned to Pods. It requires defining a specific target resource by kind and name. There are no built-in validation checks by the VPA controller to ensure that the target resource exists or that the target kind is specified correctly. This policy contains two rules, the first of which verifies that the kind is specified exactly as Deployment, StatefulSet, ReplicaSet, or DaemonSet, which helps avoid typos. The second rule verifies that the target resource exists before allowing the VPA to be created.\n","category":"Other","filters":"validate::Other::%!s(\u003cnil\u003e)::VerticalPodAutoscaler","link":"/policies/other/verify-vpa-target/verify-vpa-target/","policy":"validate","subject":"VerticalPodAutoscaler","title":"Verify VerticalPodAutoscaler Target","version":null}]