Version: 2.8

kdmp-config ConfigMap

The kdmp-config ConfigMap is a central configuration component used by Portworx Backup (PXB/PX-Backup) to manage backup and restore operations for Kubernetes workloads through the Kubernetes Data Management Platform (KDMP). This ConfigMap resides in the kube-system namespace of the application cluster and contains settings to fine-tune backup behavior, handle large PVCs, manage memory limits, control exclusions, and enable support for environments such as OpenShift.

Exclude files and directories

Consider a scenario where you have directories or files (for example, hidden files, read-only files, or logs) with various extensions that you do not want included in a backup. Portworx Backup allows you to exclude such directories and files from being backed up. To do so, update the kdmp-config ConfigMap with the following steps:

  1. On the source application cluster, run the following command to edit the kdmp-config ConfigMap:

    kubectl edit cm kdmp-config -n kube-system
  2. Add the following key-value pair:

    KDMP_EXCLUDE_FILE_LIST: |
      <storageclassName1>=<dir-list>,<file-list1>,....
      <storageclassName2>=<dir-list>,<file-list1>,....

    Example

    apiVersion: v1
    data:
      KDMP_BACKUP_JOB_LIMIT: "5"
      KDMP_DELETE_JOB_LIMIT: "5"
      KDMP_EXCLUDE_FILE_LIST: |
        px-db=dir1,file1,dir2
        mysql=dir1,file1,dir2
      KDMP_KOPIAEXECUTOR_IMAGE: <kopiaexecutor-image-version>
      KDMP_KOPIAEXECUTOR_IMAGE_SECRET: ""
      KDMP_KOPIAEXECUTOR_LIMIT_CPU: "0.2"
      KDMP_KOPIAEXECUTOR_LIMIT_MEMORY: 1Gi
      KDMP_KOPIAEXECUTOR_REQUEST_CPU: "0.1"
      KDMP_KOPIAEXECUTOR_REQUEST_MEMORY: 700Mi
      KDMP_MAINTENANCE_JOB_LIMIT: "5"
      KDMP_RESTORE_JOB_LIMIT: "5"
      KDMP_RESTORE_PVC_SIZE_PERCENTAGE: "15"
      SNAPSHOT_TIMEOUT: ""
      pxb_job_node_affinity_label: kopia-backup
      KDMP_DISABLE_ISTIO_CONFIG: "true"
      PROVISIONERS_TO_USE_ANYUID: <provisioner-name1>, <provisioner-name2>
    kind: ConfigMap
    metadata:
      name: kdmp-config
      namespace: kube-system
      resourceVersion: "<resource-version>"
      uid: <uid>
    note
    • The KDMP_DISABLE_ISTIO_CONFIG key in kdmp-config takes either true or false as its value. When set to true, it disables automatic sidecar container injection on the KDMP job pods that Portworx Backup creates for backup and restore on Istio-enabled application clusters.
    • You can specify a single provisioner or multiple provisioners (comma-separated) as the value of the PROVISIONERS_TO_USE_ANYUID key in the kdmp-config ConfigMap.
    • The ENABLE_PX_GENERIC_BACKUP parameter is deprecated from Portworx Backup version 2.7.0. From version 2.7.0 onward, use the cross-cloud backup option in the Portworx Backup web console to create a generic/KDMP backup.

You can exclude directories and files from being backed up for both direct KDMP and KDMP with CSI (local snapshot) backup types. For the latter, if a local snapshot already exists, updating the kdmp-config ConfigMap with the key-value pair above does not exclude any directories or files from that snapshot. To restore with the exclusions applied, delete the existing local snapshot and then restore the backup.
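The shape of the KDMP_EXCLUDE_FILE_LIST value can be sketched with a small parser. This is an illustrative helper only, not Portworx code; it shows how each line maps one storage class to a comma-separated list of excluded paths:

```python
def parse_exclude_list(raw: str) -> dict:
    """Parse a KDMP_EXCLUDE_FILE_LIST-style multiline value into
    {storage_class: [excluded paths]}. Illustrative sketch only."""
    excludes = {}
    for line in raw.strip().splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue  # skip blank or malformed lines
        sc, _, paths = line.partition("=")
        excludes[sc.strip()] = [p.strip() for p in paths.split(",") if p.strip()]
    return excludes

raw = """
px-db=dir1,file1,dir2
mysql=dir1,file1,dir2
"""
print(parse_exclude_list(raw))
# {'px-db': ['dir1', 'file1', 'dir2'], 'mysql': ['dir1', 'file1', 'dir2']}
```

Each storage class gets its own line, so PVCs provisioned by different storage classes can carry different exclusion lists.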

KDMP backups and restores with large volume PVCs

KDMP job pods consume increased amounts of memory when backing up and restoring large-volume PVCs to backup locations. As a result, you may see out-of-memory alerts or failures of the KDMP job pods that run on each application or target cluster. In these scenarios, Portworx by Pure Storage recommends increasing the CPU and memory limit parameters in the kdmp-config ConfigMap, which resides in the kube-system namespace on the target or application cluster.

  • Update the following parameters in kdmp-config ConfigMap to resolve Out Of Memory (OOM) errors:

    KDMP_KOPIAEXECUTOR_LIMIT_CPU: "0.4"
    KDMP_KOPIAEXECUTOR_REQUEST_CPU: "0.2"
    KDMP_KOPIAEXECUTOR_REQUEST_MEMORY: 2Gi
    KDMP_KOPIAEXECUTOR_LIMIT_MEMORY: 4Gi
note

Before you initiate a generic (KDMP) or cross-cloud backup, ensure that you allocate additional buffer storage that is double the size of the original Persistent Volume Claim (PVC) in your application cluster.
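The buffer-storage guidance above can be sketched numerically. This illustrative helper (not part of any Portworx tooling) doubles a PVC's requested size; for simplicity it handles only the binary quantity suffixes, which is an assumption, not the full Kubernetes quantity grammar:

```python
# Binary suffixes used by Kubernetes resource quantities (subset).
_SUFFIXES = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def required_buffer_bytes(pvc_size: str) -> int:
    """Return double the PVC size in bytes for a quantity like '100Gi'."""
    for suffix, factor in _SUFFIXES.items():
        if pvc_size.endswith(suffix):
            return 2 * int(pvc_size[: -len(suffix)]) * factor
    return 2 * int(pvc_size)  # plain byte count, no suffix

print(required_buffer_bytes("100Gi"))  # bytes of buffer for a 100Gi PVC
```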

Configure node affinity

Concepts

Live Backup

  • The PVC is in use by an application pod.

  • Node affinity is NOT supported for ReadWriteOnce (RWO) PVCs.

  • Node affinity is supported for ReadWriteMany (RWX) and ReadOnlyMany (ROX) PVCs.

Non-Live Backup

  • The PVC is not in use by any application pod.

  • Node affinity is supported for all PVC access modes: RWO, RWX, and ROX.

Backup Types and Node Affinity Support

| Backup Type     | PVC Access Mode     | Node Affinity |
|-----------------|---------------------|---------------|
| Live Backup     | ReadWriteOnce (RWO) | Not supported |
| Live Backup     | ReadWriteMany (RWX) | Supported     |
| Live Backup     | ReadOnlyMany (ROX)  | Supported     |
| Non-Live Backup | ReadWriteOnce (RWO) | Supported     |
| Non-Live Backup | ReadWriteMany (RWX) | Supported     |
| Non-Live Backup | ReadOnlyMany (ROX)  | Supported     |
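The support matrix above reduces to a single rule: node affinity applies to every combination except live backups of RWO PVCs. A minimal sketch (illustrative only, not Portworx code):

```python
def node_affinity_supported(live_backup: bool, access_mode: str) -> bool:
    """access_mode is one of 'RWO', 'RWX', 'ROX'. Node affinity is
    supported everywhere except for live backups of RWO PVCs."""
    if access_mode not in {"RWO", "RWX", "ROX"}:
        raise ValueError(f"unknown access mode: {access_mode}")
    return not (live_backup and access_mode == "RWO")

print(node_affinity_supported(True, "RWO"))   # → False (live RWO)
print(node_affinity_supported(False, "RWO"))  # → True  (non-live RWO)
```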

Configure node affinity for KDMP backup/restore jobs

To ensure that KDMP backup and restore jobs are scheduled on a specific node, follow these steps:

  1. Label the desired node: Select the node where you want the KDMP backup/restore jobs to be scheduled and label it with the key pxb_job_node_affinity_label and a value of your choice.

    kubectl label node <node-name> pxb_job_node_affinity_label=<value>

    Replace <node-name> with the name of your desired node, and <value> with any valid Kubernetes label value.

    Example:

    kubectl label node ip-xx-xx-xx-xx.pwx.purestorage.com pxb_job_node_affinity_label=kopia-backup
  2. Verify the Label on the Node: Ensure that the label is applied correctly:

    kubectl get nodes -o wide --show-labels -l pxb_job_node_affinity_label=<value>

    Example:

    kubectl get nodes -o wide --show-labels -l pxb_job_node_affinity_label=kopia-backup
  3. Update the kdmp-config ConfigMap: Edit the kdmp-config ConfigMap in the appropriate namespace (usually kube-system), and add the same label key-value pair.

    kubectl edit configmap kdmp-config -n kube-system

    Then append the following entry under data:

    pxb_job_node_affinity_label: <value>

    Example:

    pxb_job_node_affinity_label: kopia-backup

  4. Create Kopia backups and restores for the desired applications. Then verify that the node affinity configuration is correctly applied by inspecting the YAML of the backup/restore job pods and confirming that each pod was scheduled on the node where the label was applied:

    kubectl get pods <pod-name> -n <namespace> -o yaml

    Replace <pod-name> with the name of the kopia backup/restore pod and <namespace> with the namespace where kopia backup/restore pod is running.

    Example:

    kubectl get pods mysql-xxxxxxxx-xxxx -n test -o yaml

    Sample output:

    .......
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: pxb_job_node_affinity_label
                operator: In
                values:
                - kopia-backup
    .......
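For reference, the affinity stanza in the sample output can be reproduced as a plain dict. This is an illustrative sketch of the structure the job pods carry, keyed by the pxb_job_node_affinity_label value from kdmp-config; it is not Portworx code:

```python
def kdmp_node_affinity(label_value: str) -> dict:
    """Build the nodeAffinity stanza shown in the sample pod YAML above."""
    return {
        "nodeAffinity": {
            "requiredDuringSchedulingIgnoredDuringExecution": {
                "nodeSelectorTerms": [{
                    "matchExpressions": [{
                        "key": "pxb_job_node_affinity_label",
                        "operator": "In",
                        "values": [label_value],  # value set in kdmp-config
                    }]
                }]
            }
        }
    }

print(kdmp_node_affinity("kopia-backup"))
```

Because the requirement uses requiredDuringSchedulingIgnoredDuringExecution, the scheduler places the job pod only on nodes carrying the matching label.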

Configure KDMP job pods for anyuid

The anyuid annotation or security context constraint (SCC) is a policy in Kubernetes, especially relevant in OpenShift environments, that allows pods to run with any user ID (UID) rather than being restricted to a specific range or user. Certain workloads need to run with specific UIDs or as root, for example system-level utilities or backup and restore jobs; KDMP (Kubernetes Data Management Platform) jobs might require elevated privileges to access and manipulate persistent volumes. For KDMP job pods, this can be controlled through annotations or ConfigMap settings that dictate whether the pod uses the anyuid SCC.

To enable KDMP job pods to run with the anyuid annotation, follow these steps:

  1. Update the kdmp-config ConfigMap to include the following entry:

    PROVISIONERS_TO_USE_ANYUID: <provisioner-name>
  2. Apply the ConfigMap to both clusters. Ensure the updated ConfigMap is applied to both the backup cluster and the restore cluster; this is required whenever you want KDMP job pods to run with the anyuid annotation. To update the kdmp-config ConfigMap:

    1. Retrieve the existing kdmp-config ConfigMap:

      kubectl get configmap kdmp-config -n <namespace> -o yaml > kdmp-config.yaml
    2. Edit the kdmp-config.yaml file in a text editor and add or update the PROVISIONERS_TO_USE_ANYUID entry:

      PROVISIONERS_TO_USE_ANYUID: <provisioner-name>
    3. Apply the updated ConfigMap to the respective clusters:

      kubectl apply -f kdmp-config.yaml -n <namespace>
    4. Verify the changes to ensure that the updated ConfigMap reflects in the required clusters:

      kubectl describe configmap kdmp-config -n <namespace>

    Confirm that KDMP job pods now run with the anyuid annotation.

By applying these changes to both the backup and restore clusters, you ensure consistent behavior for KDMP job pods requiring the anyuid annotation.
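Putting the steps together, the relevant portion of the ConfigMap looks like the fragment below; the provisioner names are placeholders. On OpenShift, job pods admitted under the anyuid SCC typically carry an openshift.io/scc: anyuid annotation, which you can inspect to confirm the behavior:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kdmp-config
  namespace: kube-system
data:
  # Comma-separated list of provisioners whose KDMP job pods should
  # run with the anyuid SCC (names below are placeholders).
  PROVISIONERS_TO_USE_ANYUID: <provisioner-name1>, <provisioner-name2>
```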
