Version: 2.10

Configure kdmp-config ConfigMap parameters

The kdmp-config ConfigMap is a central configuration component used by Portworx Backup to manage backup and restore operations for Kubernetes workloads using Kubernetes Data Management Platform (KDMP). This ConfigMap resides in the kube-system namespace (in the application cluster) and contains various settings to fine-tune backup behaviors, handle large PVCs, manage memory limits, control exclusions, and enable support for environments like OpenShift.

You might need to update the kdmp-config ConfigMap in scenarios similar to the following:

  • When working with large volume PVCs that require additional resources
  • To optimize backup performance by adjusting concurrent job limits
  • For platform-specific configurations like OpenShift or Istio
  • To handle timeout issues during backup/restore operations

For a full list of all available parameters, see the kdmp-config parameters reference.

How to update the kdmp-config ConfigMap

Prerequisites

Before configuring the kdmp-config ConfigMap, ensure the following:

  • Your kubeconfig is connected to the application cluster where the kdmp-config ConfigMap resides
  • The kdmp-config ConfigMap exists in the kube-system namespace
  • You have sufficient permissions to edit ConfigMaps, and the correct kubeconfig for the cluster hosting the Portworx Backup instance

All kubectl commands in this document must be executed against the application cluster unless otherwise specified.

To modify the ConfigMap:

  1. Edit the ConfigMap using kubectl:

    kubectl edit cm kdmp-config -n kube-system
  2. Add or modify settings as needed. For example, to update memory limits:

    apiVersion: v1
    data:
      KDMP_BACKUP_JOB_LIMIT: "5"
      KDMP_DELETE_JOB_LIMIT: "5"
      KDMP_EXCLUDE_FILE_LIST: |
        px-db=dir1,file1,dir2
        mysql=dir1,file1,dir2
      KDMP_KOPIAEXECUTOR_LIMIT_CPU: "0.2"
      KDMP_KOPIAEXECUTOR_LIMIT_MEMORY: 1Gi
      KDMP_KOPIAEXECUTOR_REQUEST_CPU: "0.1"
      KDMP_KOPIAEXECUTOR_REQUEST_MEMORY: 700Mi
      KDMP_RESTORE_JOB_LIMIT: "5"
      KDMP_RESTORE_PVC_SIZE_PERCENTAGE: "15"
      SNAPSHOT_TIMEOUT: ""
      pxb_job_node_affinity_label: kopia-backup
      KDMP_DISABLE_ISTIO_CONFIG: "true"
      PROVISIONERS_TO_USE_ANYUID: <provisioner-name1>, <provisioner-name2>
    kind: ConfigMap
    metadata:
      name: kdmp-config
      namespace: kube-system
      resourceVersion: "<resource-version>"
      uid: <uid>
  3. Save the changes. The new settings will apply to subsequent operations.
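If you prefer a non-interactive update, you can build a merge patch for a single key and apply it with kubectl patch. The sketch below only constructs and prints the patch document; the key and value shown are illustrative, and the kubectl command in the trailing comment assumes your kubeconfig targets the application cluster:

```shell
# Build a merge-patch document that updates one kdmp-config key.
# KDMP_BACKUP_JOB_LIMIT and "5" are illustrative values.
key="KDMP_BACKUP_JOB_LIMIT"
value="5"
patch=$(printf '{"data":{"%s":"%s"}}' "$key" "$value")
echo "$patch"

# Apply it against the application cluster (requires cluster access):
#   kubectl patch configmap kdmp-config -n kube-system --type merge -p "$patch"
```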

note

Changes to kdmp-config take effect for new backup and restore operations. Existing operations continue with their original settings.

Common configuration use cases

Exclude files and directories

Consider a scenario where you have directories or files (hidden, read-only, logs) with different extensions that you want to keep out of a backup. Portworx Backup allows you to exclude such directories and files from being backed up. To do so, update the kdmp-config ConfigMap as follows:

  1. On the source application cluster, run:

    kubectl edit cm kdmp-config -n kube-system
  2. Add the following key-value pair:

    KDMP_EXCLUDE_FILE_LIST: |
      <storageclassName1>=<dir-list>,<file-list1>,....
      <storageclassName2>=<dir-list>,<file-list1>,....

    Example

    apiVersion: v1
    data:
      KDMP_EXCLUDE_FILE_LIST: |
        px-db=dir1,file1,dir2
        mysql=dir1,file1,dir2
      KDMP_DISABLE_ISTIO_CONFIG: "true"
      PROVISIONERS_TO_USE_ANYUID: <provisioner-name1>, <provisioner-name2>
    kind: ConfigMap
    metadata:
      name: kdmp-config
      namespace: kube-system
      resourceVersion: "<resource-version>"
      uid: <uid>

    KDMP_DISABLE_ISTIO_CONFIG can be true or false. When set to true, it disables automatic sidecar container injection on KDMP job pods created by px-backup for backup and restore on an Istio-enabled application cluster.

You can exclude directories and files from being backed up for both direct KDMP and KDMP with CSI (local snapshot) backup types. For the latter, if a local snapshot already exists, the exclusions do not take effect even after you update the ConfigMap. To restore with the excluded directories or files omitted, delete the existing local snapshot and then restore the backup.
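The KDMP_EXCLUDE_FILE_LIST format maps one storage class per line to a comma-separated list of paths. A minimal sketch of how such a value breaks down (the storage class names and paths are the sample values from the example above):

```shell
# Split a KDMP_EXCLUDE_FILE_LIST value into per-storage-class entries.
exclude_list='px-db=dir1,file1,dir2
mysql=dir1,file1,dir2'

summary=""
while IFS='=' read -r sc paths; do
  [ -n "$sc" ] || continue                  # skip blank lines
  summary="${summary}${sc} -> ${paths}\n"   # one line per storage class
done <<EOF
$exclude_list
EOF
printf "$summary"
```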

KDMP backups and restores with large volume PVCs

KDMP job pods consume increased amounts of memory during backup and restore operations on large volume PVCs, and you may see out-of-memory alerts or KDMP job pod failures. In these scenarios, increase the CPU and memory limit parameters in the kdmp-config ConfigMap in the kube-system namespace on the target or application cluster.

Update the following parameters to resolve Out Of Memory (OOM) errors:

KDMP_KOPIAEXECUTOR_LIMIT_CPU: "0.4"
KDMP_KOPIAEXECUTOR_REQUEST_CPU: "0.2"
KDMP_KOPIAEXECUTOR_REQUEST_MEMORY: 2Gi
KDMP_KOPIAEXECUTOR_LIMIT_MEMORY: 4Gi
note

Before you initiate a generic, KDMP, or cross-cloud backup, ensure you allocate additional buffer storage that is double the size of the original Persistent Volume Claim (PVC) in your application cluster.
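As a quick sanity check for the sizing rule in the note above, the extra buffer to reserve is simply twice the original PVC size; the 100Gi figure below is illustrative:

```shell
# Buffer storage required before a generic/KDMP/cross-cloud backup:
# double the size of the original PVC (sizes in Gi).
pvc_size_gi=100                          # illustrative PVC size
required_buffer_gi=$((pvc_size_gi * 2))
echo "PVC: ${pvc_size_gi}Gi -> reserve ${required_buffer_gi}Gi of buffer storage"
```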

Configure KDMP job pods for anyuid

The anyuid annotation or security context constraint (SCC) is a policy in Kubernetes, especially relevant in OpenShift environments, that allows pods to run with any user ID (UID) rather than being restricted to a specific range or user. Certain workloads or applications need to run with specific UIDs or as root (for example, system-level utilities or backup and restore jobs), and KDMP jobs might require elevated privileges to access and manipulate persistent volumes. For KDMP job pods, this can be controlled through annotations or ConfigMap settings that dictate whether the pod uses the anyuid SCC.

To enable KDMP job pods to run with the anyuid annotation:

  1. Update the kdmp-config ConfigMap:

    PROVISIONERS_TO_USE_ANYUID: <provisioner-name>

    You can specify a single provisioner or multiple provisioners (comma-separated).

  2. Apply the ConfigMap to both the backup cluster and the restore cluster. This configuration is required whenever you want KDMP job pods to run with the anyuid annotation. To update the kdmp-config ConfigMap:

    • Retrieve the existing ConfigMap:

      kubectl get configmap kdmp-config -n <namespace> -o yaml > kdmp-config.yaml
    • Edit the file and add or update the PROVISIONERS_TO_USE_ANYUID entry.

    • Apply the updated ConfigMap:

      kubectl apply -f kdmp-config.yaml -n <namespace>
    • Verify:

      kubectl describe configmap kdmp-config -n <namespace>

    Confirm that KDMP job pods now run with the anyuid annotation.

By applying these changes to both the backup and restore clusters, you ensure consistent behavior for KDMP job pods requiring the anyuid annotation.
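For reference, a populated entry might look like the fragment below. The provisioner names are illustrative: pxd.portworx.com is the Portworx CSI provisioner, and csi.example.com stands in for any other CSI driver in your environment.

```yaml
# Hypothetical data fragment of the kdmp-config ConfigMap:
# KDMP job pods for PVCs from these provisioners run with the anyuid SCC.
data:
  PROVISIONERS_TO_USE_ANYUID: pxd.portworx.com,csi.example.com
```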

Configure CloudSnap load handling parameters

Portworx Backup provides CloudSnap load handling controls (from Portworx Stork version 25.6.1 onwards) to manage concurrent operations and maintain system performance. These controls allow you to throttle concurrent CloudSnaps and serialize volume processing to prevent resource bottlenecks and storage latency in environments with high-density VM schedules and multi-disk VMs.

To configure CloudSnap load handling, update the kdmp-config ConfigMap in the kube-system namespace on all application clusters with the following parameters:

  • DISABLE_PX_CS_DISTRIBUTION: Controls Portworx CloudSnap requests distribution across nodes. Set to "True" to disable distribution if it causes significant pool load or latency in your environment. Default is "False".

  • PX_CS_VOLUME_BATCH_COUNT: Specifies the maximum number of volumes per VM that can have active CloudSnap operations concurrently. Default is "5"; any value greater than or equal to 1 is allowed, with no upper limit.

  • PX_CS_VOLUME_BATCH_SNAPSHOTS: Controls whether Portworx Backup waits for the snapshot ID/completion of the current volume batch before proceeding. This ensures serialized VM-level backup progression. Default is "true"; setting this to "false" is not recommended.

  • PX_CS_VOLUME_SNAPSHOT_TIMEOUT: Specifies the timeout (in seconds) for volume snapshots in a VM. This timeout is applied per VM. In case of multiple VMs in a schedule, the timeout is reset to the configured value for each VM. If freeze/unfreeze timeouts are configured, they take precedence over this value. Default is "300" seconds. You can increase or decrease this value as needed. VM backups will fail if snapshots do not complete within the specified time.

Example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kdmp-config
  namespace: kube-system
  resourceVersion: "<resource-version>"
  uid: <resource-uid>
data:
  ...
  DISABLE_PX_CS_DISTRIBUTION: "True"
  PX_CS_VOLUME_BATCH_COUNT: "10"
  PX_CS_VOLUME_BATCH_SNAPSHOTS: "false"
  PX_CS_VOLUME_SNAPSHOT_TIMEOUT: "600"
  ...
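Because PX_CS_VOLUME_SNAPSHOT_TIMEOUT is applied per VM and reset for each VM in a schedule, the worst-case time a schedule can spend waiting on snapshots grows linearly with the VM count. A back-of-envelope sketch (the timeout matches the example above; the VM count is illustrative):

```shell
# Worst-case snapshot wait for a schedule: per-VM timeout x number of VMs.
snapshot_timeout_s=600   # PX_CS_VOLUME_SNAPSHOT_TIMEOUT from the example
vm_count=8               # illustrative number of VMs in the schedule
worst_case_s=$((snapshot_timeout_s * vm_count))
echo "worst-case snapshot wait: ${worst_case_s}s"
```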

Configure node affinity

Live Backup

  • The PVC is in use by an application pod.
  • Node affinity is NOT supported for ReadWriteOnce (RWO) PVCs.
  • Node affinity is supported for ReadWriteMany (RWX) and ReadOnlyMany (ROX) PVCs.

Non-Live Backup

  • The PVC is not in use by any application pod.
  • Node affinity is supported for all PVC access modes: RWO, RWX, and ROX.

Backup Types and Node Affinity Support

Backup Type       PVC Access Mode            Node Affinity
Live Backup       ReadWriteOnce (RWO)        Not supported
                  ReadWriteMany (RWX)        Supported
                  ReadOnlyMany (ROX)         Supported
                  ReadWriteOncePod (RWOP)    Not supported
Non-Live Backup   ReadWriteOnce (RWO)        Supported
                  ReadWriteMany (RWX)        Supported
                  ReadOnlyMany (ROX)         Supported
                  ReadWriteOncePod (RWOP)    Supported

Configure node affinity for KDMP backup/restore jobs

To ensure that KDMP backup and restore jobs are scheduled on a specific node:

  1. Label the desired node:

    kubectl label node <node-name> pxb_job_node_affinity_label=<value>

    Replace <node-name> with your node name, and <value> with any valid Kubernetes label value.

    Example:

    kubectl label node ip-xx-xx-xx-xx.pwx.purestorage.com pxb_job_node_affinity_label=kopia-backup
  2. Verify the label:

    kubectl get nodes -o wide --show-labels -l pxb_job_node_affinity_label=<value>
  3. Update the kdmp-config ConfigMap:

    kubectl edit configmap kdmp-config -n kube-system

    Add the same label key-value pair under data:

    pxb_job_node_affinity_label: <value>
  4. Verify pod scheduling: Create Kopia backups and restores for the desired applications. Then confirm that the node affinity configuration is applied by inspecting the YAML of the backup/restore job pods and ensuring that each pod has been scheduled on the node where the label was applied:

    kubectl get pods <pod-name> -n <namespace> -o yaml

    Replace <pod-name> with the name of the Kopia backup/restore pod and <namespace> with the namespace where the Kopia backup/restore pod is running.

    Example:

    kubectl get pods mysql-xxxxxxxx-xxxx -n test -o yaml

    Sample output:

    .......
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: pxb_job_node_affinity_label
                operator: In
                values:
                - kopia-backup
    .......
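Since pxb_job_node_affinity_label values must be valid Kubernetes label values, a quick pre-check before labeling the node can catch typos. This sketch applies the standard label-value rules (at most 63 characters; alphanumeric at both ends; -, _, and . allowed in between) to an illustrative value:

```shell
# Validate a candidate pxb_job_node_affinity_label value.
value="kopia-backup"   # illustrative label value
if [ ${#value} -le 63 ] &&
   printf '%s' "$value" | grep -Eq '^[A-Za-z0-9]([-A-Za-z0-9_.]*[A-Za-z0-9])?$'; then
  result="valid"
else
  result="invalid"
fi
echo "$value is $result"
```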