Version: 2.8

kdmp-config ConfigMap

Exclude files and directories

Consider a scenario where you have directories or files (hidden, read-only, logs) with different extensions that you do not want to include in a backup. Portworx Backup lets you exclude such directories and files from being backed up. To do so, update the kdmp-config ConfigMap with the following steps:

  1. On the source application cluster, run the following command to edit the kdmp-config ConfigMap:

    kubectl edit cm kdmp-config -n kube-system
  2. Add the following key-value pair:

    KDMP_EXCLUDE_FILE_LIST: |
      <storageclassName1>=<dir-list>,<file-list1>,....
      <storageclassName2>=<dir-list>,<file-list1>,....

    Example

    apiVersion: v1
    data:
      KDMP_BACKUP_JOB_LIMIT: "5"
      KDMP_DELETE_JOB_LIMIT: "5"
      KDMP_EXCLUDE_FILE_LIST: |
        px-db=dir1,file1,dir2
        mysql=dir1,file1,dir2
      KDMP_KOPIAEXECUTOR_IMAGE: <kopiaexecutor-image-version>
      KDMP_KOPIAEXECUTOR_IMAGE_SECRET: ""
      KDMP_KOPIAEXECUTOR_LIMIT_CPU: "0.2"
      KDMP_KOPIAEXECUTOR_LIMIT_MEMORY: 1Gi
      KDMP_KOPIAEXECUTOR_REQUEST_CPU: "0.1"
      KDMP_KOPIAEXECUTOR_REQUEST_MEMORY: 700Mi
      KDMP_MAINTENANCE_JOB_LIMIT: "5"
      KDMP_RESTORE_JOB_LIMIT: "5"
      KDMP_RESTORE_PVC_SIZE_PERCENTAGE: "15"
      SNAPSHOT_TIMEOUT: ""
      pxb_job_node_affinity_label: <value>
      PROVISIONERS_TO_USE_ANYUID: <provisioner-name>
    kind: ConfigMap
    metadata:
      name: kdmp-config
      namespace: kube-system
      resourceVersion: "<resource-version>"
      uid: <uid>
    note

    The ENABLE_PX_GENERIC_BACKUP parameter is deprecated as of Portworx Backup version 2.7.0. To create a generic/KDMP backup in PXB 2.7.0 and later, use the cross-cloud backup option in the Portworx Backup web console.
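
To confirm that the exclude list was saved, you can read the key back from the ConfigMap. This is a minimal check; it only assumes the key name shown in the example above:

    kubectl get configmap kdmp-config -n kube-system -o jsonpath='{.data.KDMP_EXCLUDE_FILE_LIST}'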

You can exclude directories and files from backups for both direct KDMP and KDMP with CSI (local snapshot) backup types. For the latter, if a local snapshot already exists, no directories or files are excluded even after you update the kdmp-config ConfigMap with the key-value pair above. To restore with the exclusions applied, delete the existing local snapshot and then restore the backup.
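
If your local snapshots are retained as CSI VolumeSnapshot objects in the application namespace (an assumption about your setup; the resource and namespace names below are placeholders), a minimal sketch for finding and deleting them is:

    kubectl get volumesnapshot -n <application-namespace>
    kubectl delete volumesnapshot <local-snapshot-name> -n <application-namespace>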

KDMP backups and restores with large volume PVCs

KDMP job pods consume more memory when backing up or restoring large-volume PVCs to or from backup locations. As a result, you may see out-of-memory alerts or failures of the KDMP job pods that run on each application or target cluster. In these scenarios, Portworx by Pure Storage recommends increasing the CPU and memory limit parameters in the kdmp-config ConfigMap, which resides in the kube-system namespace on the application or target cluster.

  • Update the following parameters in the kdmp-config ConfigMap to resolve Out Of Memory (OOM) errors (one way to apply them is sketched after this list):

    KDMP_KOPIAEXECUTOR_LIMIT_CPU: "0.4"
    KDMP_KOPIAEXECUTOR_REQUEST_CPU: "0.2"
    KDMP_KOPIAEXECUTOR_REQUEST_MEMORY: 2Gi
    KDMP_KOPIAEXECUTOR_LIMIT_MEMORY: 4Gi
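
For example, these values can also be applied non-interactively with kubectl patch. This is a minimal sketch that assumes the recommended values listed above:

    kubectl -n kube-system patch configmap kdmp-config --type merge \
      -p '{"data":{"KDMP_KOPIAEXECUTOR_LIMIT_CPU":"0.4","KDMP_KOPIAEXECUTOR_REQUEST_CPU":"0.2","KDMP_KOPIAEXECUTOR_REQUEST_MEMORY":"2Gi","KDMP_KOPIAEXECUTOR_LIMIT_MEMORY":"4Gi"}}'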
note

Before you initiate a generic, KDMP, or cross-cloud backup, ensure that you allocate additional buffer storage that is double the size of the original Persistent Volume Claim (PVC) in your application cluster. For example, backing up a 100 Gi PVC requires about 200 Gi of additional buffer storage.

Configure node affinity

Backup and restore job pods for Kopia/NFS and S3 are now configured with node affinity, ensuring that they are scheduled on the specific nodes where the relevant application pods are running. This aligns the scheduling of job pods with application placement and minimizes the risk of failed backups due to network restrictions.

To configure node affinity:

  1. Select the required node on the application cluster and append this key-value pair to the kdmp-config ConfigMap (a node-labeling sketch follows these steps):

    pxb_job_node_affinity_label=<value>

  2. Ensure that the label is applied:

    kubectl get nodes -o wide --show-labels -l pxb_job_node_affinity_label=<value>
  3. Create backups and restores on these nodes, then check the YAML of the job pod with the following command to confirm that the node-affinity configuration works as expected:

    kubectl get pods -n <namespace> <backup-pod-name> -o yaml

    Sample output:

    .......
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: pxb_job_node_affinity_label
                operator: In
                values:
                - "<value>"
    .......
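
Step 1 assumes that the selected node already carries the pxb_job_node_affinity_label label. If it does not, you can apply it with kubectl label; the node name below is a placeholder:

    kubectl label node <node-name> pxb_job_node_affinity_label=<value>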

Configure KDMP job pods for anyuid

The anyuid annotation or security context constraint (SCC) is a Kubernetes policy, especially relevant in OpenShift environments, that allows pods to run with any user ID (UID) rather than being restricted to a specific range or user. Certain workloads or applications need to run with specific UIDs or as root (for example, system-level utilities or backup and restore jobs). KDMP (Kubernetes Data Management Platform) jobs, for instance, might require elevated privileges to access and manipulate persistent volumes. For KDMP job pods, this can be controlled through annotations or ConfigMap settings that dictate whether the pod uses the anyuid SCC.

To enable KDMP job pods to run with the anyuid annotation, follow these steps:

  1. Update the kdmp-config ConfigMap to include the following entry:

    PROVISIONERS_TO_USE_ANYUID: <provisioner-name>
  2. Apply the updated ConfigMap to both the backup cluster and the restore cluster. This configuration is required whenever you want KDMP job pods to run with the anyuid annotation. To update the kdmp-config ConfigMap:

    1. Retrieve the existing kdmp-config ConfigMap:

      kubectl get configmap kdmp-config -n <namespace> -o yaml > kdmp-config.yaml
    2. Edit the kdmp-config.yaml file and add or update the PROVISIONERS_TO_USE_ANYUID entry under data:

      PROVISIONERS_TO_USE_ANYUID: <provisioner-name>
    3. Apply the updated ConfigMap to the respective clusters:

      kubectl apply -f kdmp-config.yaml -n <namespace>
    4. Verify that the updated ConfigMap is reflected in the required clusters:

      kubectl describe configmap kdmp-config -n <namespace>

    Confirm that KDMP job pods now run with the anyuid annotation.

By applying these changes to both the backup and restore clusters, you ensure consistent behavior for KDMP job pods requiring the anyuid annotation.
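
On OpenShift, one way to confirm which SCC a KDMP job pod was admitted under is to read the openshift.io/scc annotation on the pod. This is a sketch; the pod and namespace names are placeholders:

    oc get pod <kdmp-job-pod-name> -n <namespace> -o jsonpath='{.metadata.annotations.openshift\.io/scc}'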
