Version: 2.10

kdmp-config ConfigMap parameters

This reference guide provides a list of all parameters available in the kdmp-config ConfigMap. These parameters are used to configure KDMP (Kubernetes Data Management Platform) backup and restore operations, including resource limits, job controls, file exclusions, and platform-specific settings. The kdmp-config ConfigMap resides in the kube-system namespace on application clusters and controls backup and restore behavior for Kubernetes workloads.

For instructions on how to update the kdmp-config ConfigMap and common configuration use cases, see Configure kdmp-config ConfigMap.

Kopia Executor

KDMP_KOPIAEXECUTOR_REQUEST_CPU
Default value: 0.1

Sets the minimum CPU resources requested by each backup and restore pod that is used in backup and restore of volumes with cross cloud enabled or when using the CSI with offload mechanism. Adjust this value based on the size of the data on the PVC being backed up or restored. Set a higher value if you are backing up or restoring larger data sets to ensure stable performance for each pod.
Usage: KDMP_KOPIAEXECUTOR_REQUEST_CPU: "0.2" sets the minimum CPU to 0.2 cores.
Configmap: kdmp-config
Cluster: App cluster and backup cluster

KDMP_KOPIAEXECUTOR_LIMIT_CPU
Default value: 0.2

Sets the maximum CPU resources allowed for each backup and restore pod that is used in backup and restore of volumes with cross cloud enabled or when using the CSI with offload mechanism. Increase this value if you are backing up or restoring large amounts of data on a PVC or if you observe CPU throttling during operations.
Usage: KDMP_KOPIAEXECUTOR_LIMIT_CPU: "0.5" sets the maximum CPU to 0.5 cores.
Configmap: kdmp-config
Cluster: App cluster and backup cluster

KDMP_KOPIAEXECUTOR_REQUEST_MEMORY
Default value: 700Mi

Sets the minimum memory requested by each backup and restore pod that is used in backup and restore of volumes with cross cloud enabled or when using the CSI with offload mechanism. Increase this value if the size of the data on the PVC is large to prevent OOM (Out Of Memory) kills and ensure each pod can process large data volumes.
Usage: KDMP_KOPIAEXECUTOR_REQUEST_MEMORY: "1Gi" sets the minimum memory to 1 GiB.
Configmap: kdmp-config
Cluster: App cluster and backup cluster

KDMP_KOPIAEXECUTOR_LIMIT_MEMORY
Default value: 1Gi

Sets the maximum memory allowed for each backup and restore pod that is used in backup and restore of volumes with cross cloud enabled or when using the CSI with offload mechanism. If you see OOM (Out Of Memory) errors or are working with large PVC data sets, increase this value to allow each pod to handle the workload without failing.
Usage: KDMP_KOPIAEXECUTOR_LIMIT_MEMORY: "2Gi" sets the maximum memory to 2 GiB.
Configmap: kdmp-config
Cluster: App cluster and backup cluster
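The four Kopia executor parameters are typically tuned together so that requests stay at or below their corresponding limits. A minimal sketch of the ConfigMap `data` entries for a larger-PVC profile (the values shown are illustrative, not recommendations):

```yaml
data:
  # Requests reserve resources for each executor pod; limits cap them.
  # Keep each request less than or equal to its matching limit.
  KDMP_KOPIAEXECUTOR_REQUEST_CPU: "0.2"
  KDMP_KOPIAEXECUTOR_LIMIT_CPU: "0.5"
  KDMP_KOPIAEXECUTOR_REQUEST_MEMORY: "1Gi"
  KDMP_KOPIAEXECUTOR_LIMIT_MEMORY: "2Gi"
```

Because these values apply per executor pod, total consumption scales with the number of concurrent backup and restore jobs.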

NFS Executor

KDMP_NFSEXECUTOR_REQUEST_CPU
Default value: 0.1

Sets the minimum CPU resources requested by each NFS Executor pod that is used in backup and restore to and from an NFS backup location. Increase this value if you are backing up or restoring a large number of resources, PVCs, or namespaces to ensure stable performance.
Usage: KDMP_NFSEXECUTOR_REQUEST_CPU: "0.2" sets the minimum CPU to 0.2 cores.
Configmap: kdmp-config
Cluster: App cluster and backup cluster

KDMP_NFSEXECUTOR_LIMIT_CPU
Default value: 0.5

Sets the maximum CPU resources allowed for each NFS Executor pod that is used in backup and restore to an NFS backup location. Increase this value if you observe CPU throttling or are working with many resources, PVCs, or namespaces.
Usage: KDMP_NFSEXECUTOR_LIMIT_CPU: "1" sets the maximum CPU to 1 core.
Configmap: kdmp-config
Cluster: App cluster and backup cluster

KDMP_NFSEXECUTOR_REQUEST_MEMORY
Default value: 700Mi

Sets the minimum memory requested by each NFS Executor pod that is used in backup and restore from an NFS backup location. Increase this value if you are backing up or restoring a large number of resources, PVCs, or namespaces to prevent OOM (Out Of Memory) kills.
Usage: KDMP_NFSEXECUTOR_REQUEST_MEMORY: "1Gi" sets the minimum memory to 1 GiB.
Configmap: kdmp-config
Cluster: App cluster and backup cluster

KDMP_NFSEXECUTOR_LIMIT_MEMORY
Default value: 1Gi

Sets the maximum memory allowed for each NFS Executor pod that is used in backup and restore to an NFS backup location. If you see OOM (Out Of Memory) errors or are working with many resources, PVCs, or namespaces, increase this value to allow each pod to handle the workload without failing.
Usage: KDMP_NFSEXECUTOR_LIMIT_MEMORY: "2Gi" sets the maximum memory to 2 GiB.
Configmap: kdmp-config
Cluster: App cluster and backup cluster

Job Rate Limits

KDMP_BACKUP_JOB_LIMIT
Default value: 5

Sets the maximum number of backup job pods spun up by Stork that can run concurrently on the application cluster with cross cloud enabled or when using the CSI with offload mechanism. Increase this value to back up more PVCs in single or multiple backups at any given time, but be aware that this increases overall CPU and memory consumption on the application cluster.
Usage: KDMP_BACKUP_JOB_LIMIT: "10" sets the limit to 10 concurrent backup jobs.
Configmap: kdmp-config
Cluster: App cluster

KDMP_RESTORE_JOB_LIMIT
Default value: 5

Sets the maximum number of restore job pods spun up per volume by Stork that can run concurrently on the application cluster when restoring backups. Increase this value to restore more backups at the same time. Higher values increase overall CPU and memory consumption on the application cluster.
Usage: KDMP_RESTORE_JOB_LIMIT: "10" sets the limit to 10 concurrent restore jobs.
Configmap: kdmp-config
Cluster: App cluster

KDMP_DELETE_JOB_LIMIT
Default value: 5

Sets the maximum number of delete job pods spun up per volume by Stork that can run concurrently on the backup cluster when deleting backups. Increase this value to delete more backups at once. Higher values increase overall CPU and memory consumption on the backup cluster. For a new value to take effect, set it on both the App cluster and the Backup cluster.
Usage: KDMP_DELETE_JOB_LIMIT: "10" sets the limit to 10 concurrent delete jobs.
Configmap: kdmp-config
Cluster: App cluster and Backup cluster
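The three job limits live side by side in the same ConfigMap, but note that they are applied on different clusters. A hedged sketch of the `data` entries with the per-cluster scope noted inline (values are illustrative):

```yaml
data:
  KDMP_BACKUP_JOB_LIMIT: "10"    # applied on the app cluster
  KDMP_RESTORE_JOB_LIMIT: "10"   # applied on the app cluster
  KDMP_DELETE_JOB_LIMIT: "10"    # must be set on both app and backup clusters
```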

Backup/Restore Configuration

KDMP_EXCLUDE_FILE_LIST
Default value: None

Sets a list of files and directories to exclude from backups for a given storage class. Use this parameter to skip hidden, read-only, or unnecessary files and directories when using cross cloud or when using the CSI with offload. Only modify this if you need to optimize backup size or avoid backing up irrelevant data.
Usage: KDMP_EXCLUDE_FILE_LIST: "storageclass1:/tmp,/var/log;storageclass2:/cache" excludes /tmp and /var/log directories for storageclass1 and /cache directory for storageclass2.
Configmap: kdmp-config
Cluster: App cluster
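The exclude list uses a small syntax of its own: entries for different storage classes are separated by semicolons, and each entry pairs a storage class name with a comma-separated list of paths. A sketch of the ConfigMap entry, using the storage class names from the usage example above (which are placeholders, not real classes):

```yaml
data:
  # Format: <storageclass>:<path>[,<path>...][;<storageclass>:...]
  # storageclass1 and storageclass2 are illustrative names.
  KDMP_EXCLUDE_FILE_LIST: "storageclass1:/tmp,/var/log;storageclass2:/cache"
```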

KDMP_RESTORE_PVC_SIZE_PERCENTAGE
Default value: 10%

Sets the percentage by which your target PVC should be larger than the source PVC when restoring data using cross cloud or when using the CSI with offload mechanism. For example, at the default of 10%, restoring a 100 GiB source PVC provisions a 110 GiB target PVC. If a restore of a PVC that is 90-100% full fails due to insufficient space, review the restore pod logs for errors about space requirements. These restores can require 3-5% more space than the original PVC, especially if the restore is performed from an offloaded copy and a local snapshot is not available. Increase this percentage as needed to provide enough buffer and ensure the restore completes successfully.
Usage: KDMP_RESTORE_PVC_SIZE_PERCENTAGE: "15" sets the PVC size buffer to 15%.
Configmap: kdmp-config
Cluster: App cluster

Kubernetes API Configuration (additional control plane nodes and kube-apiserver CPU and memory)

K8S_QPS
Default value: 100

Sets the maximum number of queries per second that the stork, px-backup, and middleware deployments can send to the kube-apiserver. Adjust this value based on your cluster's capacity and workload to control the load these components place on the kube-apiserver and to optimize cluster performance.
Usage: K8S_QPS: "200" sets the query rate to 200 queries per second.
Location: Stork (App Cluster), Backup (Backup Cluster) and Middleware (Backup Cluster) spec env
Cluster: Backup cluster and App cluster

K8S_BURST
Default value: 100

Sets the maximum number of queries that the stork, px-backup, and middleware deployments can send to the kube-apiserver in a single burst. Adjust this value based on your system's capacity and current load for optimal results.
Usage: K8S_BURST: "200" sets the burst limit to 200 queries.
Location: Stork (App Cluster), Backup (Backup Cluster) and Middleware (Backup Cluster) spec env
Cluster: Backup cluster and App cluster
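Unlike most parameters on this page, K8S_QPS and K8S_BURST are set as environment variables in the deployment specs rather than in the kdmp-config ConfigMap. A minimal sketch for the Stork deployment on the app cluster (the container name is illustrative; match it to your actual deployment):

```yaml
spec:
  template:
    spec:
      containers:
        - name: stork   # illustrative container name
          env:
            - name: K8S_QPS
              value: "200"
            - name: K8S_BURST
              value: "200"
```

The same pattern would apply to the backup and middleware deployments on the backup cluster.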

Security & Platform Configuration

PROVISIONERS_TO_USE_ANYUID
Default value: None

Sets a comma-separated list of provisioners for which cross cloud backup jobs should run with the SCC context UID matching the application pod. Use this in OpenShift clusters where the security context uses a user range instead of a single UID.
Usage: PROVISIONERS_TO_USE_ANYUID: "kubernetes.io/rbd,kubernetes.io/aws-ebs" configures RBD and AWS EBS provisioners to use anyuid SCC.
Configmap: kdmp-config
Cluster: App cluster

KDMP_DISABLE_ISTIO_CONFIG
Default value: false

Set this parameter to add a label that disables the Istio sidecar container for backup, restore, and NFS executor pods. Use this if px-backup pods get stuck during cleanup for backups taken with cross cloud or the CSI with offload mechanism to S3 or NFS backup locations, especially on Kubernetes 1.28 or Istio 1.19.
Usage: KDMP_DISABLE_ISTIO_CONFIG: "true" disables Istio sidecar injection for KDMP pods.
Configmap: kdmp-config
Cluster: App cluster

Timeout Configuration

SNAPSHOT_TIMEOUT
Default value: 30m

Sets the maximum time to wait for a CSI snapshot to complete during backup. Increase this value if your CSI snapshots are taking longer than expected and backups are failing due to snapshot timeout errors.
Usage: SNAPSHOT_TIMEOUT: "1h" sets the timeout to 1 hour.
Configmap: kdmp-config
Cluster: App cluster

MOUNT_FAILURE_RETRY_TIMEOUT
Default value: 150

Sets the timeout (in seconds) for mount failure errors during cross cloud backups or backups to an NFS backup location. Increase this value to wait longer for mounts to succeed when a busy backend causes the mount to take more time to complete.
Usage: MOUNT_FAILURE_RETRY_TIMEOUT: "300" sets the timeout to 300 seconds (5 minutes).
Configmap: kdmp-config
Cluster: App cluster

CloudSnap configuration

DISABLE_PX_CS_DISTRIBUTION
Default value: False

Controls the distribution of Portworx CloudSnap requests across nodes. Set to "True" to disable distribution if it causes significant pool load or latency in your environment.
Usage: DISABLE_PX_CS_DISTRIBUTION: "True" disables CloudSnap distribution across nodes.
Configmap: kdmp-config
Cluster: App cluster

PX_CS_VOLUME_BATCH_COUNT
Default value: 5

Specifies the maximum number of volumes per VM that can have active CloudSnap operations concurrently. Allowed values are 1 or greater; there is no upper limit.
Usage: PX_CS_VOLUME_BATCH_COUNT: "10" sets the batch count to 10 volumes per VM.
Configmap: kdmp-config
Cluster: App cluster

PX_CS_VOLUME_BATCH_SNAPSHOTS
Default value: true

Controls whether Portworx Backup waits for the snapshot ID/completion of the current volume batch before proceeding. This ensures serialized VM-level backup progression. Setting this to "false" is not recommended.
Usage: PX_CS_VOLUME_BATCH_SNAPSHOTS: "true" enables waiting for batch completion before proceeding.
Configmap: kdmp-config
Cluster: App cluster

PX_CS_VOLUME_SNAPSHOT_TIMEOUT
Default value: 300

Specifies the timeout (in seconds) for volume snapshots in a VM. This timeout is applied per VM. In case of multiple VMs in a schedule, the timeout is reset to the configured value for each VM. If freeze/unfreeze timeouts are configured, they take precedence over this value. You can increase or decrease this value as needed. VM backups will fail if snapshots do not complete within the specified time.
Usage: PX_CS_VOLUME_SNAPSHOT_TIMEOUT: "600" sets the timeout to 600 seconds (10 minutes).
Configmap: kdmp-config
Cluster: App cluster
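The CloudSnap parameters above are usually tuned as a group for VM backups. A hedged sketch of the relevant `data` entries on the app cluster (values are illustrative; note the per-VM semantics):

```yaml
data:
  DISABLE_PX_CS_DISTRIBUTION: "False"     # keep distribution enabled
  PX_CS_VOLUME_BATCH_COUNT: "10"          # concurrent CloudSnap volumes per VM
  PX_CS_VOLUME_BATCH_SNAPSHOTS: "true"    # wait for each batch; "false" is not recommended
  PX_CS_VOLUME_SNAPSHOT_TIMEOUT: "600"    # seconds, applied (and reset) per VM
```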