# kdmp-config ConfigMap parameters
This reference guide lists all parameters available in the `kdmp-config` ConfigMap. These parameters configure KDMP (Kubernetes Data Management Platform) backup and restore operations, including resource limits, job controls, file exclusions, and platform-specific settings. The `kdmp-config` ConfigMap resides in the `kube-system` namespace on application clusters and controls backup and restore behavior for Kubernetes workloads.
For instructions on how to update the `kdmp-config` ConfigMap and for common configuration use cases, see Configure kdmp-config ConfigMap.
## Kopia Executor
| Key/Parameter | Default Value | Description |
|---|---|---|
| `KDMP_KOPIAEXECUTOR_REQUEST_CPU` | 0.1 | Sets the minimum CPU resources requested by each backup and restore pod used when backing up or restoring volumes with cross cloud enabled or when using the CSI with offload mechanism. Adjust this value based on the size of the data on the PVC being backed up or restored; set a higher value for larger data sets to ensure stable performance for each pod. |
| `KDMP_KOPIAEXECUTOR_LIMIT_CPU` | 0.2 | Sets the maximum CPU resources allowed for each backup and restore pod used when backing up or restoring volumes with cross cloud enabled or when using the CSI with offload mechanism. Increase this value if you are backing up or restoring large amounts of data on a PVC or if you observe CPU throttling during operations. |
| `KDMP_KOPIAEXECUTOR_REQUEST_MEMORY` | 700Mi | Sets the minimum memory requested by each backup and restore pod used when backing up or restoring volumes with cross cloud enabled or when using the CSI with offload mechanism. Increase this value if the data on the PVC is large, to prevent OOM (Out Of Memory) kills and ensure each pod can process large data volumes. |
| `KDMP_KOPIAEXECUTOR_LIMIT_MEMORY` | 1Gi | Sets the maximum memory allowed for each backup and restore pod used when backing up or restoring volumes with cross cloud enabled or when using the CSI with offload mechanism. If you see OOM (Out Of Memory) errors or are working with large PVC data sets, increase this value so each pod can handle the workload without failing. |
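For example, to raise the Kopia executor resources for pods that handle large PVCs, you might set the following in the ConfigMap. The values below are illustrative, not recommendations; the NFS executor keys in the next section follow the same pattern:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kdmp-config
  namespace: kube-system
data:
  # Illustrative values for larger PVCs; tune to your workload.
  KDMP_KOPIAEXECUTOR_REQUEST_CPU: "0.5"
  KDMP_KOPIAEXECUTOR_LIMIT_CPU: "1"
  KDMP_KOPIAEXECUTOR_REQUEST_MEMORY: "1Gi"
  KDMP_KOPIAEXECUTOR_LIMIT_MEMORY: "2Gi"
```

ConfigMap values are strings, so quote numeric values.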
## NFS Executor
| Key/Parameter | Default Value | Description |
|---|---|---|
| `KDMP_NFSEXECUTOR_REQUEST_CPU` | 0.1 | Sets the minimum CPU resources requested by each NFS executor pod used when backing up to or restoring from an NFS backup location. Increase this value if you are backing up or restoring a large number of resources, PVCs, or namespaces to ensure stable performance. |
| `KDMP_NFSEXECUTOR_LIMIT_CPU` | 0.5 | Sets the maximum CPU resources allowed for each NFS executor pod used when backing up to or restoring from an NFS backup location. Increase this value if you observe CPU throttling or are working with many resources, PVCs, or namespaces. |
| `KDMP_NFSEXECUTOR_REQUEST_MEMORY` | 700Mi | Sets the minimum memory requested by each NFS executor pod used when backing up to or restoring from an NFS backup location. Increase this value if you are backing up or restoring a large number of resources, PVCs, or namespaces to prevent OOM (Out Of Memory) kills. |
| `KDMP_NFSEXECUTOR_LIMIT_MEMORY` | 1Gi | Sets the maximum memory allowed for each NFS executor pod used when backing up to or restoring from an NFS backup location. If you see OOM (Out Of Memory) errors or are working with many resources, PVCs, or namespaces, increase this value so each pod can handle the workload without failing. |
## Job Rate Limits
| Key/Parameter | Default Value | Description |
|---|---|---|
| `KDMP_BACKUP_JOB_LIMIT` | 5 | Sets the maximum number of backup job pods spun up by Stork that can run concurrently on the application cluster with cross cloud enabled or when using the CSI with offload mechanism. Increase this value to back up more PVCs in single or multiple backups at any given time, but be aware that this increases overall CPU and memory consumption on the application cluster. |
| `KDMP_RESTORE_JOB_LIMIT` | 5 | Sets the maximum number of restore job pods spun up per volume by Stork that can run concurrently on the application cluster when restoring backups. Increase this value to restore more backups at the same time. Higher values increase overall CPU and memory consumption on the application cluster. |
| `KDMP_DELETE_JOB_LIMIT` | 5 | Sets the maximum number of delete job pods spun up per volume by Stork that can run concurrently on the backup cluster when deleting backups. Increase this value to delete more backups at once. Higher values increase overall CPU and memory consumption on the backup cluster. For a new value to take effect, set it on both the application cluster and the backup cluster. |
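A fragment of the ConfigMap's `data` section that raises these concurrency limits might look like this (values illustrative):

```yaml
data:
  KDMP_BACKUP_JOB_LIMIT: "10"   # more PVCs backed up concurrently
  KDMP_RESTORE_JOB_LIMIT: "10"  # more concurrent restores
  KDMP_DELETE_JOB_LIMIT: "10"   # must be set on both app and backup clusters
```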
## Backup/Restore Configuration
| Key/Parameter | Default Value | Description |
|---|---|---|
| `KDMP_EXCLUDE_FILE_LIST` | None | Sets a list of files and directories to exclude from backups for a given storage class. Use this parameter to skip hidden, read-only, or unnecessary files and directories when using cross cloud or the CSI with offload mechanism. Modify this only if you need to reduce backup size or avoid backing up irrelevant data. |
| `KDMP_RESTORE_PVC_SIZE_PERCENTAGE` | 10% | Sets the percentage by which your target PVC should be larger than the source PVC when restoring data using cross cloud or the CSI with offload mechanism. If a restore of a PVC that is 90-100% full fails due to insufficient space, review the restore pod logs for errors about space requirements. These restores can require 3-5% more space than the original PVC, especially if the restore is performed from an offloaded copy and a local snapshot is not available. Increase this percentage as needed to provide enough buffer and ensure the restore completes successfully. |
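As a sketch, an exclude list and a larger restore buffer could be configured as follows. The storage-class-to-paths format shown for `KDMP_EXCLUDE_FILE_LIST`, and whether the percentage value takes a `%` sign, are assumptions; verify the exact syntax in Configure kdmp-config ConfigMap:

```yaml
data:
  # Assumed format: <storage-class>=<comma-separated files/directories>
  KDMP_EXCLUDE_FILE_LIST: |
    px-db=.snapshot,lost+found
  # Assumed to be a bare number; confirm whether a "%" suffix is expected.
  KDMP_RESTORE_PVC_SIZE_PERCENTAGE: "15"
```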
## Kubernetes API Configuration
Tune these parameters when your cluster has additional control plane (master) nodes or a kube-apiserver with more CPU and memory.
| Key/Parameter | Default Value | Description |
|---|---|---|
| `K8S_QPS` | 100 | Sets the maximum number of queries per second that the Stork, px-backup, and middleware deployments can send to the kube-apiserver. Adjust this value based on your cluster's capacity and workload to control the load these components place on the kube-apiserver and to optimize cluster performance. |
| `K8S_BURST` | 100 | Sets the maximum number of queries that the Stork, px-backup, and middleware deployments can send to the kube-apiserver in a single burst. Adjust this value based on your system's capacity and current load for optimal results. |
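For example, a cluster with a larger control plane might tolerate higher API rates (illustrative values):

```yaml
data:
  K8S_QPS: "200"
  K8S_BURST: "400"
```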
## Security & Platform Configuration
| Key/Parameter | Default Value | Description |
|---|---|---|
| `PROVISIONERS_TO_USE_ANYUID` | None | Sets a comma-separated list of provisioners for which cross cloud backup jobs should run with the SCC context UID matching the application pod. Use this in OpenShift clusters where the security context uses a user range instead of a single UID. |
| `KDMP_DISABLE_ISTIO_CONFIG` | false | Set this parameter to true to add a label that disables the Istio sidecar container for backup, restore, and NFS executor pods. Use this if px-backup pods get stuck during cleanup of backups taken with cross cloud or the CSI with offload mechanism to S3 or NFS backup locations, especially on Kubernetes 1.28 or Istio 1.19. |
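A sketch of both settings; the provisioner names below are placeholders, not recommendations:

```yaml
data:
  # Placeholder provisioner names; list the provisioners your PVCs actually use.
  PROVISIONERS_TO_USE_ANYUID: "kubernetes.io/aws-ebs,csi.example.com"
  KDMP_DISABLE_ISTIO_CONFIG: "true"
```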
## Timeout Configuration
| Key/Parameter | Default Value | Description |
|---|---|---|
| `SNAPSHOT_TIMEOUT` | 30m | Sets the maximum time to wait for a CSI snapshot to complete during backup. Increase this value if your CSI snapshots take longer than expected and backups fail with snapshot timeout errors. |
| `MOUNT_FAILURE_RETRY_TIMEOUT` | 150 | Sets the timeout value for mount failure errors during cross cloud backups or backups to an NFS backup location. Increase this value to wait longer for mounts to succeed when a busy backend causes mounts to take more time to complete. |
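For example, to tolerate slow CSI snapshots and a busy NFS backend (illustrative values):

```yaml
data:
  SNAPSHOT_TIMEOUT: "60m"
  MOUNT_FAILURE_RETRY_TIMEOUT: "300"
```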
## CloudSnap Configuration
| Key/Parameter | Default Value | Description |
|---|---|---|
| `DISABLE_PX_CS_DISTRIBUTION` | False | Controls the distribution of Portworx CloudSnap requests across nodes. Set to true to disable distributing CloudSnap requests across nodes. |
| `PX_CS_VOLUME_BATCH_COUNT` | 5 | Specifies the maximum number of volumes per VM that can have active CloudSnap operations concurrently. Allowed values are integers greater than or equal to 1; there is no upper limit. |
| `PX_CS_VOLUME_BATCH_SNAPSHOTS` | true | Controls whether Portworx Backup waits for the snapshot ID/completion of the current volume batch before proceeding. This ensures serialized VM-level backup progression. Setting this to false disables this serialized behavior. |
| `PX_CS_VOLUME_SNAPSHOT_TIMEOUT` | 300 | Specifies the timeout (in seconds) for volume snapshots in a VM. The timeout is applied per VM; when a schedule includes multiple VMs, the timeout is reset to the configured value for each VM. If freeze/unfreeze timeouts are configured, they take precedence over this value. You can increase or decrease this value as needed. VM backups fail if snapshots do not complete within the specified time. |
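A combined sketch of the CloudSnap settings (illustrative values):

```yaml
data:
  DISABLE_PX_CS_DISTRIBUTION: "true"   # disable distribution across nodes
  PX_CS_VOLUME_BATCH_COUNT: "10"       # up to 10 concurrent volumes per VM
  PX_CS_VOLUME_BATCH_SNAPSHOTS: "true" # wait for each batch before proceeding
  PX_CS_VOLUME_SNAPSHOT_TIMEOUT: "600" # seconds, applied per VM
```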