
Configure resource limits, placements, tolerations, and nodeAffinity for PX-CSI components

Starting with PX-CSI 25.8.0, a new ComponentK8sConfig custom resource (CR) named px-pure-csi-k8s-config is created by default. Use this CR to configure CPU and memory limits, tolerations, and node affinity across PX-CSI components, and to keep configuration consistent and manageable in large environments.
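
For example, you can view the default CR that the operator creates (assuming PX-CSI is installed in the <portworx> namespace):

kubectl -n <portworx> get componentk8sconfig px-pure-csi-k8s-config -o yaml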

With ComponentK8sConfig, you can:

  • Centralize Kubernetes configuration for all PX-CSI components
  • Improve consistency and manageability at scale
  • Apply fine-grained controls per component, workload, or container

To learn more about the fields that you can configure in the ComponentK8sConfig CR, see the ComponentK8sConfig CRD reference.

note

Avoid reusing container names across blocks for the same workload to prevent unintended overrides.

Prerequisites

Ensure that you are running:

  • Portworx Operator version 25.4.0 or later
  • PX-CSI version 25.8.0 or later

Migration from StorageCluster to ComponentK8sConfig

If you are upgrading from a release earlier than PX-CSI 25.8.0 with Portworx Operator 25.3.1 or earlier, the Kubernetes configurations in your StorageCluster are automatically migrated to ComponentK8sConfig to improve modularity and long-term maintainability.

important

After a successful migration:

  • Configure resource limits, tolerations, and affinity exclusively in ComponentK8sConfig. It is the single source of truth for these settings: the Portworx Operator gives precedence to the ComponentK8sConfig custom resource and ignores the equivalent settings in the StorageCluster custom resource.
  • Clean up the migrated configurations from the StorageCluster object, since the operator no longer applies them.
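
As a minimal sketch of the cleanup step, you can open the StorageCluster for editing and remove the migrated fields (the exact paths, such as spec.placement, depend on what you had configured; <storagecluster-name> is a placeholder for your StorageCluster's name):

kubectl -n <portworx> edit storagecluster <storagecluster-name>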

Verify your configuration

Use the following commands to confirm that your configuration is applied:

kubectl get componentk8sconfig
kubectl describe componentk8sconfig <componentk8sconfig-name>

Check the value of the status.phase field. A value of ACCEPTED indicates that the configuration has been successfully processed. For more information, review the status.reason field.
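
For example, the following prints only the phase, using the status.phase field described above:

kubectl -n <portworx> get componentk8sconfig px-pure-csi-k8s-config -o jsonpath='{.status.phase}'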

Mapping of components to workloads, pods, and containers

The table below lists each component, its Kubernetes workload(s), the workload type, the expected pod name pattern created by that workload, and the containers in those pods.

| Component | Workload name | Workload type | Pod name pattern | Containers in the pod |
|---|---|---|---|---|
| PureCSIPlugin | px-pure-csi-controller | Deployment | px-pure-csi-controller-* | controller-plugin, liveness-probe, csi-provisioner, csi-attacher, csi-snapshotter, csi-resizer, snapshot-controller |
| PureCSIPlugin | px-pure-csi-node | DaemonSet | px-pure-csi-node-* | node-plugin, node-driver-registrar, liveness-probe |
| PureCSITelemetry | px-pure-csi-telemetry-registration | Deployment | px-pure-csi-telemetry-registration-* | registration, envoy, telemetry |
| PureCSITelemetry | px-pure-csi-telemetry | Deployment | px-pure-csi-telemetry-* | log-upload-service, envoy, telemetry |
| PureCSIMigrator | px-pure-csi-migrator | Job | px-pure-csi-migrator-* | migrator |
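
To confirm these container names on your own cluster, you can query a workload's pod template; for example, for the controller Deployment:

kubectl -n <portworx> get deployment px-pure-csi-controller -o jsonpath='{.spec.template.spec.containers[*].name}'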

Basic usage examples

Based on the component mapping above, the following examples demonstrate how to configure Kubernetes settings using ComponentK8sConfig.

Set resource limits

This example limits each controller and node plugin container to 1 CPU and 256 MiB of memory, with requests set to 500m CPU and 128 MiB of memory:

apiVersion: core.libopenstorage.org/v1
kind: ComponentK8sConfig
metadata:
  name: px-pure-csi-k8s-config
  namespace: <portworx>
spec:
  components:
  - componentNames:
    - PureCSIPlugin
    workloadConfigs:
    - workloadNames:
      - px-pure-csi-controller
      containerConfigs:
      - containerNames:
        - controller-plugin
        - csi-provisioner
        - csi-attacher
        - csi-snapshotter
        - csi-resizer
        - snapshot-controller
        - liveness-probe
        resources:
          requests:
            cpu: "500m"
            memory: "128Mi"
          limits:
            cpu: "1000m"
            memory: "256Mi"
    - workloadNames:
      - px-pure-csi-node
      containerConfigs:
      - containerNames:
        - node-plugin
        - node-driver-registrar
        - liveness-probe
        resources:
          requests:
            cpu: "500m"
            memory: "128Mi"
          limits:
            cpu: "1000m"
            memory: "256Mi"
note

If you need to set resource requests and limits per container, see Advanced usage examples.
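
To check that the limits were rolled out, you can inspect the pod template of the controller Deployment (a sketch; adjust the namespace to match your installation):

kubectl -n <portworx> get deployment px-pure-csi-controller -o jsonpath='{range .spec.template.spec.containers[*]}{.name}{": "}{.resources}{"\n"}{end}'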

Apply placement rules and tolerations

To pin controller pods to control‑plane nodes and configure tolerations:

apiVersion: core.libopenstorage.org/v1
kind: ComponentK8sConfig
metadata:
  name: px-pure-csi-k8s-config
  namespace: <portworx>
spec:
  components:
  - componentNames: ["PureCSIPlugin"]
    workloadConfigs:
    - workloadNames: ["px-pure-csi-controller"]
      placement:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
        tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
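
You can then verify where the controller pods were scheduled, for example:

kubectl -n <portworx> get pods -o wide | grep px-pure-csi-controller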

Advanced usage examples

Based on the component mapping above, the following scenarios illustrate how configurations are applied and merged.

Different configurations for specific components

This configuration defines common resource limits for the PX-CSI plugin workloads (controller and node) and a different resource configuration for telemetry. This flexibility lets you address each component's unique resource requirements.

apiVersion: core.libopenstorage.org/v1
kind: ComponentK8sConfig
metadata:
  name: px-pure-csi-k8s-config
  namespace: <portworx>
spec:
  components:
  - componentNames:
    - PureCSIPlugin
    workloadConfigs:
    - workloadNames:
      - px-pure-csi-controller
      - px-pure-csi-node
      containerConfigs:
      - containerNames:
        - controller-plugin
        - csi-provisioner
        - csi-attacher
        - csi-snapshotter
        - csi-resizer
        - snapshot-controller
        - node-plugin
        - node-driver-registrar
        - liveness-probe
        resources:
          requests:
            memory: "128Mi"
            cpu: "500m"
          limits:
            memory: "256Mi"
            cpu: "1000m"
  - componentNames:
    - PureCSITelemetry
    workloadConfigs:
    - workloadNames:
      - px-pure-csi-telemetry-registration
      containerConfigs:
      - containerNames:
        - registration
        - envoy
        - telemetry
        resources:
          requests:
            memory: "128Mi"
            cpu: "500m"
          limits:
            memory: "256Mi"
            cpu: "1000m"
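
Because each component is configured independently, you can verify each one separately; for example, for the telemetry registration Deployment:

kubectl -n <portworx> get deployment px-pure-csi-telemetry-registration -o jsonpath='{.spec.template.spec.containers[*].resources}'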

Merging configurations for pods

Overrides when the same container is configured twice

This example defines resource requests and limits for the same telemetry container twice. The second definition overrides the first.

apiVersion: core.libopenstorage.org/v1
kind: ComponentK8sConfig
metadata:
  name: px-pure-csi-k8s-config
  namespace: <portworx>
spec:
  components:
  - componentNames:
    - PureCSITelemetry
    workloadConfigs:
    - workloadNames: ["px-pure-csi-telemetry"]
      containerConfigs:
      - containerNames:
        - telemetry
        resources:
          requests:
            memory: "128Mi"
            cpu: "500m"
          limits:
            memory: "256Mi"
            cpu: "1000m"
  - componentNames:
    - PureCSITelemetry
    workloadConfigs:
    - workloadNames: ["px-pure-csi-telemetry"]
      containerConfigs:
      - containerNames:
        - telemetry
        resources:
          requests:
            memory: "64Mi"
            cpu: "150m"
          limits:
            memory: "128Mi"
            cpu: "500m"
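
After the operator processes this CR, the telemetry container carries the second block's values (requests of 150m CPU and 64 MiB of memory, limits of 500m CPU and 128 MiB of memory). You can confirm this with a jsonpath filter, for example:

kubectl -n <portworx> get deployment px-pure-csi-telemetry -o jsonpath='{.spec.template.spec.containers[?(@.name=="telemetry")].resources}'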

Appending and merging configurations

This example defines resource requests and limits separately for the csi-attacher and csi-provisioner containers within the controller workload, and adds a pod-level placement configuration (node affinity and tolerations) in a third block. The operator merges all of these: it creates controller pods that use the defined placement rules and applies the resource limits to the selected sidecar containers.

apiVersion: core.libopenstorage.org/v1
kind: ComponentK8sConfig
metadata:
  name: px-pure-csi-k8s-config
  namespace: <portworx>
spec:
  components:
  - componentNames: ["PureCSIPlugin"]
    workloadConfigs:
    - workloadNames: ["px-pure-csi-controller"]
      containerConfigs:
      - containerNames:
        - csi-attacher
        resources:
          requests:
            memory: "128Mi"
            cpu: "500m"
          limits:
            memory: "256Mi"
            cpu: "1000m"
  - componentNames: ["PureCSIPlugin"]
    workloadConfigs:
    - workloadNames: ["px-pure-csi-controller"]
      containerConfigs:
      - containerNames:
        - csi-provisioner
        resources:
          requests:
            memory: "128Mi"
            cpu: "500m"
          limits:
            memory: "256Mi"
            cpu: "1000m"
  - componentNames: ["PureCSIPlugin"]
    workloadConfigs:
    - workloadNames: ["px-pure-csi-controller"]
      placement:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
        tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
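
To confirm that both the placement rules and the per-sidecar limits were merged into the controller pods, you can describe the Deployment and review its pod template:

kubectl -n <portworx> describe deployment px-pure-csi-controller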