Configure resource limits, placement, tolerations, and node affinity for PX-CSI components
Starting with PX-CSI 25.8.0, a new ComponentK8sConfig custom resource (CR) named px-pure-csi-k8s-config is created by default. You can use this CR instance to configure CPU and memory limits, tolerations, and node affinity across PX-CSI components, which helps manage configuration and maintain consistency in large environments.
With ComponentK8sConfig, you can:
- Centralize Kubernetes configuration for all PX-CSI components
- Improve consistency and manageability at scale
- Apply fine-grained controls per component, workload, or container
To learn more about the fields that you can configure in the ComponentK8sConfig CR, see the ComponentK8sConfig CRD reference.
Avoid reusing container names across blocks for the same workload to prevent unintended overrides.
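For example, in the fragment below both blocks target the liveness-probe container in the same workload; as shown in the override example later on this page, the second block silently replaces the first:

```yaml
# Anti-pattern: the same container name appears in two blocks for one workload.
containerConfigs:
- containerNames:
  - liveness-probe
  resources:
    limits:
      memory: "256Mi"
- containerNames:
  - liveness-probe   # duplicate name: this block overrides the 256Mi limit above
  resources:
    limits:
      memory: "128Mi"
```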
Prerequisites
Ensure that you are running:
- Portworx Operator version 25.4.0 or later
- PX-CSI version 25.8.0 or later
Migration from StorageCluster to ComponentK8sConfig
If you are upgrading from PX-CSI 25.8.0 or earlier and Portworx Operator 25.3.1 or earlier, the Kubernetes configurations in your StorageCluster are automatically migrated to ComponentK8sConfig to improve modularity and long-term maintainability.
After a successful migration:
- Configure resource limits, tolerations, and affinity exclusively in ComponentK8sConfig. The operator ignores configuration changes in the StorageCluster; ComponentK8sConfig is the single source of truth for these configurations.
- Clean up the configurations in the StorageCluster object that were migrated to ComponentK8sConfig. The Portworx Operator gives precedence to the ComponentK8sConfig custom resource and ignores the equivalent settings in the StorageCluster custom resource.
Verify your configuration
Use the following commands to confirm that your configuration is applied:
kubectl get componentk8sconfig
kubectl describe componentk8sconfig <componentk8sconfig-name>
Check the value of the status.phase field. A value of ACCEPTED indicates that the configuration was processed successfully. For more detail, review the status.reason field.
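For a scripted check, you can read the phase directly with a JSONPath query (the namespace placeholder matches the examples below; adjust it to your install):

```shell
# Print only the status.phase of the default CR instance.
kubectl -n <portworx> get componentk8sconfig px-pure-csi-k8s-config \
  -o jsonpath='{.status.phase}'
```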
Mapping of components to workloads, pods, and containers
The table below lists each component, its Kubernetes workload(s), the workload type, the expected pod name pattern created by that workload, and the containers in those pods.
Component | Workload name | Workload type | Pod name pattern | Containers in the pod |
---|---|---|---|---|
PureCSIPlugin | px-pure-csi-controller | Deployment | px-pure-csi-controller-* | controller-plugin, csi-provisioner, csi-attacher, csi-snapshotter, csi-resizer, snapshot-controller, liveness-probe |
PureCSIPlugin | px-pure-csi-node | DaemonSet | px-pure-csi-node-* | node-plugin, node-driver-registrar, liveness-probe |
PureCSITelemetry | px-pure-csi-telemetry-registration | Deployment | px-pure-csi-telemetry-registration-* | registration, envoy, telemetry |
PureCSITelemetry | px-pure-csi-telemetry | Deployment | px-pure-csi-telemetry-* | telemetry |
PureCSIMigrator | px-pure-csi-migrator | Job | px-pure-csi-migrator-* | migrator |
Basic usage examples
Based on the component mapping above, the following examples demonstrate how to configure Kubernetes settings using ComponentK8sConfig.
Set resource limits
This example limits each controller and node plugin container to 1 CPU and 256 MiB of memory, with requests set to 500m CPU and 128 MiB of memory:
apiVersion: core.libopenstorage.org/v1
kind: ComponentK8sConfig
metadata:
  name: px-pure-csi-k8s-config
  namespace: <portworx>
spec:
  components:
  - componentNames:
    - PureCSIPlugin
    workloadConfigs:
    - workloadNames:
      - px-pure-csi-controller
      containerConfigs:
      - containerNames:
        - controller-plugin
        - csi-provisioner
        - csi-attacher
        - csi-snapshotter
        - csi-resizer
        - snapshot-controller
        - liveness-probe
        resources:
          requests:
            cpu: "500m"
            memory: "128Mi"
          limits:
            cpu: "1000m"
            memory: "256Mi"
    - workloadNames:
      - px-pure-csi-node
      containerConfigs:
      - containerNames:
        - node-plugin
        - node-driver-registrar
        - liveness-probe
        resources:
          requests:
            cpu: "500m"
            memory: "128Mi"
          limits:
            cpu: "1000m"
            memory: "256Mi"
If you need to set resource requests and limits per container, see Advanced usage examples.
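After editing the manifest, apply it as usual, or modify the default CR in place (the filename below is only an example):

```shell
kubectl -n <portworx> apply -f px-pure-csi-k8s-config.yaml
# or edit the default instance directly:
kubectl -n <portworx> edit componentk8sconfig px-pure-csi-k8s-config
```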
Apply placement rules and tolerations
To pin controller pods to control‑plane nodes and configure tolerations:
apiVersion: core.libopenstorage.org/v1
kind: ComponentK8sConfig
metadata:
  name: px-pure-csi-k8s-config
  namespace: <portworx>
spec:
  components:
  - componentNames: ["PureCSIPlugin"]
    workloadConfigs:
    - workloadNames: ["px-pure-csi-controller"]
      placement:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
        tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
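To confirm the placement, check which nodes the controller pods were scheduled on (filtering by pod name here, to avoid assuming any particular pod labels):

```shell
# Controller pods should land on control-plane nodes after the rollout.
kubectl -n <portworx> get pods -o wide | grep px-pure-csi-controller
```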
Advanced usage examples
Based on the component mapping above, the following scenarios illustrate how configurations are applied and merged.
Different configurations for specific components
This configuration defines common resource limits for the PX-CSI plugin (the controller and node workloads) and different resource settings for telemetry, letting you handle each component's distinct resource requirements.
apiVersion: core.libopenstorage.org/v1
kind: ComponentK8sConfig
metadata:
  name: px-pure-csi-k8s-config
  namespace: <portworx>
spec:
  components:
  - componentNames:
    - PureCSIPlugin
    workloadConfigs:
    - workloadNames:
      - px-pure-csi-controller
      - px-pure-csi-node
      containerConfigs:
      - containerNames:
        - controller-plugin
        - csi-provisioner
        - csi-attacher
        - csi-snapshotter
        - csi-resizer
        - snapshot-controller
        - node-plugin
        - node-driver-registrar
        - liveness-probe
        resources:
          requests:
            memory: "128Mi"
            cpu: "500m"
          limits:
            memory: "256Mi"
            cpu: "1000m"
  - componentNames:
    - PureCSITelemetry
    workloadConfigs:
    - workloadNames:
      - px-pure-csi-telemetry-registration
      containerConfigs:
      - containerNames:
        - registration
        - envoy
        - telemetry
        resources:
          requests:
            memory: "128Mi"
            cpu: "500m"
          limits:
            memory: "256Mi"
            cpu: "1000m"
Merging configurations for pods
Overrides when a similar configuration is defined twice
This example defines resource requests and limits for the same telemetry container twice. The second definition overrides the first.
apiVersion: core.libopenstorage.org/v1
kind: ComponentK8sConfig
metadata:
  name: px-pure-csi-k8s-config
  namespace: <portworx>
spec:
  components:
  - componentNames:
    - PureCSITelemetry
    workloadConfigs:
    - workloadNames: ["px-pure-csi-telemetry"]
      containerConfigs:
      - containerNames:
        - telemetry
        resources:
          requests:
            memory: "128Mi"
            cpu: "500m"
          limits:
            memory: "256Mi"
            cpu: "1000m"
  - componentNames:
    - PureCSITelemetry
    workloadConfigs:
    - workloadNames: ["px-pure-csi-telemetry"]
      containerConfigs:
      - containerNames:
        - telemetry
        resources:
          requests:
            memory: "64Mi"
            cpu: "150m"
          limits:
            memory: "128Mi"
            cpu: "500m"
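Because the second definition wins, the effective settings for the telemetry container are those of the second block:

```yaml
# Effective values after the override:
resources:
  requests:
    memory: "64Mi"
    cpu: "150m"
  limits:
    memory: "128Mi"
    cpu: "500m"
```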
Appending and merging configurations
When you define resource requests and limits specifically for the csi-attacher and csi-provisioner containers within the controller workload, you can also specify pod-level placement configurations, including node affinity and tolerations. As a result, the operator creates controller pods using the defined placement rules and adds resource limits for the selected sidecars.
apiVersion: core.libopenstorage.org/v1
kind: ComponentK8sConfig
metadata:
  name: px-pure-csi-k8s-config
  namespace: <portworx>
spec:
  components:
  - componentNames: ["PureCSIPlugin"]
    workloadConfigs:
    - workloadNames: ["px-pure-csi-controller"]
      containerConfigs:
      - containerNames:
        - csi-attacher
        resources:
          requests:
            memory: "128Mi"
            cpu: "500m"
          limits:
            memory: "256Mi"
            cpu: "1000m"
  - componentNames: ["PureCSIPlugin"]
    workloadConfigs:
    - workloadNames: ["px-pure-csi-controller"]
      containerConfigs:
      - containerNames:
        - csi-provisioner
        resources:
          requests:
            memory: "128Mi"
            cpu: "500m"
          limits:
            memory: "256Mi"
            cpu: "1000m"
  - componentNames: ["PureCSIPlugin"]
    workloadConfigs:
    - workloadNames: ["px-pure-csi-controller"]
      placement:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
        tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
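To verify the merged result, inspect the rendered controller Deployment; the tolerations from the placement block and the per-sidecar resource limits should both be present:

```shell
# Tolerations applied at the pod level:
kubectl -n <portworx> get deployment px-pure-csi-controller \
  -o jsonpath='{.spec.template.spec.tolerations}'
# Resource limits applied to the csi-attacher sidecar:
kubectl -n <portworx> get deployment px-pure-csi-controller \
  -o jsonpath='{.spec.template.spec.containers[?(@.name=="csi-attacher")].resources}'
```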