Configure resource limits, placements, tolerations, nodeAffinity, labels, and annotations for Portworx components
Use the `ComponentK8sConfig` custom resource (CR) to configure CPU and memory limits, tolerations, node affinity, labels, and annotations across Portworx components. This CR simplifies configuration management and improves consistency in large environments.
With `ComponentK8sConfig`, you can:
- Centralize Kubernetes configuration for all Portworx components
- Improve consistency and manageability at scale
- Apply fine-grained controls per component, workload, or container
If you already use the `StorageCluster` to configure these settings, you can migrate them to `ComponentK8sConfig`. For details, see Migrate from StorageCluster to ComponentK8sConfig.
For more information about the fields that you can configure in the `ComponentK8sConfig` CR, see the ComponentK8sConfig CRD reference.
Prerequisites
- Ensure that you are running Portworx Operator version 25.3.0 or later.
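To check which Operator version is running, you can inspect the image tag of the Operator Deployment. The following is only a sketch: it assumes the Deployment is named portworx-operator, and the namespace is a placeholder for the one where Portworx is installed.

kubectl -n <px-namespace> get deployment portworx-operator -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'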
Configuration guidelines
Follow these guidelines when using `ComponentK8sConfig`:
- Avoid reusing container names across blocks for the same workload to prevent unintended overrides.
- Labels and annotations defined at the component level are inherited by all pods and workloads under that component.
Migrate from StorageCluster to ComponentK8sConfig
If you currently configure Kubernetes settings in your `StorageCluster`, migrate them to `ComponentK8sConfig` for better modularity and long-term maintainability.
To initiate migration:
- Add the following annotation to your `StorageCluster` object (a command-line alternative is shown after this list):

  metadata:
    annotations:
      portworx.io/migrate-configs: "true"

- The Portworx Operator then:

  - Creates a `ComponentK8sConfig` CR automatically, if one is not already present.
  - Migrates the supported settings from the `StorageCluster` (STC) object to the new CR.
  - Adds a status condition to indicate successful migration:
    status:
      conditions:
      - type: ConfigMigration
        status: Completed
        message: 'successfully migrated configs to componentK8sConfig CR'
- After a successful migration, configure resource limits, tolerations, and affinity exclusively in `ComponentK8sConfig`. The operator ignores these settings in the `StorageCluster`, and `ComponentK8sConfig` is the single source of truth for them.
- After migration, clean up the settings in the `StorageCluster` object that were migrated to `ComponentK8sConfig`; the Portworx Operator gives precedence to the `ComponentK8sConfig` CR and ignores the equivalent settings in the `StorageCluster` CR.
- If you use GitOps to manage Kubernetes objects, update the Git repository with the new `ComponentK8sConfig` custom resource (CR) created as part of the migration.

Using the annotation is not mandatory; you can also perform the migration manually by creating a new `ComponentK8sConfig` CR in your Git repository and applying it to the cluster. Confirm that the configuration is applied successfully, and once it works as expected, remove the duplicate settings from the `StorageCluster` custom resource.
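As an alternative to editing the `StorageCluster` manifest directly, you can add the migration annotation with kubectl annotate. This is a sketch; replace the StorageCluster name and namespace with your own:

kubectl -n <px-namespace> annotate storagecluster <storagecluster-name> portworx.io/migrate-configs="true"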
Verify your configuration
Use the following commands to confirm that your configuration was applied:
kubectl get componentk8sconfig
kubectl describe componentk8sconfig <componentk8sconfig-name>
Check the value of the `status.phase` field. A value of `ACCEPTED` indicates that the configuration has been successfully processed. For more information, review the `status.reason` field.
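To check only these fields without scanning the full describe output, you can query them directly with jsonpath:

kubectl get componentk8sconfig <componentk8sconfig-name> -o jsonpath='{.status.phase}{"\n"}'
kubectl get componentk8sconfig <componentk8sconfig-name> -o jsonpath='{.status.reason}{"\n"}'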
Mapping of components to workloads, pods, and containers
The table below lists each component, its Kubernetes workload(s), the workload type, the expected pod name pattern created by that workload, and the containers in those pods.
| Component | Workload name | Workload type | Pod name pattern | Containers in the pod |
|---|---|---|---|---|
| Portworx proxy | portworx-proxy | DaemonSet | portworx-proxy-* | portworx-proxy |
| Portworx plugin | px-plugin | Deployment | px-plugin-*-* | px-plugin |
| Portworx plugin | px-plugin-proxy | Deployment | px-plugin-proxy-*-* | nginx |
| Stork | stork | Deployment | stork-*-* | stork |
| Stork | stork-scheduler | Deployment | stork-scheduler-*-* | stork-scheduler |
| Autopilot | autopilot | Deployment | autopilot-*-* | autopilot |
| Portworx telemetry | px-telemetry-registration | Deployment | px-telemetry-registration-*-* | registration, envoy |
| Portworx telemetry | px-telemetry-metrics-collector | Deployment | px-telemetry-metrics-collector-*-* | collector, envoy |
| Portworx telemetry | px-telemetry-phonehome | DaemonSet | px-telemetry-phonehome-* | log-upload-service, envoy |
| PVC controller | portworx-pvc-controller | Deployment | portworx-pvc-controller-*-* | portworx-pvc-controller-manager |
| Prometheus | px-prometheus-operator | Deployment | px-prometheus-operator-*-* | px-prometheus-operator |
| Prometheus | px-prometheus | Prometheus instance | px-prometheus-* | px-prometheus |
| CSI | px-csi-ext | Deployment | px-csi-ext-*-* | csi-external-provisioner, csi-attacher, and other CSI sidecar containers |
| Portworx API | portworx-api | DaemonSet | portworx-api-* | portworx-api, csi-node-driver-registrar |
| Storage | storage | Pod | storage | portworx |
| KVDB | portworx-kvdb | Pod | portworx-kvdb | portworx-kvdb |
| Alert manager | portworx | Alertmanager instance | portworx-alertmanager-* | portworx-alertmanager |

For Deployments, the pod name carries the ReplicaSet hash and a pod hash, represented above by `-*-*`.
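To confirm the container names on your own cluster before writing a configuration, you can list the containers of any pod from the table above; for example, for a Stork pod (the pod name and namespace are placeholders):

kubectl -n <px-namespace> get pod <stork-pod-name> -o jsonpath='{.spec.containers[*].name}{"\n"}'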
Basic usage examples
Based on the components mapping, the following examples demonstrate how to configure Kubernetes settings using `ComponentK8sConfig`.
Set resource limits
This example limits each `stork` and `autopilot` container to 1 CPU and 256 MiB of memory, with requests set to 500m CPU and 128 MiB of memory:
apiVersion: core.libopenstorage.org/v1
kind: ComponentK8sConfig
metadata:
  name: stork-autopilot-resources
spec:
  components:
  - componentNames:
    - Stork
    - Autopilot
    workloadConfigs:
    - workloadNames:
      - stork
      - autopilot
      containerConfigs:
      - containerNames:
        - stork
        - autopilot
        resources:
          requests:
            cpu: "500m"
            memory: "128Mi"
          limits:
            cpu: "1000m"
            memory: "256Mi"
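Once the operator reconciles this CR, you can check that the limits were applied to the workload; for example, for the stork Deployment (the namespace is a placeholder for the one where Portworx runs):

kubectl -n <px-namespace> get deployment stork -o jsonpath='{.spec.template.spec.containers[?(@.name=="stork")].resources}{"\n"}'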
Apply placement rules and tolerations
To pin `px-csi-ext` pods to specific nodes and configure tolerations:
apiVersion: core.libopenstorage.org/v1
kind: ComponentK8sConfig
metadata:
  name: csi-placement-config
spec:
  components:
  - componentNames: ["CSI"]
    workloadConfigs:
    - workloadNames: ["px-csi-ext"]
      placement:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: px-schedule
                operator: NotIn
                values: ["false"]
        tolerations:
        - key: px-schedule
          operator: Equal
          value: value
          effect: NoSchedule
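For this placement to take effect, the matching node label and taint must exist in the cluster. A minimal sketch, assuming you want to exclude node-1 from CSI scheduling and taint node-2 so that only tolerating pods can be scheduled there (both node names are placeholders):

kubectl label node node-1 px-schedule=false
kubectl taint node node-2 px-schedule=value:NoSchedule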
Add labels and annotations
To apply environment-specific metadata to all Prometheus pods:
apiVersion: core.libopenstorage.org/v1
kind: ComponentK8sConfig
metadata:
  name: prometheus-metadata
spec:
  components:
  - componentNames: ["Prometheus"]
    labels:
      portworx.io/env: dev
    annotations:
      portworx.io/env: dev
These labels and annotations are automatically applied to all pods and workloads in the Prometheus component.
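You can verify that the metadata propagated by listing pods with a matching label selector (namespace placeholder as before):

kubectl -n <px-namespace> get pods -l portworx.io/env=dev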
Advanced usage examples
Based on the components mapping, the following scenarios illustrate how configurations are applied and merged.
Different configurations for specific components
This configuration defines common resource limits for Stork and Autopilot, and a different resource configuration for Portworx telemetry. This flexibility lets you address each component's unique resource requirements.
apiVersion: core.libopenstorage.org/v1
kind: ComponentK8sConfig
metadata:
  name: example-component-config
spec:
  components:
  - componentNames:
    - Stork
    - Autopilot
    workloadConfigs:
    - workloadNames:
      - stork
      - stork-scheduler
      - autopilot
      containerConfigs:
      - containerNames:
        - stork
        - stork-scheduler
        - autopilot
        resources:
          requests:
            memory: "128Mi"
            cpu: "500m"
          limits:
            memory: "256Mi"
            cpu: "1000m"
  - componentNames:
    - Portworx Telemetry
    workloadConfigs:
    - workloadNames:
      - px-telemetry-registration
      containerConfigs:
      - containerNames:
        - registration
        - envoy
        resources:
          requests:
            memory: "128Mi"
            cpu: "500m"
          limits:
            memory: "256Mi"
            cpu: "1000m"
Merging configurations for pods
Overrides when a similar configuration is defined twice
This example defines resource requests and limits for the same `stork` container of the Stork component twice. The second definition overrides the first.
apiVersion: core.libopenstorage.org/v1
kind: ComponentK8sConfig
metadata:
  name: example-component-config
spec:
  components:
  - componentNames:
    - Stork
    workloadConfigs:
    - workloadNames: ["stork"]
      containerConfigs:
      - containerNames:
        - stork
        resources:
          requests:
            memory: "128Mi"
            cpu: "500m"
          limits:
            memory: "256Mi"
            cpu: "1000m"
  - componentNames:
    - Stork
    workloadConfigs:
    - workloadNames: ["stork"]
      containerConfigs:
      - containerNames:
        - stork
        resources:
          requests:
            memory: "64Mi"
            cpu: "150m"
          limits:
            memory: "128Mi"
            cpu: "500m"
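With this CR applied, only the second block takes effect for the stork container, so the result is roughly equivalent to defining the container once with:

resources:
  requests:
    memory: "64Mi"
    cpu: "150m"
  limits:
    memory: "128Mi"
    cpu: "500m"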
Appending and merging configurations
When you define resource requests and limits specifically for the `csi-attacher` and `csi-external-provisioner` containers within the CSI component, you can also specify pod-level placement configurations, including node affinity and tolerations. As a result, the operator creates the CSI pods with the defined placement rules and adds the resource limits to the `csi-attacher` and `csi-external-provisioner` containers.
apiVersion: core.libopenstorage.org/v1
kind: ComponentK8sConfig
metadata:
  name: example-component-config
spec:
  components:
  - componentNames: ["CSI"]
    workloadConfigs:
    - workloadNames: ["px-csi-ext"]
      containerConfigs:
      - containerNames:
        - csi-attacher
        resources:
          requests:
            memory: "128Mi"
            cpu: "500m"
          limits:
            memory: "256Mi"
            cpu: "1000m"
  - componentNames: ["CSI"]
    workloadConfigs:
    - workloadNames: ["px-csi-ext"]
      containerConfigs:
      - containerNames:
        - csi-external-provisioner
        resources:
          requests:
            memory: "128Mi"
            cpu: "500m"
          limits:
            memory: "256Mi"
            cpu: "1000m"
  - componentNames: ["CSI"]
    workloadConfigs:
    - workloadNames: ["px-csi-ext"]
      placement:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: px-schedule
                operator: NotIn
                values:
                - "false"
        tolerations:
        - key: px-schedule
          operator: Equal
          value: value
          effect: NoSchedule
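Because all three blocks target the same px-csi-ext workload, the operator merges them rather than replacing one with another: the pod picks up the placement rules, and each named container picks up its own resource settings. An abridged, illustrative sketch of the merged effect on the px-csi-ext pod spec:

# Illustrative only: merged result on the px-csi-ext pod spec (abridged)
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: px-schedule
          operator: NotIn
          values: ["false"]
tolerations:
- key: px-schedule
  operator: Equal
  value: value
  effect: NoSchedule
containers:
- name: csi-attacher
  resources:
    requests: {memory: "128Mi", cpu: "500m"}
    limits: {memory: "256Mi", cpu: "1000m"}
- name: csi-external-provisioner
  resources:
    requests: {memory: "128Mi", cpu: "500m"}
    limits: {memory: "256Mi", cpu: "1000m"}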