Portworx Operator Release Notes
25.3.1
September 06, 2025
Fixes
Issue Number | Issue Description | Severity |
---|---|---|
PWX-46691 | On OpenShift Container Platform (OCP) 4.15 and earlier, if Portworx-specific `ServiceAccount` objects have no annotations, the Operator updates the objects during every reconciliation loop. User impact: Service account updates trigger the regeneration of the associated `kubernetes.io/dockercfg` and `kubernetes.io/service-account-token` secrets, causing excessive creation of Secret objects and unnecessary API traffic. Resolution: The Operator no longer performs redundant updates on `ServiceAccount` objects without annotations, preventing unnecessary regeneration of Secret objects and reducing API load. Affected versions: 25.3.0 | Major |
25.3.0
September 03, 2025
- When you upgrade to Operator version 25.3.0, the `px-plugin` and `px-plugin-proxy` pods restart.
- If you are running OpenShift version 4.15 or earlier, do not upgrade to Operator version 25.3.0. This version causes excessive `Secret` object creation due to repeated `ServiceAccount` updates, which significantly increases API server load. For more information about the workaround, see here.
New features
- ComponentK8sConfig: The `ComponentK8sConfig` custom resource allows configuration of resources, labels, annotations, tolerations, and placement rules for all Portworx components. Configurations previously defined in the `StorageCluster` should now be migrated to the `ComponentK8sConfig` custom resource. For more information, see Configure resource limits, placements, tolerations, nodeAffinity, labels, and annotations for Portworx components.
Improvements
Improvement Number | Improvement Description |
---|---|
PWX-42536 | Starting with Kubernetes version 1.31, in-tree storage drivers have been deprecated, and the Portworx CSI driver must be used. The Portworx Operator now automatically sets the CSI configuration to enabled if the CSI spec is left empty or explicitly disabled. If CSI is already enabled, no changes are made. |
PWX-42429 | The Portworx Operator now supports IPv6 clusters in the OpenShift dynamic console plugin. |
PWX-44837 | The Portworx Operator now creates a default VolumeSnapshotClass named `px-csi-snapclass`. The class name and creation behavior can be configured through the `spec.csi.volumeSnapshotClass` field in the StorageCluster custom resource. |
PWX-44472 | The Portworx Operator now reports a new state, UpdatePaused , when an upgrade is paused. This state indicates that an update is not in progress. StorageCluster events and logs provide additional context about the paused upgrade. |
Fixes
Issue Number | Issue Description | Severity |
---|---|---|
PWX-45461 | The Portworx Operator was applying outdated CSI CustomResourceDefinitions (CRDs) that were missing the `sourceVolumeMode` field in `VolumeSnapshotContent`, resulting in compatibility issues on standard Kubernetes clusters. User Impact: On vanilla Kubernetes version 1.25 or later, attempts to create VolumeSnapshots failed due to the missing `spec.sourceVolumeMode` field. Snapshot controller logs reported warnings such as `unknown field "spec.sourceVolumeMode"`. Managed Kubernetes distributions like OpenShift were unaffected, as they typically include the correct CRDs by default. Resolution: The Operator now applies CSI CRDs version 8.2.0, which includes the `sourceVolumeMode` field, ensuring compatibility with Kubernetes 1.25 and later. Affected Versions: 25.2.2 or earlier | Minor |
PWX-45246 | An outdated Prometheus CustomResourceDefinition (CRD) was previously downloaded by the Operator. This CRD lacked required fields, which caused schema validation errors during Prometheus Operator reconciliation. User Impact: Reconciliation failures occurred due to missing fields in the CRD. Resolution: The Operator now references the latest Prometheus CRD at the deployment URL, ensuring compatibility and preventing schema validation errors. Affected Versions: 25.2.0 or earlier | Minor |
PWX-45156 | Live migration was previously skipped only for volumes with `backend=pure_block`. The Operator continued to trigger live migration for other volume types, such as FADA (`pure_fa_file`) and FBDA, even when it was not appropriate. User Impact: Unnecessary migrations during upgrades could lead to virtual machine (VM) evictions and movement to other PX nodes. Resolution: The Operator now skips live migration for volumes using FADA and FBDA backends, reducing disruption and maintaining application availability during upgrades. Affected Versions: 25.2.2 or earlier | Minor |
PWX-45048 | In clusters with KubeVirt virtual machines (VMs), the Portworx Operator might not remove the custom "unschedulable" annotation from nodes when it is no longer needed. Additionally, paused VMs prevented upgrades from proceeding, as they cannot be evicted. Resolution: The Operator now ignores paused KubeVirt VMs during upgrades and removes the custom "unschedulable" annotation when it is no longer required. This behavior improves upgrade reliability in KubeVirt environments. Affected Versions: 25.2.2 or earlier | Minor |
PWX-44974 | In large clusters (for example, 250+ `StorageNodes`), the Operator's API calls to the Kubernetes API server increased linearly, resulting in high load on the API server. Resolution: The Operator now uses improved caching, which significantly reduces API calls to the Kubernetes API server. Affected versions: 25.2.1 and 25.2.2 | Minor |
PWX-39097 | When `csi.enabled` was set to false in the StorageCluster (STC) spec, the `installSnapshotController` field remained enabled, creating inconsistencies in the CSI configuration. User Impact: This mismatch could lead to confusion or result in the snapshot controller being deployed unnecessarily. Resolution: The Operator now automatically resets `installSnapshotController` when CSI is disabled, maintaining consistent configuration behavior. Affected Versions: 25.2.0 or earlier | Minor |
Known issues (Errata)
- PWX-46691: If Portworx-specific ServiceAccount objects do not include any annotations, the Operator updates these objects during each reconciliation loop. On OpenShift Container Platform (OCP), each ServiceAccount update triggers the regeneration of associated Secret objects, causing excessive Secret creation and unnecessary API traffic. This affects OCP versions 4.15 and earlier.
  Workaround: Add at least one annotation to each of the following Portworx-specific ServiceAccount objects:

  - autopilot
  - px-csi
  - portworx-proxy
  - px-telemetry
  - stork
  - stork-scheduler

  For example:

  ```yaml
  kind: ServiceAccount
  metadata:
    annotations:
      portworx.io/reconcile: "ignore"
  ```

  Note: If you've already upgraded to Operator version 25.3.0 and are affected by this issue, you can either downgrade to a previous Operator version or follow the workaround described above.
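If you prefer to apply the annotation from the command line, the workaround can be scripted. The sketch below is illustrative only: the `portworx` namespace is an assumption (substitute the namespace where Portworx is installed), and the script echoes each command for review instead of running it; pipe the output to `sh` to apply.

```shell
# Sketch of the annotation workaround. Assumption: Portworx is installed in
# the "portworx" namespace -- replace it with your install namespace.
# Commands are echoed for review; pipe the output to `sh` to apply them.
for sa in autopilot px-csi portworx-proxy px-telemetry stork stork-scheduler; do
  echo kubectl -n portworx annotate serviceaccount "$sa" portworx.io/reconcile=ignore --overwrite
done
```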
  To downgrade the Operator version, follow these steps to uninstall Operator version 25.3.0 and install 25.2.2:

  1. In the OpenShift web console, go to Operators > Installed Operators.
  2. Verify that the installed version of Portworx Enterprise is 25.3.0.
  3. Select Actions > Uninstall Operator.
  4. In the confirmation dialog, clear the Delete all operand instances for this operator check box. This ensures that Portworx continues to run after the Operator is uninstalled.
  5. Select Uninstall again to confirm.
  6. After the uninstallation completes, return to OperatorHub.
  7. Search for Portworx Enterprise, and then install version 25.2.2. Ensure that you set Update approval to Manual during installation.
- PWX-45817: On Mirantis Kubernetes Engine (MKE) clusters, MKE injects tolerations into KVDB and Portworx pods. If you are using the `ComponentK8sConfig` custom resource (CR) to manage tolerations and the injected tolerations are not included in the CR, the Portworx Operator removes them, which causes the pods to restart continuously.

  Workaround: Follow these steps:

  1. Ensure that `jq` or `yq` is installed on your machine.

  2. Get the kubeconfig of your cluster.

  3. Get the taints on the cluster nodes:

     - For `jq`, run:

       ```shell
       kubectl --kubeconfig=<path/to/kubeconfig> get nodes -ojson | jq -r . | jq -r '.items[].spec.taints'
       ```

     - For `yq`, run:

       ```shell
       kubectl --kubeconfig=<path/to/kubeconfig> get nodes -oyaml | yq -r . | yq -r '.items[].spec.taints'
       ```

     Example output:

     ```
     null
     null
     null
     [{"effect": "NoSchedule", "key": "com.docker.ucp.manager"}]
     null
     null
     ```

  4. Apply the tolerations to the `ComponentK8sConfig` CR based on the output of the previous step. For example:

     ```yaml
     - componentNames:
       - KVDB
       - Storage
       - Portworx API
       workloadConfigs:
       - placement:
           tolerations:
           - key: com.docker.ucp.manager
             operator: Exists
         workloadNames:
         - storage
         - portworx-kvdb
         - portworx-api
     ```
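The taint-to-toleration mapping in the steps above can also be sketched with standard shell tools. This is a rough illustration only (for real clusters, parsing with `jq` itself is more robust): the sample input mirrors the example taint output shown earlier, and the script prints an `Exists` toleration for each unique taint key.

```shell
# Rough sketch: convert the per-node taint output from step 3 into Exists
# tolerations for the ComponentK8sConfig CR. grep/sed parsing is for
# illustration only; jq is the more robust choice for real JSON.
taints='null
null
[{"effect": "NoSchedule", "key": "com.docker.ucp.manager"}]
null'

echo "tolerations:"
printf '%s\n' "$taints" \
  | grep -o '"key": *"[^"]*"' \
  | sed 's/.*"key": *"\(.*\)"/\1/' \
  | sort -u \
  | while read -r key; do
      printf -- '- key: %s\n  operator: Exists\n' "$key"
    done
```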
- PWX-45960: When using the workload identity feature, a restart of the KVDB pods can cause the `eks-pod-identity` webhook to inject credentials into the KVDB pods, because the same service account is used for the Portworx API, Portworx, and KVDB pods.

  Note: When credentials are removed from the `StorageCluster`, the Operator does not remove them from the KVDB pods if they have already been added, so you must manually restart the KVDB pods to remove these credentials.
25.2.2
July 8, 2025
Improvements
Improvement Number | Improvement Description |
---|---|
PWX-40116 | Portworx Operator now emits events on the StorageCluster object during Portworx and Kubernetes smart upgrades if a node is not selected for upgrade. Each event includes details explaining why the node was not selected for the upgrade. |
Fixes
Issue Number | Issue Description | Severity |
---|---|---|
PWX-45078 | The Operator configured Prometheus to connect to KVDB metrics over HTTP, even when TLS was enabled. This caused the connections to fail. User Impact: In clusters with KVDB TLS enabled, Prometheus could not scrape internal KVDB metrics. As a result, monitoring dashboards were incomplete and TLS handshake errors appeared in logs. Resolution: The Operator now configures Prometheus to use HTTPS with TLS settings when KVDB TLS is enabled. This ensures that internal KVDB metrics are collected successfully. Affected Versions: 25.2.1 | Minor |
25.2.1
June 23, 2025
We recommend upgrading to Portworx Operator version 25.2.1. Follow these guidelines for a seamless upgrade:

- If you're currently running Portworx Operator version 24.2.4 or earlier, we recommend upgrading to Operator 25.2.1 directly by following the standard upgrade procedure.
- If you're currently running Portworx Operator version 25.1.0 or 25.2.0, follow these steps:
  1. If you've labeled only the nodes that should run Portworx by using the `px/enabled=true` label, ensure that you've applied the workaround described here.
  2. Upgrade Portworx Operator.
- On OpenShift clusters, if you're currently running Portworx Operator version 25.1.0, you cannot upgrade to 25.2.1 directly. You must uninstall Operator 25.1.0 (while retaining the StorageCluster and all operands) before installing the new version. For more information, see Upgrade notes for OpenShift.
Fixes
Issue Number | Issue Description | Severity |
---|---|---|
PWX-44096 | During a Portworx upgrade, if one or more nodes are cordoned, Portworx might upgrade multiple nodes simultaneously, exceeding the maxUnavailable configuration. This behavior occurs because of the default value of the cordon restart delay, which is set to 5 minutes. User Impact: More nodes might be upgraded simultaneously than specified by maxUnavailable , exceeding the maxUnavailable limit. Resolution: Now, when any nodes are cordoned, the operator overrides the value of the cordon restart delay to 0 seconds during Portworx upgrade. This ensures that only the number of nodes specified by maxUnavailable are upgraded at the same time. Affected Versions: Versions 25.2.0 and earlier | Minor |
Known issues (Errata)
Issue Number | Issue Description |
---|---|
PWX-41729 | The Portworx Operator does not schedule Portworx pods on nodes labeled `px/enabled=false`. However, during node decommission, the label is set to `px/enabled=remove` before being changed to `px/enabled=false` after Portworx is fully removed. If removal takes time, the Operator could incorrectly reschedule Portworx pods. User Impact: Portworx pods may be rescheduled on nodes that are in the process of being decommissioned, disrupting removal workflows. Affected Versions: 25.2.1, and 24.2.4 and earlier |