Portworx Operator Release Notes
25.5.1
January 06, 2026
New Features
- Taint-based scheduling support: This Portworx Operator release enables Stork support for taint-based scheduling of workloads. When taint-based scheduling is enabled, the Operator applies taints to Portworx storage and storageless nodes, and Stork automatically adds matching tolerations to Portworx system pods and to applications that use Portworx volumes. Workloads that lack matching tolerations are blocked from being scheduled on Portworx storage nodes. For more information, see Taint-based scheduling with Stork. A conceptual sketch of the taint and toleration interaction follows the note below.
Note: This feature requires Stork version 25.6.0 or later.
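The taint key that the Operator applies is not listed in these notes, so the following is only a minimal, illustrative sketch of how a node taint and a matching toleration interact in Kubernetes. The taint key `portworx.io/storage-node` is a hypothetical placeholder; refer to the Stork documentation for the actual key.

```yaml
# Hypothetical taint applied to a Portworx storage node (illustrative key only):
#   kubectl taint nodes <node-name> portworx.io/storage-node=true:NoSchedule
#
# A pod that uses Portworx volumes needs a matching toleration, which Stork
# adds automatically when taint-based scheduling is enabled:
apiVersion: v1
kind: Pod
metadata:
  name: app-using-portworx-volume
spec:
  tolerations:
    - key: portworx.io/storage-node   # hypothetical key for illustration
      operator: Exists
      effect: NoSchedule
  containers:
    - name: app
      image: nginx
```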
Improvements
| Improvement Number | Improvement Description |
|---|---|
| PWX-48477 | The Portworx Operator now uses the Portworx Volume API to identify volume types instead of relying on the StorageClass API. Previously, when a StorageClass was deleted after a PersistentVolumeClaim (PVC) was created, the Operator couldn't determine the volume type, which led to failures during operations, including KubeVirt VM live migration. The Portworx Operator now queries the volume object directly through the Portworx API, enabling consistent volume type detection even if the StorageClass is unavailable. This update improves the reliability of VM live migration during Portworx upgrades for both Pure and SharedV4 volumes. |
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-48822 | In air-gapped environments with KVDB TLS enabled, Portworx fails to pull the cert-manager image from a private registry that requires authentication. This occurs because the Portworx Operator does not attach the regcred secret to the cert-manager pods. User Impact: The cert-manager pods enter an ImagePullBackOff state due to missing registry credentials, and installation fails. Resolution: The Portworx Operator now ensures that all cert-manager components are deployed with the correct imagePullSecrets, such as regcred, when you use authenticated custom registries (see the sketch after this table). Affected Version: 25.5.0 and earlier | Minor |
| PWX-49374 | When you apply the portworx.io/disable-storage-class: "true" annotation, the Operator can delete an existing StorageClass if its name matches a default StorageClass introduced by the Operator. This can occur even if the StorageClass was not created by the Operator. User Impact: StorageClasses not created by the Operator can be deleted after an upgrade or when the annotation is enabled. Workloads that depend on those StorageClasses might be disrupted. Resolution: The Operator now manages StorageClasses using explicit managed-by ownership labels. It deletes only the StorageClasses that it created and labeled as Operator-managed. StorageClasses not created by the Operator are not modified or deleted, even if their names match default StorageClasses. Affected Version: 25.5.0 | Minor |
| PWX-49456 | The Operator could overwrite existing VolumeSnapshotClass resources, removing your custom parameters or annotations during a restart or upgrade. User Impact: If you customized the snapshot configuration, your changes might be lost during an Operator restart or upgrade, potentially resulting in snapshot failures. Resolution: The Operator now preserves existing VolumeSnapshotClass resources. If a VolumeSnapshotClass with the same name already exists in the cluster, the Operator no longer modifies or overwrites it. Affected Version: 25.5.0 | Minor |
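For context on the PWX-48822 fix, the following is a minimal sketch of the standard Kubernetes pattern the Operator now applies: attaching a registry credential secret (for example, regcred) to a pod template so that images can be pulled from an authenticated registry. The Deployment name, image, and registry are illustrative; the actual cert-manager objects are managed by the Operator.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cert-manager                # illustrative; managed by the Operator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cert-manager
  template:
    metadata:
      labels:
        app: cert-manager
    spec:
      imagePullSecrets:
        - name: regcred             # secret of type kubernetes.io/dockerconfigjson
      containers:
        - name: cert-manager
          image: <private-registry>/cert-manager-controller:<tag>
```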
25.5.0
November 19, 2025
New Features
- Support for external Prometheus monitoring: Portworx now supports an external Prometheus instance for monitoring. You can configure this by disabling PX Prometheus and enabling metrics export in the StorageCluster:

  ```yaml
  spec:
    monitoring:
      prometheus:
        enabled: false
        exportMetrics: true
  ```

  For more information, see Monitor Clusters on Kubernetes. If you are using Autopilot, after you configure an external Prometheus instance, you must specify the Prometheus endpoint in the Autopilot configuration. For more information, see Autopilot Install and Setup. A sketch of an external Prometheus scrape configuration follows this item.
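If you point an external Prometheus at the cluster after setting exportMetrics: true, a scrape job similar to the following can collect node metrics. This is a sketch only: the port (9001) and metrics path are assumptions based on the default Portworx REST endpoint, so verify them against the Monitor Clusters on Kubernetes documentation.

```yaml
# Sketch of an external Prometheus scrape job for Portworx node metrics.
scrape_configs:
  - job_name: portworx
    metrics_path: /metrics           # assumed path
    kubernetes_sd_configs:
      - role: node
    relabel_configs:
      - source_labels: [__address__]
        regex: '(.*):\d+'
        replacement: '${1}:9001'     # assumed Portworx REST/metrics port
        target_label: __address__
```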
Improvements
| Improvement Number | Improvement Description |
|---|---|
| PWX-47324 | You can now use the spec.deleteStrategy.ignoreVolumes field in the StorageCluster spec to allow uninstall even when Portworx volumes are present. This is required in scenarios where PersistentVolumeClaims (PVCs) referencing Portworx storage classes exist, such as those created by KubeVirt virtual machines. When set to true, this field allows uninstall operations using the UninstallAndWipe or UninstallAndDelete strategy to proceed. If not set, the uninstall is blocked until all Portworx volumes are removed. Note: The default value is false. |
| PWX-25352 | The StorageCluster spec now supports a delete strategy type: UninstallAndDelete on vSphere, AWS, GKE, and Azure platforms. This option removes all Portworx components from the system, wipes storage devices, deletes Portworx metadata from KVDB, and removes the associated cloud drives. For more information, see Delete/Uninstall strategy. |
| PWX-47099 | The Portworx Operator now supports configuring the cert-manager, cert-manager-cainjector, and cert-manager-webhook deployments through the ComponentK8sConfig custom resource. You can now set labels, annotations, resource requests and limits, and placement specifications (such as tolerations) for these workloads using the ComponentK8sConfig API. This enhancement enables consistent and centralized configuration of Portworx-managed cert-manager components deployed for TLS enabled KVDB. |
| PWX-42597 | If call-home is enabled on the cluster, telemetry is now automatically enabled. If telemetry does not become healthy within 30 minutes, the Operator disables it. Telemetry can still be toggled manually using spec.monitoring.telemetry.enabled field in the StorageCluster. |
| PWX-47543 | The Operator now suppresses repeated gRPC connection errors when the Portworx service is down on a node, reducing log noise and improving readability during node failure scenarios. |
| PWX-35869 | On OpenShift clusters, the Operator now creates KubeVirt-related StorageClasses by default when the HyperConverged custom resource is detected. Creation of these StorageClasses can be controlled using the spec.csi.kubeVirtStorageClasses section in the StorageCluster. |
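The following StorageCluster fragment is a minimal sketch of the delete strategy options described in PWX-47324 and PWX-25352; the cluster name and the strategy choice are illustrative.

```yaml
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster                # illustrative name
  namespace: portworx
spec:
  deleteStrategy:
    type: UninstallAndWipe        # or UninstallAndDelete on supported cloud platforms
    ignoreVolumes: true           # allow uninstall even when Portworx volumes still exist
```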
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-47053 | Callhome telemetry failed in dual-stack clusters because the telemetry service did not listen on IPv6 interfaces. When the environment variable PX_PREFER_IPV6_NETWORK_IP=true was set, the px-telemetry-phonehome service attempted to use IPv6, but the underlying Envoy configuration was bound only to IPv4. User Impact: Telemetry data was not reported in dual-stack clusters with IPv6 preference. Resolution: The Operator now correctly configures telemetry services to bind to both IPv4 and IPv6, ensuring that telemetry functions in dual-stack environments. Affected Version: 25.4.0 and earlier | Minor |
| PWX-46703 | The Portworx Operator was incorrectly updating multiple components on every reconcile cycle, even when there were no changes to their specifications. Affected components included px-telemetry-registration, px-telemetry-metrics-collector, px-prometheus, portworx-proxy, and others. This behavior caused unnecessary updates to deployments and other resources. User Impact: Unnecessary API calls were made to kube-apiserver. Resolution: The Operator now correctly detects and skips updates when there are no spec differences. Default values are explicitly set, and label comparison logic has been fixed to prevent unintended updates. Affected Version: 25.4.0 and earlier | Minor |
| PWX-47985 | If a StorageCluster resource included a toleration with operator: Exists, the upgrade to PX-CSI version 25.8.0 failed. These tolerations matched all taints, which interfered with CSI migration logic during the upgrade. User Impact: PX upgrades failed in clusters using broad toleration rules. Resolution: The Operator now correctly handles tolerations with operator: Exists and no longer fails during upgrades. Affected Version: 25.4.0 | Minor |
| PWX-46704 | The Operator deleted all KVDB pods simultaneously when updating the resource or placementSpec in the ComponentK8sConfig or the STC. This caused the Operator to recreate all KVDB pods during the next reconciliation, which could result in quorum loss. User Impact: The cluster might temporarily lose KVDB quorum during updates, potentially affecting cluster availability and operations. Resolution: The Operator now respects the PodDisruptionBudget for KVDB pods and deletes them safely, ensuring quorum is maintained during updates. Affected Version: 25.4.0 | Minor |
| PWX-46394 | During upgrades with Smart Upgrade enabled, the Operator did not prioritize nodes marked with the custom Unschedulable annotation for ongoing KubeVirt VM migrations. As a result, the upgrade logic frequently selected new nodes in each cycle, causing redundant VM evictions and increased upgrade time. User Impact: Prolonged upgrade durations and repeated KubeVirt VM migrations. Resolution: The Operator now treats nodes with the Unschedulable annotation as unavailable and prioritizes them for upgrade until completion. This ensures upgrade continuity and avoids redundant VM evictions. Affected Version: 25.4.0 and earlier | Minor |
| PWX-48279 | On clusters with SELinux set to enforcing mode, the px-pure-csi-node pod crashed due to denied access when attempting to connect to the CSI socket. The node-driver-registrar and liveness-probe containers were blocked by SELinux policies from accessing /csi/csi.sock, resulting in repeated connection failures and pod crash loops. User Impact: The px-pure-csi-node pod failed to start, preventing CSI node registration and storage provisioning when SELinux was in enforcing mode. Resolution: The Operator now configures the node-driver-registrar and liveness-probe containers with the required security context to allow socket access under SELinux enforcing mode. Affected Version: 25.4.0 | Minor |
| PWX-45817 | On some clusters, external webhooks such as those used by Mirantis Kubernetes Engine (MKE) injected configuration into KVDB and Portworx pods, including tolerations, affinity, or placement rules. If these injected settings were not explicitly defined in the ComponentK8sConfig custom resource (CR) or the StorageCluster spec, the Portworx Operator removed them, causing pod restarts. User Impact: Affected pods restarted continuously due to missing tolerations or placement rules. Resolution: The Operator now preserves webhook-injected configuration by default. This fix applies to both StorageCluster and ComponentK8sConfig workflows. Affected Version: 25.3.0, 25.3.1, and 25.4.0 | Minor |
Known issues (Errata)
| Issue Number | Issue Description |
|---|---|
| PWX-47502 | Kubernetes upgrades on AKS might fail when the data drives use Premium_LRS disks and Smart Upgrade isn't enabled, especially if maxUnavailable is set to 1. User Impact: If Smart Upgrade isn't enabled and a node is down, the upgrade process halts due to the maxUnavailable=1 setting. Even if you increase maxUnavailable to 2, you might still experience a 30-minute timeout due to slow I/O performance from the underlying disk type. Workaround: Enable Smart Upgrade by following these instructions, and adjust maxUnavailable to unblock the upgrade (see the sketch after this table). However, if disk performance issues persist, the timeout might still occur. Affected Version: 25.5.0 |
| PWX-48822 | In air-gapped environments with KVDB TLS enabled, the cert-manager image pull fails when using an authenticated custom registry. This occurs because the Portworx Operator does not attach the regcred secret to the cert-manager pods. User Impact: The cert-manager pods enter an ImagePullBackOff state due to missing registry credentials, and installation fails. Affected Version: 25.2.1 or later |
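As a reference for the PWX-47502 workaround, maxUnavailable is typically adjusted through the StorageCluster rolling update strategy. The fragment below is a sketch only; the exact fields for enabling Smart Upgrade are covered in the linked instructions.

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2    # raise from the default of 1 to unblock the upgrade
```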
25.4.0
October 15, 2025
New features
- Support for PX-CSI 25.8.0: Portworx Operator adds support for the redesigned PX-CSI (version 25.8.0). For PX-CSI, this release introduces new components, such as the CSI Controller Plugin and CSI Node Plugin, removes dependencies on KVDB, PX API, CSI, PX Cluster, and PX Plugin pods, and enables in-place migration from earlier PX-CSI versions.
Improvements
| Improvement Number | Improvement Description |
|---|---|
| PWX-37494 | Added support to override the seLinuxMount setting in the CSI driver, via the spec.csi.seLinuxMount field in the StorageCluster specification. This field defaults to true but can be set to false in environments where SELinux relabeling is not required. |
| PWX-38408 | The Portworx Operator now supports updating the image for the portworx-proxy DaemonSet based on the image specified in the px-versions ConfigMap. Previously, this image was hard-coded to registry.k8s.io/pause:<release-version>. You can now configure a different image. The proxy DaemonSet reflects updates when AutoUpdateComponents is set to Once or Always. |
| PWX-46659 | The Operator version is now reported in the StorageCluster.status.OperatorVersion field. This change improves visibility into the deployed Operator version. |
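The following is a minimal sketch of the PWX-37494 override described in the table above; only the csi block of the StorageCluster is shown, and the value is illustrative.

```yaml
spec:
  csi:
    enabled: true
    seLinuxMount: false   # defaults to true; set to false where SELinux relabeling is not required
```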
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-46825 | The Portworx Operator incorrectly added the default annotation to a VolumeSnapshotClass even when a default VolumeSnapshotClass already existed. User Impact: This might result in multiple VolumeSnapshotClass objects marked as default. Resolution: The Operator now checks whether a default VolumeSnapshotClass already exists before applying the default annotation (see the sketch after this table). | Minor |
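For reference on PWX-46825, Kubernetes marks a default VolumeSnapshotClass with the standard snapshot.storage.kubernetes.io/is-default-class annotation. The sketch below uses the px-csi-snapclass name mentioned later in these notes and the Portworx CSI driver name; treat the deletion policy as an example value.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: px-csi-snapclass
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"   # only one class should carry this
driver: pxd.portworx.com
deletionPolicy: Delete
```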
25.3.1
September 06, 2025
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-46691 | On OpenShift Container Platform (OCP) 4.15 and earlier, if Portworx-specific ServiceAccount objects have no annotations, the Operator updates the objects during every reconciliation loop. User Impact: Service account updates trigger the regeneration of associated kubernetes.io/dockercfg and kubernetes.io/service-account-token secrets, causing excessive creation of Secret objects and unnecessary API traffic. Resolution: The Operator no longer performs redundant updates on ServiceAccount objects without annotations, preventing unnecessary regeneration of Secret objects and reducing API load. Affected Versions: 25.3.0 | Major |
25.3.0
September 03, 2025
- When you upgrade to Operator version 25.3.0, the px-plugin and px-plugin-proxy pods restart.
- If you are running OpenShift version 4.15 or earlier, do not upgrade to Operator version 25.3.0. This version causes excessive Secret object creation due to repeated ServiceAccount updates, which significantly increases API server load. For more information about the workaround, see here.
New features
- ComponentK8sConfig: The ComponentK8sConfig custom resource allows configuration of resources, labels, annotations, tolerations, and placement rules for all Portworx components. Configurations previously defined in the StorageCluster should now be migrated to the ComponentK8sConfig custom resource. For more information, see Configure resource limits, placements, tolerations, nodeAffinity, labels, and annotations for Portworx components. A sketch of the resource structure follows below.
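The following is a hedged sketch of a ComponentK8sConfig resource. The componentNames/workloadConfigs/placement structure mirrors the example shown in the PWX-45817 workaround later in these notes, but the apiVersion, the spec wrapper field, and the toleration key are assumptions; consult the linked configuration page for the exact schema.

```yaml
apiVersion: portworx.io/v1            # assumed group/version
kind: ComponentK8sConfig
metadata:
  name: px-component-config           # illustrative name
  namespace: portworx
spec:
  components:                         # assumed wrapper field for the component list
    - componentNames:
        - KVDB
        - Storage
      workloadConfigs:
        - workloadNames:
            - storage
            - portworx-kvdb
          placement:
            tolerations:
              - key: example.com/dedicated   # hypothetical taint key
                operator: Exists
```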
Improvements
| Improvement Number | Improvement Description |
|---|---|
| PWX-42536 | Starting with Kubernetes version 1.31, in-tree storage drivers have been deprecated, and the Portworx CSI driver must be used. The Portworx Operator now automatically sets the CSI configuration to enabled if the CSI spec is left empty or explicitly disabled. If CSI is already enabled, no changes are made. |
| PWX-42429 | The Portworx Operator now supports IPv6 clusters in the OpenShift dynamic console plugin. |
| PWX-44837 | The Portworx Operator now creates a default VolumeSnapshotClass named px-csi-snapclass. You can configure this behavior using the spec.csi.volumeSnapshotClass field in the StorageCluster custom resource. |
| PWX-44472 | The Portworx Operator now reports a new state, UpdatePaused, when an upgrade is paused. This state indicates that an update is not in progress. StorageCluster events and logs provide additional context about the paused upgrade. |
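As context for PWX-42536 and PWX-44837, the csi section of the StorageCluster is a small block like the sketch below. The values shown are examples; on Kubernetes 1.31 and later the Operator enables CSI automatically if the section is empty or disabled.

```yaml
spec:
  csi:
    enabled: true                    # required on Kubernetes 1.31 and later
    installSnapshotController: true  # deploys the external snapshot controller
```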
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-45461 | The Portworx Operator was applying outdated CSI CustomResourceDefinitions (CRDs) that were missing the sourceVolumeMode field in VolumeSnapshotContent, resulting in compatibility issues on standard Kubernetes clusters. User Impact: On vanilla Kubernetes version 1.25 or later, attempts to create VolumeSnapshots failed due to the missing spec.sourceVolumeMode field. Snapshot controller logs reported warnings such as unknown field "spec.sourceVolumeMode". Managed Kubernetes distributions like OpenShift were unaffected, as they typically include the correct CRDs by default. Resolution: The Operator now applies CSI CRDs version 8.2.0, which includes the sourceVolumeMode field, ensuring compatibility with Kubernetes 1.25 and later. Affected Versions: 25.2.2 or earlier | Minor |
| PWX-45246 | An outdated Prometheus CustomResourceDefinition (CRD) was previously downloaded by the Operator. This CRD lacked required fields, which caused schema validation errors during Prometheus Operator reconciliation. User Impact: Reconciliation failures occurred due to missing fields in the CRD. Resolution: The Operator now references the latest Prometheus CRD at the deployment URL, ensuring compatibility and preventing schema validation errors. Affected Versions: 25.2.0 or earlier | Minor |
| PWX-45156 | Live migration was previously skipped only for volumes with backend=pure_block. The Operator continued to trigger live migration for other volume types, such as FADA (pure_fa_file) and FBDA, even when it was not appropriate. User Impact: Unnecessary migrations during upgrades could lead to virtual machine (VM) evictions and movement to other PX nodes. Resolution: The Operator now skips live migration for volumes using FADA and FBDA backends, reducing disruption and maintaining application availability during upgrades. Affected Versions: 25.2.2 or earlier | Minor |
| PWX-45048 | In clusters with KubeVirt virtual machines (VMs), the Portworx Operator might not remove the custom "unschedulable" annotation from nodes when it is no longer needed. Additionally, paused VMs prevented upgrades from proceeding, as they cannot be evicted. Resolution: The Operator now ignores paused KubeVirt VMs during upgrades and removes the custom "unschedulable" annotation when it is no longer required. This behavior improves upgrade reliability in KubeVirt environments. Affected Versions: 25.2.2 or earlier | Minor |
| PWX-44974 | In large clusters (for example, 250+ StorageNodes), the Operator’s API calls to the Kubernetes API server increased linearly, resulting in high load on the API server. Resolution: The Operator is improved with better caching, which significantly reduces API calls to the Kubernetes API server. Affected versions: 25.2.1 and 25.2.2 | Minor |
| PWX-39097 | When csi.enabled was set to false in the StorageCluster (STC) spec, the installSnapshotController field remained enabled, creating inconsistencies in the CSI configuration. User Impact: This mismatch could lead to confusion or result in the snapshot controller being deployed unnecessarily. Resolution: The Operator now automatically resets installSnapshotController when CSI is disabled, maintaining consistent configuration behavior. Affected Versions: 25.2.0 or earlier | Minor |
Known issues (Errata)
- PWX-46691: If Portworx-specific ServiceAccount objects do not include any annotations, the Operator updates these objects during each reconciliation loop. On OpenShift Container Platform (OCP), each ServiceAccount update triggers the regeneration of associated Secret objects, causing excessive Secret creation and unnecessary API traffic. This affects OCP versions 4.15 and earlier.

  Workaround: Add at least one annotation to each of the following Portworx-specific ServiceAccount objects:

  - autopilot
  - px-csi
  - portworx-proxy
  - px-telemetry
  - stork
  - stork-scheduler

  For example:

  ```yaml
  kind: ServiceAccount
  metadata:
    annotations:
      portworx.io/reconcile: "ignore"
  ```

  Note: If you've already upgraded to Operator version 25.3.0 and are affected by this issue, you can either downgrade to a previous Operator version or follow the workaround described above.

  To downgrade the Operator version, follow these steps to uninstall Operator version 25.3.0 and install 25.2.2:

  1. In the OpenShift web console, go to Operators > Installed Operators.
  2. Verify that the installed version of Portworx Enterprise is 25.3.0.
  3. Select Actions > Uninstall Operator.
  4. In the confirmation dialog, clear the Delete all operand instances for this operator check box. This ensures that Portworx continues to run after uninstallation of the Operator.

     

  5. Select Uninstall again to confirm.
  6. After the uninstallation completes, return to OperatorHub.
  7. Search for Portworx Enterprise, and then install version 25.2.2. Ensure that you set the Update approval to manual while installing.
- PWX-45817: On some clusters, external webhooks such as those used by Mirantis Kubernetes Engine (MKE) may inject additional configuration into KVDB and Portworx pods. This includes tolerations, affinity rules, or placement constraints.

  If you use the ComponentK8sConfig custom resource (CR) to manage tolerations, and the injected tolerations are not explicitly defined in the CR, the Portworx Operator removes them. As a result, the affected pods restart continuously. This issue is not limited to MKE and can affect any platform where an external webhook injects configuration into workloads. It can occur when using either the StorageCluster or the ComponentK8sConfig.

  Workaround: Follow these steps:

  1. Ensure that jq or yq is installed on your machine.
  2. Get the kubeconfig of your cluster.
  3. Get the taints on the cluster nodes:

     - For jq, run:

       ```
       kubectl --kubeconfig=<path/to/kubeconfig> get nodes -ojson | jq -r . | jq -r '.items[].spec.taints'
       ```

     - For yq, run:

       ```
       kubectl --kubeconfig=<path/to/kubeconfig> get nodes -oyaml | yq -r . | yq -r '.items[].spec.taints'
       ```

     Example output:

     ```
     null
     null
     null
     [{"effect": "NoSchedule", "key": "com.docker.ucp.manager"}]
     null
     null
     ```

  4. Apply the tolerations to the ComponentK8sConfig CR based on the command output from the previous step. For example:

     ```yaml
     - componentNames:
         - KVDB
         - Storage
         - Portworx API
       workloadConfigs:
         - placement:
             tolerations:
               - key: com.docker.ucp.manager
                 operator: Exists
           workloadNames:
             - storage
             - portworx-kvdb
             - portworx-api
     ```
- PWX-45960: When using the workload identity feature, a restart of KVDB pods can cause the eks-pod-identity webhook to inject credentials into the KVDB pods, because the same service account is used for the Portworx API, Portworx, and KVDB pods.

  Note: When credentials are removed from the StorageCluster, the Operator does not remove them from the KVDB pods if they have already been added, so you must manually restart the KVDB pods to remove these credentials.
25.2.2
July 8, 2025
Improvements
| Improvement Number | Improvement Description |
|---|---|
| PWX-40116 | Portworx Operator now emits events on the StorageCluster object during Portworx and Kubernetes smart upgrades if a node is not selected for upgrade. Each event includes details explaining why the node was not selected for the upgrade. |
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-45078 | The Operator configured Prometheus to connect to KVDB metrics over HTTP, even when TLS was enabled. This caused the connections to fail. User Impact: In clusters with KVDB TLS enabled, Prometheus could not scrape internal KVDB metrics. As a result, monitoring dashboards were incomplete and TLS handshake errors appeared in logs. Resolution: The Operator now configures Prometheus to use HTTPS with TLS settings when KVDB TLS is enabled. This ensures that internal KVDB metrics are collected successfully. Affected Versions: 25.2.1 | Minor |
25.2.1
June 23, 2025
We recommend upgrading to Portworx Operator version 25.2.1. Follow these guidelines for a seamless upgrade:
- If you're currently running Portworx Operator version 24.2.4 or earlier, we recommend upgrading to Operator 25.2.1 directly by following the standard upgrade procedure.
- If you're currently running Portworx Operator version 25.1.0 or 25.2.0, follow these steps:
  - If you've labeled only the nodes that should run Portworx by using the px/enabled=true label, ensure that you've applied the workaround described here.
  - Upgrade Portworx Operator.
- On OpenShift clusters, if you're currently running Portworx Operator version 25.1.0, you cannot upgrade to 25.2.1 directly. You must uninstall Operator 25.1.0 (while retaining the StorageCluster and all operands) before installing the new version. For more information, see Upgrade notes for OpenShift.
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-44096 | During a Portworx upgrade, if one or more nodes are cordoned, Portworx might upgrade multiple nodes simultaneously, exceeding the maxUnavailable configuration. This behavior occurs because of the default value of the cordon restart delay, which is set to 5 minutes. User Impact: More nodes might be upgraded simultaneously than specified by maxUnavailable. Resolution: Now, when any nodes are cordoned, the Operator overrides the value of the cordon restart delay to 0 seconds during a Portworx upgrade. This ensures that only the number of nodes specified by maxUnavailable are upgraded at the same time. Affected Versions: 25.2.0 and earlier | Minor |
Known issues (Errata)
| Issue Number | Issue Description |
|---|---|
| PWX-41729 | The Portworx Operator did not schedule Portworx pods on nodes labeled px/enabled=false. However, during node decommission, the label is set to px/enabled=remove before being changed to px/enabled=false after Portworx is fully removed. If removal takes time, the Operator could incorrectly reschedule Portworx pods. User Impact: Portworx pods may be rescheduled on nodes that are in the process of being decommissioned, disrupting removal workflows. Affected Versions: 25.2.1 and 24.2.4 and earlier |
25.2.0
May 25, 2025
Clusters running Portworx Operator version 24.2.4 can upgrade directly to 25.2.0 by following the regular upgrade procedure, skipping the upgrade to 25.1.0.
OpenShift clusters running Portworx Operator version 25.1.0 cannot upgrade to 25.2.0 directly. You must uninstall Operator 25.1.0 (while retaining the StorageCluster and all operands) before installing the new version. For more information, see Upgrade notes for OpenShift.
New features
- Smart Upgrade now supports Kubernetes upgrades on additional platforms, expanding the streamlined and resilient upgrade process. Smart Upgrade maintains volume quorum and prevents application disruption during the parallel upgrade of Portworx and Kubernetes nodes.
  - Portworx upgrade: Smart Upgrade continues to be supported on all platforms.
  - Kubernetes upgrade: Smart Upgrade is now supported on the following platforms:
    - All OpenShift distributions
    - Google Anthos
    - Vanilla Kubernetes
    - Azure Kubernetes Service (AKS) (new)
    - Amazon Elastic Kubernetes Service (EKS) (new)
    - Google Kubernetes Engine (GKE) (new)

  For more information, see Smart Upgrade.

- Custom labels are supported on all Portworx-managed components. You can apply labels to each component or component type via StorageCluster.spec.metadata.labels to improve observability. For more information, see StorageCluster.
Improvements
| Improvement Number | Improvement Description |
|---|---|
| PWX-38655 | Portworx Operator now supports the spec.monitoring.telemetry.hostNetwork field in the StorageCluster YAML. This setting controls whether Telemetry Pods run using either the Kubernetes network (default) or the host network. |
| PWX-42823 | When runtime options are specified as part of the misc-args annotation using rt_opts, and also provided as key-value pairs under RuntimeOptions in the cluster or node specification, the key-value pairs in RuntimeOptions are ignored, and a warning is raised in the StorageCluster. For more information, see StorageCluster CRD reference. |
| PWX-42799 | Portworx Operator now supports configuring resource limits for Telemetry pods through the StorageCluster specification. You can now use spec.monitoring.telemetry.resources to set CPU and memory limits for the registration and phonehome pods. |
| PWX-41647 | The Portworx Operator no longer creates in-tree StorageClasses on Kubernetes 1.31 and later. Users should use Portworx CSI StorageClass for volume provisioning. |
| PWX-42375 | Portworx Operator now supports configuring miscArgs at the node level using the spec.nodes[i].miscArgs field, in addition to cluster-level configuration using portworx.io/misc-args annotation. If both are set, the two sets of arguments are combined. When a key exists in both, the node-level value takes precedence. If no arguments are specified at the node level, the cluster-level annotations apply to all PX pods. |
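The fragments below sketch the fields referenced in PWX-38655, PWX-42799, and PWX-42375. The field paths come from the table above; the resource values, the node selector label, and the miscArgs value are illustrative placeholders.

```yaml
spec:
  monitoring:
    telemetry:
      enabled: true
      hostNetwork: false             # default: telemetry pods use the Kubernetes network
      resources:                     # applies to the registration and phonehome pods
        limits:
          cpu: 200m
          memory: 256Mi
  nodes:
    - selector:
        labelSelector:
          matchLabels:
            px/node-group: group-a   # illustrative selector
      miscArgs: "<additional-args>"  # node-level keys take precedence over the cluster-level annotation
```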
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-44295 | When upgrading the operator to version 25.1.0, the StorageCluster spec placement configuration was reset to default values. This caused Portworx pods to stop running on infrastructure nodes in OpenShift clusters that had custom placement rules. User Impact: This issue affects OpenShift clusters where Portworx was explicitly configured to run on infrastructure nodes. As a result, workloads relying on Portworx storage on those nodes have become unavailable. Resolution: The operator now preserves existing custom node placement configurations that include infrastructure nodes during upgrades. New installations will continue to use the default placement rules, which exclude infrastructure nodes. Affected Versions: 25.1.0 | Major |
| PWX-42315 | If you disable telemetry in the Portworx Operator configuration, the Operator deletes the associated telemetry ConfigMap. However, it doesn’t remove the corresponding telemetry volume mounts from Portworx pods. User Impact: Portworx pods reference stale telemetry volume mounts. When a node reboots or a pod restarts, Portworx containers fail to start because the ConfigMap volumes are missing. Resolution: The Portworx Operator now detects changes in the telemetry specification and triggers a Portworx upgrade to apply them. This ensures that Portworx and Portworx API pods on affected nodes restart with the correct volume mounts, preventing stale references and startup failures. Affected Versions: 24.2.4 or earlier | Minor |
| PWX-41729 | The Portworx Operator did not schedule Portworx pods on nodes labeled px/enabled=false. However, during node decommission, the label is set to px/enabled=remove before being changed to px/enabled=false after Portworx is fully removed. If removal takes time, the Operator could incorrectly reschedule Portworx pods. User Impact: Portworx pods may be rescheduled on nodes that are in the process of being decommissioned, disrupting removal workflows. Resolution: The Operator now checks for both px/enabled=remove and px/enabled=false before scheduling Portworx pods. This prevents unintended scheduling during node decommissioning. Note: When upgrading to operator version 25.1.0 from an earlier version, all operator-managed pods will restart—except Portworx oci-monitor pods. This is due to a new node affinity rule. Affected Versions: 24.2.4 or earlier | Minor |
| PWX-41181 | Migration from the Portworx DaemonSet to the Portworx Operator failed when performed using Helm. Helm created the StorageCluster directly, which prevented the Operator from adding the required migration status condition. As a result, the Operator could not detect that migration was approved, even when the correct annotation was set. User Impact: The migration could not proceed, and Portworx pods were not created by the Operator. This left the cluster in an incomplete state. Resolution: The Operator now adds the migration status condition when migration starts, even if the StorageCluster was created outside the Operator. This allows migration to complete as expected. Affected Versions: 24.2.4 or earlier | Minor |
| PWX-40283 | During Portworx upgrades, the StorageCluster status remained Running even after the upgrade started. This made it appear that the upgrade was complete when it was still in progress. User Impact: Users could not track the upgrade accurately. The Running status caused confusion and limited visibility into progress. Resolution: Portworx Operator now sets the status to Updating when the upgrade begins and reverts it to Running after completion. This provides clearer tracking of the upgrade process. Affected Versions: 24.2.4 or earlier | Minor |
| PWX-37878 | STORK could not reschedule pods during node maintenance because it was unable to access the Portworx API. This triggered a "No node found with storage driver" error. User Impact: Pods could become stuck or unavailable during node maintenance, affecting application availability. Resolution: A liveness probe was added for the Portworx API. This allows STORK to detect when a node is down or in maintenance mode and reschedule pods appropriately. Note: Portworx API pods will restart during the upgrade to operator version 25.1.0 due to the new liveness probe. Affected Versions: 24.2.4 or earlier | Minor |
| PWX-36253 | When external etcd was configured with only the --cacert flag, the Portworx Operator did not add the required volume mount for the etcd secret. This caused Portworx pods to fail during DaemonSet to Operator migration. User Impact: Clusters using external etcd with only a CA certificate failed to start Portworx pods during migration, blocking successful initialization. Resolution: Portworx Operator now adds the etcd secret volume mount when only the --cacert flag is set. Portworx pods now start successfully during migration in this configuration. Affected Versions: 24.2.4 or earlier | Minor |
Known issues (Errata)
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-44331 | To prevent Portworx from being installed and started on specific nodes, Portworx recommends labeling those nodes with px/enabled=false. This issue applies if you labeled only the nodes where Portworx should run by using the px/enabled=true label. User Impact: After upgrading to Operator version 25.2.0, Portworx scheduling on these clusters might not behave as expected. Workaround: Before upgrading to Operator version 25.2.0, update the node labels as described above (see the sketch after this table). | Minor |
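For PWX-44331, node scheduling for Portworx is controlled with the px/enabled node label, as described in the PWX-41729 entries above. The commands below are a sketch of the standard kubectl usage; the node name is a placeholder.

```
# Prevent Portworx from being installed or started on a node:
kubectl label node <node-name> px/enabled=false --overwrite

# Verify the label values before upgrading the Operator:
kubectl get nodes -L px/enabled
```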
Upgrade notes for OpenShift
Portworx Operator version 25.1.0 has been replaced with 25.2.0. Upgrade to version 25.2.0.
These steps apply only to OpenShift clusters.
For other Kubernetes environments, refer to Upgrade Portworx Operator.
Follow these steps to uninstall Operator version 25.1.0 and install version 25.2.0 on OpenShift clusters:
1. In the OpenShift web console, go to Operators > Installed Operators.
2. Verify that the installed version of Portworx Enterprise is 25.1.0.
3. Select Actions > Uninstall Operator.
4. In the confirmation dialog, clear the Delete all operand instances for this operator check box. This ensures that Portworx continues to run after uninstallation of the Operator.

   

5. Select Uninstall again to confirm.
6. After the uninstallation completes, return to OperatorHub.
7. Search for Portworx Enterprise, and then install version 25.2.0.
25.1.0
May 21, 2025
This release has been replaced with version 25.2.0.
If Portworx was running on nodes labeled node-role.kubernetes.io/infra before upgrading to Portworx Operator version 25.1.0, you might notice that Portworx no longer runs on those nodes after the upgrade. This issue occurs because upgrading the Portworx Operator to version 25.1.0 resets the pod placement configuration in the StorageCluster resource to its default values. As a result, Portworx pods may stop running on infrastructure nodes in OpenShift clusters that use custom placement rules.
Workaround: Manually edit the StorageCluster resource to remove the node affinity rule that excludes infrastructure nodes.
Delete the following section under the spec.placement.nodeAffinity field from the StorageCluster spec.
```yaml
- key: node-role.kubernetes.io/infra
  operator: DoesNotExist
```