Portworx Operator Release Notes
26.1.0
April 6, 2026
New Features
- Everpure Fusion Integration: The Portworx Operator now supports Everpure Fusion integration. Use Fusion to centrally manage storage policies across your Kubernetes clusters. Enable it by using the spec.purePlatform field in the StorageCluster custom resource (CR). The operator automatically deploys the Integration Operator component when Fusion is enabled; the Integration Operator manages the Fusion controller life cycle. For more information about the StorageCluster fields that enable Fusion integration, see the StorageCluster CRD reference.
  Fusion integration requires Portworx Enterprise 3.6.0 or later. If you enable Fusion on an earlier version, the Operator displays a warning.
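Based on the description above, enabling Fusion might look like the following StorageCluster fragment. This is a sketch: spec.purePlatform comes from the release note, but its sub-fields (here, enabled) are assumptions; confirm the exact schema in the StorageCluster CRD reference.

```yaml
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster
  namespace: portworx
spec:
  image: portworx/oci-monitor:3.6.0   # Fusion requires Portworx Enterprise 3.6.0 or later
  # spec.purePlatform enables the Everpure Fusion integration; the Operator
  # then deploys the Integration Operator component automatically.
  purePlatform:
    enabled: true   # assumed sub-field; verify in the StorageCluster CRD reference
```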
- Restrict Data Protection RBAC: The Portworx Operator now supports a restricted RBAC mode for storage-only deployments that don't require data protection capabilities, such as backups or disaster recovery. You can limit Stork permissions or run the operator with the minimum required permissions to meet least-privilege security requirements. For more information, see Restrict RBAC for Stork and Operator.
  Note: This feature is not supported for PX-CSI.
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-51912 | RBAC warnings related to VolumeAttributesClass were emitted from the PX-CSI resizer container on Kubernetes 1.34 or later. The warnings occurred because the PX-CSI controller ClusterRole lacked permissions to list VolumeAttributesClass resources. User Impact: Warning messages appeared in PX-CSI resizer container logs, which could cause confusion during troubleshooting. Resolution: The operator now grants the required RBAC permissions for VolumeAttributesClass to the PX-CSI controller ClusterRole, eliminating the warning messages. Affected Version: 25.6.1 and earlier | Minor |
| PWX-47895 | The operator did not automatically add systemMetadataDeviceSpec to the StorageCluster when PX-StoreV2 was specified in the annotation and preflight checks were skipped. This caused Portworx installations to fail with errors indicating that the system metadata device was not specified, which is required for PX-StoreV2 configurations. User Impact: Portworx installations with PX-StoreV2 failed when preflight checks were skipped, requiring manual intervention to add the systemMetadataDeviceSpec field. Resolution: The operator now automatically adds a default systemMetadataDeviceSpec when PX-StoreV2 is specified in the annotation, the systemMetadataDeviceSpec field is empty, and preflight checks are skipped. Affected Version: 25.6.1 and earlier | Major |
25.6.1
March 2, 2026
If you are upgrading from Portworx Operator version 25.6.0, ensure that you have reviewed the Known issues (errata) section and applied any applicable workarounds for your environment.
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-51697 | When you added custom annotations to operator-created StorageClass objects (except the KubeVirt StorageClass objects px-rwx-block-kubevirt, px-rwx-file-kubevirt, and px-cdi-scratch), the operator removed them during reconciliation. User Impact: Integrations that relied on these annotations (for example, backup tools or GitOps systems) might not have detected the intended StorageClass objects. Resolution: The operator now preserves custom annotations on operator-created StorageClass objects. Affected Version: 25.6.0 | Major |
| PWX-49456 | If you created a VolumeSnapshotClass object named px-csi-snapclass, the operator deleted it and recreated it. Custom annotations, labels, parameters, or deletion policy settings on operator-created VolumeSnapshotClass objects were not preserved. User Impact: Custom VolumeSnapshotClass configurations were lost, which might have affected snapshot workflows or integrations. Resolution: The operator now preserves existing VolumeSnapshotClass resources. If a VolumeSnapshotClass with the target name already exists in your cluster, the operator no longer modifies or overwrites it. Affected Version: 25.6.0 | Major |
| PWX-49374 | When the portworx.io/disable-storage-class: "true" annotation was applied to the StorageCluster, the operator could delete an existing StorageClass object if its name matched a default StorageClass introduced by the operator, even if that StorageClass was originally created by you. Additionally, Portworx Operator version 25.6.0 created StorageClass objects without the managed-by: operator label, which caused StorageClass objects to remain orphaned after uninstallation. User Impact: You could unexpectedly lose user-created StorageClass objects after upgrading the operator or enabling the disable-storage-class annotation, which could disrupt workloads relying on those StorageClass objects. Additionally, manual cleanup was required during uninstallation. Resolution: The operator now safely manages StorageClass objects by using explicit managed-by: operator ownership labels. The operator now deletes StorageClass objects only if they were created and labeled as operator-managed. User-created StorageClass objects are no longer modified or deleted, even if their names match default StorageClass objects. Affected Version: 25.6.0 | Major |
Known issues (Errata)
| Issue number | Description |
|---|---|
| PWX-51756 | If you add the portworx.io/disable-storage-class: "true" annotation to the StorageCluster after you install Portworx and then remove it, the Operator doesn’t recreate the KubeVirt StorageClass objects (px-rwx-block-kubevirt, px-rwx-file-kubevirt, and px-cdi-scratch) automatically. This issue doesn’t occur if the annotation is present during initial installation and then removed. User impact: In OpenShift clusters with virtualization enabled, the KubeVirt StorageClass objects aren’t created after you remove the disable-storage-class annotation that you added after installation. Workaround: Restart the Portworx Operator pod to trigger the recreation of the missing StorageClass objects. Affected version: 25.5.2 or later |
25.6.0
February 24, 2026
We recommend that you install or upgrade to Portworx Operator version 25.6.1 instead of 25.6.0 to avoid the known issues with StorageClass and VolumeSnapshotClass object management. For details, see the Known issues (errata) section.
Note: Existing PVs and volumes are not affected.
New Features
- Two-Node Arbiter (TNA) support for OpenShift: The Portworx Operator now supports Red Hat OpenShift two-node with arbiter (TNA) clusters for edge deployments. A TNA cluster has two control-plane nodes and one arbiter node. The arbiter stores Portworx KVDB data to maintain quorum and prevent split-brain, but it doesn’t store application data or run workloads. For more information, see Installation on OpenShift Two-Node with Arbiter Bare Metal Cluster.
  Note: This feature requires Portworx Enterprise 3.5.2 or later and OpenShift Container Platform 4.20.11 or later.
- OpenShift dynamic plugin configuration enhancements: You can now set custom images for Portworx OpenShift dynamic plugin components by using spec.ocpDynamicPlugin in the StorageCluster spec:
  - pluginImage: Portworx plugin image
  - proxyImage: Portworx plugin proxy image
  For more information, see StorageCluster CRD reference.
  Portworx Plugin components (px-plugin and px-plugin-proxy) are now deployed only when the OpenShift Console plugin is enabled. In earlier versions, these components were always deployed, even when the plugin was disabled. You can configure resource limits, tolerations, and other settings for these components by using the ComponentK8sConfig CR.
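A StorageCluster fragment for the dynamic plugin images might look like the following. The field names pluginImage and proxyImage come from the release note; the image registry, repository, and tag shown are hypothetical placeholders.

```yaml
# Sketch: custom images for the OpenShift dynamic plugin components.
spec:
  ocpDynamicPlugin:
    pluginImage: registry.example.com/portworx/px-plugin:custom       # hypothetical image reference
    proxyImage: registry.example.com/portworx/px-plugin-proxy:custom  # hypothetical image reference
```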
- Priority class configuration for all Portworx components: You can now set priority classes for Portworx components by using the ComponentK8sConfig custom resource (CR):
  - spec.globalConfig.priorityClass: Apply one priority class to all Portworx components.
  - spec.components.workloadConfigs.priorityClass: Override the global priority class for specific components.
  A higher priority class helps keep Portworx pods scheduled and reduces the chance of eviction under resource pressure. For more information, see Configure Priority Class and ComponentK8sConfig CRD reference.
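As a sketch, a ComponentK8sConfig that sets a global priority class and overrides it for one workload might look like the following. Only spec.globalConfig.priorityClass and spec.components.workloadConfigs.priorityClass are named in the release note; the apiVersion, component and workload names, and list nesting are assumptions to illustrate the shape, so verify them against the ComponentK8sConfig CRD reference.

```yaml
apiVersion: portworx.io/v1   # assumed group/version
kind: ComponentK8sConfig
metadata:
  name: px-component-config
  namespace: portworx
spec:
  globalConfig:
    priorityClass: system-cluster-critical   # applied to all Portworx components
  components:
    - componentNames: ["Stork"]              # hypothetical component name
      workloadConfigs:
        - workloadNames: ["stork"]           # hypothetical workload name
          priorityClass: system-node-critical  # overrides the global priority class
```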
- Autopilot Prometheus metrics and alerts: The Portworx Operator now exposes Prometheus metrics for Autopilot and adds the AutopilotActionFailed alert, which fires when an action for a rule object fails. For more information, see Portworx Metrics and Portworx Alerts.
- ComponentK8sConfig in OpenShift Software Catalog: ComponentK8sConfig is now available in the OpenShift Software Catalog as a Provided API. You can create and manage ComponentK8sConfig resources in the OpenShift console to configure settings such as priority classes, resource limits, and deployment behavior. For more information, see ComponentK8sConfig CRD reference and Configure resource limits, placements, tolerations, nodeAffinity, labels, and annotations for Portworx components.
- Automated pxfslibs update DaemonSet: The Portworx Operator can now deploy and manage the pxfslibs update DaemonSet through the StorageCluster spec. This feature is supported only in air-gapped environments and simplifies updating filesystem dependencies. The Operator creates the DaemonSet, monitors execution, updates status, and cleans up. Configure it by using spec.pxfslibsUpdate. For more information, see Update Portworx filesystem dependencies and StorageCluster CRD reference.
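A minimal fragment for the pxfslibs update DaemonSet might look like this. Only the spec.pxfslibsUpdate field is named in the release note; the enabled sub-field is an assumption, so check the StorageCluster CRD reference for the actual schema.

```yaml
# Sketch: enable the pxfslibs update DaemonSet (air-gapped environments only).
spec:
  pxfslibsUpdate:
    enabled: true   # assumed sub-field; see the StorageCluster CRD reference
```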
Improvements
| Improvement Number | Improvement Description |
|---|---|
| PWX-49291 | Increased the default memory resources for px-telemetry-metrics-collector: requests from 64Mi to 128Mi, and limits from 128Mi to 1000Mi. These defaults apply when no custom values are set using the ComponentK8sConfig CR. |
| PWX-48988 | Added RBAC get and list permissions for CSIDriver, StatefulSets, DaemonSets, and ReplicaSets to support diagnostics collection of storage-related Kubernetes objects. |
| PWX-43575 | Added RBAC get and list permissions for ControllerRevisions so diagnostics can capture these resources. |
| PWX-49290 | Increased the readiness probe timeout for telemetry registration and phone-home probes from 1s to 2s. This prevents intermittent failures when /ping-trusted responds successfully but exceeds the 1s client-side timeout. |
| PWX-50538 | You can now set the cluster domain at the cluster level or node level. Use the portworx.io/misc-args annotation for cluster-level configuration, or spec.nodes[].clusterDomain for node-level configuration (recommended for TNA). Node-level configurations override cluster-level settings. |
| PWX-49373 | Added RBAC get, list, and watch permissions for VolumeAttachment resources to the CSI node plugin service account. This change lets the PX-CSI Lister service monitor VolumeAttachment resources using informers for active clusters. |
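The node-level cluster domain configuration from PWX-50538 above might be sketched as follows. The spec.nodes[].clusterDomain field comes from the release note; the node selector shape, node name, and domain value are hypothetical.

```yaml
# Sketch: node-level cluster domain (recommended for TNA). Node-level values
# override any cluster-level setting made via the portworx.io/misc-args annotation.
spec:
  nodes:
    - selector:
        nodeName: node-1        # hypothetical node name
      clusterDomain: domain-a   # hypothetical cluster domain value
```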
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-50225 | Issue: When internal KVDB TLS is enabled, the Operator could misclassify an in-progress installation as not a fresh installation if the cluster temporarily entered a Degraded state during installation. As a result, it enforced KVDB TLS migration validation too early and repeatedly emitted this warning: Internal kvdb tls is enabled, but kvdb tls migration is not approved. User Impact: The installation could be blocked, leaving clusterCondition in the InProgress state and the StorageCluster in the Degraded state, which caused reconciliation failures. Resolution: The Operator now treats an Install condition of InProgress as a fresh installation, even if the cluster is temporarily in a Degraded state during installation. Affected Version: 25.5.2 and earlier | Minor |
| PWX-50025 | Issue: During a Portworx upgrade, the Operator could fail if it encountered volumes not managed by Portworx. In environments running KubeVirt virtual machines (VMs) with a mix of Portworx-managed and non-Portworx volumes, the Operator incorrectly treated missing Portworx volume metadata as a fatal error, causing the Portworx upgrade to pause or fail. User Impact: Portworx upgrades could pause or fail even though the non-Portworx volumes were unrelated. Resolution: The Operator now skips non-Portworx volumes safely and logs a debug message instead of failing the upgrade. Affected Version: 25.5.2 and earlier | Major |
| PWX-49846 | Issue: After upgrading to PX CSI 25.8.0 with containerd, volumes could not be unmounted. User Impact: Applications using PX-CSI volumes could not be stopped or migrated after the PX-CSI upgrade. Resolution: Fixed the unmount issue with containerd. Affected Version: 25.5.2 and earlier | Major |
| PWX-49745 | Issue: StorageCluster uninstall could get stuck if Prometheus Operator CRDs (ServiceMonitor, PrometheusRule) were not installed. User Impact: Uninstall could not complete in clusters without Prometheus Operator CRDs. Resolution: Uninstall now succeeds even when Prometheus Operator CRDs are not present. Affected Version: 25.5.2 and earlier | Major |
| PWX-49579 | Issue: In PX-CSI, PVCs could remain in the Pending state when many PVCs were created at once. User Impact: Applications could not start because PVCs were stuck in the Pending state. Resolution: Reduced the number of PX-CSI provisioner worker threads to avoid bottlenecks. Affected versions: 25.5.2 and earlier | Major |
| PWX-47448 | Issue: imagePullPolicy in the StorageCluster spec did not apply to the px-plugin deployment. User Impact: Users could not control the px-plugin image pull policy. Resolution: The StorageCluster imagePullPolicy now applies to the px-plugin deployment as well. Affected Version: 25.5.2 and earlier | Minor |
| PWX-47367 | Issue: Autopilot custom resources (AutopilotRule, AutopilotRuleObject, and ActionApproval) were not removed during uninstall. User Impact: Stale Autopilot CRs remained after Portworx uninstallation. Resolution: Fixed Autopilot CR cleanup during uninstall. Affected Version: 25.5.2 and earlier | Minor |
| PWX-42749 | Issue: Invalid volume IDs were ignored during diagnostics collection, making it difficult to understand why diagnostics stayed Pending. User Impact: Users could not determine why diagnostics collection failed. Resolution: Diagnostics now reports missing volume IDs clearly and stops with an error so you can update the PortworxDiag spec and resume automatically. Affected Version: 25.5.2 and earlier | Minor |
| PWX-42750 | Issue: When Telemetry services were non-functional, the system did not provide notifications or error logs. User Impact: Users were not notified when diagnostics failed to upload due to Telemetry issues. Resolution: The system now logs and reports Telemetry failures appropriately. Affected Version: 25.5.2 and earlier | Minor |
| PWX-50406 | Issue: If the PortworxDiags CR was unhealthy, diagnostics collection stayed in the Pending state without error details. User Impact: Users could not determine why diagnostics was stuck in Pending. Resolution: The PortworxDiags Pending state now includes the reason for the delay. Affected Version: 25.5.2 and earlier | Minor |
| PWX-50372 | Issue: ComponentK8sConfig did not apply annotations to the portworx-service and portworx-kvdb-service services. When users specified annotations for these Services by using ComponentK8sConfig, the CR entered the VALIDATION_FAILED state. User Impact: Users could not configure annotations for the portworx-service and portworx-kvdb-service services by using ComponentK8sConfig. Migration from StorageCluster to ComponentK8sConfig that uses the portworx.io/migrate-configs: "true" annotation did not migrate annotations for these services. Resolution: ComponentK8sConfig now supports annotations for the portworx-service service and the portworx-kvdb-service service. The migration now correctly migrates annotations for these Services. For more information, see Configure resource limits, placements, tolerations, nodeAffinity, labels, and annotations for Portworx components. Affected Version: 25.5.2 and earlier | Major |
Known issues (Errata)
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-49374 | New installations with version 25.6.0 create StorageClass objects without the managed-by: operator label. This causes the portworx.io/disable-storage-class annotation to not work as expected. If you create a custom StorageClass with the same name as an operator-created StorageClass and set portworx.io/disable-storage-class: "true" in the StorageCluster, the custom StorageClass is deleted. Additionally, StorageClass objects remain orphaned after uninstallation. User Impact: The portworx.io/disable-storage-class annotation does not function correctly, and manual cleanup is required during uninstallation. Workaround: After installation, manually add the managed-by: operator label to operator-created StorageClass objects, or manually delete them during uninstallation. Affected Version: 25.6.0 | Major |
| PWX-51697 | When you add custom annotations to operator-created StorageClass objects (except KubeVirt StorageClass objects: px-rwx-block-kubevirt, px-rwx-file-kubevirt, px-cdi-scratch), the operator removes them during reconciliation. User Impact: Integrations that rely on these annotations (for example, backup tools or GitOps systems) might not detect the intended StorageClass objects. Workaround: Create custom StorageClass objects with different names and add the required annotations to them instead of modifying operator-created StorageClass objects. Affected Version: 25.6.0 | Major |
| PWX-49456 | If you create a VolumeSnapshotClass named px-csi-snapclass, the operator deletes and replaces it. Custom annotations, labels, parameters, or deletion policy settings on operator-created VolumeSnapshotClass objects are not preserved. User Impact: Custom VolumeSnapshotClass configurations are lost, which might affect snapshot workflows or integrations. Workaround: Use a different name for custom VolumeSnapshotClass objects (for example, custom-px-snapclass). Avoid modifying operator-created VolumeSnapshotClass objects. Affected Version: 25.6.0 | Major |
25.5.2
February 02, 2026
This release also addresses security vulnerabilities.
Improvements
| Improvement Number | Improvement Description |
|---|---|
| PWX-50404 | You can now configure the Pure1 metrics collector separately from other telemetry components by using new StorageCluster fields; see Customize metrics collector for the available fields. Note: When telemetry is enabled, the Operator automatically adds metricsCollector.enabled: true to your StorageCluster spec. If you use GitOps, update your Git manifest to include this field so that your Git repository matches the actual cluster state. |
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-50404 | Issue: The metrics collector logged excessive warning messages for Prometheus metrics with more than 32 labels, causing increased log volume in centralized logging systems. User Impact: Increased log volume from the metrics collector consumed additional storage in centralized logging systems. Resolution: The issue is fixed in an updated metrics collector image, which the Operator deploys through the StorageCluster. For more information, see Customize metrics collector. Affected Version: 25.5.1 and earlier | Minor |
25.5.1
January 06, 2026
New Features
- Taint-based scheduling support: This Portworx Operator release enables Stork support for taint-based scheduling for workloads. When taint-based scheduling is enabled, the Operator applies taints to Portworx storage and storageless nodes. Stork automatically adds matching tolerations to Portworx system pods and to applications that use Portworx volumes. This blocks workloads that lack matching tolerations from being scheduled on Portworx storage nodes. For more information, see Taint-based scheduling with Stork.
Note: This feature requires Stork version 25.6.0 or later.
Improvements
| Improvement Number | Improvement Description |
|---|---|
| PWX-48477 | The Portworx Operator now uses the Portworx Volume API to identify volume types instead of relying on the StorageClass API. Previously, when a StorageClass was deleted after a PersistentVolumeClaim (PVC) was created, the Operator couldn't determine the volume type, which led to failures during operations, including KubeVirt VM live migration. The Portworx Operator now queries the volume object directly through the Portworx API, enabling consistent volume type detection even if the StorageClass is unavailable. This update improves the reliability of VM live migration during Portworx upgrades for both Pure and SharedV4 volumes. |
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-48822 | In air-gapped environments with KVDB TLS enabled, Portworx fails to pull the cert-manager image from a private registry that requires authentication. This occurs because the Portworx Operator does not attach the regcred secret to the cert-manager pods. User Impact: The cert-manager pods enter an ImagePullBackOff state due to missing registry credentials, and installation fails. Resolution: The Portworx Operator now ensures that all cert-manager components are deployed with the correct imagePullSecrets, such as regcred, when you use authenticated custom registries. Affected Version: 25.5.0 and earlier | Minor |
| PWX-49374 | When you apply the portworx.io/disable-storage-class: "true" annotation, the Operator can delete an existing StorageClass if its name matches a default StorageClass introduced by the Operator. This can occur even if the StorageClass was not created by the Operator. User impact: StorageClasses not created by the Operator can be deleted after an upgrade or when the annotation is enabled. Workloads that depend on those StorageClasses might be disrupted. Resolution: The Operator now manages StorageClasses using explicit managed-by ownership labels. It deletes only those StorageClasses it created and labeled as Operator-managed. StorageClasses not created by the Operator are not modified or deleted, even if their names match default StorageClasses. Affected version: 25.5.0 | Minor |
| PWX-49456 | The Operator could overwrite existing VolumeSnapshotClass resources, removing your custom parameters or annotations during a restart or upgrade. User impact: If you customized the snapshot configuration, your changes might be lost during an Operator restart or upgrade, potentially resulting in snapshot failures. Resolution: The Operator now preserves existing VolumeSnapshotClass resources. If a VolumeSnapshotClass with the same name already exists in the cluster, the Operator no longer modifies or overwrites it. Affected version: 25.5.0 | Minor |
25.5.0
November 19, 2025
New Features
- Support for external Prometheus monitoring: Portworx now supports an external Prometheus instance for monitoring. You can configure this by disabling PX Prometheus and enabling metrics export in the StorageCluster:

  ```yaml
  spec:
    monitoring:
      prometheus:
        enabled: false
        exportMetrics: true
  ```

  For more information, see Monitor Clusters on Kubernetes.
  If you are using Autopilot, after you configure an external Prometheus instance, you must specify the Prometheus endpoint in the Autopilot configuration. For more information, see Autopilot.
Improvements
| Improvement Number | Improvement Description |
|---|---|
| PWX-47324 | You can now use the spec.deleteStrategy.ignoreVolumes field in the StorageCluster spec to allow uninstall even when Portworx volumes are present. This is required in scenarios where PersistentVolumeClaims (PVCs) referencing Portworx storage classes exist, such as those created by KubeVirt virtual machines. When set to true, this field allows uninstall operations using the UninstallAndWipe or UninstallAndDelete strategy to proceed. If not set, the uninstall is blocked until all Portworx volumes are removed. Note: The default value is false. |
| PWX-25352 | The StorageCluster spec now supports a delete strategy type: UninstallAndDelete on vSphere, AWS, GKE, and Azure platforms. This option removes all Portworx components from the system, wipes storage devices, deletes Portworx metadata from KVDB, and removes the associated cloud drives. For more information, see Delete/Uninstall strategy. |
| PWX-47099 | The Portworx Operator now supports configuring the cert-manager, cert-manager-cainjector, and cert-manager-webhook deployments through the ComponentK8sConfig custom resource. You can now set labels, annotations, resource requests and limits, and placement specifications (such as tolerations) for these workloads using the ComponentK8sConfig API. This enhancement enables consistent and centralized configuration of Portworx-managed cert-manager components deployed for TLS enabled KVDB. |
| PWX-42597 | If call-home is enabled on the cluster, telemetry is now automatically enabled. If telemetry does not become healthy within 30 minutes, the Operator disables it. Telemetry can still be toggled manually using spec.monitoring.telemetry.enabled field in the StorageCluster. |
| PWX-47543 | The Operator now suppresses repeated gRPC connection errors when the Portworx service is down on a node, reducing log noise and improving readability during node failure scenarios. |
| PWX-35869 | On OpenShift clusters, the Operator now creates the KubeVirt StorageClasses (px-rwx-block-kubevirt, px-rwx-file-kubevirt, and px-cdi-scratch) by default when the HyperConverged custom resource is detected. Creation of these StorageClasses can be controlled using the spec.csi.kubeVirtStorageClasses section in the StorageCluster. |
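The delete strategy described in PWX-47324 and PWX-25352 above might be sketched as the following StorageCluster fragment. The spec.deleteStrategy.type and spec.deleteStrategy.ignoreVolumes fields come from the release notes; treat the exact values as examples and verify them against the Delete/Uninstall strategy documentation.

```yaml
# Sketch: remove Portworx components, wipe devices, and delete cloud drives
# on uninstall, proceeding even if Portworx volumes (for example, KubeVirt PVCs) exist.
spec:
  deleteStrategy:
    type: UninstallAndDelete   # supported on vSphere, AWS, GKE, and Azure
    ignoreVolumes: true        # default is false; when false, uninstall is blocked while volumes exist
```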
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-47053 | Callhome telemetry failed in dual-stack clusters because the telemetry service did not listen on IPv6 interfaces. When the environment variable PX_PREFER_IPV6_NETWORK_IP=true was set, the px-telemetry-phonehome service attempted to use IPv6, but the underlying Envoy configuration was bound only to IPv4. User Impact: Telemetry data was not reported in dual-stack clusters with IPv6 preference. Resolution: The Operator now correctly configures telemetry services to bind to both IPv4 and IPv6, ensuring that telemetry functions in dual-stack environments. Affected Version: 25.4.0 and earlier | Minor |
| PWX-46703 | The Portworx Operator was incorrectly updating multiple components on every reconcile cycle, even when there were no changes to their specifications. Affected components included px-telemetry-registration, px-telemetry-metrics-collector, px-prometheus, portworx-proxy, and others. This behavior caused unnecessary updates to deployments and other resources. User Impact: Unnecessary API calls were made to kube-apiserver. Resolution: The Operator now correctly detects and skips updates when there are no spec differences. Default values are explicitly set, and label comparison logic has been fixed to prevent unintended updates. Affected Version: 25.4.0 and earlier | Minor |
| PWX-47985 | If a StorageCluster resource included a toleration with operator: Exists, the upgrade to PX-CSI version 25.8.0 failed. These tolerations matched all taints, which interfered with CSI migration logic during upgrade. User Impact: PX upgrades failed in clusters using broad toleration rules. Resolution: The Operator now correctly handles tolerations with operator: Exists and no longer fails during upgrades. Affected Version: 25.4.0 | Minor |
| PWX-46704 | The Operator deleted all KVDB pods simultaneously when updating the resource or placementSpec in the ComponentK8sConfig or the STC. This caused the Operator to recreate all KVDB pods during the next reconciliation, which could result in quorum loss. User Impact: The cluster might temporarily lose KVDB quorum during updates, potentially affecting cluster availability and operations. Resolution: The Operator now respects the PodDisruptionBudget for KVDB pods and deletes them safely, ensuring quorum is maintained during updates. Affected Version: 25.4.0 | Minor |
| PWX-46394 | During upgrades with Smart upgrade enabled, the Operator did not prioritize nodes marked with the custom Unschedulable annotation for ongoing KubeVirt VM migrations. As a result, the upgrade logic frequently selected new nodes in each cycle, causing redundant VM evictions and increased upgrade time. User Impact: Resulted in prolonged upgrade durations and repeated KubeVirt VM migrations. Resolution: The Operator now treats nodes with the Unschedulable annotation as unavailable and prioritizes them for upgrade until completion. This ensures upgrade continuity and avoids redundant VM evictions. Affected Version: 25.4.0 and earlier | Minor |
| PWX-48279 | On clusters with SELinux set to enforcing mode, the px-pure-csi-node pod crashed due to denied access when attempting to connect to the CSI socket. The node-driver-registrar and liveness-probe containers were blocked by SELinux policies from accessing /csi/csi.sock, resulting in repeated connection failures and pod crash loops. User Impact: The px-pure-csi-node pod failed to start, preventing CSI node registration and storage provisioning when SELinux was in enforcing mode. Resolution: The Operator now configures the node-driver-registrar and liveness-probe containers with the required security context to allow socket access under SELinux enforcing mode. Affected Version: 25.4.0 | Minor |
| PWX-45817 | On some clusters, external webhooks such as those used by Mirantis Kubernetes Engine (MKE) injected configuration into KVDB and Portworx pods, including tolerations, affinity, or placement rules. If these injected settings were not explicitly defined in the ComponentK8sConfig custom resource (CR) or the StorageCluster spec, the Portworx Operator removed them, causing pod restarts. User Impact: Affected pods restarted continuously due to missing tolerations or placement rules. Resolution: The Operator now preserves webhook-injected configuration by default. This fix applies to both StorageCluster and ComponentK8sConfig workflows. Affected Version: 25.3.0, 25.3.1, and 25.4.0 | Minor |
Known issues (Errata)
| Issue Number | Issue Description |
|---|---|
| PWX-47502 | Kubernetes upgrades on AKS might fail when the data drives use Premium_LRS disks and Smart Upgrade isn't enabled, especially if maxUnavailable is set to 1. User Impact: If Smart Upgrade isn't enabled and a node is down, the upgrade process will halt due to the maxUnavailable=1 setting. Even if you increase maxUnavailable to 2, you might still experience a 30-minute timeout due to slow I/O performance from the underlying disk type. Workaround: Enable Smart Upgrade by following these instructions, and adjust maxUnavailable to unblock the upgrade. However, if disk performance issues persist, the timeout might still occur. Affected Version: 25.5.0 |
| PWX-48822 | In air-gapped environments with KVDB TLS enabled, the cert-manager image pull fails when using an authenticated custom registry. This occurs because the Portworx Operator does not attach the regcred secret to the cert-manager pods. User Impact: The cert-manager pods enter an ImagePullBackOff state due to missing registry credentials, and installation fails. Affected Version: 25.2.1 or later |
25.4.0
October 15, 2025
New features
- Support for PX-CSI 25.8.0: Portworx Operator adds support for the redesigned PX-CSI (version 25.8.0). For PX-CSI, this release introduces new components, such as the CSI Controller Plugin and CSI Node Plugin, removes dependencies on KVDB, PX API, CSI, PX Cluster, and PX Plugin pods, and enables in-place migration from earlier PX-CSI versions. The priority class specified in the StorageCluster is also applied to all PX-CSI pods.
Improvements
| Improvement Number | Improvement Description |
|---|---|
| PWX-37494 | Added support to override the seLinuxMount setting in the CSI driver, via the spec.csi.seLinuxMount field in the StorageCluster specification. This field defaults to true but can be set to false in environments where SELinux relabeling is not required. |
| PWX-38408 | The Portworx Operator now supports updating the image for the portworx-proxy DaemonSet based on the image specified in the px-versions ConfigMap. Previously, this image was hard-coded to registry.k8s.io/pause:<release-version>. You can now configure a different image. The proxy DaemonSet reflects updates when AutoUpdateComponents is set to Once or Always. |
| PWX-46659 | The Operator version is now reported in the StorageCluster.status.OperatorVersion field. This change improves visibility into the deployed Operator version. |
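The `seLinuxMount` override from PWX-37494 is set in the StorageCluster spec. A minimal sketch, assuming the standard `apiVersion` and a hypothetical cluster name, for an environment that does not need SELinux relabeling:

```yaml
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster        # hypothetical name
  namespace: portworx
spec:
  csi:
    enabled: true
    seLinuxMount: false   # default is true; set false where SELinux relabeling is not required
```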
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-46825 | The Portworx Operator incorrectly added the default annotation to a VolumeSnapshotClass even when a default VolumeSnapshotClass already existed. User Impact: This might result in multiple VolumeSnapshotClass objects marked as default.Resolution: The Operator now checks whether a default VolumeSnapshotClass already exists before applying the default annotation. | Minor |
25.3.1
September 6, 2025
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-46691 | On OpenShift Container Platform (OCP) 4.15 and earlier, if Portworx-specific ServiceAccount objects have no annotations, the Operator updates the objects during every reconciliation loop. User impact: Service account updates trigger the regeneration of associated kubernetes.io/dockercfg and kubernetes.io/service-account-token secrets, causing excessive creation of secret objects and unnecessary API traffic.Resolution: The Operator no longer performs redundant updates on ServiceAccount objects without annotations, preventing unnecessary regeneration of secret objects and reducing API load. Affected versions: 25.3.0 | Major |
25.3.0
September 03, 2025
- When you upgrade to Operator version 25.3.0, the `px-plugin` and `px-plugin-proxy` pods restart.
- If you are running OpenShift version 4.15 or earlier, do not upgrade to Operator version 25.3.0. This version causes excessive `Secret` object creation due to repeated `ServiceAccount` updates, which significantly increases API server load. For more information about the workaround, see here.
New features
- ComponentK8sConfig: The `ComponentK8sConfig` custom resource allows configuration of resources, labels, annotations, tolerations, and placement rules for all Portworx components. Configurations previously defined in the `StorageCluster` should now be migrated to the `ComponentK8sConfig` custom resource. For more information, see Configure resource limits, placements, tolerations, nodeAffinity, labels, and annotations for Portworx components.
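As an illustration of the `ComponentK8sConfig` CR described above, the following sketch applies a toleration to the Storage component. The `apiVersion`, metadata, and exact nesting under `spec` are assumptions here; consult the linked reference for the authoritative schema:

```yaml
apiVersion: core.libopenstorage.org/v1   # assumed group/version
kind: ComponentK8sConfig
metadata:
  name: px-component-config              # hypothetical name
  namespace: portworx
spec:
  components:                            # assumed top-level key
    - componentNames:
        - Storage
      workloadConfigs:
        - workloadNames:
            - storage
          placement:
            tolerations:
              - key: dedicated           # hypothetical taint key
                operator: Exists
                effect: NoSchedule
```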
Improvements
| Improvement Number | Improvement Description |
|---|---|
| PWX-42536 | Starting with Kubernetes version 1.31, in-tree storage drivers have been deprecated, and the Portworx CSI driver must be used. The Portworx Operator now automatically sets the CSI configuration to enabled if the CSI spec is left empty or explicitly disabled. If CSI is already enabled, no changes are made. |
| PWX-42429 | The Portworx Operator now supports IPv6 clusters in the OpenShift dynamic console plugin. |
| PWX-44837 | The Portworx Operator now creates a default VolumeSnapshotClass named px-csi-snapclass. You can configure this behavior using the spec.csi.volumeSnapshotClass field in the StorageCluster custom resource. |
| PWX-44472 | The Portworx Operator now reports a new state, UpdatePaused, when an upgrade is paused. This state indicates that an update is not in progress. StorageCluster events and logs provide additional context about the paused upgrade. |
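For PWX-44837, the object the Operator creates looks roughly like the following VolumeSnapshotClass. The driver name and default-class annotation are the usual Portworx CSI values, but treat the details as an approximation rather than the exact generated manifest:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: px-csi-snapclass
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"  # marked default unless another default exists
driver: pxd.portworx.com      # Portworx CSI driver name
deletionPolicy: Delete        # assumed deletion policy
```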
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-45461 | The Portworx Operator was applying outdated CSI CustomResourceDefinitions (CRDs) that were missing the sourceVolumeMode field in VolumeSnapshotContent, resulting in compatibility issues on standard Kubernetes clusters.User Impact: On vanilla Kubernetes version 1.25 or later, attempts to create VolumeSnapshots failed due to the missing spec.sourceVolumeMode field. Snapshot controller logs reported warnings such as unknown field "spec.sourceVolumeMode". Managed Kubernetes distributions like OpenShift were unaffected, as they typically include the correct CRDs by default.Resolution: The Operator now applies CSI CRDs version 8.2.0, which includes the sourceVolumeMode field, ensuring compatibility with Kubernetes 1.25 and later. Affected Versions: 25.2.2 or earlier | Minor |
| PWX-45246 | An outdated Prometheus CustomResourceDefinition (CRD) was previously downloaded by the Operator. This CRD lacked required fields, which caused schema validation errors during Prometheus Operator reconciliation. User Impact: Reconciliation failures occurred due to missing fields in the CRD. Resolution: The Operator now references the latest Prometheus CRD at the deployment URL, ensuring compatibility and preventing schema validation errors. Affected Versions: 25.2.0 or earlier | Minor |
| PWX-45156 | Live migration was previously skipped only for volumes with backend=pure_block. The Operator continued to trigger live migration for other volume types, such as FADA (pure_fa_file) and FBDA, even when it was not appropriate.User Impact: Unnecessary migrations during upgrades could lead to virtual machine (VM) evictions and movement to other PX nodes. Resolution: The Operator now skips live migration for volumes using FADA and FBDA backends, reducing disruption and maintaining application availability during upgrades. Affected Versions: 25.2.2 or earlier | Minor |
| PWX-45048 | In clusters with KubeVirt virtual machines (VMs), the Portworx Operator might not remove the custom "unschedulable" annotation from nodes when it is no longer needed. Additionally, paused VMs prevented upgrades from proceeding, as they cannot be evicted.Resolution: The Operator now ignores paused KubeVirt VMs during upgrades and removes the custom "unschedulable" annotation when it is no longer required. This behavior improves upgrade reliability in KubeVirt environments. Affected Versions: 25.2.2 or earlier | Minor |
| PWX-44974 | In large clusters (for example, 250+ StorageNodes), the Operator’s API calls to the Kubernetes API server increased linearly, resulting in high load on the API server. Resolution: The Operator is improved with better caching, which significantly reduces API calls to the Kubernetes API server. Affected versions: 25.2.1 and 25.2.2 | Minor |
| PWX-39097 | When csi.enabled was set to false in the StorageCluster (STC) spec, the installSnapshotController field remained enabled, creating inconsistencies in the CSI configuration.User Impact: This mismatch could lead to confusion or result in the snapshot controller being deployed unnecessarily. Resolution: The Operator now automatically resets installSnapshotController when CSI is disabled, maintaining consistent configuration behavior. Affected Versions: 25.2.0 or earlier | Minor |
Known issues (Errata)
- PWX-46691: If Portworx-specific ServiceAccount objects do not include any annotations, the Operator updates these objects during each reconciliation loop. On OpenShift Container Platform (OCP), each ServiceAccount update triggers the regeneration of associated Secret objects, causing excessive Secret creation and unnecessary API traffic. This affects OCP versions 4.15 and earlier.
  Workaround: Add at least one annotation to each of the following Portworx-specific ServiceAccount objects:
  - `autopilot`
  - `px-csi`
  - `portworx-proxy`
  - `px-telemetry`
  - `stork`
  - `stork-scheduler`
For example:
  ```yaml
  kind: ServiceAccount
  metadata:
    annotations:
      portworx.io/reconcile: "ignore"
  ```
  Note: If you've already upgraded to Operator version 25.3.0 and are affected by this issue, you can either downgrade to a previous Operator version or apply the workaround described above.
To downgrade the Operator version, follow these steps to uninstall Operator version 25.3.0 and install 25.2.2:

1. In the OpenShift web console, go to Operators > Installed Operators.
2. Verify that the installed version of Portworx Operator is 25.3.0.
3. Select Actions > Uninstall Operator.
4. In the confirmation dialog, clear the Delete all operand instances for this operator check box. This ensures that Portworx continues to run after the Operator is uninstalled.
5. Select Uninstall again to confirm.
6. After the uninstallation completes, return to OperatorHub.
7. Search for Portworx Operator, and then install version 25.2.2. Ensure that you set the Update approval to Manual during installation.
- PWX-45817: On some clusters, external webhooks, such as those used by Mirantis Kubernetes Engine (MKE), might inject additional configuration into KVDB and Portworx pods, including tolerations, affinity rules, or placement constraints.
  If you use the `ComponentK8sConfig` custom resource (CR) to manage tolerations, and the injected tolerations are not explicitly defined in the CR, the Portworx Operator removes them. As a result, the affected pods restart continuously.
  This issue is not limited to MKE and can affect any platform where an external webhook injects configuration into workloads. It can occur when using either the `StorageCluster` or the `ComponentK8sConfig`.
  Workaround: Follow these steps:
  1. Ensure that `jq` or `yq` is installed on your machine.
  2. Get the kubeconfig of your cluster.
  3. Get the taints on the cluster nodes:
     - For `jq`, run:
       ```
       kubectl --kubeconfig=<path/to/kubeconfig> get nodes -ojson | jq -r . | jq -r '.items[].spec.taints'
       ```
     - For `yq`, run:
       ```
       kubectl --kubeconfig=<path/to/kubeconfig> get nodes -oyaml | yq -r . | yq -r '.items[].spec.taints'
       ```
     Example output:
     ```
     null
     null
     null
     [{"effect": "NoSchedule", "key": "com.docker.ucp.manager"}]
     null
     null
     ```
  4. Apply the tolerations to the `ComponentK8sConfig` CR based on the command output in the previous step. For example:
     ```yaml
     - componentNames:
         - KVDB
         - Storage
         - Portworx API
       workloadConfigs:
         - placement:
             tolerations:
               - key: com.docker.ucp.manager
                 operator: Exists
           workloadNames:
             - storage
             - portworx-kvdb
             - portworx-api
     ```
- PWX-45960: When using the workload identity feature, a restart of KVDB pods can cause the `eks-pod-identity` webhook to inject credentials into the KVDB pods, because the same service account is used for the Portworx API, Portworx, and KVDB pods.
  Note: When credentials are removed from the `StorageCluster`, the Operator does not remove them from the KVDB pods if they have already been added, so you must manually restart the KVDB pods to remove these credentials.
25.2.2
July 8, 2025
Improvements
| Improvement Number | Improvement Description |
|---|---|
| PWX-40116 | Portworx Operator now emits events on the StorageCluster object during Portworx and Kubernetes smart upgrades if a node is not selected for upgrade. Each event includes details explaining why the node was not selected for the upgrade. |
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-45078 | The Operator configured Prometheus to connect to KVDB metrics over HTTP, even when TLS was enabled. This caused the connections to fail. User Impact: In clusters with KVDB TLS enabled, Prometheus could not scrape internal KVDB metrics. As a result, monitoring dashboards were incomplete and TLS handshake errors appeared in logs. Resolution: The Operator now configures Prometheus to use HTTPS with TLS settings when KVDB TLS is enabled. This ensures that internal KVDB metrics are collected successfully. Affected Versions: Versions 25.2.1 | Minor |