
Portworx Enterprise Release Notes

3.3.1.3

August 25, 2025

This version addresses security vulnerabilities.

Note: The security fixes are also available in Portworx Enterprise versions 3.2.4 and 3.1.9.

3.3.1.2

August 08, 2025

To install or upgrade Portworx Enterprise to version 3.3.1.2, ensure that you are running one of the supported kernels and all system requirements are met.

Fixes

PWX-45578: The deprecated max_thin_pool_size option, previously used to set the maximum size to which a storage pool can grow via runc, was replaced by the more flexible max_pool_size_tb. The max_thin_pool_size option is reintroduced for backward compatibility but has limited functionality and is not recommended for use.

User Impact: The max_thin_pool_size setting is installation-only and not runtime-configurable, limiting flexibility for changing storage needs. This may lead to technical debt or constrained operations in dynamic environments.

Resolution: To support dynamic updates and improve lifecycle management, Portworx recommends using the runtime-configurable max_pool_size_tb option instead of the deprecated max_thin_pool_size option. Configure max_pool_size_tb using one of these methods (a StorageCluster snippet illustrating the first method follows this entry):
  • Apply directly through the StorageCluster (STC) specification using the spec.runtimeOptions.max_pool_size_tb field.
  • Use the portworx.io/misc-args: --rt_opts "max_pool_size_tb=<value>" annotation in the STC for runtime configuration.
  • Update the runtime option across the cluster using the pxctl cluster options update --runtime-options "max_pool_size_tb=<value>" command.
  • Apply the setting to a specific pool at the time of its creation using the pxctl sv drive add -s <spec> --newpool --max-pool-size-tb <value> command.
    Note: This method overrides any cluster-level settings, but only for the pool being created with the command.
For more information on max_pool_size_tb, see the max_pool_size_tb section.

Components: Storage
Affected Versions: 3.3.0 and later
Severity: Minor
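
The following StorageCluster snippet is a minimal sketch of the first method; the cluster name, namespace, and 32 TB value are illustrative and should be adapted to your environment:

apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster          # illustrative name
  namespace: portworx       # illustrative namespace
spec:
  runtimeOptions:
    max_pool_size_tb: "32"  # maximum pool size in TB; example value
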
PWX-45694: When an Asynchronous DR migration schedule and a Portworx Backup schedule run concurrently on the destination cluster, some restore tasks may fail if the volume being restored is also being backed up, and the Asynchronous DR migration may become stuck in a pending state.

User Impact: Asynchronous DR migration may fail to complete and become stuck.

Resolution: Without this fix, the workaround is to disable the Portworx Backup Schedule on the destination cluster. With the fix applied, the issue can be avoided without requiring the workaround.

Components: Cloudsnap
Affected Versions: All versions
Severity: Minor
PWX-45738: In clusters configured with IPv6-only or dual-stack networking, application pods using SharedV4 service volumes might be deleted unnecessarily during failover, such as when an NFS server node goes down.

User Impact: For KubeVirt VMs, this issue can cause unexpected VM shutdowns during scheduled node maintenance operations.

Resolution: Portworx prevents unnecessary deletion of application pods during SharedV4 service failover.

Components: Shared Volumes
Affected Versions: 3.2.2 and later
Severity: Minor

3.3.1.1

July 15, 2025

To install or upgrade Portworx Enterprise to version 3.3.1.1, ensure that you are running one of the supported kernels and all system requirements are met.

Fixes

PWX-44778: The attachment count for FlashArray Direct Access RWX volumes (raw block) and Portworx RWX volumes (raw block) on a node might not be properly decremented when a volume is detached from a remote node during KubeVirt VM live migrations.

User Impact: Nodes can incorrectly reach their attachment limit, blocking new volume attachments and leaving pods stuck in the "ContainerCreating" state.

Resolution: Detachments on remote nodes are now properly accounted for, ensuring correct attachment counts and allowing new pods to schedule as expected.

Components: Storage
Affected Versions: 3.2.3 and later
Severity: Minor

3.3.1

July 8, 2025

To install or upgrade Portworx Enterprise to version 3.3.1, ensure that you are running one of the supported kernels and all system requirements are met.

New Features

  • TLS Encryption for Internal KVDB Communication
    Portworx now supports enabling Transport Layer Security (TLS) for internal KVDB communication on the following platforms:
    • Amazon Elastic Kubernetes Service (EKS)
    • Azure Kubernetes Service (AKS)
    • Google Kubernetes Engine (GKE)
    • IBM Cloud Kubernetes Service (IKS)
    • VMware Tanzu Kubernetes Grid Integration (TKGI)
    • Rancher Kubernetes Engine 2 (RKE2)
    • Oracle Container Engine for Kubernetes (OKE)
    • Mirantis Kubernetes Engine (MKE)
    • Google Anthos
    • OpenShift Container Platform (OCP)
    • Kubernetes Operations (KOPS)
    • VMware Tanzu Kubernetes Grid Service (TKGS)
    • Red Hat OpenShift Service on AWS (ROSA)
    • Azure Red Hat OpenShift (ARO)
    This feature secures communication between internal KVDB and all Portworx nodes using TLS certificates managed by cert-manager. For more information, see Enable TLS for Internal KVDB.
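
Because the certificates are managed by cert-manager, a working cert-manager installation with an issuer must be available in the cluster. The following is a minimal, illustrative sketch of a self-signed ClusterIssuer that could back such certificates; the issuer name is an assumption, and the actual StorageCluster settings for enabling KVDB TLS are described in Enable TLS for Internal KVDB:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: px-kvdb-selfsigned-issuer   # illustrative name
spec:
  selfSigned: {}                    # self-signed issuer; use your organization's issuer in production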

Fixes

PWX-45055: During snapshot creation, a race condition between internal Portworx processes might result in duplicate snapshot entries for the same volume name. In such cases, an incorrect UUID from a duplicate or incomplete entry might be stored in the PersistentVolume (PV) label referencing the snapshot. This can lead the CSI driver to interact with an invalid snapshot object.

User Impact: Snapshot creation or volume clone operations might intermittently fail. Mount operations for volumes derived from snapshots might fail because of incorrect UUID references, which can disrupt backup pipelines, volume provisioning, or automated workflows that rely on snapshot-based clones.

Resolution: Portworx Enterprise now stores and returns only the correct UUID during snapshot creation, eliminating duplicate snapshot metadata that previously disrupted CSI operations.

Components: Volume Management
Affected Versions: 3.3.0
Severity: Minor
PWX-43085: For ReadWriteMany (RWX) raw block devices (shared raw block devices), discard operations could not be disabled at the device level. In OCP, some hypervisor versions had issues working correctly with 4Kn devices (the block size of PXD block devices), resulting in unstable VMs that could be paused because of incorrect discard handling at the hypervisor.

User Impact: When these volumes are used by hypervisors (for example, KubeVirt VMs), discard operations can lead to stability or performance issues, especially on hypervisors that handle discards incorrectly with 4Kn block devices (PXD devices are always 4Kn).

Resolution: Portworx now supports disabling discard operations at the block device level by setting the nodiscard option on RWX raw block volumes (an illustrative StorageClass sketch follows this entry).

Components: Storage
Affected Versions: 3.3.0
Severity: Minor
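
As an illustration only, a StorageClass for such volumes might set the nodiscard parameter as sketched below; the class name and other parameters are assumptions, and the exact parameters supported for RWX raw block volumes are described in the Portworx documentation:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-rwx-block-nodiscard     # illustrative name
provisioner: pxd.portworx.com
parameters:
  repl: "2"                        # example replication factor
  nodiscard: "true"                # disable discard operations at the block device level
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
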
PWX-44523: Metering was not activated when Portworx was installed from the IBM Marketplace with local drives. A recent change initialized the provider with a default name instead of the expected IBM provider name, which prevented the correct metering agent from being selected.

User Impact: Clusters deployed from the IBM Marketplace with local drives showed metering as unhealthy and did not report usage, potentially impacting license tracking and billing.

Resolution: Portworx now initializes the provider correctly when installed from the IBM Marketplace with local drives. Metering activates as expected and reports usage data at the standard one-hour frequency.

Components: IX-Licensing & Metering
Affected Versions: 3.3.1
Severity: Minor

3.3.0.1

June 25, 2025

To install or upgrade Portworx Enterprise to version 3.3.0.1, ensure that you are running one of the supported kernels and all system requirements are met.

Fixes

PWX-44992: When using Portworx Enterprise 3.3.0 with a fuse driver version earlier than 3.3.0, Portworx might fail to clean up stale PXD block devices for encrypted volumes if the volume is re-attached to a new node after the source node goes offline.

User Impact: Stale PXD devices with encrypted volumes will remain on the source nodes and will not be cleaned up, even if the volume has moved to another node. Any subsequent attempt to attach the same volume to this node will fail. However, the volume can still be attached successfully to other nodes in the cluster.

Resolution: Portworx Enterprise 3.3.0.1 now completely cleans up stale PXD devices for encrypted volumes from a node, even when an older fuse driver is in use.

Components: Volume Management
Affected Versions: 3.3.0
Severity: Minor

3.3.0

June 23, 2025

To install or upgrade Portworx Enterprise to version 3.3.0, ensure that you are running one of the supported kernels and all system requirements are met.

New Features

  • ActiveCluster on FlashArray Direct Access volumes
    Portworx now supports ActiveCluster on FlashArray Direct Access volumes with PX-StoreV2, allowing synchronous replication and automatic failover across multiple FlashArrays. For more information, see Install Portworx with Pure Storage FlashArray Direct Access volumes with ActiveCluster setup.

  • Application I/O Control leveraging Control Group v2
    Portworx now supports Application I/O Control on hosts that use cgroup v2, in addition to cgroup v1. Portworx automatically detects the available cgroup version and applies I/O throttling accordingly to ensure seamless operation across supported Linux distributions. For more information, see Application I/O Control.

  • Vault for storing vSphere credentials
    Portworx Enterprise now supports storing vSphere credentials in Vault when using Vault as a secret provider, to provide a more secure and centralized way to manage vSphere credentials. Previously, vSphere credentials were stored in Kubernetes secrets. For more information, see Secrets Management with Vault.

  • Enhanced Cluster-wide Diagnostics Collection and Upload
    Portworx now supports cluster-level diagnostics collection through the PortworxDiag custom resource. When created, the Portworx Operator launches temporary diagnostic pods that collect node-level data and Portworx pod logs, store the results in the /var/cores directory, and then automatically delete the diagnostic pods. For more information, see On-demand diagnostics using PortworxDiag custom resource. A minimal sketch of such a resource appears after this feature list.

  • Volume-granular Checksum Verification tool for PX-StoreV1
    Portworx now supports block-level checksum verification across volume replicas using the pxctl volume verify-checksum command for PX-StoreV1. This feature ensures data integrity by comparing checksums across all replicas and supports pause/resume functionality with configurable I/O controls. For more information, see pxctl volume.

  • TLS Encryption for Internal KVDB Communication
    Portworx now supports enabling Transport Layer Security (TLS) for internal KVDB communication on Google Anthos. Subsequent releases will include support for additional platforms. This feature secures communication between internal KVDB and all Portworx nodes using TLS certificates managed by cert-manager. For more information, see Enable TLS for Internal KVDB.
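
As a rough sketch of the PortworxDiag resource mentioned in the diagnostics feature above, a collection request might look like the following; the apiVersion, resource name, and namespace are assumptions, and the full set of spec options is documented in On-demand diagnostics using PortworxDiag custom resource:

apiVersion: portworx.io/v1            # assumed API group/version for the PortworxDiag CRD
kind: PortworxDiag
metadata:
  name: px-diag-example               # illustrative name
  namespace: portworx                 # namespace where Portworx is installed (assumption)
spec: {}                              # node and volume selection options are described in the linked documentation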

Early Access Features

  • Portworx Shared RWX Block volumes for KubeVirt VMs
    Portworx now supports ReadWriteMany (RWX) raw block volumes for KubeVirt virtual machines (VMs), enabling high-performance, shared storage configurations that support live migration of VMs in OpenShift environments (an illustrative PVC sketch follows this feature list). For more information, see Manage Shared Block Device (RWX Block) for KubeVirt VMs.

  • Enhance capacity management by provisioning custom storage pools
    Portworx now enables provisioning of storage pools during and after Portworx installation, enhancing the management of storage capacity. For more information, see Provision storage pool.

  • Journal IO support for PX-StoreV2
    Portworx now supports Journal device setup and journal IO profile volumes for PX-StoreV2. For more information, see Add a journal device.

  • Support for multiple connections on the same NIC interface or bonded NIC using LACP
    Portworx enables the use of multiple connections on the same NIC interface or bonded NIC interfaces using LACP, to enhance performance as data traffic can be distributed across multiple links. For more information, see Configure multiple NICs with LACP NIC Bonding.

  • Pool drain
    Portworx now supports moving volume replicas between storage pools using the pool drain operation. For more information, see Move volumes using pool drain.
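
As an illustration of the shared raw block configuration for KubeVirt VMs described above, a PersistentVolumeClaim might request block mode with ReadWriteMany access as sketched below; the claim name, storage class name, and size are assumptions:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kubevirt-vm-disk              # illustrative name
spec:
  accessModes:
    - ReadWriteMany                   # shared access enables live migration
  volumeMode: Block                   # raw block device, not a filesystem
  storageClassName: px-rwx-block      # illustrative Portworx StorageClass
  resources:
    requests:
      storage: 50Gi                   # example size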

Fixes

PWX-43808: Pure FA Cloud Drives attach backend volumes to hosts on FlashArray using the hostnames retrieved from the NODE_NAME container environment variable, which is specified by the Portworx spec.nodeName field. This might lead to hostname collisions across clusters.

User Impact: Backend volumes might mount to hosts of other clusters.

Resolution: The Pure FlashArray CloudDrive feature now sets the Purity hostname using a combination of the hostname and NODE_UID, limited to a maximum of 63 characters. This prevents hostname collisions and ensures that backend volumes are mounted only on the correct hosts. This also allows easier mapping back to the original host in the FlashArray user interface (UI) and logs.

Components: Drive & Pool Management
Affected Versions: 3.2.2 and earlier
Severity: Minor
PWX-43472: When a storage node fails, storageless nodes would repeatedly attempt cloud drive failover. Each attempt opened a new connection to kvdb/etcd but did not close it.

User Impact: Open connections might eventually exhaust available file descriptors, making etcd non-responsive to new connections.

Resolution: Connections opened for kvdb health checks during failover attempts are properly closed, preventing resource exhaustion and maintaining etcd responsiveness.

Components: Control Plane, KVDB
Affected Versions: 3.2.2.1
Severity: Minor
PWX-41940: Portworx telemetry did not collect kubelet logs from cluster nodes. Only Portworx logs were available for troubleshooting.

User Impact: Without kubelet logs, diagnosing cluster-level Kubernetes issues (such as pod crashes, evictions, or node failures) was slower and less effective, impeding root cause analysis and consistent monitoring across environments.

Resolution: Telemetry-enabled clusters now periodically send filtered kubelet logs, which provides more complete telemetry for debugging and alerting.

Components: Telemetry and Monitoring
Affected Versions: 3.3.0
Severity: Minor
PWX-36280: Portworx did not display kube-scheduler, kube-controller-manager, and pause image details in the /version endpoint output.

User Impact: Without image details, it is difficult to obtain complete component version information when querying the manifest or automating image checks using the curl command.

Resolution: The /version endpoint now includes kube-scheduler, kube-controller-manager, pause, and other relevant images in its version manifest, addressing the need for comprehensive version reporting via standard API calls.

Components: Install & Uninstall, Operator
Affected Versions: All
Severity: Minor
PWX-32328: Sometimes, Portworx propagated volume metrics to Prometheus from the wrong node, and in some cases, metrics for deleted volumes were reported as active.

User Impact: A single volume appears as attached to two different nodes, resulting in false alerts about the actual state of storage volumes in Prometheus.

Resolution: Volume-related metrics are now emitted only by the node where the volume is actually attached.

Components: Telemetry and Monitoring
Affected Versions: All
Severity: Minor
PWX-27968: Volume replicas might be placed incorrectly when a VPS volume affinity or volume anti-affinity rule contains multiple match expressions. The provisioner might treat a partial match as a full match, mistakenly selecting or deselecting certain pools during volume provisioning.

User Impact: When users create new volumes or add replicas to an existing volume using a VPS volume rule with multiple match expressions, replicas may be placed on unwanted nodes (volume affinity scenario), or provisioning may fail (volume anti-affinity scenario).

Resolution: The provisioning algorithm now evaluates VPS volume rules on a per-volume basis, avoiding confusion between partial and full matches (an illustrative VolumePlacementStrategy example follows this entry).

Recommendation: There is no need to modify VPS rules or storage classes. With the new version, new volumes are placed correctly according to the VPS rules. However, incorrectly placed existing volumes still require a manual fix (move the replicas using the pxctl command).

Components: Volume Placement & Balancing
Affected Versions: 3.3.x
Severity: Minor
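
For reference, a VolumePlacementStrategy with multiple match expressions in a single volume affinity rule, the scenario affected by this issue, might look like the following sketch; the strategy name, label keys, and values are assumptions:

apiVersion: portworx.io/v1beta2
kind: VolumePlacementStrategy
metadata:
  name: multi-expression-affinity     # illustrative name
spec:
  volumeAffinity:
    - matchExpressions:
        - key: app                    # illustrative label key
          operator: In
          values:
            - postgres
        - key: tier                   # second expression in the same rule
          operator: In
          values:
            - db
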
PWX-39098: Abrupt pod shutdown or deletion in rare scenarios might leave behind (retain) device mappings.

User Impact: New pods attempting to use the volume become stuck in the ContainerCreating phase due to incomplete cleanup of the device mappings.

Resolution: The fix adds additional methods to remove any retained device mappings and attempts to clean them up. If the cleanup is unsuccessful, an appropriate message is returned.

Components: Shared volumes
Affected Versions: 3.2.0
Severity: Minor

Known issues (Errata)

PWX-44992

When using Portworx Enterprise 3.3.0 with a fuse driver version earlier than 3.3.0, Portworx might fail to clean up stale PXD block devices for encrypted volumes if the volume is re-attached to a new node after the source node goes offline.

User Impact: Stale PXD devices with encrypted volumes will remain on the source nodes and will not be cleaned up, even if the volume has moved to another node. Any subsequent attempt to attach the same volume to this node will fail. However, the volume can still be attached successfully to other nodes in the cluster.

Components: Volume Management
Affected Versions: 3.3.0

Severity: Minor
PWX-43720

In Portworx Enterprise version 3.3.0 or later, when used with operator version 25.2.0 or later, the StorageCluster custom resource definition (CRD) includes a spec.monitoring.telemetry.hostNetwork field. This field defaults to true if not specified. However, if explicitly set to false, telemetry registration fails.

Workaround: To ensure registration succeeds (see the example StorageCluster snippet after this entry):

  1. Temporarily set spec.monitoring.telemetry.hostNetwork: true in the StorageCluster specification.
  2. Allow telemetry to register (registration is required only once per year).
  3. Optionally revert the field to false after successful registration. Note that future registration attempts will be blocked unless hostNetwork is enabled again.

Components: Telemetry
Affected Versions: 3.3.0

Severity: Minor
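
The following is a minimal sketch of step 1, assuming a StorageCluster named px-cluster in the portworx namespace (both names are illustrative):

apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster                    # illustrative name
  namespace: portworx                 # illustrative namespace
spec:
  monitoring:
    telemetry:
      enabled: true                   # telemetry enabled (assumption)
      hostNetwork: true               # required for registration to succeed
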
PWX-43275

When using FlashArray Direct Access (FADA) volumes in ActiveCluster mode, if a volume is deleted while one of the backend arrays is unavailable, orphan volumes may remain on the FlashArray that was down. This issue does not affect application workloads directly, but manual cleanup might be required to identify and remove orphaned volumes.

Workaround: To clean up orphan volumes:

  1. Run kubectl get stc and extract the first 8 characters of the cluster UUID. This string is used as a prefix to identify volumes related to the cluster.
  2. Log in to the FlashArray that was previously down and go to Storage > Volumes. Filter volumes using the UUID prefix string.
  3. Run pxctl volume list from any node in the cluster.
  4. Compare the volume names from the FlashArray against the output of pxctl volume list. Any volume not listed in pxctl volume list is orphaned and can be deleted from the FlashArray.

Components: Volume Management
Affected Versions: 3.3.0

Severity: Minor
PWX-44473

KubeVirt virtual machines using Portworx on IPv6 clusters with SharedV4 volumes may enter a Paused state during a Portworx upgrade. This situation can occur because some nodes continue to run the older Portworx version while others are already upgraded. In these cases, the VMs might attempt to mount the volume using a local NFS path that resolves to a Unique Local Address (ULA), which Portworx does not expose for mount operations.

Workaround: Configure the spec.parameters.allow_ips field in the StorageClass and specify both Unique Local Address (ULA) and link-local address ranges:
  • Link-local addresses: fe80::/10
  • Unique local addresses (ULA): fd00::/8

Example StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-sharedv4-ipv6-example
parameters:
  repl: "2"
  sharedv4: "true"
  sharedv4_svc_type: "ClusterIP"
  sharedv4_mount_options: vers=3.0,nolock
  allow_ips: "fe80::/10;fd00::/8"
provisioner: pxd.portworx.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true


Components: Shared Volumes
Affected Versions: 3.3.0

Severity: Minor
PWX-44223

A storage-less node may fail to pick up the drive-set from a deleted storage node if the API token used for FlashArray authentication has expired and the node does not automatically retrieve the updated token from the Kubernetes secret. As a result, the storage-less node is unable to log in to the FlashArray and fails to initiate drive-set failover after the storage node is deleted.

Workaround: To ensure storage-less nodes have a working API token for FlashArray authentication:

  1. Update the px-pure-secret with the newly generated API token on your Kubernetes cluster.
  2. Manually restart Portworx on the storage-less node to trigger it to use the updated credentials.

Components: Drive & Pool Management
Affected Versions: All

Severity: Minor
PWX-44623

When provisioning virtual machines (VMs) on FlashArray using KubeVirt, the VM might remain in the Provisioning state if the underlying PersistentVolumeClaim (PVC) fails with the error Duplicate volumes found. This issue can occur when the FlashArray is overloaded or responding slowly, causing backend volume creation conflicts that leave the PVC in a Pending state.

Workaround:

  1. Identify and delete the conflicting backend volume from the FlashArray.
  2. Delete the corresponding PVC from the Kubernetes cluster using the kubectl delete pvc <pvc-name> command.

The VM will attempt to provision a new PVC automatically.

Components: Volume Management
Affected Versions: 3.3.0

Severity: Minor
PWX-43060

If a FlashArray Direct Access (FADA) multipath device becomes unavailable after the volume has already been marked as attached external, the volume attachment might become stuck with a file does not exist error. This can happen if the device path is removed while the attachment is in progress, causing the system to lose access to the underlying device without updating the attachment status.

Workaround:
  1. Scale down the application pod that is using the volume to reset the volume to a detached state.
  2. After the volume is detached, scale the pod back up to re-initiate the attachment.

Components: Volume Management
Affected Versions: 3.3.0, 3.2.3

Severity: Minor
PWX-43212

Some VMs might remain stuck in the Shutting Down state after a FlashArray (FA) failover, especially when nodes are overpopulated with VMs. This is a known occurrence related to VM density and node resource allocation.

Workaround: Monitor the number of VMs assigned to each node and plan resource allocation across the cluster to reduce the risk.

Components: Volume Management
Affected Versions: 3.2.3

Severity: Minor
PWX-44486

If the coordinator node of an RWX volume (i.e. the node where the volume is currently attached) is placed into Maintenance mode, application pods using the volume might temporarily experience I/O disruption and encounter Input/Output errors.

Workaround: To prevent this issue, before putting a node into Maintenance mode, check whether any volumes (especially RWX volumes) are attached to it. If they are, restart Portworx on the node first by using px/service=restart before proceeding with Maintenance mode. This ensures that volumes capable of finding an alternate server can be gracefully reattached to another node without disrupting I/O operations.

Components: Storage
Affected Versions: 3.3.0

Severity: Minor