Version: 3.5

Portworx Enterprise Release Notes

3.5.1

January 06, 2026

To install or upgrade Portworx Enterprise to version 3.5.1, ensure that your cluster meets all system requirements and is running Operator 25.5.0 or later, Stork 25.6.0 or later, and one of the supported kernels.

Improvements

Improvement Number | Improvement Description | Component
PWX-37143

Portworx now uses sentinel mounts to optimize Sharedv4 (NFS) volume handling when multiple pods on the same node access the same volume. Instead of creating separate NFS mounts for each pod, Portworx creates a single NFS mount per volume per node and uses lightweight bind mounts for additional pods. This enhancement improves performance, reduces mount overhead, speeds up pod restarts, and simplifies unmounting.
For more information, see Configure sentinel mount cleanup interval and timeout.

Component: Shared Volumes (NFS)
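The following is a conceptual sketch of the sentinel-mount behavior described above; the server address, volume ID, and paths are illustrative placeholders, not the exact sequence Portworx runs internally.

  # Sketch only: one sentinel NFS mount per volume per node (all names are placeholders)
  mount -t nfs <nfs-server>:/var/lib/osd/pxns/<volume-id> /var/lib/osd/mounts/<volume-id>

  # Additional pods on the same node reuse that mount through lightweight bind mounts
  mount --bind /var/lib/osd/mounts/<volume-id> /var/lib/kubelet/pods/<pod-uid>/volumes/<volume-dir>/mount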

Fixes

Issue Number | Issue Description | Severity
PWX-31343/PWX-47952

After upgrading the Anthos version in your Portworx cluster, volumes in the pods switched to read-only due to I/O errors and remained in that state. This caused pods to crash (for example, with a CrashLoopBackOff error), and the volumes were not remounted as read-write because Portworx did not consistently detect kernel-initiated read-only remounts in certain edge cases. As a result, automatic recovery, such as pod restarts, did not occur.

User Impact: Affected pods could not write their data and kept restarting until manual intervention.

Resolution:
Portworx now correctly detects read-only volumes and automatically restarts the affected pods.

Components: Storage
Affected Versions: 3.5.0
Minor
PWX-41941

During node decommission, volume replica removal occurs asynchronously and can take time to complete. Portworx marks the node as decommissioned and waits for all replica removals to finish before removing the node from the cluster. In some cases, the Portworx cluster coordinator might miss the replica-removal completion events, causing the node to remain stuck in a decommissioned state.

User Impact: A node could appear decommissioned but not fully removed from the cluster, requiring users to retry the decommission operation manually.

Resolution:
The Portworx cluster coordinator now accurately receives all replica-removal completion events and ensures the final replica is fully removed before deleting the node specification from the cluster. This ensures the decommission process completes successfully without manual intervention and nodes no longer get stuck in a decommissioned state.

Components: Storage
Affected Versions: 3.5.0 or earlier
Minor
PWX-48149

When taking a cloud snapshot in cases where a metadata chunk was already stored in the cloud, the block-diff collector did not correctly handle a rare boundary condition. During diff calculation, a page containing zero records caused the collector to generate an invalid next offset value. This resulted in an incomplete snapshot.

User Impact: CloudSnap backups could become incomplete, and subsequent backups on the affected volume could cause a px-storage crash.

Resolution:
Portworx now correctly handles pages with zero records during diff calculation and ensures that the next offset is computed safely. CloudSnap diff logic no longer returns invalid offsets, preventing incomplete snapshots and eliminating the crash scenario.

Components: Cloudsnaps
Affected Versions: 3.5.0
Minor
PWX-48538

After deleting all the pools in a cluster, the node appears storageless in pxctl status, but the driveset still contains a metadata drive. Because the current driveset is not empty, attempting storageless to storage conversion using the Provision Storage Node (PSN) label fails.

User Impact: You cannot convert a storageless node back to a storage node without clearing the leftover metadata drive; pool re-creation via PSN label is blocked.

Resolution:
Portworx now correctly handles the cleanup of leftover metadata drives when all pools are deleted. If a driveset contains only a metadata device with no remaining storage pools, Portworx automatically clears the stale metadata entry allowing storageless-to-storage conversion through the PSN label to proceed successfully.

Components: Storage
Affected Versions: 3.5.0
Minor
PWX-48516/PWX-48676

When migrating a cluster from a DaemonSet-based Portworx deployment to an Operator-managed setup, using the portworx.io/cluster-id annotation on the StorageCluster resource allows you to preserve the original cluster name. However, during KVDB TLS migration, Portworx fails to locate the correct StorageCluster if the StorageCluster name differs from the cluster name specified in the portworx.io/cluster-id annotation.

User Impact: Clusters with mismatched STC names and portworx.io/cluster-id annotations could not perform KVDB failovers or add new KVDB nodes.

Resolution: Portworx now checks all available StorageCluster resources and identifies the correct one using the portworx.io/cluster-id annotation. This ensures that KVDB TLS migration and failover operations work correctly, even when the STC name differs from the cluster name preserved during migration.

Components: KVDB
Affected Versions: 3.5.0
Minor
PWX-48719

To support OpenShift version 4.20 or later, Portworx introduced /var/lib/osd/pxns as a common directory used to export volumes from a node. Although access to actual volume data remained restricted, exporting this directory in a broad read-only mode was flagged as a security concern.

User Impact: Security scans could report warnings about the export of the /var/lib/osd/pxns directory, even though users could not access data they were not authorized to see.

Resolution:
Portworx now exports the directory using only the combined access rules of the underlying volumes, ensuring that exports are limited to exactly what each volume allows and addressing the security concern.

Components: Shared Volumes
Affected Versions: 3.5.0 or earlier
Minor
PWX-48765

On clusters using NBDD, Portworx experienced a segmentation fault in the px-storage process when a GetDiffExtents request was triggered while all storage pools on the node were down. In this state, internal data structures were not fully initialized, leading to a null pointer dereference.

User Impact: When the segmentation fault occurs, the px-storage process crashes on the affected node, and the local storage pools may enter an InitErr or Down state. This can temporarily disrupt I/O operations for volumes on that node until Portworx restarts and the pools recover.

Resolution:
The diff-handling logic has been updated to detect when the local data store is not initialized, for example, when all pools are down, and skip initiating diff extent requests in that state. This prevents null pointer dereferences, avoids segmentation faults, and ensures that nodes handle pool-down conditions when NBDD is enabled.

Components: Storage
Affected Versions: 3.5.0
Minor
PWX-48794

After upgrading to AWS SDK v2 in Portworx 3.4.1, stricter data integrity validation was enabled by default for S3 operations. Some object storage systems, such as older versions of MinIO or other S3-compatible backends, may not fully support these checks. As a result, CloudSnap uploads and S3 credential creation failed with errors.

User Impact: Customers using incompatible S3-compatible object stores experienced CloudSnap backup and credential creation failures after upgrading to Portworx 3.4 or later.

Resolution:
Portworx has modified the AWS SDK v2 configuration to include checksum headers in S3 requests only when required. This restores compatibility with older MinIO or other S3-compatible backends while maintaining normal operation with AWS S3 and other supported object stores.

Components: Cloudsnaps
Affected Versions: 3.5.0
Minor
PWX-48829

When a node hosting multiple virtual machines is abruptly powered off, one of the affected virtual machines could become stuck in recovery mode after being restarted on another node, while other virtual machines recovered successfully.

User Impact: One or more virtual machines could remain unreachable after node power loss, requiring manual intervention to recover the affected VM.

Resolution:
Portworx now ensures consistent recovery handling for virtual machines during abrupt node shutdowns, allowing all affected VMs to recover and start successfully on new nodes without getting stuck in recovery mode.

Components: Storage
Affected Versions: 3.5.0 or earlier
Minor
PWX-48835

NBDD reported a much larger discard_bytes_total value than expected during snapshot delete and overwrite operations.

User Impact: NBDD discarded the entire snapshot range instead of only the intended blocks, leading to misleading discard metrics and unnecessary data discard during snapshot workflows.

Resolution:
NBDD performs discard operations only on the correct block ranges, and discard_bytes_total accurately reflects the amount of data actually discarded.

Components: Snapshots
Affected Versions: 3.5.0
Minor
PWX-48967

Portworx cluster nodes fail to start when shared raw block VMs are running and their underlying PVCs are attached/exported on multiple nodes. If Portworx goes down on several nodes and a node that was hosting these VM PVC exports is auto-decommissioned, volumes may still reference that removed node.

User Impact: Portworx repeatedly fails to come up on other nodes because they continue to request the storage spec for the decommissioned node, leaving the cluster in a restart loop and storage unavailable.

Resolution:
Portworx now prevents cluster startup failures by clearing invalid exported-node references when a node is auto-decommissioned. If a shared raw block volume was still marked as exported on a node that no longer exists, Portworx automatically removes that reference so all remaining nodes can retrieve a valid storage spec and start successfully.

Components: Storage
Affected Versions: 3.3.0 or later
Minor
PWX-48982

During node startup, Portworx attempted to unmount device-mapper paths for FlashArray Direct Access (FADA) devices if they were mounted under system directories such as /var/lib/osd or /var/lib/kubelet. Because Portworx did not verify whether these devices actually belonged to Portworx-managed FADA volumes, it could unintentionally unmount non-PX FADA devices that were legitimately mounted by the host or other components.

User Impact: Nodes could lose valid host-mounted FADA devices during startup, disrupting workloads and preventing dependent components from functioning correctly.

Resolution:
Portworx now unmounts FADA devices at startup only when a valid Portworx volume is associated with the device, ensuring that non-PX FADA devices remain mounted and unaffected during initialization.

Components: Volume Management
Affected Versions: 3.5.0 or earlier
Minor
PWX-49015/PWX-49036

When increasing the replication factor of a volume, the operation stalled if the volume had snapshots whose replica sets did not match the parent volume. During the High Availability (HA) add workflow, where Portworx adds a new replica and synchronizes the volume’s data to the new node, Portworx incorrectly attempted to compare data against a snapshot that was not an internal snapshot and did not share the same replicas as the parent volume. This caused the data-comparison phase of the workflow to fail, preventing the HA-add operation from continuing. This is a day 0 issue.

User Impact: The volume replication operation could remain stuck indefinitely, preventing users from increasing the HA level of affected volumes.

Resolution:
Portworx now correctly handles snapshots with mismatched replica sets during HA-add workflow. The HA-add workflow no longer gets blocked when older snapshots have different replica layouts. Replication increases now complete successfully even when snapshots do not match the parent volume’s replica configuration.

Components: Storage
Affected Versions: 3.5.0 or earlier
Minor
PWX-49219

After upgrading to Portworx Enterprise 3.5.0, some Grafana pool-level panels in the Portworx Performance dashboard stopped displaying data. Portworx introduced new node-level metrics and updated naming, which exposed a defect in the Prometheus metrics registration logic. The internal library registered metrics using only the metric key name, without considering subsystem names. Therefore, when multiple subsystems defined metrics with the same key (for example, write_latency_seconds), the first metric registered overwrote the others.

User Impact: Pool-focused Grafana panels (for example, pool write latency, write throughput, flush latency, flushed bytes) showed no data, even though the cluster was healthy. Only pool-level visualization was affected; Portworx I/O operations continued to function normally.

Resolution:
Portworx now correctly registers pool-level metrics by including both the subsystem and metric name during registration, preventing name conflicts between node-level and pool-level metrics. This ensures that all pool metrics are exported to Prometheus and Grafana dashboards display pool-level data correctly after upgrade.

Components: Volume Management
Affected Versions: 3.5.0
Minor
PWX-49340

Pods using shared service (Sharedv4) volumes could remain stuck in the ContainerCreating state when the underlying file system contained errors that prevented the volume from being mounted.

User Impact: Affected pods failed to start because the volume could not be mounted, even though the cluster and other workloads continued to run normally.

Resolution:
Portworx now prevents workloads from starting on volumes with detected file system errors until recovery is completed, ensuring that applications do not access corrupted file systems and improving stability for shared volume mounts.

Components: Shared Volumes (NFS)
Affected Versions: 3.5.0
Minor

Known issues (Errata)

Issue Number | Issue Description | Severity
PWX-48939

Scheduling a SharedV4 application can fail if the required NFS packages are not installed on a node. During Portworx startup, automatic NFS installation may fail (for example, due to transient network issues). Portworx then enters a cooldown state and skips subsequent NFS installation attempts, even though the NFS service is not present on the node.

User Impact: Pods that use SharedV4 volumes fail to mount with errors indicating that NFS installation was skipped. As a result, SharedV4 workloads cannot be scheduled or started on affected nodes.

Workaround: Restart Portworx on the affected node to retry NFS package installation and start the NFS service successfully.

Components: Shared Volumes (NFS)
Affected Versions: 3.5.1

Minor
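If the affected node is part of an Operator-managed installation, one common way to restart Portworx on a single node is the px/service node label. This is a general technique rather than a fix specific to this issue, and the node name below is a placeholder.

  # Restart the Portworx service on the affected node (node name is a placeholder)
  kubectl label node <node-name> px/service=restart --overwrite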
PWX-49255

KubeVirt virtual machines that use SharedV4 service volumes as hot-plug disks can pause or fail during Portworx restarts or coordinator node shutdown events (including node restarts). This occurs only in rare cases where a service volume failover attempt fails. In such scenarios, the hot-plug pod restarts, but because the volume is bind-mounted directly into the virt-launcher pod, bypassing kubelet publish and unpublish operations, Portworx does not detect the active mount, which can lead to stalled I/O.

User Impact: The affected virtual machine enters a paused or failed state and becomes unavailable.

Workaround: Restart the virtual machine.

Components: Shared V4
Affected Versions: 3.2.x or later

Minor
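For KubeVirt VMs, the restart can typically be performed with virtctl; the VM name and namespace below are placeholders.

  # Restart the affected KubeVirt virtual machine (names are placeholders)
  virtctl restart <vm-name> -n <namespace>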
PWX-49481

During sharedv4 volume failover, Portworx may fail to clean up a stale sentinel mount. When the sentinel cleanup timeout is set to 0 (the default), the background cleanup thread does not run, leaving the sentinel mount in place and preventing the volume from detaching as expected.

User Impact: Sharedv4 volumes may remain attached after application teardown or node failover. Tests and workloads that expect the volume to detach can fail, potentially blocking application rescheduling or cleanup workflows.

Workaround: Set sentinelCleanupTimeout to a value greater than 0 to enable the background sentinel cleanup process. This allows Portworx to retry and complete sentinel mount cleanup, ensuring volumes detach correctly after failover. Alternatively, you can use the pxctl host unmount command to unmount the sentinel mounts, followed by the pxctl host detach command to detach the device from the host.

Components: Shared Volumes (NFS)
Affected Versions: 3.5.1

Minor

3.5.0

November 19, 2025

To install or upgrade Portworx Enterprise to version 3.5.0, ensure that your cluster meets all system requirements and is running Operator 25.5.0 or later, Stork 25.4.0 or later, and one of the supported kernels.

New Features

  • Support for Cloud Snapshots via CSI
    Portworx now supports cloud-based snapshots through the Kubernetes Container Storage Interface (CSI), enabling users to create, delete, and restore cloud snapshots stored in S3-compatible object storage using standard CSI APIs.
    For more information, see Manage Cloud snapshots of CSI-enabled volumes.

  • Support for automatic provisioning of storage nodes when autoscaling with cluster autoscaler
    In an environment where the cluster autoscaler is enabled, you can link the autoscaler to one or more nodePools or machineSets. You can preconfigure the node template by adding the portworx.io/provision-storage-node="true" label. This ensures that each node created from that template starts as a storage node automatically (see the labeling example after this feature list).
    For more information, see Provisioning Storage Nodes and Configure Kubernetes Cluster Autoscaler to Autoscale Storage Nodes.

    note

    The parameters maxStorageNodes, maxStorageNodesPerZone, and maxStorageNodesPerZonePerNodeGroup are deprecated and replaced with the initialStorageNodes parameter. Previously, scaling a Portworx cluster required increasing the maxStorageNodesPerZone parameter to add more storage nodes. Instead, you can now add new nodes and label them with portworx.io/provision-storage-node="true" to scale the cluster. See Deprecated Features.

  • Enhance CloudSnap Object Size for Improved Backup and Restore Performance
    Portworx now supports increasing the CloudSnap backup object size from the default 10 MB to 100 MB, thereby accelerating backup and restore operations for large volumes.
    In asynchronous disaster recovery scenarios, using a larger 100 MB object size results in less data to track and transfer. This reduces CPU and metadata overhead and minimizes round trips, especially in high-latency environments.
    For more information, see Cloud Snapshots.

  • Non-Blocking Device Delete (NBDD) for Snapshot and Volume Deletes
    The Non-Blocking Device Delete (NBDD) feature improves the performance and reliability of large volume and snapshot deletions in Portworx Enterprise clusters running on PX-StoreV2. NBDD performs background block discards at a configurable rate before deletion, reducing I/O pauses and helping reclaim storage, especially in SAN or FlashArray (FA) environments.
    For more information, see Non-Blocking Device Delete.

  • PX-StoreV2 set as default datastore
    Portworx now uses PX-StoreV2 as the default datastore for all new deployments on supported platforms.
    For more information, see PX-StoreV2.

  • Support for Co-location on Portworx RWX block volumes
    Portworx now supports volume co-location for KubeVirt virtual machines (VMs) on Portworx RWX block volumes. This feature enables Portworx to place the replicas of all VM volumes on the same set of nodes, improving I/O performance, reducing network traffic, and avoiding unnecessary remote volume access.
    For more information, see Manage Shared Block Device (RWX Block) for KubeVirt VMs.

  • Portworx Shared RWX Block volumes for KubeVirt VMs on SUSE Virtualization
    Portworx now supports ReadWriteMany (RWX) raw block volumes for KubeVirt virtual machines (VMs) on SUSE Virtualization, enabling features such as live migration, high availability, and persistent VM storage across SUSE Virtualization nodes.
    For more information, see Manage Portworx RWX Block Volumes on SUSE Virtualization for KubeVirt VMs.

  • Portworx Installation Support on SUSE Virtualization with FlashArray
    Portworx now supports installation on SUSE Virtualization environments with Pure Storage FlashArray as the backend storage provider. This integration enables administrators and platform engineers to deploy a highly available, scalable, and production-grade storage layer for virtualized and cloud-native workloads running on SUSE platforms.
    For more information, see Installation on SUSE Virtualization.

  • Enhanced FUSE Driver with Tunable Runtime Options
    The Portworx FUSE driver now includes enhanced support for high-performance Portworx workloads through new tunable runtime options. These options allow you to fine-tune FUSE behavior based on workload characteristics and system resources.
    For more information, see the Tune Performance guide.

  • Increased Volume Attachment Limit per Node
    Portworx now supports up to 1,024 volume attachments per node. The volume attachment limit for FADA and SharedV4 volumes remains at 256.
    For more information, see Features and configurations supported by different license types.

    note

    You must upgrade the fuse driver in your Portworx cluster to version 3.5.0 or later to support the new limits.

  • Automatic KVDB Node Rebalancing Across Failure Domains
    Portworx now supports automatic rebalancing of internal KVDB nodes across failure domains after a failover. The rebalance check runs every 15 minutes and ensures that the KVDB nodes do not remain concentrated in a single failure domain after failovers or cluster topology changes.

  • Support for KubeVirt VMs with IPv6
    Portworx now supports KubeVirt VMs with IPv6 networking.
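For the automatic storage-node provisioning feature above, the following is a minimal example of pre-labeling a node; the node name is a placeholder, and the exact field for adding the label to a nodePool or MachineSet template depends on your platform.

  # Label a node so that Portworx provisions it as a storage node (node name is a placeholder)
  kubectl label node <node-name> portworx.io/provision-storage-node="true"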

Early Access Features

  • Support for storage operations in OpenShift hosted control plane clusters
    Portworx now supports storage operations for both management and hosted clusters created using OpenShift hosted control plane.
    For more information, see Deploy OpenShift hosted clusters with Portworx.

Deprecated Features

  • The parameters maxStorageNodes, maxStorageNodesPerZone, and maxStorageNodesPerZonePerNodeGroup are deprecated and replaced with the initialStorageNodes parameter and the portworx.io/provision-storage-node="true" label.
    If you are upgrading the Portworx cluster from version 3.4.x or earlier, any existing values for the deprecated parameters (maxStorageNodes, maxStorageNodesPerZone, and maxStorageNodesPerZonePerNodeGroup) will remain set in your configuration, but they will have no operational effect. These values persist only for backward compatibility. After upgrading, you cannot modify these parameters to new values; you can only unset them. Attempts to set or modify them (on both new and existing clusters) will result in an error.
    For more information, see Provisioning Storage Nodes.

End of support notifications

  • Photon3 is no longer supported, as the distribution has reached end of life (EOL).
  • This is the final release that supports all versions of Fedora.

Fixes

Issue Number | Issue Description | Severity
PWX-21104

During crypto device attach operations, the Cgo library could hang while making a system call. Because this operation held a lock, all further encrypted volume actions, such as attach, detach, mount, or unmount, were blocked on the node.

User Impact: Pods using encrypted volumes become stuck in ContainerCreating or Terminating states, with recovery only possible by restarting the Portworx service on the node.

Resolution:
Portworx now skips the Cgo library and directly uses the native cryptsetup binaries available on the host system. This approach prevents the Cgo library from hanging while maintaining support for encrypted volume operations.

Components: Volume Management
Affected Versions: 3.4.0 or earlier
Minor
PWX-35186

Portworx validated BM/VM license compatibility during startup only. If a cluster with both bare-metal (BM) and virtual machine (VM) nodes switched to a VM-only license while running, no alerts were generated. This led to clusters operating under an incompatible license until a restart or upgrade, at which point BM nodes failed to start.

User Impact: Clusters running a mixed configuration (BM + VM) experienced unexpected failures during restart or upgrade due to undetected license mismatches.

Resolution:
Portworx now continuously monitors for license changes and validates platform compatibility in real time. When a VM-only license is applied to a cluster with BM nodes, clear alerts are raised (e.g., LicenseCheckFailed: The current license does not enable the required platform). The alerts automatically clear after a valid license with BM/VM support is applied.

Components: IX-Licensing & Metering
Affected Versions: 3.4.0 or earlier
Minor
PWX-36391

The pxctl volume inspect -j and pxctl volume update -h commands did not display the sharedv4_failover_strategy field unless it was explicitly set by the user.

User Impact: Users could not determine the default failover strategy for Sharedv4 or Sharedv4 Service volumes unless it had been manually updated.

Resolution:
Portworx now displays the default sharedv4_failover_strategy for Sharedv4 and Sharedv4 Service volumes in the pxctl volume inspect output. By default, the strategy is AGGRESSIVE for Sharedv4 Service volumes and NORMAL for Sharedv4 volumes. This field does not appear for non-Sharedv4 volumes.

Components: CLI & API
Affected Versions: All versions
Minor
PWX-37801

When configuring NFS attribute caching options on Portworx Sharedv4 volumes, custom parameters such as acdirmin=30 and acregmin=3 were being overridden by the default actimeo=60 value, even when actimeo was not explicitly specified.

User Impact: Users attempting to fine-tune NFS caching behavior for Sharedv4 volumes could not apply specific values for individual parameters. As a result, attribute cache refresh intervals did not reflect the desired configuration.

Resolution:
Portworx now correctly honors user-specified NFS caching parameters such as acregmin, acregmax, acdirmin, and acdirmax when actimeo is not explicitly set. If actimeo is provided, it applies globally to all caching parameters unless specific options are defined after it in the mount configuration.

Components: Volume Management
Affected Versions: All versions
Minor
PWX-38162

When a node stopped serving as a KVDB node, Portworx automatically deleted the KVDB disk during restart. In rare cases where a data disk was incorrectly marked as a KVDB disk, such as during manual configuration changes or troubleshooting, this behavior could result in accidental deletion of the data disk.

User Impact: This could lead to unintended data loss if a non-KVDB disk was misidentified as a KVDB disk and removed during the node restart process.

Resolution:
Portworx now includes additional validation checks based on disk labels and size before performing any KVDB disk cleanup. These safeguards ensure that only valid KVDB disks are deleted, preventing accidental removal of data disks incorrectly tagged as KVDB.

Components: KVDB
Affected Versions: All versions
Minor
PWX-42269

When you run the pxctl clouddrive delete command to delete a DriveSet in FlashArray CloudDrive (FACD) setups, the system attempts to delete DriveSets even if the detach operation fails and displays inconsistent error messages.

User Impact: Users could encounter misleading or inconsistent error messages when attempting to delete a DriveSet that was still attached to a backend array, causing confusion about whether the deletion succeeded, failed, or was attempted safely.

Resolution:
Portworx now verifies that a DriveSet is detached from the backend array before attempting to delete it and reports clear, consistent error messages when the detach operation fails, making the outcome of the delete operation unambiguous.

Components: Drive & Pool Management
Affected Versions: 3.4.0 or earlier
Minor
PWX-45884

Updating Quality of Service (QoS) parameters such as max_iops and max_bandwidth for FlashArray DirectAccess (FADA) volumes was not supported through the Portworx CLI. Users attempting to run commands like pxctl volume update --max_iops received the error:
pxctl update not supported for Pure DirectAccess volumes.
Users had to manually adjust QoS settings through the FlashArray UI, blocking automation and dynamic performance tuning workflows.

User Impact: Customers relying on automated QoS management could not dynamically tune IOPS or bandwidth limits for FADA volumes, impacting efficiency and requiring manual interventions.

Resolution:
Portworx now supports dynamic updates to QoS parameters for FADA volumes using the pxctl volume update --max_iops <IOPSLimit> or pxctl volume update --max_bandwidth <BandwidthLimit> CLI commands. Users can now modify IOPS and bandwidth limits on existing FADA volumes without accessing the FlashArray UI. Updates reflect in both the pxctl volume inspect output and the FlashArray backend.

Components: Volume Management
Affected Versions: 3.4.0 or earlier
Minor
PWX-46424

Portworx CLI (pxctl volume update) allowed users to specify either --max_iops or --max_bandwidth, but not both simultaneously. This limited flexibility for users who wanted to fine-tune both IOPS and bandwidth constraints on a volume concurrently.

User Impact: Users could not set both IOPS and bandwidth limits simultaneously, which restricted performance control for workloads requiring simultaneous throttling of both parameters. Attempting to specify both would result in an error or one of the values being ignored.

Resolution:
Portworx now supports specifying both --max_iops and --max_bandwidth together in the pxctl volume update command. When both are set, the underlying cgroup mechanism determines how the limits are enforced. Users are notified that applying both limits simultaneously is non-deterministic, meaning the effective throttling behavior may depend on system-level cgroup prioritization.

Components: Volume Management
Affected Versions: 3.4.0 or earlier
Minor
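A usage sketch for the two QoS fixes above (PWX-45884 and PWX-46424); the volume name and limit values are placeholders.

  # Update only the IOPS limit of a FADA volume
  pxctl volume update --max_iops <IOPSLimit> <volume-name>

  # Update both limits together; enforcement is non-deterministic when both are set
  pxctl volume update --max_iops <IOPSLimit> --max_bandwidth <BandwidthLimit> <volume-name>

  # Verify the applied limits
  pxctl volume inspect <volume-name>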
PWX-46697

In some cases, when a node stops receiving watch updates directly from KVDB, Portworx begins pulling updates from other nodes that have newer data. If this condition persists for a long time, the node’s watch version can fall behind, resulting in the error:
Storage watch error, error: Kvdb watch revision compacted.
When this occurs, affected Portworx nodes may restart unexpectedly.

User Impact: Unexpected node restarts can temporarily disrupt storage services and impact running applications.

Resolution:
Portworx now automatically restarts only the affected KVDB watch instead of restarting the entire node process.

Components: KVDB
Affected Versions: 3.4.0 or earlier
Minor
PWX-46800

Sharedv4 volumes failed to mount on Portworx clusters when the rpcbind service was disabled or stopped. As a result, pods using Sharedv4 volumes remained in a waiting state.

User Impact: When rpcbind was not active, the NFS service dependencies required for Sharedv4 volumes to function were unavailable. This caused persistent mount failures and pod scheduling delays until rpcbind was started manually.

Resolution:
Portworx now automatically starts the rpcbind service during NFS operations, similar to how it manages the nfs-server service. This ensures that Sharedv4 volume mounts succeed even if rpcbind was previously disabled, improving reliability and reducing the need for manual recovery steps.

Components: Control Plane
Affected Versions: 3.4.0 or earlier
Minor
PWX-47733

AutoFSTrim did not run on volumes mounted with the discard option, even when AutoFSTrim was enabled at both the cluster and volume level. As a result, deleted data blocks were not reclaimed automatically, causing higher backend storage usage.

User Impact: Users with volumes configured to use discard saw space consumption remain high after deleting data, since automatic trimming was skipped.

Resolution:
Portworx now supports AutoFSTrim for all volumes with AutoFSTrim enabled, regardless of whether the mount option is discard or nodiscard. The trimming operation runs automatically and reclaims unused storage space as expected.

Components: Autofstrim, Dmthin, Storage
Affected Versions: 3.4.0 or earlier
Minor
PWX-47751

If the nfsd filesystem was not properly mounted inside the Portworx runtime (runc) container, exportfs commands failed with the error Function not implemented. This caused Sharedv4 volume exports to fail and affected NFS service initialization.

User Impact: When nfsd was not mounted, Portworx could not correctly export NFS volumes, leading to failed or stalled Sharedv4 mounts. This issue particularly impacted environments with NFS misconfigurations or during migrations where the NFS service did not start correctly.

Resolution:
Portworx now automatically verifies that the nfsd filesystem is mounted within the runc container during NFS initialization. If it is missing, Portworx remounts it and reinitializes the NFS service. This prevents errors and ensures that Sharedv4 volumes mount reliably even when NFS is initially disabled or unhealthy.

Components: Shared Volumes
Affected Versions: 3.4.0 or earlier
Minor
PWX-47763

Portworx Enterprise periodically ran a cleanup task to remove deleted CloudSnaps, even when all CloudSnap operations were managed by PX-Backup. This caused unnecessary filesystem calls on NFS backends, leading to additional system overhead.

User Impact: In environments where PX-Backup handled all CloudSnap deletions, redundant cleanup tasks from Portworx Enterprise could increase filesystem load and reduce performance efficiency.

Resolution:
A new configuration parameter, cs_cleanup_interval_minutes, is introduced to control the cleanup schedule. When set to 0, Portworx disables the internal CloudSnap cleanup scheduler. When set to a positive value, the cleanup runs periodically based on the specified interval. This change allows PX-Backup to fully manage CloudSnap cleanup operations, preventing redundant cleanup activity and reducing system overhead.

Components: Cloudsnaps
Affected Versions: 3.4.0 or earlier
Minor

Known issues (Errata)

Issue Number | Issue Description | Severity
PWX-31343/PWX-47952

After upgrading the Anthos version in your Portworx cluster, volumes in the pods may switch to read-only due to I/O errors and remain in that state. This causes pods to crash (for example, with a CrashLoopBackOff error), and the volumes are not remounted as read-write because Portworx does not consistently detect kernel-initiated read-only remounts in certain edge cases. As a result, automatic recovery, such as pod restarts, does not occur.

User Impact: Affected pods cannot write their data and continue restarting until manual intervention.


Workaround:
  • Set the cluster option to bounce pods on read-only volumes:
    • For versions prior to the default change, run pxctl cluster update --ro-vol-pod-bounce all
    • If supported in your version, run pxctl cluster update --bounce-pods-for-ro-vol all
  • If pods continue to bounce, repair the filesystem (see the example after this entry):
    • Stop the workload (scale the deployment/StatefulSet to zero).
    • Attach the volume to a maintenance host.
    • Run a filesystem check to clear errors (for example, fsck.ext4 -y).
    • Detach the volume and restart the workload.
  • Verify storage health and resolve any controller or path issues before bringing workloads back online.

Components: Storage
Affected Versions: 3.5.0

Minor
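A minimal sketch of the repair flow above, assuming an ext4 filesystem and a Deployment-managed workload; the workload, namespace, volume, and device names are placeholders, and the attach/detach commands are examples only; consult the Portworx maintenance documentation for the exact procedure.

  # Stop the workload so the volume is no longer in use (all names are placeholders)
  kubectl scale deployment <app-deployment> -n <namespace> --replicas=0

  # Attach the volume to a maintenance host, for example:
  pxctl host attach <volume-name>

  # Run a filesystem check against the attached pxd device (device path is illustrative)
  fsck.ext4 -y /dev/pxd/pxd<volume-id>

  # Detach the volume and restart the workload
  pxctl host detach <volume-name>
  kubectl scale deployment <app-deployment> -n <namespace> --replicas=<original-count>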
PWX-47360

Portworx may report inaccurate trimmable space values and repeatedly run the fstrim process even when no space is available to discard. As a result, fstrim can show a constant baseline (around 17 GiB) as reclaimable space even when the volume is empty. This behavior is expected for dm-thin–provisioned volumes.

User Impact:
  • The reported trimmable space may appear higher than the actual reclaimable space.
  • fstrim may run multiple times and show zero bytes trimmed.
  • Administrators may misinterpret repeated fstrim runs as a failure, even though it is normal behavior.

Workaround: There is no workaround.

Components: Storage
Affected Versions: 3.5.0

Minor
PWX-47921

After deleting all the pools in a cluster, the node appears storageless in pxctl status, but the driveset still contains a metadata drive. Because the current driveset is not empty, attempting storageless to storage conversion using the Provision Storage Node (PSN) label fails.

User Impact: You cannot convert a storageless node back to a storage node without clearing the leftover metadata drive; pool re-creation via PSN label is blocked.


Workaround: Decommission the node to clear the driveset or metadata, then add the PSN label (or reprovision) to convert the storageless node back to a storage node.

Components: Storage
Affected Versions: 3.5.0

Minor
PWX-48516

KVDB TLS migration fails when the StorageCluster (STC) name does not match the cluster name preserved in the portworx.io/cluster-id annotation, which is set when migrating a cluster from a DaemonSet-based Portworx deployment to an Operator-managed setup. Portworx cannot find the correct STC, and new KVDB nodes cannot take over.

User Impact: KVDB failover and KVDB node add/remove operations fail.


Workaround: Ensure that the cluster name preserved in the portworx.io/cluster-id annotation and the STC name used for TLS migration align (for example, match the STC name to the portworx.io/cluster-id value or use a single STC with the correct portworx.io/cluster-id).

Components: Storage
Affected Versions: 3.5.0

Minor
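To check whether the names align before attempting the migration, you can compare the StorageCluster name with the preserved annotation; the namespace and StorageCluster name below are placeholders.

  # List StorageCluster resources and their names
  kubectl get storagecluster -n <namespace>

  # Show the cluster ID preserved in the annotation
  kubectl get storagecluster <stc-name> -n <namespace> \
    -o jsonpath='{.metadata.annotations.portworx\.io/cluster-id}'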
PWX-48538

After deleting all the pools in a cluster, the node appears storageless in pxctl status, but the driveset still contains a metadata drive. Because the current driveset is not empty, attempting storageless to storage conversion using the Provision Storage Node (PSN) label fails.

User Impact: You cannot convert a storageless node back to a storage node without clearing the leftover metadata drive; pool re-creation via PSN label is blocked.


Workaround: Decommission the node to clear the driveset or metadata, then add the PSN label (or reprovision) to convert the storageless node back to a storage node.

Components: Storage
Affected Versions: 3.5.0

Minor
PWX-49016

If a FlashArray Direct Access (FADA) volume is mounted at /var/lib/osd or /var/lib/kubelet, Portworx may incorrectly unmount it during startup. This occurs when FA device cleanup logic matches the device path and treats it as stale. After an OpenShift and Portworx upgrade, this behavior causes Portworx to repeatedly terminate on all nodes because /var/lib/osd was unintentionally unmounted, leading to missing nsmounts and PX failing to start.

User Impact: Portworx fails to start and enters a restart loop. Nodes remain in STATUS_ERROR, cluster operations halt, and workloads such as VMs cannot start. The issue blocks upgrades, cluster initialization, and general PX functionality.


Workaround: Contact Portworx Support.

Components: Volume Management
Affected Versions: 3.5.0 or earlier

Minor
PWX-49219

After upgrading to Portworx Enterprise 3.5.0, some Grafana pool-level panels in the Portworx Performance dashboard may stop displaying data. Portworx introduced new node-level metrics and updated naming, which exposed a defect in the Prometheus metrics registration logic. The internal library registers metrics using only the metric key name, without considering subsystem names. Therefore, when multiple subsystems define metrics with the same key (for example, write_latency_seconds), the first metric registered overwrites the others. As a result, some pool-level metrics are not exported, and related Grafana panels show no data.

User Impact: Pool-focused Grafana panels (for example, pool write latency, write throughput, flush latency, flushed bytes) may show no data, even though the cluster is healthy. Only pool-level visualization is affected; Portworx I/O operations continue to function normally.

Workaround: Use one of the following methods to restore pool-level Grafana panels until a permanent fix is available:

  • Update Grafana panel queries: Edit each affected panel in Grafana and update PromQL queries that reference old pool metric names.
    Replace every metric of the form px_pool_stats_<metric> with px_pool_stats_pool_<metric>. For example, replace px_pool_stats_write_latency_seconds with px_pool_stats_pool_write_latency_seconds.
    Apply the same changes for throughput, flush latency, and flushed-bytes metrics. Save the panel and repeat the same for all pool-level panels.

  • Update dashboards via ConfigMap: If you manage Grafana dashboards through a ConfigMap:

    1. Export the ConfigMap:

      kubectl get configmap grafana-dashboards \
      -n <namespace> -o yaml > dashboards.yaml
    2. Update all occurrences of old metric names to the new naming pattern, px_pool_stats_pool_<metric>.

    3. Apply the updated ConfigMap:

      kubectl apply -f dashboards.yaml -n <namespace>
    4. Restart Grafana if dashboards do not automatically reload:

      kubectl rollout restart deploy grafana -n <namespace>

Components: Volume Management
Affected Versions: 3.5.0

Minor