Portworx Enterprise Release Notes
3.5.1
January 06, 2026
To install or upgrade Portworx Enterprise to version 3.5.1, ensure that your cluster meets all system requirements and is running Operator 25.5.0 or later, Stork 25.6.0 or later, and one of the supported kernels.
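Before upgrading, you can confirm the versions of the components listed above. The following is a minimal sketch; it assumes an Operator-based install in the portworx namespace with the default deployment names (portworx-operator and stork), which may differ in your environment.
```shell
# Check the StorageCluster (Portworx version), Operator image, and Stork image.
# Namespace and deployment names are assumptions based on a default install.
kubectl -n portworx get storagecluster
kubectl -n portworx get deployment portworx-operator \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
kubectl -n portworx get deployment stork \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
# Check the node kernel version against the supported kernel list.
uname -r
```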
Improvements
| Improvement Number | Improvement Description | Component |
|---|---|---|
| PWX-37143 | Portworx now uses Sentinel mounts to optimize Sharedv4 (NFS) volume handling when multiple pods on the same node access the same volume. Instead of creating separate NFS mounts for each pod, Portworx creates a single NFS mount per volume per node and uses lightweight bind mounts for additional pods. This enhancement improves performance, reduces mount overhead, speeds up pod restarts, and simplifies unmounting. For more information, see Configure sentinel mount cleanup interval and timeout. | Shared Volumes (NFS) |
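To observe the PWX-37143 behavior on a node, you can compare NFS mounts against bind mounts for a sharedv4 volume. This is an illustrative sketch only; <VOLUME_ID> is a placeholder, and the exact mount layout depends on your kubelet and Portworx configuration.
```shell
# With sentinel mounts, a node hosting several pods that use the same sharedv4 (NFS)
# volume should show a single NFS mount for the volume, plus lightweight bind mounts
# for the individual pods. <VOLUME_ID> is a placeholder for the Portworx volume ID.
findmnt -t nfs,nfs4 | grep <VOLUME_ID>    # expect one NFS mount per volume per node
findmnt | grep <VOLUME_ID>                # NFS mount plus per-pod bind mounts
```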
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-31343/PWX-47952 | After upgrading the Anthos version in your Portworx cluster, volumes in the pods switched to read-only due to I/O errors and remained in that state. This caused pods to crash (for example, with a CrashLoopBackOff error), and the volumes were not remounted as read-write because Portworx did not consistently detect kernel-initiated read-only remounts in certain edge cases. As a result, automatic recovery, such as pod restarts, did not occur. User Impact: Affected pods could not write to their data and kept restarting until manual intervention. Resolution: Portworx now correctly detects read-only volumes and automatically restarts the affected pods. Components: Storage Affected Versions: 3.5.0 | Minor |
| PWX-41941 | During node decommission, volume replica removal occurs asynchronously and can take time to complete. Portworx marks the node as decommissioned and waits for all replica removals to finish before removing the node from the cluster. In some cases, the Portworx cluster coordinator might miss the replica-removal completion events, causing the node to remain stuck in a decommissioned state. User Impact: A node could appear decommissioned but not fully removed from the cluster, requiring users to retry the decommission operation manually. Resolution: The Portworx cluster coordinator now accurately receives all replica-removal completion events and ensures the final replica is fully removed before deleting the node specification from the cluster. This ensures the decommission process completes successfully without manual intervention and nodes no longer get stuck in a decommissioned state. Components: Storage Affected Versions: 3.5.0 or earlier | Minor |
| PWX-48149 | When taking a cloud snapshot in cases where a metadata chunk was already stored in the cloud, the block-diff collector did not correctly handle a rare boundary condition. During diff calculation, a page containing zero records caused the collector to generate an invalid next offset value. This resulted in an incomplete snapshot. User Impact: CloudSnap backups could become incomplete, and subsequent backups on the affected volume could cause a px-storage crash. Resolution: Portworx now correctly handles pages with zero records during diff calculation and ensures that the next offset is computed safely. CloudSnap diff logic no longer returns invalid offsets, preventing incomplete snapshots and eliminating the crash scenario. Components: Cloudsnaps Affected Versions: 3.5.0 | Minor |
| PWX-48538 | After deleting all the pools in a cluster, the node appeared storageless in pxctl status, but the driveset still contained a metadata drive. Because the current driveset was not empty, attempting storageless-to-storage conversion using the Provision Storage Node (PSN) label failed. User Impact: You cannot convert a storageless node back to a storage node without clearing the leftover metadata drive; pool re-creation via the PSN label is blocked. Resolution: Portworx now correctly handles the cleanup of leftover metadata drives when all pools are deleted. If a driveset contains only a metadata device with no remaining storage pools, Portworx automatically clears the stale metadata entry, allowing storageless-to-storage conversion through the PSN label to proceed successfully. Components: Storage Affected Versions: 3.5.0 | Minor |
| PWX-48516/PWX-48676 | When migrating a cluster from a DaemonSet-based Portworx deployment to an Operator-managed setup, using the portworx.io/cluster-id annotation on the StorageCluster resource allows you to preserve the original cluster name. However, during KVDB TLS migration, Portworx failed to locate the correct StorageCluster if the cluster name differed from the name specified in the portworx.io/cluster-id annotation. User Impact: Clusters with mismatched STC names and portworx.io/cluster-id annotations could not perform KVDB failovers or add new KVDB nodes. Resolution: Portworx now checks all available StorageCluster resources and identifies the correct one using the portworx.io/cluster-id annotation (see the example after this table). This ensures that KVDB TLS migration and failover operations work correctly, even when the STC name differs from the cluster name preserved during migration. Components: KVDB Affected Versions: 3.5.0 | Minor |
| PWX-48719 | To support OpenShift version 4.20 or later, Portworx introduced /var/lib/osd/pxns as a common directory used to export volumes from a node. Although access to actual volume data remained restricted, exporting this directory in a broad read-only mode was flagged as a security concern. User Impact: Security scans could report warnings about the export of the /var/lib/osd/pxns directory, even though users could not access data they were not authorized to see. Resolution: Portworx now exports the directory using only the combined access rules of the underlying volumes, ensuring that exports are limited to exactly what each volume allows and addressing the security concern. Components: Shared Volumes Affected Versions: 3.5.0 or earlier | Minor |
| PWX-48765 | On clusters using NBDD, Portworx experienced a segmentation fault in the px-storage process when a GetDiffExtents request was triggered while all storage pools on the node were down. In this state, internal data structures were not fully initialized, leading to a null pointer dereference. User Impact: When the segmentation fault occurred, the px-storage process crashed on the affected node, and the local storage pools could enter an InitErr or Down state. This could temporarily disrupt I/O operations for volumes on that node until Portworx restarted and the pools recovered. Resolution: The diff-handling logic has been updated to detect when the local data store is not initialized, for example, when all pools are down, and to skip initiating diff extent requests in that state. This prevents null pointer dereferences, avoids segmentation faults, and ensures that nodes handle pool-down conditions gracefully when NBDD is enabled. Components: Storage Affected Versions: 3.5.0 | Minor |
| PWX-48794 | After upgrading to AWS SDK v2 in Portworx 3.4.1, stricter data integrity validation was enabled by default for S3 operations. Some object storage systems, such as older versions of MinIO or other S3-compatible backends, may not fully support these checks. As a result, CloudSnap uploads and S3 credential creation failed with errors. User Impact: Customers using incompatible S3-compatible object stores experienced CloudSnap backup and credential creation failures after upgrading to Portworx 3.4 or later. Resolution: Portworx has modified the AWS SDK v2 configuration to include checksum headers in S3 requests only when required. This restores compatibility with older MinIO and other S3-compatible backends while maintaining normal operation with AWS S3 and other supported object stores. Components: Cloudsnaps Affected Versions: 3.5.0 | Minor |
| PWX-48829 | When a node hosting multiple virtual machines is abruptly powered off, one of the affected virtual machines could become stuck in recovery mode after being restarted on another node, while other virtual machines recovered successfully. User Impact: One or more virtual machines could remain unreachable after node power loss, requiring manual intervention to recover the affected VM. Resolution: Portworx now ensures consistent recovery handling for virtual machines during abrupt node shutdowns, allowing all affected VMs to recover and start successfully on new nodes without getting stuck in recovery mode. Components: Storage Affected Versions: 3.5.0 or earlier | Minor |
| PWX-48835 | NBDD reported a much larger discard_bytes_total value than expected during snapshot delete and overwrite operations. User Impact: NBDD discarded the entire snapshot range instead of only the intended blocks, leading to misleading discard metrics and unnecessary data discard during snapshot workflows. Resolution: NBDD now performs discard operations only on the correct block ranges, and discard_bytes_total accurately reflects the amount of data actually discarded. Components: Snapshots Affected Versions: 3.5.0 | Minor |
| PWX-48967 | Portworx cluster nodes failed to start when shared raw block VMs were running and their underlying PVCs were attached or exported on multiple nodes. If Portworx went down on several nodes and a node that was hosting these VM PVC exports was auto-decommissioned, volumes could still reference that removed node. User Impact: Portworx repeatedly failed to come up on other nodes because they continued to request the storage spec for the decommissioned node, leaving the cluster in a restart loop and storage unavailable. Resolution: Portworx now prevents cluster startup failures by clearing invalid exported-node references when a node is auto-decommissioned. If a shared raw block volume was still marked as exported on a node that no longer exists, Portworx automatically removes that reference so all remaining nodes can retrieve a valid storage spec and start successfully. Components: Storage Affected Versions: 3.3.0 or later | Minor |
| PWX-48982 | During node startup, Portworx attempted to unmount device-mapper paths for FlashArray Direct Access (FADA) devices if they were mounted under system directories such as /var/lib/osd or /var/lib/kubelet. Because Portworx did not verify whether these devices actually belonged to Portworx-managed FADA volumes, it could unintentionally unmount non-PX FADA devices that were legitimately mounted by the host or other components. User Impact: Nodes could lose valid host-mounted FADA devices during startup, disrupting workloads and preventing dependent components from functioning correctly. Resolution: Portworx now unmounts FADA devices at startup only when a valid Portworx volume is associated with the device, ensuring that non-PX FADA devices remain mounted and unaffected during initialization. Components: Volume Management Affected Versions: 3.5.0 or earlier | Minor |
| PWX-49015/PWX-49036 | When increasing the replication factor of a volume, the operation stalled if the volume had snapshots whose replica sets did not match the parent volume. During the High Availability (HA) add workflow, where Portworx adds a new replica and synchronizes the volume’s data to the new node, Portworx incorrectly attempted to compare data against a snapshot that was not an internal snapshot and did not share the same replicas as the parent volume. This caused the data-comparison phase of the workflow to fail, preventing the HA-add operation from continuing. This is a day 0 issue. User Impact: The volume replication operation could remain stuck indefinitely, preventing users from increasing the HA level of affected volumes. Resolution: Portworx now correctly handles snapshots with mismatched replica sets during the HA-add workflow. The workflow no longer gets blocked when older snapshots have different replica layouts, and replication increases complete successfully even when snapshots do not match the parent volume’s replica configuration. Components: Storage Affected Versions: 3.5.0 or earlier | Minor |
| PWX-49219 | After upgrading to Portworx Enterprise 3.5.0, some Grafana pool-level panels in the Portworx Performance dashboard stopped displaying data. Portworx introduced new node-level metrics and updated naming, which exposed a defect in the Prometheus metrics registration logic. The internal library registers metrics using only the metric key name, without considering subsystem names. Therefore, when multiple subsystems define metrics with the same key (for example, write_latency_seconds), the first metric registered overwrites the others. User Impact: Pool-focused Grafana panels (for example, pool write latency, write throughput, flush latency, flushed bytes) showed no data, even though the cluster was healthy. Only pool-level visualization was affected; Portworx I/O operations continue to function normally. Resolution: Portworx now correctly registers pool-level metrics by including both the subsystem and metric name during registration, preventing name conflicts between node-level and pool-level metrics. This ensures that all pool metrics are exported to Prometheus and Grafana dashboards display pool-level data correctly after upgrade. Components: Volume Management Affected Versions: 3.5.0 | Minor |
| PWX-49340 | Pods using shared service (Sharedv4) volumes could remain stuck in the ContainerCreating state when the underlying file system contained errors that prevented the volume from being mounted. User Impact: Affected pods failed to start because the volume could not be mounted, even though the cluster and other workloads continued to run normally. Resolution: Portworx now prevents workloads from starting on volumes with detected file system errors until recovery is completed, ensuring that applications do not access corrupted file systems and improving stability for shared volume mounts. Components: Shared Volumes (NFS) Affected Versions: 3.5.0 | Minor |
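Related to PWX-48516/PWX-48676 above, the sketch below shows one way to set and verify the portworx.io/cluster-id annotation on the StorageCluster after a DaemonSet-to-Operator migration. The namespace, StorageCluster name, and cluster name are placeholders.
```shell
# Preserve the original cluster name on the StorageCluster (names are placeholders).
kubectl -n portworx annotate storagecluster px-cluster \
  portworx.io/cluster-id=<original-cluster-name> --overwrite

# Verify the annotation that Portworx now uses to locate the correct StorageCluster
# during KVDB TLS migration and failover.
kubectl -n portworx get storagecluster px-cluster \
  -o jsonpath='{.metadata.annotations.portworx\.io/cluster-id}{"\n"}'
```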
Known issues (Errata)
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-48939 | Scheduling a SharedV4 application can fail if the required NFS packages are not installed on a node. During Portworx startup, automatic NFS installation may fail (for example, due to transient network issues). Portworx then enters a cooldown state and skips subsequent NFS installation attempts, even though the NFS service is not present on the node. User Impact: Pods that use SharedV4 volumes fail to mount with errors indicating that NFS installation was skipped. As a result, SharedV4 workloads cannot be scheduled or started on affected nodes. Workaround: Restart Portworx on the affected node to retry NFS package installation and start the NFS service successfully (see the example after this table). Components: Shared Volumes (NFS) | Minor |
| PWX-49255 | KubeVirt virtual machines that use SharedV4 service volumes as hot-plug disks can pause or fail during Portworx restarts or coordinator node shutdown events (including node restarts). This occurs only in rare cases where a service volume failover attempt fails. In such scenarios, the hot-plug pod restarts, but because the volume is bind-mounted directly into the User Impact: The affected virtual machine enters a paused or failed state and becomes unavailable. Workaround: Restart the virtual machine. Components: Shared V4 | Minor |
| PWX-49481 | During sharedv4 volume failover, Portworx may fail to clean up a stale sentinel mount. When the sentinel cleanup timeout is set to User Impact: Sharedv4 volumes may remain attached after application teardown or node failover. Tests and workloads that expect the volume to detach can fail, potentially blocking application rescheduling or cleanup workflows. Workaround: Set Components: Shared Volumes (NFS) | Minor |
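For the PWX-48939 workaround above, one way to restart Portworx on a single node in an Operator-managed cluster is to delete the Portworx pod on that node and let it be recreated. This is a sketch that assumes the default portworx namespace and the name=portworx pod label.
```shell
# Restart Portworx on one node so that NFS package installation is retried.
NODE=<affected-node-name>
kubectl -n portworx delete pod -l name=portworx --field-selector spec.nodeName="$NODE"

# Confirm the pod is recreated on the same node and becomes Ready.
kubectl -n portworx get pods -l name=portworx -o wide | grep "$NODE"
```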
3.5.0
November 19, 2025
To install or upgrade Portworx Enterprise to version 3.5.0, ensure that your cluster meets all system requirements and is running Operator 25.5.0 or later, Stork 25.4.0 or later, and one of the supported kernels.
New Features
- Support for Cloud Snapshots via CSI
Portworx now supports cloud-based snapshots through the Kubernetes Container Storage Interface (CSI), enabling users to create, delete, and restore cloud snapshots stored in S3-compatible object storage using standard CSI APIs.
For more information, see Manage Cloud snapshots of CSI-enabled volumes.
- Support for automatic provisioning of storage nodes when autoscaling with cluster autoscaler
In an environment where the cluster autoscaler is enabled, you can link the autoscaler to one or more nodePools or machineSets. You can preconfigure the node template by adding the portworx.io/provision-storage-node="true" label. This ensures that each node created from that template starts as a storage node automatically (see the example after this list).
For more information, see Provisioning Storage Nodes and Configure Kubernetes Cluster Autoscaler to Autoscale Storage Nodes.
Note: The parameters maxStorageNodes, maxStorageNodesPerZone, and maxStorageNodesPerZonePerNodeGroup are deprecated and replaced with the initialStorageNodes parameter. Previously, scaling a Portworx cluster required incrementing the maxStorageNodesPerZone parameter to add more storage nodes. Instead, you can now add new nodes and label them with portworx.io/provision-storage-node="true" to scale the cluster. See: Deprecated Features.
- Enhance CloudSnap Object Size for Improved Backup and Restore Performance
Portworx now supports increasing the CloudSnap backup object size from the default 10 MB to 100 MB, thereby accelerating backup and restore operations for large volumes.
In asynchronous disaster recovery scenarios, using the larger 100 MB object size results in less data to track and transfer. This reduces CPU and metadata overhead, and minimizes round trips, especially in high-latency environments.
For more information, see Cloud Snapshots.
- Non-Blocking Device Delete (NBDD) for Snapshot and Volume Deletes
The Non-Blocking Device Delete (NBDD) feature improves the performance and reliability of large volume and snapshot deletions in Portworx Enterprise clusters running on PX-StoreV2. NBDD performs background block discards at a configurable rate before deletion, reducing I/O pauses and helping reclaim storage, especially in SAN or FlashArray (FA) environments.
For more information, see Non-Blocking Device Delete.
- PX-StoreV2 set as default datastore
Portworx now uses PX-StoreV2 as the default datastore for all new deployments on supported platforms.
For more information, see PX-StoreV2.
- Support for Co-location on Portworx RWX block volumes
Portworx now supports volume co-location for KubeVirt virtual machines (VMs) on Portworx RWX block volumes. This feature enables Portworx to place the replicas of all VM volumes on the same set of nodes, improving I/O performance, reducing network traffic, and avoiding unnecessary remote volume access.
For more information, see Manage Shared Block Device (RWX Block) for KubeVirt VMs.
- Portworx Shared RWX Block volumes for KubeVirt VMs on SUSE Virtualization
Portworx now supports ReadWriteMany (RWX) raw block volumes for KubeVirt virtual machines (VMs) on SUSE Virtualization, enabling features such as live migration, high availability, and persistent VM storage across SUSE Virtualization nodes.
For more information, see Manage Portworx RWX Block Volumes on SUSE Virtualization for KubeVirt VMs.
- Portworx Installation Support on SUSE Virtualization with FlashArray
Portworx now supports installation on SUSE Virtualization environments with Pure Storage FlashArray as the backend storage provider. This integration enables administrators and platform engineers to deploy a highly available, scalable, and production-grade storage layer for virtualized and cloud-native workloads running on SUSE platforms.
For more information, see Installation on SUSE Virtualization.
- Enhanced FUSE Driver with Tunable Runtime Options
The Portworx FUSE driver now includes enhanced support for high-performance Portworx workloads through new tunable runtime options. These options allow you to fine-tune FUSE behavior based on workload characteristics and system resources.
For more information, see the Tune Performance guide.
- Increased Volume Attachment Limit per Node
Portworx now supports up to 1,024 volume attachments per node. The volume attachment limit for FADA and SharedV4 volumes remains at 256.
For more information, see Features and configurations supported by different license types.
Note: You must upgrade the FUSE driver in your Portworx cluster to version 3.5.0 or later to support the new limits.
- Automatic KVDB Node Rebalancing Across Failure Domains
Portworx now supports automatic rebalancing of internal KVDB nodes across failure domains after a failover. The rebalance check runs every 15 minutes and ensures that the KVDB nodes do not remain concentrated in a single failure domain after failovers or cluster topology changes.
- Support for KubeVirt VMs with IPv6
Portworx now supports KubeVirt VMs with IPv6 networking.
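As referenced in the cluster autoscaler feature above, storage-node provisioning is driven by the portworx.io/provision-storage-node="true" label. The sketch below applies the label to a single node for illustration; in autoscaled environments you would set it on the nodePool or machineSet template instead. The node name is a placeholder.
```shell
# Label a node so that Portworx provisions it as a storage node.
kubectl label node <node-name> portworx.io/provision-storage-node=true

# List the nodes that carry the label.
kubectl get nodes -l portworx.io/provision-storage-node=true
```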
Early Access Features
- Support for storage operations in OpenShift hosted control plane clusters
Portworx now supports storage operations for both management and hosted clusters created using OpenShift hosted control plane.
For more information, see Deploy Openshift hosted clusters with Portworx.
Deprecated Features
- The parameters maxStorageNodes, maxStorageNodesPerZone, and maxStorageNodesPerZonePerNodeGroup are deprecated and replaced with the initialStorageNodes parameter and the portworx.io/provision-storage-node="true" label (see the sketch after this list).
If you are upgrading the Portworx cluster from version 3.4.x or earlier, any existing values for the deprecated parameters (maxStorageNodes, maxStorageNodesPerZone, and maxStorageNodesPerZonePerNodeGroup) will remain set in your configuration, but they will have no operational effect. These values persist only for backward compatibility. After upgrading, you cannot modify these parameters to new values; you can only unset them. Attempts to set or modify them (on both new and existing clusters) will result in an error.
For more information, see Provisioning Storage Nodes.
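A minimal, hypothetical sketch of setting the replacement initialStorageNodes parameter described above. The field's placement under spec.cloudStorage is an assumption (shown where the deprecated fields used to live), and the namespace and StorageCluster name are placeholders; see Provisioning Storage Nodes for the authoritative schema.
```shell
# Hypothetical example only: the placement of initialStorageNodes is assumed, not confirmed.
kubectl -n portworx patch storagecluster px-cluster --type merge \
  -p '{"spec":{"cloudStorage":{"initialStorageNodes":3}}}'
```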
End of support notifications
- Photon3 is no longer supported, as the distribution has reached end of life (EOL).
- This is the final release that supports all versions of Fedora.
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-21104 | During crypto device attach operations, the Cgo library hung while making a system call. Because this operation held a lock, all further encrypted volume actions, such as attach, detach, mount, or unmount, were blocked on the node. User Impact: Pods using encrypted volumes became stuck in ContainerCreating or Terminating states, with recovery only possible by restarting the Portworx service on the node. Resolution: Portworx now skips the Cgo library and directly uses the native cryptsetup binaries available on the host system. This approach prevents the Cgo library from hanging while maintaining support for encrypted volume operations. Components: Volume Management Affected Versions: 3.4.0 or earlier | Minor |
| PWX-35186 | Portworx validated BM/VM license compatibility during startup only. If a cluster with both bare-metal (BM) and virtual machine (VM) nodes switched to a VM-only license while running, no alerts were generated. This led to clusters operating under an incompatible license until a restart or upgrade, at which point BM nodes failed to start. User Impact: Clusters running a mixed configuration (BM + VM) experienced unexpected failures during restart or upgrade due to undetected license mismatches. Resolution: Portworx now continuously monitors for license changes and validates platform compatibility in real time. When a VM-only license is applied to a cluster with BM nodes, clear alerts are raised (for example, LicenseCheckFailed: The current license does not enable the required platform). The alerts automatically clear after a valid license with BM/VM support is applied. Components: IX-Licensing & Metering Affected Versions: 3.4.0 or earlier | Minor |
| PWX-36391 | The pxctl volume inspect -j and pxctl volume update -h commands did not display the sharedv4_failover_strategy field unless it was explicitly set by the user. User Impact: Users could not determine the default failover strategy for Sharedv4 or Sharedv4 Service volumes unless it had been manually updated. Resolution: Portworx now displays the default sharedv4_failover_strategy for Sharedv4 and Sharedv4 Service volumes in the pxctl volume inspect output. By default, the strategy is AGGRESSIVE for Sharedv4 Service volumes and NORMAL for Sharedv4 volumes. This field does not appear for non-Sharedv4 volumes. Components: CLI & API Affected Versions: All versions | Minor |
| PWX-37801 | When configuring NFS attribute caching options on Portworx Sharedv4 volumes, custom parameters such as acdirmin=30 and acregmin=3 were overridden by the default actimeo=60 value, even when actimeo was not explicitly specified. User Impact: Users attempting to fine-tune NFS caching behavior for Sharedv4 volumes could not apply specific values for individual parameters. As a result, attribute cache refresh intervals did not reflect the desired configuration. Resolution: Portworx now correctly honors user-specified NFS caching parameters such as acregmin, acregmax, acdirmin, and acdirmax when actimeo is not explicitly set. If actimeo is provided, it applies globally to all caching parameters unless specific options are defined after it in the mount configuration. Components: Volume Management Affected Versions: All versions | Minor |
| PWX-38162 | When a node stopped serving as a KVDB node, Portworx automatically deleted the KVDB disk during restart. In rare cases where a data disk was incorrectly marked as a KVDB disk, such as during manual configuration changes or troubleshooting, this behavior could result in accidental deletion of the data disk. User Impact: This could lead to unintended data loss if a non-KVDB disk was misidentified as a KVDB disk and removed during the node restart process. Resolution: Portworx now includes additional validation checks based on disk labels and size before performing any KVDB disk cleanup. These safeguards ensure that only valid KVDB disks are deleted, preventing accidental removal of data disks incorrectly tagged as KVDB. Components: KVDB Affected Versions: All versions | Minor |
| PWX-42269 | When you run the pxctl clouddrive delete command to delete a DriveSet in FlashArray CloudDrive (FACD) setups, the system attempts to delete DriveSets even if the detach operation fails, and displays inconsistent error messages. User Impact: Users could encounter misleading or inconsistent error messages when attempting to delete a DriveSet that was still attached to a backend array, causing confusion about whether the deletion succeeded, failed, or was attempted safely. Resolution: Portworx now includes additional validation checks based on disk labels and size before performing any KVDB disk cleanup. These safeguards ensure that only valid KVDB disks are deleted, preventing accidental removal of data disks incorrectly tagged as KVDB. Components: Drive & Pool Management Affected Versions: 3.4.0 or earlier | Minor |
| PWX-45884 | Updating Quality of Service (QoS) parameters such as max_iops and max_bandwidth for FlashArray DirectAccess (FADA) volumes was not supported through the Portworx CLI. Users attempting to run commands like pxctl volume update --max_iops received the error: pxctl update not supported for Pure DirectAccess volumes. Users had to manually adjust QoS settings through the FlashArray UI, blocking automation and dynamic performance tuning workflows. User Impact: Customers relying on automated QoS management could not dynamically tune IOPS or bandwidth limits for FADA volumes, impacting efficiency and requiring manual interventions. Resolution: Portworx now supports dynamic updates to QoS parameters for FADA volumes using the pxctl volume update --max_iops <IOPSLimit> or pxctl volume update --max_bandwidth <BandwidthLimit> CLI commands. Users can now modify IOPS and bandwidth limits on existing FADA volumes without accessing the FlashArray UI. Updates are reflected in both the pxctl volume inspect output and the FlashArray backend. Components: Volume Management Affected Versions: 3.4.0 or earlier | Minor |
| PWX-46424 | The Portworx CLI (pxctl volume update) allowed users to specify either --max_iops or --max_bandwidth, but not both simultaneously. This limited flexibility for users who wanted to fine-tune both IOPS and bandwidth constraints on a volume concurrently. User Impact: Users could not set both IOPS and bandwidth limits simultaneously, which restricted performance control for workloads requiring simultaneous throttling of both parameters. Attempting to specify both would result in an error or one of the values being ignored. Resolution: Portworx now supports specifying both --max_iops and --max_bandwidth together in the pxctl volume update command (see the example after this table). When both are set, the underlying cgroup mechanism determines how the limits are enforced. Users are notified that applying both limits simultaneously is non-deterministic, meaning the effective throttling behavior may depend on system-level cgroup prioritization. Components: Volume Management Affected Versions: 3.4.0 or earlier | Minor |
| PWX-46697 | In some cases, when a node stops receiving watch updates directly from KVDB, Portworx begins pulling updates from other nodes that have newer data. If this condition persists for a long time, the node’s watch version can fall behind, resulting in the error: Storage watch error, error: Kvdb watch revision compacted. When this occurs, affected Portworx nodes may restart unexpectedly. User Impact: Unexpected node restarts can temporarily disrupt storage services and impact running applications. Resolution: Portworx now automatically restarts only the affected KVDB watch instead of restarting the entire node process. Components: KVDB Affected Versions: 3.4.0 or earlier | Minor |
| PWX-46800 | Sharedv4 volumes failed to mount on Portworx clusters when the rpcbind service was disabled or stopped. As a result, pods using Sharedv4 volumes remained in a waiting state. User Impact: When rpcbind was not active, the NFS service dependencies required for Sharedv4 volumes to function were unavailable. This caused persistent mount failures and pod scheduling delays until rpcbind was started manually. Resolution: Portworx now automatically starts the rpcbind service during NFS operations, similar to how it manages the nfs-server service. This ensures that Sharedv4 volume mounts succeed even if rpcbind was previously disabled, improving reliability and reducing the need for manual recovery steps. Components: Control Plane Affected Versions: 3.4.0 or earlier | Minor |
| PWX-47733 | AutoFSTrim did not run on volumes mounted with the discard option, even when AutoFSTrim was enabled at both the cluster and volume level. As a result, deleted data blocks were not reclaimed automatically, causing higher backend storage usage. User Impact: Users with volumes configured to use discard saw space consumption remain high after deleting data, because automatic trimming was skipped. Resolution: Portworx now supports AutoFSTrim for all volumes with AutoFSTrim enabled, regardless of whether the mount option is discard or nodiscard. The trimming operation runs automatically and reclaims unused storage space as expected. Components: Autofstrim, Dmthin, Storage Affected Versions: 3.4.0 or earlier | Minor |
| PWX-47751 | If the nfsd filesystem was not properly mounted inside the Portworx runtime (runc) container, exportfs commands failed with the error Function not implemented. This caused Sharedv4 volume exports to fail and affected NFS service initialization. User Impact: When nfsd was not mounted, Portworx could not correctly export NFS volumes, leading to failed or stalled Sharedv4 mounts. This issue particularly impacted environments with NFS misconfigurations or during migrations where the NFS service did not start correctly. Resolution: Portworx now automatically verifies that the nfsd filesystem is mounted within the runc container during NFS initialization. If it is missing, Portworx remounts it and reinitializes the NFS service. This prevents errors and ensures that Sharedv4 volumes mount reliably even when NFS is initially disabled or unhealthy. Components: Shared Volumes Affected Versions: 3.4.0 or earlier | Minor |
| PWX-47763 | Portworx Enterprise periodically ran a cleanup task to remove deleted CloudSnaps, even when all CloudSnap operations were managed by PX-Backup. This caused unnecessary filesystem calls on NFS backends, leading to additional system overhead. User Impact: In environments where PX-Backup handled all CloudSnap deletions, redundant cleanup tasks from Portworx Enterprise could increase filesystem load and reduce performance efficiency. Resolution: A new configuration parameter, cs_cleanup_interval_minutes, is introduced to control the cleanup schedule. When set to 0, Portworx disables the internal CloudSnap cleanup scheduler. When set to a positive value, the cleanup runs periodically based on the specified interval. This change allows PX-Backup to fully manage CloudSnap cleanup operations, preventing redundant cleanup activity and reducing system overhead. Components: Cloudsnaps Affected Versions: 3.4.0 or earlier | Minor |
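The QoS fixes above (PWX-45884 and PWX-46424) expose IOPS and bandwidth limits for FADA volumes through pxctl. A short usage sketch; the volume name and limit placeholders follow the syntax quoted in the fix descriptions.
```shell
# Update QoS limits on an existing FADA volume (placeholders as in the fix descriptions).
pxctl volume update --max_iops <IOPSLimit> <volume-name>
pxctl volume update --max_bandwidth <BandwidthLimit> <volume-name>

# Both limits can now be set together; enforcement of the combined limits is
# non-deterministic and handled by the underlying cgroup mechanism (PWX-46424).
pxctl volume update --max_iops <IOPSLimit> --max_bandwidth <BandwidthLimit> <volume-name>

# Confirm the applied limits.
pxctl volume inspect <volume-name>
```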
Known issues (Errata)
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-31343/PWX-47952 | After upgrading the Anthos version in your Portworx cluster, volumes in the pods may switch to read-only due to I/O errors and remain in that state. This causes pods to crash (for example, with a CrashLoopBackOff error). User Impact: Affected pods cannot write to their data and continue restarting until manual intervention. Workaround:
Components: Storage | Minor |
| PWX-47360 | Portworx may report inaccurate trimmable space values and repeatedly run the
Workaround: There is no workaround. Components: Storage | Minor |
| PWX-47921 | After deleting all the pools in a cluster, the node appears storageless in pxctl status, but the driveset still contains a metadata drive. Because the current driveset is not empty, attempting storageless-to-storage conversion using the Provision Storage Node (PSN) label fails. User Impact: You cannot convert a storageless node back to a storage node without clearing the leftover metadata drive; pool re-creation via PSN label is blocked. Workaround: Decommission the node to clear the driveset or metadata, then add the PSN label (or reprovision) to convert the storageless node back to a storage node. Components: Storage | Minor |
| PWX-48516 | KVDB TLS migration fails when the StorageCluster (STC) name does not match the cluster name preserved in the portworx.io/cluster-id annotation. User Impact: KVDB failover and KVDB node add/remove operations fail. Workaround: Ensure that the cluster name preserved in the portworx.io/cluster-id annotation matches the StorageCluster name. Components: Storage | Minor |
| PWX-48538 | After deleting all the pools in a cluster, the node appears storageless in pxctl status, but the driveset still contains a metadata drive. Because the current driveset is not empty, attempting storageless-to-storage conversion using the Provision Storage Node (PSN) label fails. User Impact: You cannot convert a storageless node back to a storage node without clearing the leftover metadata drive; pool re-creation via PSN label is blocked. Workaround: Decommission the node to clear the driveset or metadata, then add the PSN label (or reprovision) to convert the storageless node back to a storage node. Components: Storage | Minor |
| PWX-49016 | If a FlashArray Direct Access (FADA) volume is mounted at User Impact: Portworx fails to start and enters a restart loop. Nodes remain in Workaround: Contact Portworx Support. Components: Volume Management | Minor |
| PWX-49219 | After upgrading to Portworx Enterprise 3.5.0, some Grafana pool-level panels in the Portworx Performance dashboard may stop displaying data. Portworx introduced new node-level metrics and updated naming, which exposed a defect in the Prometheus metrics registration logic. The internal library registers metrics using only the metric key name, without considering subsystem names. Therefore, when multiple subsystems define metrics with the same key (for example, write_latency_seconds), the first metric registered overwrites the others (see the example after this table). User Impact: Pool-focused Grafana panels (for example, pool write latency, write throughput, flush latency, flushed bytes) show no data, even though the cluster is healthy; only pool-level visualization is affected, and Portworx I/O operations continue to function normally. Components: Volume Management | Minor |
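For PWX-49219 above, one way to check whether pool-level metrics are being exported from a node is to query the Portworx metrics endpoint and look for colliding keys such as write_latency_seconds. The port below is an assumption based on a default install.
```shell
# Query the node-local Portworx metrics endpoint (REST port assumed to be 9001)
# and look for series that share the write_latency_seconds key across subsystems.
curl -s http://localhost:9001/metrics | grep write_latency_seconds
```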