Version: 3.1

Portworx Enterprise Release Notes

3.1.9

August 22, 2025

This version addresses security vulnerabilities.

3.1.8

January 28, 2025


Note

This version addresses security vulnerabilities.

Fixes

Issue Number | Issue Description | Severity
PWX-41332: When running Portworx in debug mode with FlashBlade, certain log entries displayed extraneous information under rare conditions.

User Impact: Unwanted information appeared in the log entries.

Resolution: Portworx has been enhanced to ensure that only relevant information is displayed.

Affected Versions: 3.1.x, 3.0.x, 2.13.x

Component: Volume Management
Major
PWX-41329, PWX-41480: When executing certain commands, extraneous information was displayed in their output.

User Impact: Unwanted information appeared in the output of certain commands.

Resolution: Portworx has been enhanced to ensure that only relevant information is displayed.

Affected Versions: 3.1.x, 3.0.x, 2.13.x

Component: CLI & API
Major

3.1.7

December 3, 2024


Note

This version addresses security vulnerabilities.

3.1.6.1

November 13, 2024


Fixes

Issue Number | Issue Description | Severity
PWX-39990: As part of node statistics collection, Portworx read the timestamp data while its storage component was simultaneously updating it, leading to data conflicts.

User Impact: The Portworx storage component restarted due to an invalid memory access issue.

Resolution: A lock mechanism has been added to manage concurrent reads and writes to the timestamp data, preventing conflicts.

Affected Versions: 3.1.0

Component: Storage
Critical

3.1.6

October 02, 2024


Fixes

Issue Number | Issue Description | Severity
PWX-38930: For PX-StoreV2 deployments with volumes that had a replication factor greater than 1 and were either remotely attached or not accessed through PX-Fast PVCs, if a power loss, kernel panic, or ungraceful node reboot occurred, the data was incorrectly marked as stable due to buffering in the underlying layers, despite being unstable.

User Impact: In these rare situations, this issue could incorrectly mark PVC data as stable even though it was not.

Resolution: Portworx now marks the data as stable only when it actually is, preventing this problem.

Components: PX-StoreV2
Affected Versions: 2.13.x, 3.0.x, 3.1.x
Critical

3.1.5

September 19, 2024


Improvements

Portworx has upgraded or enhanced functionality in the following areas:

Improvement Number | Improvement Description | Component
PWX-38849: For sharedv4 volumes, users can now apply the disable_others=true label to limit the mountpoint and export path permissions to 0770, effectively removing access for other users and enhancing the security of the volumes.
Component: Volume Management
PWX-38791: The FlashArray Cloud Drive volume driveset lock logic has been improved to ensure the driveset remains locked to its original node, preventing other nodes from claiming it if it detaches due to a connection loss to the FlashArray during a reboot:
  • The lock expiry has been extended to 5 minutes to prevent prolonged reboots from causing the driveset to detach.
  • During KVDB drive removal, such as during KVDB failover, a temporary driveset is used to facilitate the removal of the KVDB drive.
Component: Drive & Pool Management
PWX-38714: During the driveset check, if device mapper devices are detected, Portworx cleans them up before mounting FlashArray Cloud Drive volumes. This prevents mounting issues during failover operations on a FlashArray Cloud Drive volume.
Component: Drive & Pool Management
PWX-37642: The logic for the sharedv4 mount options has been improved:
  • Users can override the default options (rw and no_root_squash) with the root_squash or ro option using pxctl.
  • Users cannot modify mount permissions using os.chmod when the root_squash option is specified in /etc/exports.
Component: Sharedv4 Volumes

Fixes

Issue Number | Issue Description | Severity
PWX-36679: Portworx could not perform read or write operations on Sharedv4 volumes if NFSD version 3 was disabled in /etc/nfs.conf.

User Impact: Read or write operations failed on Sharedv4 volumes.

Resolution: Portworx no longer depends on the specific enabled NFSD version and now only checks if the service is running.

Components: Shared Volumes
Affected Versions: 3.1.0
Major
PWX-38888: In some cases, when a FlashArray Direct Access volume failed over between nodes, Portworx version 3.1.4 did not properly clean up the mount path for these volumes.

User Impact: Application pods using FlashArray Direct Access volumes were stuck in the Terminating state.

Resolution: Portworx now properly handles the cleanup of FlashArray Direct Access volume mount points during failover between nodes.

Components: Volume Management
Affected Versions: 3.1.4
Minor

3.1.4

August 15, 2024


Fixes

Issue Number | Issue Description | Severity
PWX-37590: Users running in environments with multipath version 0.8.8 and using FlashArray devices, either as Direct Access volumes or Cloud Drive volumes, may have experienced issues with the multipath device not appearing in time.

User Impact: Users saw Portworx installations or Volume creation operations fail.

Resolution: Portworx is now capable of running on multipath version 0.8.8.

Components: Drive and Pool Management
Affected Versions: 3.1.x, 3.0.x, 2.13.x
Major

3.1.3

July 16, 2024


Improvements

Portworx has upgraded or enhanced functionality in the following areas:

Improvement Number | Improvement Description | Component
PWX-37576: Portworx has significantly reduced the number of vSphere API calls during the booting process and pool expansion.
Component: Drive & Pool Management

Fixes

Issue Number | Issue Description | Severity
PWX-37870: When PX-Security is enabled on a cluster that is also using Vault for storing secrets, the in-tree provisioner (kubernetes.io/portworx-volume) fails to provision a volume.

User Impact: PVCs became stuck in a Pending state with the following error: failed to get token: No Secret Data found for Secret ID.

Resolution: Use the CSI provisioner (pxd.portworx.com) to provision volumes on clusters that have PX-Security enabled; a sketch of such a StorageClass follows this entry.

Components: Volume Management
Affected Versions: 3.0.3, 3.1.2
Major
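The following is a minimal sketch of a StorageClass that uses the CSI provisioner; the class name and the repl parameter are illustrative assumptions, and clusters with PX-Security enabled may additionally require the CSI secret parameters described in the Portworx documentation:

# Sketch: StorageClass using the CSI provisioner instead of the in-tree one
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-csi-example   # placeholder name
provisioner: pxd.portworx.com
parameters:
  repl: "2"              # illustrative replication factor
EOF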
PWX-37799: A KVDB failure sometimes caused Portworx to restart when creating cloud backups.

User Impact: Users saw Portworx restart unexpectedly.

Resolution: Portworx now raises an alert, notifying users of a backup failure instead of unexpectedly restarting.

Components: Cloudsnaps
Affected Versions: 3.1.x, 3.0.x, 2.13.x
Major
PWX-37661: If the credentials provided in px-vsphere-secret were invalid, Portworx failed to create a Kubernetes client, and the process restarted every few seconds, leading to continuous login failures.

User Impact: Users saw a large number of client creation attempts, which may have led to the credentials being blocked or to excessive API calls.

Resolution: If the credentials are invalid, Portworx now waits for the secret to be updated before trying to log in again.

Components: Drive and Pool Management
Affected Versions: 3.1.x, 3.0.x, 2.13.x
Major
PWX-37339: Sharedv4 service failover did not work correctly when a node had a link-local IP from the subnet 169.254.0.0/16. In clusters running OpenShift 4.15 or later, Kubernetes nodes may have a link-local IP from this subnet by default.

User Impact: Users saw disruptions in applications utilizing sharedv4-service volumes when the NFS server node went down.

Resolution: Portworx has been improved to prevent VM outages in such situations.

Components: Sharedv4
Affected Versions: 3.1.0.2
Major

3.1.2.1

July 8, 2024


Fixes

Issue Number | Issue Description | Severity
PWX-37753: Portworx reloaded and reconfigured VMs on every boot, which is a costly activity in vSphere.

User Impact: Users saw a significant number of VM reload and reconfigure activities during Portworx restarts, which sometimes overwhelmed vCenter.

Resolution: Portworx has been optimized to minimize unnecessary reload and reconfigure actions for VMs. Now, these actions are mostly triggered only once during the VM's lifespan.

Component: Drive and Pool Management

Affected Versions: 3.1.x, 3.0.x, 2.13.x
Major
PWX-35217: Portworx maintained two vSphere sessions at all times. These sessions became idle after Portworx restarted, and vSphere would eventually clean them up. vSphere counts idle sessions toward its session limits, which caused an issue if all nodes restarted simultaneously in a large cluster.

User Impact: In large clusters, users encountered the 503 Service Unavailable error if all nodes restarted simultaneously.

Resolution: Portworx now actively terminates sessions after completing activities like boot and pool expansion. Note that in rare situations where Portworx might not close the sessions, users may still see idle sessions. These sessions are cleaned by vSphere based on the timeout settings of the user's environment.

Component: Drive and Pool Management

Affected Versions: 3.1.x, 3.0.x, 2.13.x
Major
PWX-36727: When a user decommissioned a node, Portworx processed the node deletion in the background. For every volume delete or update operation, it checked whether all nodes marked as decommissioned had no references to these volumes, which made node deletion take a long time.

User Impact: The Portworx cluster went down as the KVDB node timed out.

Resolution: The logic for decommissioning nodes has been improved to prevent such situations.

Component: KVDB

Affected Versions: 3.1.x, 3.0.x, 2.13.x
Minor

3.1.2

June 19, 2024



Improvements

Portworx has upgraded or enhanced functionality in the following areas:

Improvement Number | Improvement Description | Component
PWX-33044: Portworx will perform additional live VM migrations to ensure a KubeVirt VM always uses the block device directly by running the VM on the volume coordinator node.
Component: Sharedv4
PWX-23390: Stork will now raise events on a pod or VM object if it fails to schedule them in a hyperconverged fashion.
Component: Stork and DR
PWX-37113: In KubeVirt environments, Portworx no longer triggers RebalanceJobStarted and RebalanceJobFinished alarms every 15 minutes due to the KubeVirt fix-vps job. Alarms are now raised only when the background job is moving replicas.
Component: Storage
PWX-36600: The output of the rebalance HA-update process has been improved to display the state of each action during the process.
Component: Storage
PWX-36854: The output of the pxctl volume inspect command has been improved. The Kind field can now be left empty inside the claimRef, allowing the output to include application pods that are using the volumes.
Component: Storage
PWX-33812: Portworx now supports Azure PremiumV2_LRS and UltraSSD_LRS disk types.
Component: Drive and Pool Management
PWX-36484: A new query parameter ce=azure has been added for Azure users to identify the cloud environment being used. The parameter ensures that the right settings and optimizations are applied based on the cloud environment (see the example after this table).
Component: Install
PWX-36714: The timeout for switching licenses from floating to Portworx Enterprise has been increased, avoiding timeout failures.
Component: Licensing
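As an illustration of how the ce=azure parameter might be used, the sketch below appends it to a spec generator URL; the base URL path and the other query parameters are placeholders, so adjust them to the spec generator endpoint you normally use:

# Sketch: fetch a Portworx spec with the Azure cloud-environment hint (other parameters are placeholders)
curl -fsSL "https://install.portworx.com/3.1?kbver=<kubernetes-version>&ce=azure" -o px-spec.yaml
kubectl apply -f px-spec.yaml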

Fixes

Issue Number | Issue Description | Severity
PWX-36869: When using a FlashArray on Purity 6.6.6 with NVMe-RoCE, a change in the REST API resulted in a deadlock in Portworx.

User Impact: FlashArray Direct Access attachment operations never completed, and FlashArray Cloud Drive nodes failed to start.

Resolution: Portworx now properly handles the changed API for NVMe and does not enter a deadlock.

Component: FA-FB

Affected Versions: 3.1.x, 3.0.x, 2.13.x
Critical
PWX-37059: In disaggregated mode, storageless nodes restarted every few minutes attempting to claim the storage driveset, without success.

User Impact: Due to storageless node restarts, some customer applications experienced IO disruption.

Resolution: When a storage node goes down, Portworx now stops storageless nodes from restarting in disaggregated mode, preventing them from claiming the storage driveset.

Component: Drive and Pool Management

Affected Versions: 3.1.x, 3.0.x, 2.13.x
Major
PWX-37351: If drive paths changed due to a node restart or a Portworx upgrade, the node entered a storage-down state.

User Impact: Portworx failed to restart because of the storage down state.

Components: Drive & Pool Management

Affected Versions: 3.1.0.3, 3.1.1.1
Major
PWX-36786: An offline storageless node was auto-decommissioned under certain race conditions, leaving its cloud-drive driveset orphaned.

User Impact: When Portworx started as a storageless node using this orphaned cloud-drive driveset, it failed to start since the node's state was decommissioned.

Resolution: Portworx now auto-cleans such orphaned storageless cloud-drive drivesets and starts successfully.

Component: Drive and Pool Management

Affected Versions: 3.1.x, 3.0.x, 2.13.x
Major
PWX-36887: When one of the internal KVDB nodes was down for several minutes, Portworx added another node to the KVDB cluster. Portworx initially added the new KVDB member as a learner. If, for some reason, KVDB connectivity was lost for more than a couple of minutes after adding the learner, the learner stayed in the cluster and prevented a failover to a different KVDB node.

User Impact: The third node was not able to join the KVDB cluster with the error Peer URLs already exists. KVDB continued to run with only two members.

Resolution: When Portworx encounters the above error, it removes the failed learner from the cluster, thereby allowing the third node to join.

Component: Internal KVDB

Affected Versions: 3.0.x, 3.1.1
Major
PWX-36873: When Portworx was using HashiCorp's Vault configured with Kubernetes or AppRole authentication, it attempted to automatically refresh the access tokens when they expired. If the Kubernetes Service Account was removed or the AppRole expired, the token refresh kept failing, and excessive attempts to refresh it caused a crash of the Vault service on large clusters.

User Impact: The excessive attempts to refresh the tokens caused a crash of the Vault service on large clusters.

Resolution: Portworx nodes now detect excessive errors from the Vault service and will avoid accessing Vault for the next 5 minutes.

Component: Volume Management

Affected Versions: 3.0.5, 3.0.3
Major
PWX-36601: Previously, the default timeout for rebalance HA-update actions was 30 minutes. This duration was insufficient for some very slow setups, resulting in HA-update failures.

User Impact: The rebalance job for HA-update failed to complete. In some cases, the volume's HA-level changed unexpectedly.

Resolution: The default rebalance HA-update timeout has been increased to 5 hours.

Components: Storage

Affected Versions: 2.13.x, 3.0.x, 3.1.x
Major
PWX-35312: In version 3.1.0, a periodic job that fetched drive properties caused an increase in the number of API calls across all platforms.

User Impact: The API rate limits approached their maximum capacity more quickly, stressing the backend.

Resolution: Portworx improved the system to significantly reduce the number of API calls on all platforms.

Component: Cloud Drives

Affected Versions: 3.1.0
Major
PWX-30441: For AWS users, Portworx did not update the drive properties for gp2 drives that were converted to gp3 drives.

User Impact: Because the IOPS of such drives changed but were not updated, pool expansion failed on these drives.

Resolution: During the maintenance cycle required for converting gp2 drives to gp3, Portworx now refreshes the disk properties of these drives.

Component: Cloud Drives

Affected Versions: 3.1.x, 3.0.x, 2.13.x
Major
PWX-36139: During pool expansion with the add-drive operation using the CSI provider on a KVDB node, the new drive could get the StorageClass of the KVDB drive instead of the data drive, if the two differed.

User Impact: In such a case, a drive might have been added but the pool expansion operation failed, causing some inconsistencies.

Resolution: Portworx now uses the StorageClass of only the data drives present on the node.

Component: Pool Management

Affected Versions: 3.1.x, 3.0.x, 2.13.x
Minor

Known issues (Errata)

Issue Number | Issue Description | Severity
PD-3031: For an Azure cluster with storage and storageless nodes using Premium LRS or SSD drive types, when a user updates the Portworx StorageClass to use PremiumV2 LRS or Ultra SSD drive types, the changes might not be reflected on the existing nodes.

Workaround: StorageClass changes will apply only to new nodes added to the cluster. For existing nodes, perform the following steps (a command sketch follows this entry):
  1. Label all existing storage nodes with the label portworx.io/node-type=storage.
  2. Label all existing storageless nodes with the label portworx.io/node-type=storageless.
  3. Add the environment variable ENABLE_ASG_STORAGE_PARTITIONING: "true" in the StorageClass.
  4. Update the StorageClass to the new drive types.
  5. Scale the node pool and label the node with portworx.io/node-type=storage.
  6. Increase MaxStorageNodesPerZone.
Component: Drive and Pool Management

Affected versions: 3.1.2
Major
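A minimal sketch of steps 1-3, assuming the environment variable is set in the StorageCluster spec (spec.env), which is where the Portworx Operator reads environment variables; the node names, cluster name px-cluster, and namespace portworx are placeholders:

# 1. Label existing storage nodes
kubectl label node <storage-node-name> portworx.io/node-type=storage
# 2. Label existing storageless nodes
kubectl label node <storageless-node-name> portworx.io/node-type=storageless
# 3. Add the environment variable to the StorageCluster (name and namespace are placeholders)
#    Note: a merge patch replaces the existing env list; edit the StorageCluster directly if it already defines env vars.
kubectl -n portworx patch storagecluster px-cluster --type merge \
  -p '{"spec":{"env":[{"name":"ENABLE_ASG_STORAGE_PARTITIONING","value":"true"}]}}'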
PD-3012: If maxStorageNodesPerZone is set to a value greater than the current number of worker nodes in an AKS cluster, additional storage nodes in an offline state may appear post-upgrade due to surge nodes.

Workaround: Manually delete any extra storage node entries created during the Kubernetes cluster upgrade by following the node decommission process.
Components: Cloud Drives

Affected versions: 2.13.x, 3.0.x, 3.1.x
Major
PD-3013: Pool expansion may fail if a node is rebooted before the expansion process is completed, displaying errors such as drives in the same pool not of the same type.

Workaround: Retry the pool expansion on the impacted node.

Components: Drive and Pool Management

Affected versions: 3.1.2
Major
PD-3035: Migrations of legacy shared volumes to sharedv4 service volumes may appear stuck if performed on a decommissioned node.

Workaround: If a node is decommissioned during a migration, the pods running on that node must be forcefully terminated to allow the migration to continue.

Component: Sharedv4 Volumes

Affected version: 3.1.2
Major
PD-3030: In environments where multipath is used to provision storage disks for Portworx, incorrect shutdown ordering may occur, causing multipath to shut down before Portworx. This can lead to situations where outstanding IOs from applications, still pending in Portworx, may fail to reach the storage disk.

Workaround:
  • In host environments using systemd, ensure that the multipath service is set with a shutdown dependency after the Portworx service (see the sketch after this entry).
  • For other environments, explore options to correctly set the dependency chain.
Components: Drive & Pool Management

Affected Versions: 3.1.2
Major
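For systemd hosts, one way to express this ordering is a drop-in for multipathd, sketched below; unit names can vary by distribution, so verify them with systemctl list-units before applying. Because systemd stops units in the reverse of their start order, ordering multipathd Before=portworx.service makes multipathd shut down after Portworx:

# Sketch: create a drop-in so multipathd stops only after portworx.service
systemctl edit multipathd.service
# In the editor that opens, add:
#   [Unit]
#   Before=portworx.service
systemctl daemon-reload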

3.1.1

April 03, 2024


Improvements

Portworx has upgraded or enhanced functionality in the following areas:

Improvement Number | Improvement Description | Component
PWX-35939: For DR clusters, the cluster domain of the nodes is exposed in the node inspect and node enumerate SDK responses. This information is used by the operator to create the pod disruption budget, preventing loss during Kubernetes upgrades.
Component: DR and Migration
PWX-35395: When Portworx encounters errors like checksum mismatches or bad disk sectors while reading data from the backend disk, the IOOperationWarning alert is raised. This alert is tracked by the metric px_alerts_iooperationwarning_total.
Component: Storage
PWX-35738: Portworx now queries an optimized subset of VMs to determine the driveset to attach, avoiding potential errors during an upgrade where a transient state of a VM could have resulted in an error during boot.
Component: Cloud Drives
PWX-35397: The start time for Portworx on both Kubernetes and vSphere platforms has been significantly reduced by eliminating repeated calls to the Kubernetes API and vSphere servers.
Component: Cloud Drives
PWX-35042: The Portworx CLI has been enhanced with the following improvements:
  • The output of the pxctl volume inspect command now includes additional fields for the vSphere and FlashArray drives:
    • Datastore/FlashArray Name
    • DriveID/VMDK Path
    • Drive Path
  • Introduced a new flag --cloud-drive-id for the pxctl volume list command to list all volumes that are part of a drive.
    Syntax: pxctl volume list --cloud-drive-id <drive-id>
Component: Cloud Drives
PWX-33493: For pool expansion operations with the pxctl sv pool expand command, the add-disk and resize-disk flags have been renamed to add-drive and resize-drive, respectively. The command will continue to support the old flags for compatibility (see the example after this table).
Component: Cloud Drives
PWX-35351: The OpenShift Console now displays the Used Space for CSI sharedv4 volumes.
Component: Sharedv4
PWX-35187: Customers can now obtain the list of Portworx images from the spec generator.
Component: Spec Generator
PWX-36543: If the current license is set to expire within the next 60 days, Portworx now automatically updates the IBM Marketplace license to a newer one upon the restart of the Portworx service.
Component: Licensing
PWX-36496: The error messages for pxctl license activate have been improved to return a more appropriate error message in case of double activation.
Component: Licensing
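For example, a pool expansion with the renamed flag might look like the following sketch; the pool UID and size are placeholders, and the long option names are assumptions, so check pxctl sv pool expand --help on your cluster:

# List pools and note the UID of the pool to expand
pxctl service pool show
# Expand the pool by resizing its existing drives (values are placeholders)
pxctl service pool expand --uid <pool-uid> --operation resize-drive --size <new-size-in-GiB>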

Fixes

Issue Number | Issue Description | Severity
PWX-36416: When a PX-StoreV2 pool reached its full capacity and could not be expanded further using the resize-drive option, it went offline due to a pool full condition.

User Impact: If pool capacity reached a certain threshold, the pool went offline.

Resolution: PX-StoreV2 pools cannot be expanded using the add-drive operation. Instead, you can increase the capacity on a node by adding new pools to it (see the sketch after this entry):
  1. Run the pxctl sv drive add --newpool command to add new pools to the node.
  2. Transfer replicas from the pool nearing full capacity to a new pool.
Components: Pool Management

Affected Versions: 3.0.0
Critical
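A sketch of the two steps above; the device path, target node, volume name, and replication level are placeholders, and the --drive option name is an assumption, so confirm the exact drive-add syntax with pxctl sv drive add --help:

# 1. Add a new drive to the node and place it in a new pool
pxctl service drive add --newpool --drive /dev/<new-device>
# 2. Move replicas off the nearly full pool, for example by updating a volume's HA level
#    toward a node that hosts the new pool (values are placeholders)
pxctl volume ha-update --repl 2 --node <target-node-id> <volume-name>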
PWX-36344: A deadlock in the Kubernetes Config lock led to failed pool expansion.

User Impact: Customers needed to restart Portworx if pool expansion became stuck.

Resolution: An unbuffered channel that caused a deadlock when written to in a very specific window has been changed to a buffered channel, breaking the deadlock.

Components: Pool Management

Affected Versions: 2.13.x, 3.0.x
Major
PWX-36393: Occasionally, Portworx CLI binaries were installed incorrectly due to issues (e.g., read/write errors) that the installation process failed to detect, causing the Portworx service to not start.

User Impact: Portworx upgrade process failed.

Resolution: Portworx has improved the installation process by ensuring the correct installation of CLI commands and detecting these errors during the installation.

Components: Install

Affected Versions: 2.13.x, 3.0.x
Major
PWX-36339: For a sharedv4 service pod, there was a race condition where the cached mount table failed to reflect the unmounting of the path.

User Impact: Pod deletion got stuck in the Terminating state, waiting for the underlying mount point to be deleted.

Resolution: Portworx now force-refreshes the cache for an NFS mount point that is not attached and is already unmounted. This ensures that the underlying mount path gets removed and the pod terminates cleanly.

Components: Sharedv4

Affected versions: 2.13.x, 3.0.x
Major
PWX-36522: When FlashArray Direct Access volumes and FlashArray Cloud Drive volumes were used together, the system could not mount the PVC due to an Invalid arguments for mount entry error, preventing the related pods from starting.

User Impact: Application pods failed to start.

Resolution: The mechanism to populate the mount table on restart has been changed to ensure an exact device match rather than a prefix-based search, addressing the root cause of the incorrect mount entries and subsequent failures.

Components: Volume Management

Affected version: 3.1.0
Major
PWX-36247: The portworx.io/misc-args field had an incorrect value of -T dmthin instead of -T px-storev2 for selecting the backend type.

User Impact: Customers had to manually change this argument to -T px-storev2 after generating the spec from the spec generator.

Resolution: The value for the field has been changed to -T px-storev2.

Components: FA-FB

Affected version: 3.1.0
Major
PWX-35925: When downloading the air-gapped bootstrap script for an OEM release (for example, px-essentials), the script used an incorrect URL for the Portworx images.

User Impact: The air-gapped bootstrap script fetched the incorrect Portworx image, particularly for Portworx Essentials.

Resolution: The air-gapped bootstrap script has been fixed and now handles the OEM release images correctly.

Components: Install

Affected version: 2.13.x, 3.0.x
Major
PWX-35782: In a synchronous DR setup, a node repeatedly crashed during a network partition because Portworx attempted to operate on a node from another domain that was offline and unavailable.

User Impact: In the event of a network partition between the two domains, temporary node crashes could occur.

Resolution: Portworx now avoids nodes from the other domain that are offline or unavailable.

Components: DR and Migration

Affected version: 3.1.0
Major
PWX-36500: Older versions of Portworx installations with FlashArray Cloud Drive displayed an incorrect warning message in the pxctl status output on RHEL 8.8 and above OS versions, even though the issue had been fixed in the multipathd package that comes with these OS versions.

User Impact: With Portworx version 2.13.0 or above, users on RHEL 8.8 or higher who were using FlashArray Cloud Drives saw the following warning in the pxctl status output:
WARNING: multipath version 0.8.7 (between 0.7.7 and 0.9.3) is known to have issues with crashing and/or high CPU usage. If possible, please upgrade multipathd to version 0.9.4 or higher to avoid this issue.

Resolution: The output of pxctl status has been improved to display the warning message for the correct RHEL versions.

Components: FA-FB

Affected version: 2.13.x, 3.0.x, 3.1.0
Major
PWX-33030: For FlashArray Cloud Drives, when the skip_kpartx flag was set in the multipath config, the partition mappings for device mapper devices did not load, preventing Portworx from starting correctly.

User Impact: A random device (either a child device or the parent dm device) with the UUID label was selected and mounted. If a child device was chosen, the mount failed with a Device is busy error.

Resolution: Portworx now avoids such a situation by modifying the specific unbuffered channel to include a buffer, thus preventing the deadlock.

Components: FA-FB

Affected version: 2.13.x, 3.0.x
Minor

3.1.0.1

March 20, 2024


Important

This is a hotfix release intended for IBM Cloud customers. Please contact the Portworx support team for more information.

Fixes

Issue Number | Issue Description | Severity
PWX-36260: When installing Portworx version 3.1.0 from the IBM Marketplace catalog, the PX-Enterprise IBM Cloud license for a fresh installation is valid until November 30, 2026. However, for existing clusters that were running older versions of Portworx and upgraded to 3.1.0, the license did not automatically update to reflect the new expiry date of November 30, 2026.

User Impact: With the old license expiring on April 2, 2024, Portworx operations could be affected after this date.

Resolution: To extend the license until November 30, 2026, follow the instructions on the Upgrading Portworx on IBM Cloud via Helm page to update to version 3.1.0.1.

Components: Licensing

Affected versions: 2.13.x, 3.0.x, 3.1.0
Critical

3.1.0

January 31, 2024


Info

Starting with version 3.1.0:

  • The Portworx CSI for FlashArray and FlashBlade license SKU supports only Direct Access volumes, not Portworx volumes. If you are using Portworx volumes, reach out to the support team before upgrading Portworx.
  • Portworx Enterprise will exclusively support kernel versions 4.18 and above.

New features

Portworx by Pure Storage is proud to introduce the following new features:

  • The auto_journal profile is now available to detect the IO pattern and determine whether the journal IO profile is beneficial for an application. This detector analyzes the incoming write IO pattern to ascertain whether the journal IO profile would improve the application's performance. It continuously analyzes the write IO pattern and toggles between the none and journal IO profiles as needed.
  • A dynamic labeling feature is now available, allowing Portworx users to label Volume Placement Strategies (VPS) flexibly and dynamically. Portworx now supports the use of dynamic labeling through the inclusion of ${pvc.labels.labelkey} in values (see the sketch after this list).
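The sketches below are illustrations only: the StorageClass requests the auto_journal profile through the io_profile parameter, and the VolumePlacementStrategy references a PVC label dynamically. The class names, the repl value, the label key app, and the VPS apiVersion and field layout are assumptions based on typical Portworx examples, so verify them against the current CRD before use.

# StorageClass requesting the auto_journal IO profile (name and repl are placeholders)
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-auto-journal-example
provisioner: pxd.portworx.com
parameters:
  repl: "2"
  io_profile: "auto_journal"
EOF

# VolumePlacementStrategy using a dynamic PVC label (apiVersion and fields are assumptions)
cat <<'EOF' | kubectl apply -f -
apiVersion: portworx.io/v1beta2
kind: VolumePlacementStrategy
metadata:
  name: app-replica-affinity-example
spec:
  replicaAffinity:
  - matchExpressions:
    - key: app
      operator: In
      values:
      - "${pvc.labels.app}"
EOF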

Improvements

Portworx has upgraded or enhanced functionality in the following areas:

Improvement Number | Improvement Description | Component
PWX-31558: Google Anthos users can now generate the correct Portworx spec from Portworx Central, even when storage device formats are incorrect.
Component: Spec Generation
PWX-28654: Added the NonQuorumMember flag to the node inspect and Enumerate SDK API responses. This flag provides an accurate value depending on whether a node contributes to cluster quorum.
Component: SDK/gRPC
PWX-31945: Portworx now provides an internal API for listing all storage options on the cluster.
Component: SDK/gRPC
PWX-29706: Portworx now supports a new streaming Watch API that provides updates on volume information that has been created, modified, or deleted.
Component: SDK/gRPC
PWX-35071: Portworx now distinguishes between FlashArray and FlashBlade calls, routing them to appropriate backends based on the current volume type (file or block), thereby reducing the load on FlashArray and FlashBlade backends.
Component: FA-FB
PWX-34033: For FlashArray and FlashBlade integrations, many optimizations have been made in caching and information sharing, resulting in a significant reduction in the number of REST calls made to the backing FlashArray and FlashBlade.
Component: FA-FB
PWX-35167: The default timeout for the FlashBlade Network Storage Manager (NSM) lock has been increased to prevent Portworx restarts.
Component: FA-FB
PWX-30083: Portworx now manages the TTL for alerts instead of relying on etcd's key expiry mechanism.
Component: KVDB
PWX-33430: The error message displayed when a KVDB lock times out has been made more verbose to provide a better explanation.
Component: KVDB
PWX-34248: The sharedv4 parameter in a StorageClass enables users to choose between sharedv4 and non-shared volumes (see the sketch after this entry):
  • When you set this parameter to true, you can create both ReadWriteMany and ReadWriteOnce PVCs.
  • When you set this parameter to false, you can create ReadWriteOnce volumes only. If such a StorageClass with the sharedv4 parameter set to false is specified in a ReadWriteMany PVC, the volume creation fails and the PVC remains stuck in a Pending status.
Component: Sharedv4
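A minimal sketch of such a StorageClass; the class name and repl value are placeholders:

cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-sharedv4-example
provisioner: pxd.portworx.com
parameters:
  repl: "2"
  sharedv4: "true"   # set to "false" to allow ReadWriteOnce PVCs only
EOF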
PWX-35113: Users can now enable the forward-nfs-attach-enable storage option for applications using sharedv4 volumes. This allows Portworx to attach a volume to the most suitable available nodes.
Component: Sharedv4
PWX-32278: On the destination cluster, all snapshots are now deleted during migration when the parent volume is deleted.
Component: Stork
PWX-32260: The resize-disk option for pool expansion is now also available on TKGS clusters.
Component: Cloud Drives
PWX-32259: Portworx now uses cloud provider identification by reusing the provider's singleton instance, avoiding repetitive checks if the provider type is already specified in the cluster spec.
Component: Cloud Drives
PWX-35428: In environments with slow vCenter API responses, Portworx now caches specific vSphere API responses, reducing the impact of these delays.
Component: Cloud Drives
PWX-33561: When using the PX-StoreV2 backend, Portworx now detaches partially attached drivesets for cloud drives only when the cloud drives are not mounted.
Component: Cloud Drives
PWX-33042: In a disaggregated deployment, storageless nodes can be converted to storage nodes by changing the node label to portworx.io/node-type=storage.
Component: Cloud Drives
PWX-28191: AWS credentials for Drive Management can now be provided through a Kubernetes secret named px-aws in the same namespace where Portworx is deployed (see the example after this entry).
Component: Cloud Drives
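For example, such a secret might be created as sketched below; the namespace portworx and the key names AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are assumptions, so confirm the expected namespace and keys for your deployment:

kubectl create secret generic px-aws -n portworx \
  --from-literal=AWS_ACCESS_KEY_ID=<access-key-id> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<secret-access-key>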
PWX-34253: Azure users will now see accurate storage type displays: Premium_LRS is identified as SSD, and NVMe storage is also correctly represented.
Component: Cloud Drives
PWX-31808: Pool deletion is now allowed for vSphere cloud drives.
Component: Cloud Drives
PWX-32920: vSphere drives can now be resized up to a maximum of 62 TB per drive.
Component: Pool Management
PWX-32462: Portworx now permits most overlapping mounts and will only reject overlapping mounts if a bidirectional (i.e., shared) parent directory mount is present.
Component: px-runc
PWX-32905: Portworx now properly detects the NFS service on OpenShift platforms.
Component: px-runc
PWX-35292: To reduce log volume in customer clusters, logs generated when a volume is not found during CSI mounting have been moved to the TRACE level.
Component: CSI
PWX-34995: The Portworx CSI for FlashArray and FlashBlade license SKU now counts Portworx and FA/FB drives separately based on the drive type.
Component: Licensing
PWX-35452: The mount mapping's lock mechanism has been improved to prevent conflicts between unmount and mount processes, ensuring more reliable pod start-ups.
Component: Volume Management
PWX-33577: The fstrim operation has been improved for efficiency:
  • If Portworx restarts during an fstrim schedule window, the fstrim process will now automatically restart.
  • If an fstrim operation fails within its scheduled window, the volume will now be automatically re-queued for fstrim.
Component: Storage

Fixes

Issue Number | Issue Description | Severity
PWX-31652: Portworx was unable to identify the medium for the vSphere cloud drives.

User Impact: Portworx deployment failed on vSphere with cloud drives.

Resolution: Portworx now correctly identifies the drive medium type and can be deployed on a cluster with vSphere cloud drives.

Components: Drive & Pool Management

Affected Versions: 2.13.x
Critical
PWX-35430: Requests for asynchronous DR migration operations were previously load balanced to nodes that were not in the same cluster domain.

User Impact: In hybrid DR setups, such as one where cluster A is synchronously paired with cluster B, and cluster B is asynchronously paired with cluster C, any attempts to migrate from Cluster B to Cluster C would result in failure, showing an error that indicates a BackupLocation not found.

Resolution: Portworx now ensures that migration requests are load balanced within nodes in the same cluster domain as the initial request.

Components: DR and Migration

Affected Versions: 3.0.4
Critical
PWX-35277: In an asynchronous DR deployment, if security/auth is enabled in a Portworx cluster, migrations involving multiple volumes would fail with authentication errors.

User Impact: Migrations in asynchronous DR setups involving multiple volumes failed with authentication errors.

Resolution: Authentication logic has been modified to handle migrations involving multiple volumes on auth-enabled clusters.

Components: DR and Migrations

Affected versions: 3.0.0
Critical
PWX-34369: When using HTTPS endpoints for cluster pairing, Portworx incorrectly parsed the HTTPS URL scheme.

User Impact: Cluster pairing would fail when using an HTTPS endpoint.

Resolution: Portworx has now corrected the HTTPS URL parsing logic.

Components: DR and Migration

Affected versions: 3.0.0
Critical
PWX-35466: Cloudsnaps or asynchronous DR operations failed when attempted from a metro cluster due to inaccessible credentials. This issue specifically occurred if the credential was not available from both domains of the metro cluster.

User Impact: Cloudsnap operations or asynchronous DR from metro clusters could fail if the required credentials were not accessible in both domains.

Resolution: Portworx now detects a coordinator node that has access to the necessary credentials for executing cloudsnaps or asynchronous DR operations.

Components: DR and Migration

Affected versions: 3.0.4
Critical
PWX-35324: FlashArray Direct Access volumes are formatted upon attachment. All newly created volumes remain in a pending state until they are formatted. If Portworx was restarted before a volume had been formatted, it would delete the volume that was still in the pending state.

User Impact: The newly created FlashArray Direct Access volumes were deleted.

Resolution: Portworx now avoids deleting volumes that are in the pending state.

Components: FA-FB

Affected versions: 3.0.x
Critical
PWX-35279: Upon Portworx startup, if there were volumes attached from a FlashArray that was not registered in the px-pure-secret, Portworx would detach them as part of a cleanup routine.

User Impact: Non-Portworx disks, including boot drives and other FlashArray volumes, were mistakenly detached from the node and required reconnection.

Resolution: Portworx no longer cleans up healthy FlashArray volumes on startup.

Components: FA-FB

Affected versions: 2.13.11, 3.0.0, 3.0.4
Critical
PWX-34377: Portworx was incorrectly marking FlashBlade Direct Access volumes as having transitioned to read-only status. This incorrect identification led to a restart of all pods associated with these volumes.

User Impact: The restart of running pods resulted in application restarts or failures.

Resolution: Checks within Portworx that were leading to false identification of Read-Only transitions for FlashBlade volumes have been fixed.

Components: FA-FB

Affected versions: 3.0.4
Critical
PWX-32881: The CSI driver failed to register after the Anthos storage validation test suite was removed and a node was re-added to the cluster.

User Impact: The CSI server was unable to restart if the Unix domain socket had been deleted.

Resolution: The CSI server now successfully restarts and restores the Unix domain socket, even if the socket has been deleted. Update to this version if your workload involves deleting the kubelet directory during node decommissioning.

Components: CSI

Affected versions: 3.0.0
Critical
PWX-31551: The latest OpenShift installs have stricter SELinux policies, which prevent non-privileged pods from accessing the csi.sock CSI interface file.

User Impact: Portworx install failed.

Resolution: All Portworx CSI pods are now configured as privileged pods.

Components: oci-monitor

Affected versions: 2.13.x, 3.0.x
Critical
PWX-31842: On TKGI clusters, if the Portworx service and pods were restarted, excessive mounts (mount leaks) could accumulate.

User Impact: IO operations on the node would progressively slow down until the host completely hung.

Resolution: The mountpoints that are used by Portworx have been changed.

Components: oci-monitor

Affected versions: 2.1.1
Critical
PWX-35603: When running Portworx on older Linux systems (specifically those using GLIBC 2.31 or older) in conjunction with newer versions of Kubernetes, Portworx previously failed to detect dynamic updates of pod credentials and tokens, which led to Unauthorized errors when using Kubernetes client APIs.

User Impact: Users could encounter Unauthorized errors when using Kubernetes client APIs.

Resolution: Dynamic token updates are now processed correctly by Portworx.

Components: oci-monitor

Affected versions: 3.0.1
Critical
PWX-34250: If encryption was applied on both the client side (using an encryption passphrase) and the server side (using Server-Side Encryption, SSE) when creating credentials, Portworx failed to configure S3 storage to use both encryption methods.

User Impact: Configuration of S3 storage would fail in the above mentioned condition.

Resolution: Users can now simultaneously use both server-side and client-side encryption when creating credentials for S3 or S3-compatible object stores.

Components: Cloudsnaps

Affected versions: 3.0.2, 3.0.3, 3.0.4
Critical
PWX-22870: Portworx installations would by default automatically attempt to install NFS packages on the host system. However, since NFS packages add new users/groups, they were often blocked on Red Hat Enterprise Linux / CentOS platforms with SELinux enabled.

User Impact: Sharedv4 volumes failed to attach on platforms with SELinux enabled.

Resolution: Portworx installation is now more persistent on Red Hat Enterprise Linux / CentOS platforms with SELinux enabled.

Components: IPV6

Affected versions: 2.5.4
Major
PWX-35332: Concurrent access to an internal data structure containing NFS export entries resulted in a Portworx node crashing with the error fatal error: concurrent map read and map write in knfs.HasExports.

User Impact: This issue triggered a restart of Portworx on that node.

Resolution: A lock mechanism has been implemented to prevent this issue.

Components: Sharedv4

Affected versions: 2.10.0
Major
PWX-34865: When upgrading Portworx from version 2.13 (or older) to version 3.0 or newer, the internal KVDB version was also updated. If there was a KVDB membership change during the upgrade, the internal KVDB lost quorum in some corner cases.

User Impact: The internal KVDB lost quorum, enforcing Portworx upgrade of a KVDB node that was still on an older Portworx version.

Resolution: In some cases, Portworx now chooses a different mechanism for the KVDB membership change.

Components: KVDB

Affected versions: 3.0.0
Major
PWX-35527: When a Portworx KVDB node went down and subsequently came back online with the same node ID but a new IP address, Portworx nodes on the other servers continued to use the stale IP address for connecting to KVDB.

User Impact: Portworx nodes faced connection issues while connecting to the internal KVDB, as they attempted to use the outdated IP address.

Resolution: Portworx now updates the correct IP address on such nodes.

Component: KVDB

Affected versions: 2.13.x, 3.0.x
Major
PWX-33592: Portworx incorrectly applied the time set by the execution_timeout_sec option.

User Impact: Some operations timed out before the time set through the execution_timeout_sec option.

Resolution: The behavior of this runtime option is now fixed.

Components: KVDB

Affected versions: 2.13.x, 3.0.x
Major
PWX-35353: Portworx installations (version 3.0.0 or newer) failed on Kubernetes systems using Docker container runtime versions older than 20.10.0.

User Impact: Portworx installation failed on Docker container runtimes older than 20.10.0.

Resolution: Portworx can now be installed on older Docker container runtimes.

Components: oci-monitor

Affected versions: 3.0.0
Major
PWX-33800: In Operator version 23.5.1, Portworx was configured so that a restart of the Portworx pod would also trigger a restart of the portworx.service backend.

User Impact: This configuration caused disruptions in storage operations.

Resolution: Now pod restarts do not trigger a restart of the portworx.service backend.

Components: oci-monitor

Affected versions: 2.6.0
Major
PWX-32378: During the OpenShift upgrade process, the finalizer service, which ran when Portworx was not processing IOs, experienced a hang and subsequently timed out.

User Impact: This caused the OpenShift upgrade to fail.

Resolution: The Portworx service now runs to stop Portworx and sets the PXD_timeout during OpenShift upgrades.

Components: oci-monitor

Affected versions: 2.13.x, 3.0.x
Major
PWX-35366: When the underlying nodes of an OKE cluster were replaced multiple times (due to upgrades or other reasons), Portworx failed to start, displaying the error Volume cannot be attached, because one of the volume attachments is not configured as shareable.

User Impact: Portworx became unusable on nodes that were created to replace the original OKE worker nodes.

Resolution: Portworx now successfully starts on such nodes.

Components: Cloud Drives

Affected versions: 2.13.x, 3.0.x
Major
PWX-33413: After an upgrade, when a zone name's case changed, Portworx considered it to be a new zone.

User Impact: The calculation of the total storage in the cluster by Portworx became inaccurate.

Resolution: Portworx now considers a zone name with the same spelling, regardless of case, to be the same zone. For example, Zone1, zone1, and ZONE1 are all considered the same zone.

Components: Cloud Drives

Affected versions: 2.12.1
Major
PWX-33040: For Portworx users using cloud drives on the IBM platform, when the IBM CSI block storage plugin was unable to successfully bind Portworx cloud-drive PVCs (for any reason), these PVCs remained in a pending state. As a retry mechanism, Portworx created new PVCs. Once the IBM CSI block storage plugin was again able to successfully provision drives, all these PVCs got into a bound state.

User Impact: A large number of unwanted block devices were created in users' IBM accounts.

Resolution: Portworx now cleans up unwanted PVC objects during every restart and KVDB failover.

Components: Cloud Drives

Affected versions: 2.13.0
Major
PWX-35114: The storageless node could not come online after Portworx was deployed and showed the failed to find any available datastores or datastore clusters error.

User Impact: Portworx failed to start on the storageless node which had no access to a datastore.

Resolution: Storageless nodes can now be deployed without any access to a datastore.

Components: Cloud Drives

Affected versions: 2.13.x, 3.0.x
Major
PWX-33444: If a disk that was attached to a node became unavailable, Portworx continuously attempted to find the missing drive-set.

User Impact: Portworx failed to restart.

Resolution: Portworx now ignores errors related to missing disks and attempts to start by attaching to the available driveset, or it creates a new driveset if suitable drives are available on the node.

Components: Cloud Drives

Affected versions: 2.13.x, 3.0.x
Major
PWX-33076: When more than one container mounted the same Docker volume, all of them mounted to the same path, because the mount path contained only the volume name and was therefore not unique.

User Impact: When one container went offline, the volume was unmounted for the other containers mounted to the same volume.

Resolution: The volume mount HTTP request ID is now appended to the path, making the path unique for every mount of the same volume.

Components: Volume Management

Affected versions: 2.13.x, 3.0.x
Major
PWX-35394: Host detach operation on the volume failed with the error HostDetach: Failed to detach volume.

User Impact: A detach or unmount operation on a volume would get stuck if attach and detach operations were performed in quick succession, leading to incomplete unmount operations.

Resolution: Portworx now reliably handles detach or unmount operations on a volume, even when attach and detach operations are performed in quick succession.

Components: Volume Management

Affected Versions: 2.13.x, 3.0.x
Major
PWX-32369: In a synchronous DR setup, cloudsnaps with different objectstores for each domain failed to back up and clean up the expired cloudsnaps.

User Impact: The issue occurred because a single node, which did not have access to both objectstores, was performing cleanup of the expired cloudsnaps.

Resolution: Portworx now designates two nodes, one in each domain, to perform the cleanup of the expired cloudsnaps.

Components: Cloudsnaps

Affected versions: 2.13.x, 3.0.x
Major
PWX-35136: During cloudsnap deletions, some objects were not removed because the deletion requests exceeded the S3 API's limit for the number of objects that could be deleted at once.

User Impact: This would leave objects on S3 for deleted cloudsnaps, thereby consuming S3 capacity.

Resolution: Portworx has been updated to ensure that deletion requests do not exceed the S3 API's limit for the number of objects that can be deleted.

Components: Cloudsnaps

Affected versions: 2.13.x, 3.0.x
Major
PWX-34654: Cloudsnap status returned empty results without any error for a taskID that was no longer in the KVDB.

User Impact: No information was provided for users to take corrective actions.

Resolution: Portworx now returns an error instead of empty status values.

Components: Cloudsnaps

Affected versions: 2.13.x, 3.0.x
Major
PWX-31078: When backups were restored to a namespace different from the original volume's, the restored volumes retained labels indicating the original namespace, not the new one.

User Impact: The functionality of sharedv4 volumes was impacted because the labels did not accurately reflect the new namespace in which the volumes were located.

Resolution: Labels for the restored volume have been fixed to reflect the correct namespace in which the volume resides.

Components: Cloudsnaps

Affected versions: 2.13.x, 3.0.x
Major
PWX-32278: During migration, in certain error scenarios an orphan snapshot was left behind on the destination cluster even though the parent volume was not present.

User Impact: This can lead to an increase in capacity usage.

Resolution: Now, such orphan cloudsnaps are deleted when the parent volume is deleted.

Components: Asynchronous DR

Affected versions: 2.13.x, 3.0.x
Major
PWX-35084: Portworx incorrectly determined the number of CPU cores when running on hosts enabled with cGroupsV2.

User Impact: This created issues when limiting the CPU resources, or pinning the Portworx service to certain CPU cores.

Resolution: Portworx now properly determines the number of available CPU cores.

Components: px-runc

Affected versions: 3.0.2
Major
PWX-32792: On OpenShift 4.13, Portworx did not proxy portworx.service logs. The journal contained logs from multiple machine IDs, which caused the Portworx pod to stop proxying the logs from portworx.service.

User Impact: In OpenShift 4.13, the generation of journal logs from multiple machine IDs led to the Portworx pod ceasing to proxy the logs from portworx.service.

Resolution: Portworx log proxy has been fixed to locate the correct journal log using the current machine ID.

Components: Monitoring

Affected versions: 2.13.x, 3.0.x
Major
PWX-34652: During the ha-update process, all existing volume labels were removed and could not be recovered.

User Impact: This resulted in the loss of all volume labels, significantly impacting volume management and identification.

Resolution: Volume labels now do not change during the ha-update process.

Components: Storage

Affected versions: 2.13.x, 3.0.x
Major
PWX-34710: A large amount of log data was generated during storage rebalance jobs or dry runs.

User Impact: This led to log files occupying a large amount of space.

Resolution: The volume of logging data has been reduced by 10%.

Components: Storage

Affected versions: 2.13.x
Major
PWX-34821: In scenarios where the system was heavily loaded and imbalanced, elevated syncfs latencies were observed. This led to the fs_freeze call, which is responsible for synchronizing all dirty data, timing out before completion.

User Impact: Users experienced timeouts during the fs_freeze call, impacting the normal operation of the system.

Resolution: Restart the system and retry the snapshot operation.

Components: Storage

Affected versions: 3.0.x
Major
PWX-33647: When the Portworx process restarts, it verifies the existing mounts on the system for sanity. If one of the mounts was an NFS mount of a Portworx volume, the mount point verification would hang because Portworx was still in the process of starting up.

User Impact: The Portworx process would not come up and would enter an infinite wait, waiting for the mount point verification to return.

Resolution: When Portworx is starting up, it now skips the verification of Portworx-backed mount points to allow the startup process to continue.

Components: Storage

Affected versions: 3.0.2
Major
PWX-33631: Portworx applied locking mechanisms to synchronize requests across different worker nodes during CSI volume provisioning in order to distribute workloads evenly, which decreased performance for CSI volume creation.

User Impact: This synchronization approach led to a decrease in performance for CSI volume creation in heavily loaded clusters.

Resolution: If experiencing slow CSI volume creation, upgrade to this version.

Components: CSI

Affected versions: 2.13.x, 3.0.x
Major
PWX-34355: On certain occasions, while mounting FlashArray Cloud Drive disks backing a storage pool, Portworx used the single-path device instead of the multipath device.

User Impact: Portworx entered the StorageDown state.

Resolution: Portworx now identifies the multipath device associated with a given device name and uses this multipath device for mounting operations.

Components: FA-FB

Affected versions: 2.10.0, 2.11.0, 2.12.0, 2.13.0, 2.13.11, 3.0.0
Major
PWX-34925: Creating a large number of FlashBlade Direct Access volumes concurrently could lead to Portworx restarting with the error fatal error: sync: unlock of unlocked mutex.

User Impact: When trying to create a large number of FlashBlade volumes concurrently, Portworx process might get restarted due to contention on the lock.

Resolution: Improved the locking mechanism to avoid this error.

Components: FA-FB

Affected versions: 3.0.4
Major
PWX-35680: The Portworx spec generator was incorrectly defaulting telemetry to be disabled when the StorageCluster spec was generated outside of the Portworx Central UI. This does not affect customers who applied a StorageCluster with an empty telemetry spec or generated their spec through the UI.

User Impact: Telemetry was disabled by default.

Resolution: To enable telemetry, users should explicitly specify it if intended.

Components: Spec-Gen

Affected versions: 2.12.0, 2.13.0, 3.0.0
Major
PWX-34325: When operating Kubernetes with the containerd runtime and a custom root directory set in the containerd configuration, the installation of Portworx would fail.

User Impact: Portworx install would fail, resulting in unusual error messages due to a bug in containerd.

Resolution: The installation will now intercept the error message and replace it with a clearer message that includes suggestions on how to fix the Portworx configuration.

Components: Installation

Affected versions: 3.0.0
Minor
PWX-33557: The CallHome functionality sometimes unconditionally attempted to send data to the local telemetry service.

User Impact: This caused errors if telemetry was disabled.

Resolution: CallHome now sends data only if telemetry has been enabled.

Components: Monitoring

Affected versions: 3.0.0
Minor
PWX-32536: Portworx installation failed on certain Linux systems using cGroupsV2 and containerd container runtimes, as it was unable to properly locate container identifiers.

User Impact: Portworx installation failed.

Resolution: The container scanning process has been improved to ensure successful Portworx installation on such platforms.

Components: oci-monitor

Affected versions: 2.13.x, 3.0.x
Minor
PWX-30967: During volume provisioning, snapshot volume labels were included in the count. Nodes were disqualified for provisioning when volume_anti_affinity or volume_affinity VPS was configured, resulting in volume creation failures.

User Impact: When stale snapshots existed, the creation of volumes using the VPS with either volume_anti_affinity or volume_affinity setting would fail.

Resolution: Upgrade to this version and retry previously failed volume creation request.

Components: Stork

Affected versions: 2.13.2
Minor
PWX-33999: During the installation of NFS packages, Portworx incorrectly interpreted any issues or errors that occurred as timeout errors.

User Impact: Portworx misrepresented and masked the original issues.

Resolution: Portworx now accurately processes NFS installation errors during its installation.

Components: px-runc

Affected versions: 2.7.0
Minor
PWX-33008: Creation of a proxy volume with CSI enabled and RWX access mode failed due to the default use of sharedv4 for all RWX volumes in CSI.

User Impact: Users could not create proxy volumes with CSI enabled and RWX access mode.

Resolution: To successfully create proxy volumes with CSI and RWX access mode, upgrade to this version.

Components: Sharedv4

Affected versions: 3.0.0
Minor
PWX-34326: The Portworx CSI Driver GetPluginInfo API returned an incorrect CSI version.

User Impact: This resulted in confusion when the CSI version was retrieved by the Nomad CLI.

Resolution: The Portworx CSI Driver GetPluginInfo API now returns the correct CSI version.

Components: CSI

Affected versions: 2.13.x, 3.0.x
Minor
PWX-31577: Occasionally, when a user requested that a cloudsnap be stopped, this led to an incorrect increase in the available resources.

User Impact: More cloudsnaps were started and they were stuck in the NotStarted state as resources were unavailable.

Resolution: Stopping cloudsnaps no longer incorrectly increases the available resources, avoiding the issue.

Components: Cloudsnaps

Affected versions: 2.13.x, 3.0.x
Minor

Known issues (Errata)

Issue Number | Issue Description | Severity
PD-2673: KubeVirt VM or container workloads may remain in the Starting state due to the remounting of volumes failing with a device busy error.

Workaround:
  • For KubeVirt VMs, stop and start the VM.
  • For container deployments, scale down the replicas to 0, then scale back up.
Components: Sharedv4

Affected versions: 2.13.x, 3.0.x
Critical
PD-2546: In a synchronous DR deployment, telemetry registrations might fail on the destination cluster.

Workaround:
  • If both the clusters can be accessed from one system, run the following command to copy the telemetry certs:
    kubectl get secret pure-telemetry-certs --context primary_cluster_context --export -o yaml | kubectl apply --context secondary_cluster_context -f -
  • If the clusters cannot be accessed from the same system, output the secret content to a file, then copy and apply it on the destination cluster.
Components: Synchronous DR

Affected versions: 3.0.4
Critical
PD-2574: If a disk is removed from an online pool using the PX-StoreV2 backend, it may cause a kernel panic.

Workaround: To avoid kernel panic, do not remove disks from an online pool or node.

Components: Storage

Affected versions: NA
Critical
PD-2387: In OpenShift Container Platform (OCP) version 4.13 or newer, application pods using Portworx sharedv4 volumes can get stuck in the Terminating state. This is because kubelet is unable to stop the application container when an application namespace is deleted.

Workaround:
  1. Identify the nodes where the sharedv4 volumes used by the affected pod are attached.
  2. Run the following command on these nodes to restart the NFS server:
    systemctl restart nfs-server.
Allow a few minutes for the process to complete. If the pod is still stuck in the Terminating state, reboot the node on which the pod is running. Note that after rebooting, it might take several minutes for the pod to transition out of the Terminating state.

Components: Sharedv4

Affected versions: 3.0.0
Major
PD-2621: Occasionally, deleting a TKGi cluster with Portworx fails with the error Warning: Executing errand on multiple instances in parallel.

Workaround: Before deleting your cluster, perform the following steps:
  1. Get your pod disruption budget objects:
    kubectl get poddisruptionbudget -A
  2. Delete these objects:
    kubectl delete poddisruptionbudget -n <object-namespace> px-kvdb px-storage
Once these objects are removed, delete your TKGi cluster.

Components: Kubernetes Integration

Affected versions:
Major
PD-2631: After resizing a FlashArray Direct Access volume with a filesystem (such as ext4, xfs, or others) by a significant amount, you might not be able to detach the volume, or delete the pod using this volume.

Workaround: Allow time for the filesystem resizing process to finish. After the resize is complete, retry the operations.

Components: FA-FB

Affected versions: 2.13.x, 3.0.x, 3.1.0
Major
PD-2597: Online pool expansion with the add-disk operation might fail when using the PX-StoreV2 backend.

Workaround: Enter the pool into maintenance mode, then expand your pool capacity.

Components: Storage

Affected versions: 3.0.0, 3.1.0
Major
PD-2585: The node wipe operation might fail on systems where user-defined device names contain specific Portworx reserved keywords (such as pwx), with the error Node wipe did not cleanup all PX signatures. A manual cleanup maybe required.

Workaround: You need to rename or delete devices that use Portworx reserved keywords in their device names before retrying the node wipe operation. Furthermore, it is recommended not to use Portworx reserved keywords such as px, pwx, pxmd, px-metadata, pxd, or pxd-enc while setting up devices or volumes, to avoid encountering such issues.

Components: Storage

Affected versions: 3.0.0
Major
PD-2665: During a pool expansion operation, if a cloud-based storage disk drive provisioned on a node is detached before the completion of the pool resizing or rebalancing, you can see the show drives: context deadline exceeded error in the output of the pxctl sv pool show command.

Workaround: Ensure that cloud-based storage disk drives involved in pool expansion operations remain attached until the resizing and rebalancing processes are fully completed. In cases where a drive becomes detached during this process, hard reboot the node to restore normal operations.

Component: PX-StoreV2

Affected versions: 3.0.0, 3.1.0
Major
PD-2833: With Portworx 3.1.0, migrations might fail between two clusters if one of the clusters is running a version of Portworx older than 3.1.0, resulting in a key not found error.

Workaround: Ensure that both the source and destination clusters are upgraded to version 3.1.0 or newer.

Components: DR & Migration

Affected Versions: 3.1.0
Minor
PD-2644: If an application volume contains a large number of files (e.g., 100,000) in a directory, changing the ownership of these files can take a long time, causing delays in the mount process.

Workaround: If the ownership change is taking a long time, Portworx by Pure Storage recommends setting fsGroupChangePolicy to OnRootMismatch (see the sketch after this entry). For more information, see the Kubernetes documentation.

Components: Storage

Affected versions: 2.13.x, 3.0.x
Minor
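A sketch of that setting in a pod spec follows; the names are placeholders. With fsGroupChangePolicy: OnRootMismatch, Kubernetes skips the recursive ownership change when the volume's root already has the expected fsGroup:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-example
spec:
  securityContext:
    fsGroup: 1000
    fsGroupChangePolicy: OnRootMismatch
  containers:
  - name: app
    image: <your-image>
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: <your-pvc>
EOF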
PD-2359: When a virtual machine is transferred from one hypervisor to another and Portworx is restarted, the CSI container might fail to start properly and shows the CrashLoopBackoff error.

Workaround: Remove the topology.portworx.io/hypervisor label from the affected node.

Components: CSI

Affected versions: 2.13.x, 3.0.x
Minor
PD-2579: When the Portworx pod (oci-mon) cannot determine the management IP used by the Portworx container, the pxctl status command output on this pod shows a Disabled or Unhealthy status.

Workaround: This issue is related to display only. To view the correct information, run the following command directly on the host machine:
kubectl exec -it <oci-mon pod> -- nsenter --mount=/host_proc/1/ns/mnt -- pxctl status.

Components: oci-monitor

Affected versions: 2.13.0
Minor