Version: 3.6

Portworx Enterprise Release Notes

3.6.0

April 06, 2026

To install or upgrade Portworx Enterprise to version 3.6.0, ensure that your cluster meets all system requirements and is running Operator 26.1.0 or later, Stork 26.2.0 or later, and one of the supported kernels.

New Features

  • Secure Boot Support for Portworx Enterprise
    Portworx Enterprise supports systems with UEFI Secure Boot enabled. Secure Boot allows only signed and trusted kernel modules to load during system startup, adding an extra layer of security by validating module authenticity.
    Portworx Enterprise kernel modules are signed and compatible with Secure Boot. To run Portworx on Secure Boot-enabled systems, you must manually enroll the Portworx Secure Boot signing certificate into the system’s Machine Owner Key (MOK) list.
    For more information, see Secure Boot for Portworx Enterprise.

  • Vault Integration for FlashArray and FlashBlade Credentials
    Portworx Enterprise supports storing Pure Storage FlashArray and FlashBlade credentials in Vault. Instead of providing credentials directly in configuration files, you can securely create and manage them as secrets in Vault.
    When deploying Portworx Enterprise with FlashArray, create a Vault secret that contains the credentials and reference it during installation. Portworx Enterprise retrieves the credentials securely at runtime, improving security and centralizing credential management.
    For more information, see Add FlashArray Configuration to a Secret Store Provider.

  • Support for Multiple Secrets Providers
    Portworx Enterprise supports configuring multiple secrets providers within a single deployment. You can assign a different secrets provider to each feature, enabling finer-grained control over how and where sensitive data is stored.
    For example, you can use Kubernetes Secrets for volume encryption while using Vault to store cloud provider credentials such as vSphere credentials or FlashArray access tokens.
    For more information, see Configure Multiple Secrets Providers.

  • Azure ARM Lock Integration for Managed Disks
    Portworx Enterprise integrates with Azure Resource Manager (ARM) locks to protect Azure-managed disks (cloud drives) in AKS deployments. When disk locking is enabled, Portworx Enterprise automatically applies a CanNotDelete lock to each disk it provisions, preventing accidental or external deletion while allowing read and update operations.
    For more information, see Locking Azure Cloud Drives on AKS Clusters.

  • Volume Tagging Support for FADA and FACD
    Portworx Enterprise supports volume tagging for FlashArray Cloud Drives (FACD) and FlashArray Direct Access (FADA) volumes. Tags attach custom metadata to volumes, enabling easier filtering and sorting and adding identifying information to them.
    For more information, see Custom Disk Tags.

  • Dynamic Resync Bandwidth Control
    Portworx Enterprise supports dynamic control of resync and replica-add bandwidth. You can configure a maximum bandwidth limit to prevent resync traffic from impacting application I/O. Portworx Enterprise prioritizes application traffic and dynamically adjusts resync usage based on load, ensuring it never exceeds the configured threshold while optimizing resync completion time.
    For more information, see Dynamic Resync Bandwidth Control.
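
As a mental model, the policy described above reduces to a per-interval bandwidth budget: resync never exceeds the configured ceiling, and application I/O gets first claim on the link. The following Python sketch is illustrative only; the function name, parameters, and the headroom rule are assumptions, not the actual Portworx algorithm:

```python
def resync_budget(max_resync_mbps, app_io_mbps, link_capacity_mbps):
    """Bandwidth available to resync this interval: never above the
    configured ceiling, and never more than the headroom that
    application I/O leaves free on the link."""
    headroom = max(link_capacity_mbps - app_io_mbps, 0)
    return min(max_resync_mbps, headroom)

# Idle cluster: resync can use its full configured ceiling.
print(resync_budget(max_resync_mbps=200, app_io_mbps=0, link_capacity_mbps=1000))    # 200
# Busy cluster: application I/O is prioritized; resync gets the leftover headroom.
print(resync_budget(max_resync_mbps=200, app_io_mbps=950, link_capacity_mbps=1000))  # 50
```

Recomputing the budget each interval is what makes the control dynamic: as application load rises, the resync allowance shrinks toward zero, and it expands again when the load drops.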

Early Access Features

  • Garden Linux and Gardener Support
    Portworx Enterprise introduces support for Kubernetes environments running on Garden Linux and managed by Gardener, improving deployment flexibility and ecosystem alignment.

  • Integrate Portworx Enterprise with Everpure Fusion
    Portworx Enterprise supports integration with Everpure Fusion, combining Everpure Fusion’s fleet-level management with Portworx’s Kubernetes-native data services.
    For more information, see Portworx Fusion Controller Documentation.

  • Readahead Support for Sequential Read Workloads
    Portworx Enterprise supports readahead to optimize sequential read workloads. Readahead detects sequential read I/O patterns and prefetches data from disk into memory before the application requests it. Serving data from memory reduces disk and network overhead, improves read throughput and IOPS, and lowers read latency.
    For more information, see Readahead.
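
Readahead hinges on detecting that successive reads are sequential and prefetching ahead of the application. The sketch below is a teaching aid under stated assumptions (the streak threshold, prefetch window, and class name are invented for illustration, not the Portworx implementation):

```python
class ReadaheadDetector:
    """Flags a stream as sequential after N contiguous block reads and
    suggests a prefetch range past the last requested block."""

    def __init__(self, threshold=3, window=8):
        self.threshold = threshold   # contiguous reads before prefetching starts
        self.window = window         # how many blocks to prefetch ahead
        self._next_expected = None
        self._streak = 0

    def on_read(self, block):
        if block == self._next_expected:
            self._streak += 1
        else:
            self._streak = 1          # pattern broken: start a new streak
        self._next_expected = block + 1
        if self._streak >= self.threshold:
            # Prefetch the blocks the application is likely to ask for next,
            # so later reads are served from memory instead of disk.
            return list(range(block + 1, block + 1 + self.window))
        return []

d = ReadaheadDetector()
for b in (10, 11, 12):
    plan = d.on_read(b)
print(plan)  # [13, 14, 15, 16, 17, 18, 19, 20]
```

A random read resets the streak, so purely random workloads never trigger prefetching and pay no extra I/O cost.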

  • New and Improved Implementation of Direct Access Volumes with PX-CSI Integration
    Portworx Enterprise integrates with PX-CSI to orchestrate FlashArray Direct Access and FlashBlade Direct Access volume operations.
    For more information, see Integrate PX-CSI into Portworx Enterprise.

Direct Availability Features

  • Kube Datastore (KDS) and Dynamic Pools for Replica 1 Volumes
    Portworx Enterprise introduces Kube Datastore (KDS), a distributed datastore that provides highly available, resilient, and efficient storage for shared storage backend (SAN) and cloud drive environments. KDS enables clusters to scale automatically within seconds without downtime or performance degradation, while maintaining high availability and resiliency. For more information, see Kube Datastore.

    Portworx Enterprise supports KDS dynamic pools for volumes with repl=1 in environments using FlashArray Cloud Drives (FACD) and PX-StoreV2. KDS dynamic storage pools are designed for shared storage backend (SAN) environments and leverage the capabilities of the underlying FlashArray volumes to deliver superior data reduction, data protection, and resiliency.
    For more information, see Dynamic Pools for Volumes with Replication Factor 1.

Improvements

Improvement Number | Improvement Description | Component
PWX-43116

Portworx Enterprise now logs detailed error information when node diagnostics fail due to Portworx-related issues, improving troubleshooting and overall user experience. The diagnostics collection has been enhanced to include additional Kubernetes objects related to storage, workloads, configuration, networking, versioning, controller state, and secrets.
Diagnostics collection operates in a read-only mode against the Kubernetes API. Portworx Enterprise also automatically redacts all collected data before persisting or sharing the diagnostics bundle, ensuring that sensitive information such as secrets, tokens, keys, and certificates is masked or removed.
For more information, see On-demand diagnostics using PortworxDiag custom resource.
Telemetry
PWX-43455

Portworx Enterprise has enhanced Grafana dashboards to improve observability and troubleshooting:
  • Volume dashboard: Added namespace and PVC variables with multi-select and cascading filters. Simplified filtering by removing the volume name filter. Updated panel legends to display Namespace/PVC/VolumeName format for easier identification. Introduced separate read and write panels, and renamed the Volume usage panel to Volume: Used vs. Total Capacity for clarity.
  • Node dashboard: Updated queries to use node exporter metrics where Portworx data is not required. Improved node correlation using the px_cluster_cpu_percent metric with node labels. Standardized query filters (node=~"$Instances" for Portworx and instance=~"$Instances" for node exporter). Enhanced dropdown usability by adding Cluster and Instance labels.
  • Cluster dashboard: Added a new panel to track cluster-wide memory utilization.
  • Grafana deployment: Increased memory limits to improve stability and prevent out-of-memory (OOM) issues.
  • DR dashboard: Updated dashboard configuration to use a unified Prometheus datasource UID across all panels, replacing datasource: null for improved consistency.
Monitoring
PWX-48856

Portworx Enterprise now enables KVDB TLS by default for all installations, improving cluster security out of the box. Specifications generated through Portworx Central automatically include KVDB TLS, ensuring that cluster metadata communication is encrypted without requiring additional configuration. This enhancement eliminates the need for manual TLS enablement and helps meet security requirements by default.
KVDB

Fixes

Issue Number | Issue Description | Severity
PWX-37188

During CloudSnap backup or restore operations, CloudSnap status tracking could cause Portworx to crash with a "concurrent map iteration and map write" error.

User Impact: Portworx could restart when multiple CloudSnap backup or restore operations run at the same time as status queries.

Resolution:
Portworx Enterprise now correctly synchronizes access to CloudSnap status data to prevent concurrent access conflicts during backup, restore, and status query operations.

Components: Storage
Affected Versions: 3.0.4
Minor
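
The underlying failure mode, one thread of execution iterating a status map while another writes to it, maps to a general pattern: guard shared status state with a lock and iterate over a copy. A minimal Python sketch of the synchronized-access approach (the class and method names are illustrative, not Portworx APIs):

```python
import threading

class CloudSnapStatusTracker:
    """Tracks per-operation status; all access goes through one lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self._status = {}  # operation id -> status string

    def update(self, op_id, status):
        # Writers take the lock, so no update races a concurrent read.
        with self._lock:
            self._status[op_id] = status

    def snapshot(self):
        # Readers copy under the lock; iteration then happens on the
        # copy, never on the live map a writer might be mutating.
        with self._lock:
            return dict(self._status)

tracker = CloudSnapStatusTracker()
tracker.update("backup-1", "Active")
tracker.update("restore-2", "Done")
print(tracker.snapshot())  # {'backup-1': 'Active', 'restore-2': 'Done'}
```

Returning a copy from `snapshot` is the key detail: status queries can safely iterate their result while concurrent backups keep updating the live map.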
PWX-44589

The KVDB dump rotation process had inconsistencies in its retention logic. Monthly grouping was based on a rolling 31-day window rather than calendar months, and retention kept older dumps instead of the most recent ones. Additionally, dump files were retained for up to one year, which could result in excessive storage consumption.

User Impact: KVDB backup dump files could accumulate unnecessarily, consuming significant disk space and making retention behavior difficult to predict.

Resolution:
Portworx Enterprise now improves KVDB dump rotation as follows:
  • Monthly grouping is aligned with calendar months so all dumps from the same month are grouped together.
  • Retention keeps the newest dump per month instead of the oldest.
  • The current-month rotation frequency is changed from every three days to every seven days.
  • The maximum retention period is reduced from one year to six months, and older dump files are automatically removed.


Components: KVDB
Affected Versions: 3.2.0
Minor
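
The new retention rules can be expressed compactly: group by calendar month, keep the newest dump per month, and drop anything older than roughly six months. A hedged Python sketch of that selection logic (the function name and the day-based six-month cutoff are illustrative assumptions, not the Portworx implementation):

```python
from datetime import datetime, timedelta

def select_dumps_to_keep(dump_times, now):
    """Given dump timestamps, return the sorted list to retain:
    the newest dump per calendar month, within the last ~6 months."""
    cutoff = now - timedelta(days=183)  # approximate six-month window
    newest_per_month = {}
    for t in dump_times:
        if t < cutoff:
            continue  # older than the retention period: remove
        key = (t.year, t.month)  # calendar-month grouping, not a 31-day window
        if key not in newest_per_month or t > newest_per_month[key]:
            newest_per_month[key] = t  # keep the newest, not the oldest
    return sorted(newest_per_month.values())

dumps = [datetime(2026, 1, 3), datetime(2026, 1, 28),
         datetime(2025, 12, 15), datetime(2025, 6, 1)]
print(select_dumps_to_keep(dumps, now=datetime(2026, 2, 1)))
```

In this example the June 2025 dump falls outside the retention window and is removed, and only the newer of the two January dumps survives.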
PWX-48515

Lock contention in the in-memory KVDB (kv_mem.go) implementation caused performance bottlenecks during read-heavy operations. Several functions, including the /status handler and periodic liveness and readiness probes, relied on exclusive locks. This led to unnecessary contention and, in some environments, contributed to frequent container restarts due to probe timeouts.

User Impact: In environments with high volume and snapshot activity, Portworx containers could experience increased latency during health checks. This could result in liveness or readiness probe failures and frequent container restarts.

Resolution:
Portworx Enterprise now replaces exclusive mutex locks in kv_mem.go with RWMutex. This allows multiple read operations to execute concurrently while still preserving safe write access. This significantly reduces lock contention for common read-heavy operations, improving responsiveness and stability during health checks, volume operations, and snapshot workflows.

Components: Control Plane
Affected Versions: 3.3.0
Minor
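
The essence of the fix is replacing a mutual-exclusion lock with a reader-writer lock: many readers may proceed concurrently, while a writer still gets exclusive access. Python's standard library has no direct RWMutex equivalent, so this illustrative sketch builds one from a condition variable; it is a teaching aid for the locking pattern, not Portworx code:

```python
import threading

class RWLock:
    """Allows many concurrent readers OR one exclusive writer."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_read(self):
        with self._cond:
            while self._writer:                    # readers wait only for writers
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            while self._writer or self._readers:   # writers need full exclusivity
                self._cond.wait()
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()

lock = RWLock()
store = {}
lock.acquire_write(); store["k"] = "v"; lock.release_write()
lock.acquire_read(); value = store["k"]; lock.release_read()
print(value)  # v
```

With this pattern, read-side paths such as a status handler or a liveness probe no longer serialize behind each other; only writes block, which is why contention drops sharply for read-heavy workloads.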
PWX-48795

In certain error scenarios, NFS thread handling did not log sufficient diagnostic information. Some retry failures when updating NFS threads were not logged consistently. Additionally, log messages related to thread updates were suppressed by a 12-hour threshold, limiting visibility. When the error Cannot allocate memory occurred, system memory information was not included in the logs, making troubleshooting more difficult.

User Impact: Administrators had limited visibility into NFS thread update failures and memory-related issues. This made it harder to diagnose NFS overload conditions and related performance problems.

Resolution:
Portworx Enterprise improves logging in the NFS thread handling component. Thread update failures are now logged consistently, including during retries. The 12-hour log suppression threshold has been adjusted to improve visibility. When a Cannot allocate memory error occurs, Portworx Enterprise now logs system memory information (for example, the output of free -h) to assist with troubleshooting.

Components: Control Plane, Shared Volumes
Affected Versions: 3.4.0
Minor
PWX-49349

When Portworx Enterprise was configured with separate data and management network interfaces, and the management interface was assigned a link-local IP address (for example, 169.254.0.2/17), some KubeVirt VMs using Sharedv4 volumes could enter a Paused state during Portworx Enterprise or platform upgrades. This issue did not occur when the data interface used a link-local IP address. It also did not affect KubeVirt VMs using Shared Raw Block volumes.

User Impact: During Portworx Enterprise or platform upgrades, a subset of KubeVirt VMs using Sharedv4 volumes could transition to a Paused state and require a manual restart to recover.

Resolution:
Portworx Enterprise now correctly handles link-local IP addresses configured on either the data or management interfaces. KubeVirt VMs using Sharedv4 volumes no longer enter a Paused state during upgrades when link-local IP addresses are present.

Components: Shared Volume (NFS)
Affected Versions: 3.4.1
Minor
PWX-49486

New nodes failed to join a cluster that was migrated from a DaemonSet-based deployment to the Operator when the original cluster name contained underscores (_). In such setups, the StorageCluster name used hyphens (to comply with Kubernetes naming constraints), while the original cluster name with underscores was preserved in the portworx.io/cluster-id annotation. This mismatch caused node identity generation to fail during cluster join.

User Impact: New nodes were unable to join the cluster after migration, preventing cluster expansion or node replacement.

Resolution:
Portworx Enterprise now correctly identifies the StorageCluster by additionally validating the portworx.io/cluster-id annotation when resolving cluster identity. This ensures new nodes can successfully join the cluster even when the StorageCluster name differs from the original cluster name.

Components: Control Plane
Affected Versions: 3.5.2
Minor
PWX-51883

Portworx Enterprise reported incorrect usage metrics for Shared Raw Block (RWX Block) volumes. Metrics such as px_volume_usage_bytes and px_volume_fs_usage_bytes did not reflect actual filesystem usage because the filesystem resides inside the guest VM, which Portworx Enterprise cannot inspect.

User Impact: Capacity-related alerts for Shared Raw Block volumes could be inaccurate and repeatedly triggered, even when the actual filesystem usage inside the VM was within limits.

Resolution:
Portworx Enterprise no longer generates capacity-full alerts for Shared Raw Block volumes. This prevents false alerts for volumes where filesystem usage cannot be accurately determined by Portworx Enterprise.

Components: Control Plane
Affected Versions: 3.5.2
Minor
PWX-52148

In earlier Portworx Enterprise versions, PX-StoreV2 could let the backend device optimal I/O size override the user-specified maximum pool size. As a result, pools created with max_pool_size_tb did not always use the configured limit consistently.

User Impact: Pools could be created with a maximum pool size larger than the value specified in max_pool_size_tb, which resulted in inconsistent pool sizing across nodes.

Resolution:
Portworx now ignores the backend device optimal I/O size when calculating the maximum pool size and honors the user-specified max_pool_size_tb value consistently for all pools.

Components: Control Plane
Affected Versions: 3.5.2
Minor
PWX-52399

During pool resize, a node can enter pool maintenance mode. If an encrypted volume is attached at that time, encrypted volume cleanup can stall because the storage service is unavailable.

User Impact: The node can get stuck in pool maintenance mode, and users must exit maintenance mode from another node and reboot the affected node to recover.

Resolution:
Portworx Enterprise now skips encrypted volume cleanup during pool maintenance. This prevents nodes from getting stuck during pool maintenance triggered by pool resize.

Components: Drive & Pool Management
Affected Versions: 3.6.0
Minor
PWX-52411

Encrypted volumes created by Portworx Enterprise did not support space reclamation using fstrim because encrypted device mappings were created without enabling discard support.

User Impact: Encrypted volumes could not reclaim unused space after data deletion. As a result, storage usage could remain higher than expected, and AutoFSTrim operations could fail and generate alerts.

Resolution:
Portworx Enterprise now enables discard support for encrypted volumes to allow fstrim operations. For encrypted volumes created in Portworx Enterprise 3.5.x or earlier releases, detach and reattach the volumes once after upgrading your cluster to 3.6.0 or later, to apply the fix and restore fstrim functionality.

Components: Control Plane
Affected Versions: 3.5.2
Minor
PWX-52744

On every Portworx Enterprise restart, a false SvMotionMonitoringSuccess alert could be generated even when no Storage vMotion operation occurred.

User Impact: Users could see unnecessary SvMotionMonitoringSuccess alerts after a Portworx Enterprise restart, which could cause confusion even though no Storage vMotion operation was performed.

Resolution:
Portworx Enterprise now prevents false SvMotionMonitoringSuccess alerts during restart when no Storage vMotion operation has occurred.

Components: Control Plane
Affected Versions: 3.6.0
Minor

Known issues (Errata)

Issue Number | Issue Description | Severity
PWX-47702

If Portworx Enterprise restarts during volume creation, the volume can be left in an intermediate non-idempotent state that prevents the operation from recovering automatically, which causes the PVC to remain in a Pending state.


User Impact: The PVC does not bind, and pods that reference it remain stuck waiting for storage.


Workaround: Delete the affected PVC (and the corresponding Portworx volume, if it remains present) and recreate the PVC to retry provisioning after Portworx stabilizes.

Components: Shared Volume (NFS)
Affected Versions: 3.6.0

Minor
PWX-47913

On a storageless node, KVDB can fail to bootstrap with the error: Failed to take the lock on drive set <storageless-node-id>: ConfigMap is locked.
This issue occurs when Portworx attempts to provision or reprovision KVDB on a storageless metadata node and encounters a stale DriveSet lock entry.

User Impact: KVDB failover or rebalance to the affected storageless metadata node can fail. As a result, the cluster can remain under-replicated for KVDB members, and the affected node can be reported incorrectly under nodes using local drives even though it is a storageless node.


Workaround: Restart Portworx on the affected storageless node. This clears the stale state and allows KVDB provisioning to succeed.

Components: Volume and Drive Management
Affected Versions: 3.6.0

Minor
PWX-51772

During continuous pool failovers, a pool failover plan can fail after deactivation completes if the source node comes back online while the failover plan is still in progress. In this case, Portworx Enterprise attempts to resume the same plan and tries to lock the same volumes again, even though those volume create locks are already held by the cluster coordinator. This lock conflict can cause the plan to transition to a failed state.

User Impact: Pool failover or rebalance operations can fail and leave plans in a PLAN_FAILED state after reaching the deactivation_done stage. This can interrupt automated recovery workflows during repeated node restarts or continuous pool failover testing and may require manual retry to restore the pool to a healthy state.


Workaround: Retry the failed pool failover or pool migration operation.

Components: Dynamic Pools
Affected Versions: 3.6.0

Minor
PWX-51808

Pool expansion can fail if the drives in a pool do not report the same size after a previous expansion attempt only partially completes on the backend. In this condition, Portworx Enterprise detects a size mismatch across the pool drives and marks the resize operation as failed.

User Impact: The affected pool can go offline, and subsequent pool expansion attempts can fail with a size mismatch error. This can prevent capacity expansion until all drives in the pool are brought to the same size.


Workaround: Manually resize the affected drives on the FlashArray so that all pool drives have the same size, and then retry the pool expansion.

Components: Drive & Pool Management
Affected Versions: 3.6.0

Minor
PWX-51866

When you run a PX Backup (CloudSnap) while a storage pool failover is in progress, the backup for volumes on the moving pool may fail.

User Impact: The affected disk backup fails intermittently, which can cause the VM backup job to complete with failures and require a rerun.


Workaround: Retry the backup after the pool failover completes.

Components: Dynamic Pools
Affected Versions: 3.6.0

Minor
PWX-51924

Pod creation can get stuck in the ContainerCreating state due to a mount failure with the error Sentinel mount already exists but requires cleanup for volume. This occurs in rare circumstances when the server believes the client is no longer using the mount, which leaves a stale mount behind.

User Impact: Pods remain stuck in the ContainerCreating state because the volume cannot be mounted. Subsequent mount attempts fail due to the presence of stale sentinel mounts from previously terminated pods. This can block application startup and require manual intervention.


Workaround: Manually unmount the sentinel mount to clean up the stale mounts on the affected node and restart the pod. In some cases, restarting Portworx on the node can also clear stale mount state.

Components: Shared Volume (NFS)
Affected Versions: 3.6.0

Minor
PWX-52266

After a rebalance or pool migration failure, a storage pool can remain offline on the source node. If you restart the source node or attempt a maintenance cycle, Portworx can fail to initialize the pool with an error.

User Impact: The affected pool stays offline, and the source node might fail to recover through a maintenance cycle. This can leave the node in a degraded state until the pool conflict is cleared.


Workaround: Do not use a maintenance cycle to recover the affected pool. Instead, schedule a new pool migration for the offline pool. This can recover the pool and clear the conflict.

Components: Dynamic Pools
Affected Versions: 3.6.0

Minor
PWX-52422

On a storageless node labeled with px/metadata-node=true, KVDB failover can fail because Portworx Enterprise can enter a self-deadlock while reprovisioning the KVDB drive.

User Impact: KVDB replacement can fail on the affected metadata node, which can stall operations that depend on successful KVDB failover. In migration scenarios, this can cause node drain jobs to fail or remain pending and can delay migration completion.


Workaround: Restart Portworx on the affected node where KVDB failover does not complete.

Components: KVDB
Affected Versions: 3.6.0

Minor
PWX-52423

During a Portworx upgrade or NFS server restart, a SharedV4 (service) hotplug volume mount can become stale during server failover or restart. This can result in stale NFS file handles and cause KubeVirt live migration to fail.

User Impact: Live migration of VMs that use SharedV4-backed hotplug volumes can fail. The affected hotplug disks can become inaccessible until recovery, disrupting VM mobility operations.


Workaround: Restart the virtual machine to remount the affected hotplug volumes and clear the stale NFS file handle condition.

Components: Shared Volume (NFS)
Affected Versions: 3.6.0

Minor
PWX-52427/PWX-52921

Portworx depends on a node being marked unschedulable and the MachineConfigPool (MCP) being in an updating state to trigger certain actions. However, the MCP can remain in an updating state even after a node has completed an upgrade or before an upgrade starts. In these scenarios, cordoning a node can incorrectly trigger an unintended pool failover.

User Impact: Unintended pool failovers can occur when a node is cordoned while the MCP is inaccurately reported as updating. This can lead to unnecessary pool movement and potential disruption. Failover occurs only when the node is marked unschedulable (cordoned).


Workaround: No workaround.

Components: Dynamic Pools
Affected Versions: 3.6.0

Minor
PWX-52679

When a large number of VMs are started simultaneously on PX-StoreV1 clusters, KubeVirt VMs with hotplug volumes may fail to start. This occurs because when multiple VMs are started simultaneously, virt-launcher pods are scheduled first, while hotplug pods remain in a Pending state due to scheduler queue delays and volume enumeration latency. As a result, the VirtualMachineInstance (VMI) times out while waiting for the hotplug pod to be scheduled, causing the VM startup to fail.

User Impact: KubeVirt VMs with hotplug volumes may fail to start during bulk operations. This can delay or disrupt large-scale VM deployments, as VMs repeatedly fail and re-enter the scheduling queue, increasing overall startup time and system load.


Workaround: For workloads that use hotplug disks, start the VMs in batches of at most 20 instead of starting all VMs simultaneously. Starting more than 20 VMs in a batch can cause some VMs to fail to start due to timeouts resulting from delayed Stork scheduling.

Components: Stork & DR
Affected Versions: 3.6.0

Minor
PWX-52863

During pool expansion on PX-StoreV2, Portworx requires entering pool maintenance mode because online expansion is not supported for PX-StoreV2 or px-cache pools. In this state, when encrypted volumes are present, I/O operations take longer time and Portworx can hit request timeouts such as WriteRequest::wait_version and restart unexpectedly.

User Impact: This issue occurs specifically during PX-StoreV2 pool expansion workflows that require maintenance mode. Portworx may restart during the transition, interrupting or delaying the expansion operation. The system typically recovers after the restart, and there is no functional data loss, but the expansion operation may fail and require a retry.


Workaround: No workaround. After the restart, Portworx Enterprise typically recovers and the pool expansion operation can be retried.

Components: Drive & Pool Management
Affected Versions: 3.6.0

Minor