Portworx Enterprise Release Notes
3.5.2.1
February 27, 2026
To install or upgrade Portworx Enterprise to version 3.5.2.1, ensure that your cluster meets all system requirements and is running Operator 25.5.1 or later, Stork 25.6.0 or later, and one of the supported kernels.
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-51669 | In Portworx Enterprise 3.5.2, diagnostic (diags) collection could take an excessive amount of time in large-scale environments. During diags collection, the command pxctl cloudsnap status -s <volume-id> -j was executed for every volume. In clusters with a large number of volumes, this significantly increased the overall diags collection time. User Impact: Diagnostic bundle collection could take an excessive amount of time per node, delaying troubleshooting and impacting time-sensitive support investigations. Resolution: Portworx now optimizes diags collection in large-scale clusters by limiting CloudSnap status collection to the first 500 volumes by default. CloudSnap status collection is also parallelized to significantly reduce overall execution time. If full CloudSnap status collection is required, you can explicitly enable it using the --collect-cloudsnap-status flag with the pxctl sv diags command. These changes ensure that diags collection completes quickly by default, while still providing the option to collect detailed CloudSnap status when needed. Components: Telemetry & Monitoring Affected Versions: 3.5.2 | Minor |
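If you need the full CloudSnap status in a diagnostic bundle, you can pass the flag described above explicitly. A minimal command sketch, assuming the flag name exactly as documented in PWX-51669 (verify against `pxctl sv diags --help` for your version):

```shell
# Collect a diagnostic bundle including CloudSnap status for all volumes,
# not just the default first 500. May be slow on clusters with many volumes.
pxctl sv diags --collect-cloudsnap-status
```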
3.5.2
January 22, 2026
To install or upgrade Portworx Enterprise to version 3.5.2, ensure that your cluster meets all system requirements and is running Operator 25.5.1 or later, Stork 25.6.0 or later, and one of the supported kernels.
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-48849 | When a volume has more than 2,000 cloud snapshots stored on an NFS object store, deleting cloud snapshots might leak file descriptors. User Impact: Portworx could eventually run out of file descriptors, causing storage nodes to go down and leading to cluster instability. Resolution: Portworx now closes file descriptors used for one-shot or bounded snapshot listings during cloud snapshot deletion, which prevents NFS directory handle leaks when pagination is not fully consumed. Components: Cloudsnaps Affected Versions: 3.5.0 | Minor |
| PWX-49255 | KubeVirt virtual machines that use SharedV4 service volumes as hot-plug disks can pause or fail during Portworx restarts or coordinator node shutdown events (including node restarts). This occurs only in rare cases where a service volume failover attempt fails. In such scenarios, the hot-plug pod restarts, but because the volume is bind-mounted directly into the virt-launcher pod, bypassing kubelet publish and unpublish operations, Portworx does not detect the active mount, which can lead to stalled I/O. User Impact: The affected virtual machine enters a paused or failed state and becomes unavailable. Resolution: Portworx now treats hot-plug SharedV4 service volume failures as a VM-level recovery event. When the hot-plug pod must be deleted, Portworx also restarts the associated virt-launcher pod so the stale bind mount is removed and the VM restarts cleanly instead of remaining paused. Components: Shared V4 Affected Versions: 3.2.x or later | Minor |
| PWX-49481 | During sharedv4 volume failover, Portworx may fail to clean up a stale sentinel mount. When the sentinel cleanup timeout is set to 0 (the default), the background cleanup thread does not run, leaving the sentinel mount in place and preventing the volume from detaching as expected. User Impact: Sharedv4 volumes may remain attached after application teardown or node failover. Tests and workloads that expect the volume to detach can fail, potentially blocking application rescheduling or cleanup workflows. Resolution: Portworx now cleans up stale sentinel mounts during failover and retries cleanup even when the sentinel cleanup thread is disabled, so that the volumes can detach successfully after all pods are terminated. Components: Shared Volumes (NFS) Affected Versions: 3.5.1 | Minor |
| PWX-49829 | When you delete a namespace that contains SharedV4 service volumes, the PVCs might be removed from Kubernetes but the corresponding Portworx volumes might remain attached and not get deleted. This occurred because Portworx sent the wrong option key names when requesting the server-side unmount during cleanup. As a result, the unmount request failed with an error similar to `expected to be called with a caller`, leaving stale service state (including mounts under /var/lib/osd/pxns and, in some cases, sentinel paths under /var/lib/osd/pxns/sentinel/<volume-id>). User Impact: After deleting namespaces (including bulk deletions), some Portworx SharedV4 service volumes could remain attached and persist in pxctl volume list, even though the PVCs no longer existed. This required manual cleanup. Resolution: Portworx now uses the correct option keys for server-side unmount requests. With the correct parameters, Portworx can complete the server-side unmount and remove the remaining client/export state so the volume detach and delete workflow finishes successfully when a namespace is deleted. Components: Shared Volumes (NFS) Affected Versions: 3.5.2 | Minor |
| PWX-49862 | A SharedV4 volume attach operation might fail partially when the key-value database (KVDB) experiences high load. The volume state was incorrectly recorded as attached even though the attach did not fully complete, and the corresponding Kubernetes service for the SharedV4 volume was not created. Subsequent mount attempts failed with a service not found error, leaving pods stuck in the ContainerCreating state. User Impact: Pods consuming SharedV4 volumes could fail to start and remain stuck in ContainerCreating, requiring manual intervention to recover. Resolution: Portworx now validates that the required SharedV4 Kubernetes service exists before proceeding with mount operations. If an attach operation fails or leaves the volume in an inconsistent state, Portworx reconciles the volume state and ensures the service is created before allowing mounts. Components: Shared Volumes (NFS) Affected Versions: 3.5.2 | Minor |
Known issues (Errata)
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-46240 | When you upgrade to Portworx Enterprise version 3.5.2 from a version earlier than 3.4.2.1, virtual machines that use Portworx SharedV4 (NFS-backed) volumes as hot-plug disks may enter a paused state because the hot-plug volumes can detach during the upgrade process. User Impact: Virtual machines that use SharedV4 hot-plug volumes may pause and require manual intervention after the upgrade. Workaround: Stop and restart the virtual machine after the upgrade completes. Components: Volume Management | Minor |
| PWX-49150 | During User Impact: The PVC can remain in an Attached state even though it has no active consumers, preventing normal detach operations. Workaround: Manually detach the volume from the host by running: Components: Shared Volumes (NFS) | Minor |
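For PWX-46240 above, the stop-and-restart workaround can be scripted with the KubeVirt CLI. A hedged sketch, assuming `virtctl` is installed and `<vm-name>`/`<namespace>` are placeholders for your environment:

```shell
# Stop the paused VM, wait for it to shut down, then start it again.
virtctl stop <vm-name> -n <namespace>
kubectl wait vm <vm-name> -n <namespace> \
  --for=jsonpath='{.status.printableStatus}'=Stopped --timeout=5m
virtctl start <vm-name> -n <namespace>
```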
3.5.1
January 06, 2026
To install or upgrade Portworx Enterprise to version 3.5.1, ensure that your cluster meets all system requirements and is running Operator 25.5.0 or later, Stork 25.6.0 or later, and one of the supported kernels.
Improvements
| Improvement Number | Improvement Description | Component |
|---|---|---|
| PWX-37143 | Portworx now uses Sentinel mounts to optimize Sharedv4 (NFS) volume handling when multiple pods on the same node access the same volume. Instead of creating separate NFS mounts for each pod, Portworx creates a single NFS mount per volume per node and uses lightweight bind mounts for additional pods. This enhancement improves performance, reduces mount overhead, speeds up pod restarts, and simplifies unmounting. For more information, see Configure sentinel mount cleanup interval and timeout. | Shared Volumes (NFS) |
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-31343/PWX-47952 | After upgrading the Anthos version in your Portworx cluster, volumes in the pods switched to read-only due to I/O errors and remained in that state. This caused pods to crash (for example, with a CrashLoopBackOff error), and the volumes were not remounted as read-write because Portworx did not consistently detect kernel-initiated read-only remounts in certain edge cases. As a result, automatic recovery, such as pod restarts, did not occur. User Impact: Affected pods could crash repeatedly while their volumes remained read-only, requiring manual intervention to restore read-write access. Resolution: Portworx now correctly detects read-only volumes and automatically restarts the affected pods. Components: Storage Affected Versions: 3.5.0 | Minor |
| PWX-41941 | During node decommission, volume replica removal occurs asynchronously and can take time to complete. Portworx marks the node as decommissioned and waits for all replica removals to finish before removing the node from the cluster. In some cases, the Portworx cluster coordinator might miss the replica-removal completion events, causing the node to remain stuck in a decommissioned state. User Impact: A node could appear decommissioned but not fully removed from the cluster, requiring users to retry the decommission operation manually. Resolution: The Portworx cluster coordinator now accurately receives all replica-removal completion events and ensures the final replica is fully removed before deleting the node specification from the cluster. This ensures the decommission process completes successfully without manual intervention and nodes no longer get stuck in a decommissioned state. Components: Storage Affected Versions: 3.5.0 or earlier | Minor |
| PWX-48149 | When taking a cloud snapshot in cases where a metadata chunk was already stored in the cloud, the block-diff collector did not correctly handle a rare boundary condition. During diff calculation, a page containing zero records caused the collector to generate an invalid next offset value. This resulted in an incomplete snapshot. User Impact: CloudSnap backups could become incomplete, and subsequent backups on the affected volume could cause a px-storage crash. Resolution: Portworx now correctly handles pages with zero records during diff calculation and ensures that the next offset is computed safely. CloudSnap diff logic no longer returns invalid offsets, preventing incomplete snapshots and eliminating the crash scenario. Components: Cloudsnaps Affected Versions: 3.5.0 | Minor |
| PWX-48538 | After deleting all the pools in a cluster, the node appears storageless in pxctl status, but the driveset still contains a metadata drive. Because the current driveset is not empty, attempting storageless-to-storage conversion using the Provision Storage Node (PSN) label fails. User Impact: You cannot convert a storageless node back to a storage node without clearing the leftover metadata drive; pool re-creation via the PSN label is blocked. Resolution: Portworx now correctly handles the cleanup of leftover metadata drives when all pools are deleted. If a driveset contains only a metadata device with no remaining storage pools, Portworx automatically clears the stale metadata entry, allowing storageless-to-storage conversion through the PSN label to proceed successfully. Components: Storage Affected Versions: 3.5.0 | Minor |
| PWX-48516/PWX-48676 | When migrating a cluster from a DaemonSet-based Portworx deployment to an Operator-managed setup, using the portworx.io/cluster-id annotation on the StorageCluster resource allows you to preserve the original cluster name. However, during KVDB TLS migration, Portworx fails to locate the correct StorageCluster if the cluster name differs from the name specified in the portworx.io/cluster-id annotation. User Impact: Clusters with mismatched STC names and portworx.io/cluster-id annotations could not perform KVDB failovers or add new KVDB nodes. Resolution: Portworx now checks all available StorageCluster resources and identifies the correct one using the portworx.io/cluster-id annotation. This ensures that KVDB TLS migration and failover operations work correctly, even when the STC name differs from the cluster name preserved during migration. Components: KVDB Affected Versions: 3.5.0 | Minor |
| PWX-48719 | To support OpenShift version 4.20 or later, Portworx introduced /var/lib/osd/pxns as a common directory used to export volumes from a node. Although access to actual volume data remained restricted, exporting this directory in a broad read-only mode was flagged as a security concern. User Impact: Security scans could report warnings about the export of the /var/lib/osd/pxns directory, even though users could not access data they were not authorized to see. Resolution: Portworx now exports the directory using only the combined access rules of the underlying volumes, ensuring that exports are limited to exactly what each volume allows and addressing the security concern. Components: Shared Volumes Affected Versions: 3.5.0 or earlier | Minor |
| PWX-48765 | On clusters using NBDD, Portworx experienced a segmentation fault in the px-storage process when a GetDiffExtents request was triggered while all storage pools on the node were down. In this state, internal data structures were not fully initialized, leading to a null pointer dereference. User Impact: When the segmentation fault occurs, the px-storage process crashes on the affected node, and the local storage pools may enter an InitErr or Down state. This can temporarily disrupt I/O operations for volumes on that node until Portworx restarts and the pools recover. Resolution: The diff-handling logic has been updated to detect when the local data store is not initialized, for example, when all pools are down, and skip initiating diff extent requests in that state. This prevents null pointer dereferences, avoids segmentation faults, and ensures that nodes handle pool-down conditions when NBDD is enabled. Components: Storage Affected Versions: 3.5.0 | Minor |
| PWX-48794 | After upgrading to AWS SDK v2 in Portworx 3.4.1, stricter data integrity validation was enabled by default for S3 operations. Some object storage systems, such as older versions of MinIO or other S3-compatible backends, may not fully support these checks. As a result, CloudSnap uploads and S3 credential creation failed with errors. User Impact: Customers using incompatible S3-compatible object stores experienced CloudSnap backup and credential creation failures after upgrading to Portworx 3.4 or later. Resolution: Portworx has modified the AWS SDK v2 configuration to include checksum headers in S3 requests only when required. This restores compatibility with older MinIO or other S3-compatible backends while maintaining normal operation with AWS S3 and other supported object stores. Components: Cloudsnaps Affected Versions: 3.5.0 | Minor |
| PWX-48829 | When a node hosting multiple virtual machines is abruptly powered off, one of the affected virtual machines could become stuck in recovery mode after being restarted on another node, while other virtual machines recovered successfully. User Impact: One or more virtual machines could remain unreachable after node power loss, requiring manual intervention to recover the affected VM. Resolution: Portworx now ensures consistent recovery handling for virtual machines during abrupt node shutdowns, allowing all affected VMs to recover and start successfully on new nodes without getting stuck in recovery mode. Components: Storage Affected Versions: 3.5.0 or earlier | Minor |
| PWX-48835 | NBDD reported a much larger discard_bytes_total value than expected during snapshot delete and overwrite operations. User Impact: NBDD discarded the entire snapshot range instead of only the intended blocks, leading to misleading discard metrics and unnecessary data discard during snapshot workflows. Resolution: NBDD performs discard operations only on the correct block ranges, and discard_bytes_total accurately reflects the amount of data actually discarded. Components: Snapshots Affected Versions: 3.5.0 | Minor |
| PWX-48967 | Portworx cluster nodes fail to start when shared raw block VMs are running and their underlying PVCs are attached/exported on multiple nodes. If Portworx goes down on several nodes and a node that was hosting these VM PVC exports is auto-decommissioned, volumes may still reference that removed node. User Impact: Portworx repeatedly fails to come up on other nodes because they continue to request the storage spec for the decommissioned node, leaving the cluster in a restart loop and storage unavailable. Resolution: Portworx now prevents cluster startup failures by clearing invalid exported-node references when a node is auto-decommissioned. If a shared raw block volume was still marked as exported on a node that no longer exists, Portworx automatically removes that reference so all remaining nodes can retrieve a valid storage spec and start successfully. Components: Storage Affected Versions: 3.3.0 or later | Minor |
| PWX-48982 | During node startup, Portworx attempted to unmount device-mapper paths for FlashArray Direct Access (FADA) devices if they were mounted under system directories such as /var/lib/osd or /var/lib/kubelet. Because Portworx did not verify whether these devices actually belonged to Portworx-managed FADA volumes, it could unintentionally unmount non-PX FADA devices that were legitimately mounted by the host or other components. User Impact: Nodes could lose valid host-mounted FADA devices during startup, disrupting workloads and preventing dependent components from functioning correctly. Resolution: Portworx now unmounts FADA devices at startup only when a valid Portworx volume is associated with the device, ensuring that non-PX FADA devices remain mounted and unaffected during initialization. Components: Volume Management Affected Versions: 3.5.0 or earlier | Minor |
| PWX-49015/PWX-49036 | When increasing the replication factor of a volume, the operation stalls if the volume has snapshots whose replica sets do not match the parent volume. During the High Availability (HA) add workflow, where Portworx adds a new replica and synchronizes the volume’s data to the new node, Portworx incorrectly attempted to compare data against a snapshot that was not an internal snapshot and did not share the same replicas as the parent volume. This caused the data-comparison phase of the workflow to fail, preventing the HA-add operation from continuing. This is a day 0 issue. User Impact: The volume replication operation could remain stuck indefinitely, preventing users from increasing the HA level of affected volumes. Resolution: Portworx now correctly handles snapshots with mismatched replica sets during the HA-add workflow. The HA-add workflow no longer gets blocked when older snapshots have different replica layouts. Replication increases now complete successfully even when snapshots do not match the parent volume’s replica configuration. Components: Storage Affected Versions: 3.5.0 or earlier | Minor |
| PWX-49219 | After upgrading to Portworx Enterprise 3.5.0, some Grafana pool-level panels in the Portworx Performance dashboard stopped displaying data. Portworx introduced new node-level metrics and updated naming, which exposed a defect in the Prometheus metrics registration logic. The internal library registers metrics using only the metric key name, without considering subsystem names. Therefore, when multiple subsystems define metrics with the same key (for example, write_latency_seconds), the first metric registered overwrites the others. User Impact: Pool-focused Grafana panels (for example, pool write latency, write throughput, flush latency, flushed bytes) showed no data, even though the cluster was healthy. Only pool-level visualization was affected; Portworx I/O operations continue to function normally. Resolution: Portworx now correctly registers pool-level metrics by including both the subsystem and metric name during registration, preventing name conflicts between node-level and pool-level metrics. This ensures that all pool metrics are exported to Prometheus and Grafana dashboards display pool-level data correctly after upgrade. Components: Volume Management Affected Versions: 3.5.0 | Minor |
| PWX-49340 | Pods using shared service (Sharedv4) volumes could remain stuck in the ContainerCreating state when the underlying file system contained errors that prevented the volume from being mounted. User Impact: Affected pods failed to start because the volume could not be mounted, even though the cluster and other workloads continued to run normally. Resolution: Portworx now prevents workloads from starting on volumes with detected file system errors until recovery is completed, ensuring that applications do not access corrupted file systems and improving stability for shared volume mounts. Components: Shared Volumes (NFS) Affected Versions: 3.5.0 | Minor |
Known issues (Errata)
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-48939 | Scheduling a SharedV4 application can fail if the required NFS packages are not installed on a node. During Portworx startup, automatic NFS installation may fail (for example, due to transient network issues). Portworx then enters a cooldown state and skips subsequent NFS installation attempts, even though the NFS service is not present on the node. User Impact: Pods that use SharedV4 volumes fail to mount with errors indicating that NFS installation was skipped. As a result, SharedV4 workloads cannot be scheduled or started on affected nodes. Workaround: Restart Portworx on the affected node to retry NFS package installation and start the NFS service successfully. Components: Shared Volumes (NFS) | Minor |
| PWX-49255 | KubeVirt virtual machines that use SharedV4 service volumes as hot-plug disks can pause or fail during Portworx restarts or coordinator node shutdown events (including node restarts). This occurs only in rare cases where a service volume failover attempt fails. In such scenarios, the hot-plug pod restarts, but because the volume is bind-mounted directly into the User Impact: The affected virtual machine enters a paused or failed state and becomes unavailable. Workaround: Restart the virtual machine. Components: Shared V4 | Minor |
| PWX-49481 | During sharedv4 volume failover, Portworx may fail to clean up a stale sentinel mount. When the sentinel cleanup timeout is set to User Impact: Sharedv4 volumes may remain attached after application teardown or node failover. Tests and workloads that expect the volume to detach can fail, potentially blocking application rescheduling or cleanup workflows. Workaround: Set Components: Shared Volumes (NFS) | Minor |
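For the PWX-48939 workaround above, restarting Portworx on a single node is typically done by labeling the Kubernetes node rather than deleting pods. A hedged sketch, assuming the standard `px/service` node label is supported in your Portworx deployment (verify against the Portworx documentation for your version):

```shell
# Request a restart of the Portworx service on the affected node so it
# retries NFS package installation.
kubectl label nodes <node-name> px/service=restart --overwrite
# After Portworx comes back up, remove the label so it does not re-trigger.
kubectl label nodes <node-name> px/service-
```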
3.5.0
November 19, 2025
To install or upgrade Portworx Enterprise to version 3.5.0, ensure that your cluster meets all system requirements and is running Operator 25.5.0 or later, Stork 25.4.0 or later, and one of the supported kernels.
New Features
- Support for Cloud Snapshots via CSI
  Portworx now supports cloud-based snapshots through the Kubernetes Container Storage Interface (CSI), enabling users to create, delete, and restore cloud snapshots stored in S3-compatible object storage using standard CSI APIs.
  For more information, see Manage Cloud snapshots of CSI-enabled volumes.
- Support for automatic provisioning of storage nodes when autoscaling with cluster autoscaler
  In an environment where the cluster autoscaler is enabled, you can link the autoscaler to one or more `nodePools` or `machineSets`. You can preconfigure the node template by adding the `portworx.io/provision-storage-node="true"` label. This ensures that each node created from that template starts as a storage node automatically.
  For more information, see Provisioning Storage Nodes and Configure Kubernetes Cluster Autoscaler to Autoscale Storage Nodes.
  Note: The parameters `maxStorageNodes`, `maxStorageNodesPerZone`, and `maxStorageNodesPerZonePerNodeGroup` are deprecated and replaced with the `initialStorageNodes` parameter. When scaling a Portworx cluster, the `maxStorageNodesPerZone` parameter needed an increment to add more storage nodes. Instead, you can add new nodes and label them with `portworx.io/provision-storage-node="true"` to scale the cluster. See Deprecated Features.
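The `initialStorageNodes` setting described above lives in the StorageCluster custom resource. A hypothetical fragment as a sketch only: the `apiVersion`/`kind` are the standard Portworx Operator values, but the exact field placement of `initialStorageNodes` is an assumption here — confirm it against the StorageCluster reference for your Operator version:

```yaml
# Sketch only: field placement of initialStorageNodes is an assumption.
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster
  namespace: portworx
spec:
  cloudStorage:
    # Provision only this many of the initial nodes as storage nodes;
    # omit the parameter if all nodes should become storage nodes.
    initialStorageNodes: 3
```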
- Enhanced CloudSnap object size for improved backup and restore performance
  Portworx now supports increasing the CloudSnap backup object size from the default 10 MB to 100 MB, thereby accelerating backup and restore operations for large volumes.
  In asynchronous disaster recovery scenarios, using a larger 100 MB object size results in less data to track and transfer. This reduces CPU and metadata overhead and minimizes round trips, especially in high-latency environments.
  For more information, see Cloud Snapshots.
- Non-Blocking Device Delete (NBDD) for snapshot and volume deletes
  The Non-Blocking Device Delete (NBDD) feature improves the performance and reliability of large volume and snapshot deletions in Portworx Enterprise clusters running on PX-StoreV2. NBDD performs background block discards at a configurable rate before deletion, reducing I/O pauses and helping reclaim storage, especially in SAN or FlashArray (FA) environments.
  For more information, see Non-Blocking Device Delete.
- PX-StoreV2 set as default datastore
  Portworx now uses PX-StoreV2 as the default datastore for all new deployments on supported platforms.
  For more information, see PX-StoreV2.
- Support for co-location on Portworx RWX block volumes
  Portworx now supports volume co-location for KubeVirt virtual machines (VMs) on Portworx RWX block volumes. This feature enables Portworx to place the replicas of all VM volumes on the same set of nodes, improving I/O performance, reducing network traffic, and avoiding unnecessary remote volume access.
  For more information, see Manage Shared Block Device (RWX Block) for KubeVirt VMs.
- Portworx shared RWX block volumes for KubeVirt VMs on SUSE Virtualization
  Portworx now supports ReadWriteMany (RWX) raw block volumes for KubeVirt virtual machines (VMs) on SUSE Virtualization, enabling features such as live migration, high availability, and persistent VM storage across SUSE Virtualization nodes.
  For more information, see Manage Portworx RWX Block Volumes on SUSE Virtualization for KubeVirt VMs.
- Portworx installation support on SUSE Virtualization with FlashArray
  Portworx now supports installation on SUSE Virtualization environments with Pure Storage FlashArray as the backend storage provider. This integration enables administrators and platform engineers to deploy a highly available, scalable, and production-grade storage layer for virtualized and cloud-native workloads running on SUSE platforms.
  For more information, see Installation on SUSE Virtualization.
- Enhanced FUSE driver with tunable runtime options
  The Portworx FUSE driver now includes enhanced support for high-performance Portworx workloads through new tunable runtime options. These options allow you to fine-tune FUSE behavior based on workload characteristics and system resources.
  For more information, see the Tune Performance guide.
- Increased volume attachment limit per node
  Portworx now supports up to 1,024 volume attachments per node. The volume attachment limit for FADA and SharedV4 volumes remains at 256.
  For more information, see Features and configurations supported by different license types.
  Note: You must upgrade the FUSE driver in your Portworx cluster to version 3.5.0 or later to support the new limits.
- Automatic KVDB node rebalancing across failure domains
  Portworx now supports automatic rebalancing of internal KVDB nodes across failure domains after a failover. The rebalance check runs every 15 minutes and ensures that the KVDB nodes do not remain concentrated in a single failure domain after failovers or cluster topology changes.
- Support for KubeVirt VMs with IPv6
  Portworx now supports KubeVirt VMs with IPv6 networking.
Early Access Features
- Support for storage operations in OpenShift hosted control plane clusters
  Portworx now supports storage operations for both management and hosted clusters created using OpenShift hosted control planes.
  For more information, see Deploy OpenShift hosted clusters with Portworx.
Deprecated Features
The parameters maxStorageNodes, maxStorageNodesPerZone, and maxStorageNodesPerZonePerNodeGroup are deprecated. The settings for these parameters were often misunderstood, leading to incorrect configurations. In multiple cases, improper values caused Portworx to provision extra storage drivesets leading to orphaned nodes, loss of Portworx cluster quorum, and application unavailability. These parameters are replaced with the initialStorageNodes parameter and the portworx.io/provision-storage-node="true" label.
If you are upgrading a Portworx cluster to version 3.5.x or later and using Portworx Operator 25.5 or later, no specific action is needed. The values set for the deprecated parameters (maxStorageNodes, maxStorageNodesPerZone, and maxStorageNodesPerZonePerNodeGroup) will remain as is in your configuration, but will have no operational effect. After upgrading, you cannot modify these parameters to new values; you can only unset them. Attempts to set or modify them (on both new and existing clusters) will result in an error.
When installing Portworx Enterprise on a new cluster, the Portworx Operator automatically labels the nodes to provision them as storage nodes. No new action is needed. The Portworx specification (the StorageCluster CR) should not include these deprecated parameters. You can use the optional initialStorageNodes parameter to have only a subset of the nodes provisioned as storage nodes. If you want all nodes to become storage nodes, the initialStorageNodes parameter is not needed.
There are additional considerations when using Portworx in a disaggregated setup or when using the Autoscale storage nodes feature, especially in cloud deployments. Refer to the Portworx documentation for these considerations.
When you add new nodes to an existing Portworx cluster, the new nodes initially come up as storageless nodes. You can provision a new driveset and convert a storageless node to a storage node by adding the `portworx.io/provision-storage-node="true"` label and removing the `portworx.io/provision-storage-node-handled` label, using the following command on the Kubernetes node object:
`kubectl label node <node-name> portworx.io/provision-storage-node="true" portworx.io/provision-storage-node-handled-`
Note that the `portworx.io/provision-storage-node` label acts as a one-time request and is not an indication of a permanent state of being a storage node. After handling the request, Portworx will add another label, `portworx.io/provision-storage-node-handled`, which cancels out the `portworx.io/provision-storage-node` label and marks the request as handled. In addition, the `PortworxNewStorageNodeProvisioned` condition is also added to the node object to indicate the result. If the condition status is true, the storage node is provisioned successfully. If the status is false, the node has not been provisioned as a storage node.
For more information, see Provisioning Storage Nodes.
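The request-and-verify flow above can be sketched as two commands. The label names and the `PortworxNewStorageNodeProvisioned` condition come from this document; the jsonpath query is a standard kubectl pattern for reading a node condition:

```shell
# Request storage-node provisioning: add the request label and clear any
# previous "handled" marker on the node object.
kubectl label node <node-name> portworx.io/provision-storage-node="true" portworx.io/provision-storage-node-handled-

# Check the result: "True" means the node was provisioned as a storage node.
kubectl get node <node-name> -o \
  jsonpath='{.status.conditions[?(@.type=="PortworxNewStorageNodeProvisioned")].status}'
```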