Version: 3.4

Portworx Enterprise Release Notes

3.4.2.1

December 17, 2025

To install or upgrade Portworx Enterprise to version 3.4.2.1, ensure that you are running one of the supported kernels and all system requirements are met.

Fixes

Issue Number | Issue Description | Severity
PWX-46697

When a node stops receiving watch updates directly from KVDB, Portworx attempts to pull updates from other nodes with newer data. During this process, the affected node may advertise an outdated KVDB revision to other nodes, which can cause KVDB compaction to be skipped indefinitely. As a result, the KVDB size may grow significantly over time. If this condition persists, the node’s watch version can fall behind the current state, resulting in the error:
Storage watch error, error: Kvdb watch revision compacted.
When this occurs, affected Portworx nodes may restart unexpectedly.

User Impact: If KVDB compaction is skipped, the KVDB size continues to grow over time. Additionally, unexpected node restarts can temporarily disrupt storage services and impact running applications.

Resolution:
Portworx now correctly advertises the latest KVDB revision it has seen, allowing compaction to proceed as expected. Additionally, Portworx automatically restarts only the affected KVDB watch process instead of restarting the entire node service, minimizing disruption to storage operations.

Components: KVDB
Affected Versions: 3.4.2 or earlier
Minor
PWX-48794

After upgrading to AWS SDK v2 in Portworx 3.4.0, stricter data integrity validation was enabled by default for S3 operations. Some object storage systems, such as older versions of MinIO or other S3-compatible backends, may not fully support these checks. As a result, CloudSnap uploads and S3 credential creation failed with errors.

User Impact: Customers using incompatible S3-compatible object stores experienced CloudSnap backup and credential creation failures after upgrading to Portworx 3.4 or later.

Resolution:
Portworx has modified the AWS SDK v2 configuration to send checksum headers in S3 requests only when required. This restores compatibility with older MinIO or other S3-compatible backends while maintaining normal operation with AWS S3 and other supported object stores.

Components: Cloudsnaps
Affected Versions: 3.4.0 through 3.4.2
Minor
PWX-48967

Portworx cluster nodes fail to start when shared raw block VMs are running and their underlying PVCs are attached/exported on multiple nodes. If Portworx goes down on several nodes and a node that was hosting these VM PVC exports is auto-decommissioned, volumes may still reference that removed node.

User Impact: Portworx repeatedly fails to come up on other nodes because they continue to request the storage spec for the decommissioned node, leaving the cluster in a restart loop and storage unavailable.

Resolution:
Portworx now prevents cluster startup failures by clearing invalid exported-node references when a node is auto-decommissioned. If a shared raw block volume was still marked as exported on a node that no longer exists, Portworx automatically removes that reference so all remaining nodes can retrieve a valid storage spec and start successfully.

Components: Storage
Affected Versions: 3.3.0 and later
Minor
PWX-48982

In some Kubernetes environments where nodes are SAN-booted from a FlashArray LUN, the device-mapper path for the root filesystem may change during a platform upgrade (for example, an OCP upgrade that rebuilds initramfs). The path may switch from a user-friendly alias (such as /dev/mapper/fa-boot-device) to a serial-number-based name (such as /dev/mapper/3624a937…). After this change, Portworx is unable to find /var/lib/osd inside the container and fails to start.

User Impact: Portworx cannot come up after a platform upgrade, resulting in nodes remaining offline until manually recovered.

Resolution:
Portworx now avoids unmounting /var/lib/osd even when the device-mapper path changes to a serial-number format, ensuring successful startup after platform upgrades.

Components: Storage
Affected Versions: 3.4.2 or earlier
Minor

3.4.2

November 19, 2025

To install or upgrade Portworx Enterprise to version 3.4.2, ensure that you are running one of the supported kernels and all system requirements are met.

Fixes

Issue Number | Issue Description | Severity
PWX-37941

When a large cluster (such as a 200-node setup) was powered off and then powered back on, several idle vSphere sessions created by Portworx were not closed properly. These sessions remained active even after the cluster was fully operational.

User Impact: The idle sessions could accumulate over time, leading to an excessive number of open vSphere sessions. This might result in authentication errors, throttling, or vCenter session exhaustion in large environments.

Resolution:
Portworx now ensures all vSphere sessions are properly closed after use. Only one active session is maintained on the cluster’s coordinator node, which is reused for monitoring Storage vMotion. This prevents session buildup and ensures efficient session management after cluster power cycles.

Components: Drive & Pool Management
Affected Versions: 3.1.2.1
Minor
PWX-43251

When sharedv4 is disabled or NFS is not installed, Portworx continues to collect NFS pool metrics, which results in repeated warning messages in the logs, such as failed to get nfs pool stats.

User Impact: Unnecessary warning messages and wasted cycles attempting to read non-existent NFS files.

Resolution:
Portworx now skips NFS pool metrics collection when sharedv4 is disabled or NFS is not installed.

Components: Shared Volumes
Affected Versions: 3.4.1 or earlier
Minor
PWX-47166

When px-storage crashed multiple times, old core files were not always cleaned up automatically. Over time, this could fill up /var/cores.

User Impact: Nodes can lose disk space due to accumulated core dumps from px-storage.

Resolution:
Portworx now reliably cleans up px-storage core files using a flexible match (*core*px-storage*). This picks up core files automatically even if the system’s core-file naming pattern differs slightly.
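
For illustration only (not a Portworx command), the following standard shell check shows which files such a pattern would match, assuming core dumps land in /var/cores:

  # List core files matching the cleanup pattern
  find /var/cores -name '*core*px-storage*'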

Components: Telemetry & Monitoring
Affected Versions: 3.4.1 or earlier
Minor
PWX-48516

When migrating a cluster from a DaemonSet-based Portworx deployment to an Operator-managed setup, using the portworx.io/cluster-id annotation on the StorageCluster resource allows you to preserve the original cluster name. However, during KVDB TLS migration, Portworx fails to locate the correct StorageCluster if the cluster name differs from the name specified in the portworx.io/cluster-id annotation.

User Impact: Clusters with mismatched STC names and portworx.io/cluster-id annotations could not perform KVDB failovers or add new KVDB nodes.

Resolution:
Portworx now checks all available StorageCluster resources and identifies the correct one using the portworx.io/cluster-id annotation. This ensures that KVDB TLS migration and failover operations work correctly, even when the STC name differs from the cluster name preserved during migration.
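
For reference, a minimal sketch of how this annotation might appear on a StorageCluster after a DaemonSet-to-Operator migration; the metadata names and annotation value below are hypothetical placeholders:

  apiVersion: core.libopenstorage.org/v1
  kind: StorageCluster
  metadata:
    name: px-cluster-operator     # hypothetical STC name, may differ from the original cluster name
    namespace: portworx           # hypothetical namespace
    annotations:
      # Preserves the original cluster name from the DaemonSet-based deployment (hypothetical value)
      portworx.io/cluster-id: px-cluster-legacy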

Components: KVDB
Affected Versions: 3.4.1 or earlier
Minor
PWX-48521

Portworx incorrectly counted internally attached volumes, such as those linked to snapshots or temporarily attached during background operations, towards the node attachment limit. This led to attachment failures when a cluster had a large number of internally attached volumes and snapshots, as these internal volumes and snapshots artificially inflated the attachment count.

User Impact: Clusters with a large number of snapshots or detached parent volumes hit the maximum attachment limit prematurely, even though many of those volumes were not actively attached. This caused new volume attachments to fail with Licensed maximum reached for ‘LocalVolumeAttaches’ feature errors.

Resolution:
Portworx now excludes internally attached volumes from attachment limit tracking. Only user-initiated external attachments are now counted towards the licensed attachment limit. This ensures that internal system operations, such as snapshot creation or temporary attachments, no longer interfere with normal volume usage.

Components: Volume Management
Affected Versions: 3.4.1 or earlier
Minor

3.4.1

October 13, 2025

To install or upgrade Portworx Enterprise to version 3.4.1, ensure that you are running one of the supported kernels and all system requirements are met.

important

Ensure that your object store is compatible with AWS SDK v2. Portworx uses SDK v2 starting with version 3.4.1, following the end of life (EOL) of SDK v1. While Portworx is fully S3-compliant as a client, SDK v2 includes stricter data integrity checks that are enabled by default. Some object storage systems, such as older versions of MinIO or other S3-compatible backends, may not fully support these checks, which can cause compatibility issues.

Fixes

Issue Number | Issue Description | Severity
PWX-35350

Portworx prerequisite checks for FlashArray failed when user_friendly_names was set to yes in the defaults section of /etc/multipath.conf but set to no in the devices section.

User Impact: The prerequisite check incorrectly detected a conflict and blocked installation or validation on systems with mixed multipath settings.

Resolution:
Portworx now checks the user_friendly_names setting in both the devices and defaults sections of /etc/multipath.conf. If both are specified, the value in the devices section overrides the defaults value, preventing false prerequisite check failures.
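
For illustration, a minimal /etc/multipath.conf sketch of the mixed configuration described here; the PURE device stanza values are assumptions for example purposes, and your array vendor's recommended settings may differ:

  defaults {
      user_friendly_names yes
  }

  devices {
      device {
          vendor "PURE"
          product "FlashArray"
          # For matching devices, this value overrides the defaults section
          user_friendly_names no
      }
  }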

Components: Drive & Pool Management
Affected Versions: 3.4.0
Minor
PWX-46240

When multiple hotplug disks are added to a KubeVirt VM that uses Portworx SharedV4 (NFS-backed) volumes, only the last disk is successfully attached to the VM. Earlier hotplug disks become detached during the sequence, and the VM detects only one disk.

User Impact: VMs may not see all requested hotplug disks. I/O to the missing disks is unavailable until the VM is restarted.

Resolution:
Portworx now supports subpath-aware mount handling for NFS-backed hotplug volumes. Active subpath mounts are preserved, preventing premature unmounts and detaches when multiple hotplug disks are attached.

Components: Volume Management
Affected Versions: 3.4.0
Minor
PWX-46718

When you resize a FlashArray Direct Access (FADA) shared raw block volume or disk while a KubeVirt VM is powered off, the VM detects the updated size after it restarts but the underlying FlashArray volume remains at the original capacity.

User Impact: Users are unable to fully utilize the expanded storage.

Resolution:
Portworx now automatically resizes FADA shared raw block volumes or disks. After the KubeVirt VM restarts, users can use the full updated storage capacity.

Components: Volume Management
Affected Versions: 3.4.0
Minor

Known issues (Errata)

Issue NumberIssue DescriptionSeverity
PWX-47001

Creating S3 credentials using the pxctl command fails on MinIO servers running versions earlier than 2024-09-22T00-33-43Z.

User Impact: Customers cannot create S3 credentials for cloud providers, which prevents configuring backup or cloud storage operations.

Workaround: Upgrade MinIO to version 2024-09-22T00-33-43Z or later before upgrading to Portworx 3.4 or later.

Components: Cloudsnaps
Affected Versions: 3.4.0

Minor
PWX-47423

CloudSnap uploads fail when using MinIO versions earlier than 2024-09-22T00-33-43Z.

User Impact: Customers upgrading to Portworx 3.4.0 or later while still using MinIO versions earlier than 2024-09-22T00-33-43Z may experience CloudSnap backup failures after the upgrade.

Workaround: Upgrade MinIO to version 2024-09-22T00-33-43Z or later before upgrading to Portworx 3.4 or later.

Components: Cloudsnaps
Affected Versions: 3.4.0

Minor

3.4.0.1

September 15, 2025

To install or upgrade Portworx Enterprise to version 3.4.0.1, ensure that you are running one of the supported kernels and all system requirements are met.

Fixes

Issue Number | Issue Description | Severity
PWX-46786

Portworx detects AWS environments by checking node metadata and assumes instance user data is available. This works as expected in real AWS environments, where both metadata and user data are present. In simulated AWS environments, such as EKS Anywhere, the metadata service may exist without user data. In this case, Portworx interprets the setup as invalid and fails to start.

User Impact: In real AWS environments where metadata and user data are present, Portworx starts successfully. In simulated AWS environments, such as EKS Anywhere, Portworx fails to start, preventing workloads from using Portworx volumes.

Resolution:
Portworx now starts successfully in simulated AWS environments, such as EKS Anywhere, by safely handling missing user data during AWS provider initialization.

Components: Drive and Pool Management
Affected Versions: 3.4.0
Minor
PWX-46790

Portworx 3.4.0 does not allow adding custom security certificates for S3-compatible storage. This prevents Portworx from trusting private or internal S3 storage systems that do not use publicly trusted certificates.

User Impact: Customers using on-premises S3-compatible storage systems (such as MinIO, Ceph, or StorageGRID) with internal or self-signed certificates cannot connect Portworx to these storage systems. Features that depend on S3 storage, such as backups and snapshots, might fail. Customers using Amazon S3 or systems with certificates from public providers are not affected.

Resolution:
Portworx now supports custom security certificates with S3-compatible storage. It can successfully connect when you provide private or self-signed certificates, restoring full functionality for backups, cloud snapshots, and asynchronous disaster recovery.

Components: Cloudsnaps
Affected Versions: 3.4.0
Minor

3.4.0

September 08, 2025

To install or upgrade Portworx Enterprise to version 3.4.0, ensure that you are running one of the supported kernels and all system requirements are met.

New Features

  • Portworx Shared RWX Block volumes for KubeVirt VMs
    Portworx now supports ReadWriteMany (RWX) raw block volumes for KubeVirt virtual machines (VMs), enabling high-performance, shared storage configurations that support live migration of VMs in OpenShift environments. For more information, see Manage Shared Block Device (RWX Block) for KubeVirt VMs.

  • Enhance capacity management by provisioning custom storage pools
    Portworx now enables provisioning of storage pools during and after Portworx installation, enhancing the management of storage capacity. For more information, see Provision storage pool.

  • Journal IO support for PX-StoreV2
    Portworx now supports Journal device setup and journal IO profile volumes for PX-StoreV2. For more information, see Add a journal device.

  • Support for multiple connections on the same NIC interface or bonded NIC using LACP
    Portworx enables the use of multiple connections on the same NIC interface or bonded NIC interfaces using LACP, to enhance performance as data traffic can be distributed across multiple links. For more information, see Configure multiple NICs with LACP NIC Bonding.

  • Pool drain
    Portworx now supports moving volume replicas between storage pools using the pool drain operation. For more information, see Move volumes using pool drain.

  • Cloud snapshots for FlashArray Direct Access (FADA) volumes
    Portworx now supports cloud snapshots for FlashArray Direct Access (FADA) volumes. Cloud snapshots on FADA volumes use the same failure handling, cloud upload, and restore mechanisms as snapshots on PX volumes. For more information, see Cloud Snapshots.

  • DR support for Kubevirt VMs with PX RWX Block Volumes
    Portworx now supports synchronous and asynchronous disaster recovery on KubeVirt virtual machines (VMs) with ReadWriteMany (RWX) raw block volumes running on OpenShift Virtualization version 4.18.5 or later and OpenShift Container Platform version 4.18.x. For more information, see Synchronous Disaster Recovery or Asynchronous Disaster Recovery.

  • Support for AWS workload identity
    Portworx now supports AWS workload identity using IAM Roles for Service Accounts (IRSA), allowing pods to securely access cloud services using short-lived, automatically rotated credentials. For more information, see How to use workload identity for AWS cloud operations in Portworx.

  • Support to validate vSphere credentials
    Portworx now supports validating vSphere credentials using the pxctl clouddrive credentials-validate command. For more information, see pxctl clouddrive credentials-validate in pxctl clouddrive.

  • Add custom labels to storage pools during or after installation
    Portworx now supports adding custom labels to storage pools during installation when using local or pre-provisioned drives through STC. In addition, admin users can apply custom labels to existing storage pools, including those backed by cloud-based drives using the pxctl command. For more information, see Custom labels for device pools.

  • ActiveCluster on FlashArray Direct Access volumes
    Portworx now supports ActiveCluster on FlashArray Direct Access volumes with PX-StoreV1, allowing synchronous replication and automatic failover across multiple FlashArrays. For more information, see Install Portworx with Pure Storage FlashArray Direct Access volumes with ActiveCluster setup.

Early Access Features

  • Portworx Shared RWX Block volumes for KubeVirt VMs on SUSE Virtualization
    Portworx now supports ReadWriteMany (RWX) raw block volumes for KubeVirt virtual machines (VMs) on SUSE Virtualization, enabling features such as live migration, high availability, and persistent VM storage across SUSE Virtualization nodes. For more information, see Manage Portworx RWX Block Volumes on SUSE Virtualization for KubeVirt VMs.

Fixes

Issue Number | Issue Description | Severity
PWX-37332

Portworx reset the kernel.core_pattern setting during service startup, overriding user-defined configurations such as systemd-coredump.

User Impact: Custom core dump handling (for example, routing dumps via systemd-coredump) was not preserved. All core dumps were redirected to /var/cores, which could lead to storage issues and disrupt application-specific diagnostics.

Resolution: Portworx no longer overwrites kernel.core_pattern if it is set to use systemd-coredump. This preserves user-defined core dump configurations across service restarts.
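
As an illustrative check (not a Portworx command), you can inspect the current kernel setting; on systems that route dumps through systemd-coredump the value typically looks like the following, though the exact arguments vary by distribution:

  sysctl kernel.core_pattern
  # Typical output when systemd-coredump is in use:
  # kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h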

Components: Telemetry and Monitoring
Affected Versions: 3.3.1.3 or earlier
Minor
PWX-38702

When a SharedV4 volume coordinator node restarts, Portworx removes the storage device before it unmounts the file system. This leaves a stale journal file that prevents subsequent mounts.

User Impact: Pods and other applications using SharedV4 volumes fail to start after a node restart, leaving containers stuck and volumes unavailable.

Resolution: Portworx now treats SharedV4 volumes as encapsulated devices and ensures cleanup happens only after the volume is unmounted. This prevents stale files from blocking remounts and allows pods to start successfully.

Components: Shared Volumes
Affected Versions: 3.3.1.3 or earlier
Minor
PWX-41875

Portworx generated false alerts indicating that all NFS threads for SharedV4 volumes were in use, even when enough threads were available.

User Impact: Users received inaccurate NFS thread exhaustion alerts, leading to unnecessary troubleshooting.

Resolution: Portworx now uses dynamic thread allocation for the NFS server. It adjusts the thread count based on load and uses improved metrics to accurately detect thread usage.

Components: Shared Volumes
Affected Versions: 3.3.1.3 or earlier
Minor
PWX-42379

On PX-Security-enabled clusters running Kubernetes 1.31 or later, expanding an in-tree PersistentVolumeClaim (PVC) fails due to compatibility issues.

User Impact: Users cannot increase storage capacity through standard PVC expansion methods, potentially impacting workloads that require additional storage.

Resolution: The CSI external-resizer is updated to include the necessary csi-translation changes and Portworx Operator now ensures the required PV annotations are present (when guestAccess is disabled). PVC expansion succeeds without any manual steps.

Components: Volume Management
Affected Versions: 3.3.1.3 or earlier
Minor
PWX-43060

If a FlashArray Direct Access (FADA) multipath device becomes unavailable after the volume has already been marked as attached external, the volume attachment might become stuck with a file does not exist error. This can happen if the device path is removed while the attachment is in progress, causing the system to lose access to the underlying device without updating the attachment status.

User Impact: VMs failed to start while affected PVCs appeared attached in control-plane state but had no device on the node.

Resolution: Portworx now always verifies and reconciles device presence for Pure FADA volumes when the volume is marked attached. If the device is missing or the backend previously rejected the connect (for example, host reported offline), Portworx retries and completes the attach so the device appears on the node and the pod can proceed.

Components: Volume Management
Affected Versions: 3.3.1.3 or earlier
Minor
PWX-43770

Log lines appeared in pxctl status output.

User Impact: When running pxctl status, logging messages were displayed in the output, making it harder to interpret the command results.

Resolution: The log lines have been removed from the default output. They now appear only when the log level is set to debug.

Components: CLI and API
Affected Versions: 3.3.1.3 or earlier
Minor
PWX-43843

In clusters using FlashArray-backed KVDB nodes, if two KVDB nodes lose access to their backend drives simultaneously, the KVDB cluster might not recover automatically after drive access is restored.

User Impact: KVDB remains unavailable even after backend drives come back online, requiring manual intervention to restore Portworx control plane functionality.

Resolution: Portworx now automatically recovers KVDB after quorum is re-established in FlashArray environments, without requiring manual action.

Components: KVDB
Affected Versions: 3.3.1.3 or earlier
Minor
PWX-44377

On clusters running CRI-O v1.31, Portworx installation failed if the signature_policy setting was enabled in crio.conf.

User Impact: Clusters running CRI-O v1.31 with signature_policy configured could not install or upgrade Portworx successfully, leaving workloads unable to start.

Resolution: Portworx now supports CRI-O v1.31 with signature_policy configured. Installation and upgrade now succeed, and volume operations continue to work without requiring manual configuration changes.

Components: KVDB
Affected Versions: 3.3.1.3 or earlier
Minor
PWX-44401

On Dell servers running Portworx Enterprise 3.2.2.2, NVMe disks in PX-StoreV2 were incorrectly reported as offline even though the pools were online and fully operational. The Dell server kernel did not create the /sys/block/<device>/device/state file, which Portworx uses to determine disk health. Without this file, the system incorrectly marked the disk as offline even though it was functioning normally. Although this issue was initially observed on Dell servers, it can occur on other systems or distributions where the NVMe device state file is unavailable.

User Impact: The incorrect offline disk status caused confusion in system monitoring, though it did not affect data availability, pool functionality, or volume operations.

Resolution: Portworx now handles missing device state files properly. If the system does not provide the state file, the disk is automatically treated as online to avoid false offline status reporting.

Components: Drive and Pool Management
Affected Versions: 3.3.1.3 or earlier
Minor
PWX-44486

If the coordinator node of an RWX volume (that is, the node where the volume is currently attached) is placed into Maintenance mode, application pods using the volume might temporarily experience Input/Output (I/O) disruption and encounter I/O errors.

User Impact: Applications using Sharedv4 volumes might experience errors or delays during node restarts, service restarts, or when nodes enter maintenance.

Resolution: Portworx now supports Sharedv4 service failover and failback operations successfully without I/O disruption. Applications continue running without errors during node restarts, maintenance operations, or failback scenarios.

Components: Storage
Affected Versions: 3.3.1.3 or earlier
Minor
PWX-44623

When provisioning virtual machines (VMs) on FlashArray using KubeVirt, the VM might remain in the Provisioning state if the underlying PersistentVolumeClaim (PVC) fails with the error Duplicate volumes found. This issue can occur when the FlashArray is overloaded or responding slowly, causing backend volume creation conflicts that leave the PVC in a Pending state.

User Impact: VM provisioning could stall, leaving temporary PVCs stuck in Pending state.

Resolution: Portworx now blocks cloning from stretched-pod sources to destinations that are not in a pod. This prevents duplicate volume creation and allows temporary PVC provisioning to complete successfully.

Components: Volume Management
Affected Versions: 3.3.1.3 or earlier
Minor
PWX-45110

Portworx components such as Stork, Portworx Operator, and Autopilot generated unnecessary list-pods requests during scheduling and other workflows.

User Impact: This caused excessive API server traffic, high memory usage (greater than 15 GiB), and degraded cluster performance.

Resolution: Portworx now makes pod list requests only when you explicitly run a command, reducing API server load and improving stability in large clusters.

Components: Shared Volumes
Affected Versions: 3.3.1.3 or earlier
Minor
PWX-45272

Portworx repeatedly sent requests to the /array-connections endpoint, even when the configured realm user lacked the required permissions. These unauthorized requests resulted in continuous 403 errors and increased load on the FlashArray (FA) REST API.

User Impact: Excessive API calls caused high REST queue usage on the FA, which could slow communication and delay PX node startup.

Resolution: Portworx now avoids sending /array-connections requests when a realm user is configured without the necessary permissions. Additionally, when receiving a 429 Too Many Requests response from FA, Portworx applies a backoff timer before retrying. This reduces API load and prevents aggressive restart loops during FA unavailability.

Components: Drive and Pool Management
Affected Versions: 3.3.1.3 or earlier
Minor
PWX-45439

AWS Marketplace licenses for PX-Enterprise AWS and PX-Enterprise AWS-DR were set to expire.

User Impact: Clusters running with AWS Marketplace licenses risked disruption if the licenses expired without renewal, impacting production workloads.

Resolution: License keys for both AWS Marketplace plans have been extended, ensuring uninterrupted service without requiring user intervention.

Components: IX-Licensing & Metering, Marketplaces
Affected Versions: 3.3.1.3 or earlier
Minor
PWX-45926

KVDB dump files located at /var/lib/osd/kvdb_backup were stored uncompressed and could grow to several gigabytes, consuming excessive disk space.

User Impact: Users experienced high disk usage due to large KVDB backup files and had to manually compress or remove them to free up space.

Resolution: Portworx now automatically compresses KVDB dump files using gzip, which significantly reduces their disk usage.

Components: KVDB
Affected Versions: 3.3.1.3 or earlier
Minor

Known issues (Errata)

Issue Number | Issue Description | Severity
PWX-45570

When performing an AKS platform upgrade with maxStorageNodesPerZone=0 in the StorageCluster specification, Portworx incorrectly adds a new storage node. After the upgrade completes, AKS deletes the temporary node, but Portworx continues to track it as a storage node, leaving the node in an Offline state.

User Impact: After an AKS upgrade, clusters may display an extra storage node marked Offline. This node remains in the cluster state until manually removed.

Workaround: There are two workarounds to address this issue.

  1. Set maxStorageNodesPerZone: Before triggering the AKS upgrade, set maxStorageNodesPerZone to match the number of existing storage nodes per zone in the cluster specification (see the sketch after this list). This prevents additional storage nodes from being created during the upgrade.
  2. Manually delete the offline node: If an offline node appears after the upgrade, remove it using the following command:
     pxctl cluster delete <Offline Node ID> -f
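
For illustration, a minimal StorageCluster sketch showing where maxStorageNodesPerZone can be set, assuming cloud drives are managed under spec.cloudStorage; the metadata names and the value 3 are hypothetical, and the rest of your specification stays unchanged:

  apiVersion: core.libopenstorage.org/v1
  kind: StorageCluster
  metadata:
    name: px-cluster          # hypothetical name
    namespace: portworx       # hypothetical namespace
  spec:
    cloudStorage:
      # Match the number of existing storage nodes per zone before the AKS upgrade
      maxStorageNodesPerZone: 3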

Components: Drive & Pool Management
Affected Versions: 3.4.0

Minor
PWX-46240

When multiple hotplug disks are added to a KubeVirt VM that uses Portworx SharedV4 (NFS-backed) volumes, only the last disk is successfully attached to the VM. Earlier hotplug disks become detached during the sequence, and the VM detects only one new disk.

User Impact: VMs may not see all requested hotplug disks. I/O to the missing disks is unavailable until the VM is restarted.
Workaround: Add hotplug disks one at a time and wait for each to attach before adding the next, or restart the VM after adding multiple hotplug disks so that all disks attach.

Components: Volume Management
Affected Versions: 3.4.0

Minor
PWX-46786

Portworx detects AWS environments by checking node metadata and assumes instance user data is available. This works as expected in real AWS environments, where both metadata and user data are present. In simulated AWS environments, such as EKS Anywhere, the metadata service may exist without user data. In this case, Portworx interprets the setup as invalid and fails to start.

User Impact: In real AWS environments where metadata and user data are present, Portworx starts successfully. In simulated AWS environments, such as EKS Anywhere, Portworx fails to start, preventing workloads from using Portworx volumes.

Workaround: There is no workaround.

Components: Drive & Pool Management
Affected Versions: 3.4.0

Minor
PWX-46790

Portworx 3.4.0 does not allow adding custom security certificates for S3-compatible storage. This prevents Portworx from trusting private or internal S3 storage systems that do not use publicly trusted certificates.

User Impact: Customers using private S3 systems (such as MinIO, Ceph, or StorageGRID) with internal or self-signed certificates cannot connect Portworx to these storage systems. Features that depend on S3 storage, such as backups and snapshots, might fail. Customers using Amazon S3 or systems with certificates from public providers are not affected.

Workaround: There is no workaround.

Components: Cloudsnaps
Affected Versions: 3.4.0

Minor