Portworx Stork Release Notes
25.6.0
December 10, 2025
This release also addresses security vulnerabilities.
Improvements
| Improvement Number | Improvement Description |
|---|---|
| PWX-46541 | Added new Prometheus metrics for monitoring disaster recovery (DR) operations, including migrations, migration schedules, failover and failback actions, and protected namespaces. Improved the existing metrics and introduced a new Portworx DR Grafana dashboard built on these metrics. For information about the dashboard, see Portworx DR dashboard. For the list of Stork metrics, see Portworx DR and Stork metrics. |
25.5.0
November 24, 2025
This release also addresses security vulnerabilities.
New Features
Admin ClusterPair supports migration of namespace-scoped resources: Stork now supports a cross-namespace disaster recovery (DR) capability that enables reuse of a centrally managed ClusterPair. Cluster administrators can define one secure DR connection in an admin namespace, while application teams manage MigrationSchedule objects and trigger failover or failback from their own namespaces by referencing the centralized ClusterPair. This enhancement removes the previous constraint that ClusterPair, MigrationSchedule, and Migration objects must reside in the same namespace and avoids duplicating DR credentials across application namespaces.
- A single admin-scoped AdminClusterPair object can now be referenced by namespaced DR resources (MigrationSchedule, Migration, and others). When specified, it takes precedence for all operations. For more information, see Set up an admin ClusterPair.
- Existing specifications remain backward compatible. If an AdminClusterPair is not specified, behavior remains unchanged and the local, same-namespace ClusterPair is used.
- The storkctl CLI supports cross-namespace usage with the existing --admin-cluster-pair flag, as shown in the sketch after this list. For more information, see storkctl create migrations and storkctl create migrationschedules.
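The following is a minimal sketch of the cross-namespace workflow. Only the --admin-cluster-pair flag is confirmed by this release; the resource names and the remaining flags are hypothetical placeholders based on typical storkctl usage.

```sh
# Create a migration schedule in the application team's namespace that
# references a centrally managed ClusterPair from the admin namespace.
# "app-dr-schedule", "app-team-ns", "admin-remote-pair", and "daily-policy"
# are hypothetical names.
storkctl create migrationschedule app-dr-schedule \
  -n app-team-ns \
  --namespaces app-team-ns \
  --admin-cluster-pair admin-remote-pair \
  --schedule-policy-name daily-policy
```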
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-46543 | Previously, Stork failed a migration early if any single volume migration failed, leaving other volume migrations in progress. User impact: In asynchronous DR scenarios, if a volume migration failed and a new migration was scheduled quickly, the previous incomplete migration could cause the new one to fail with a Restore InProgress error. Resolution: Stork now waits for all volume migrations to finish before completing the migration, even if some fail. Affected versions: 25.4.1 and earlier | Minor |
| PB-12474 | When a backup selects namespaces by label, the namespace list was updated with newly matching namespaces even after the volume stage, so only PVC resources were included for the new namespace during the resource stage. User impact: The restore failed because no volume backup was taken for the newly included namespace. Resolution: The namespace list is now updated only once, at the start of the backup. Affected versions: 25.4.1 and earlier | Minor |
| PB-10321 | VM backups using the filesystem-consistent Backup method could incorrectly succeed without the qemu-guest-agent due to inconsistent behavior across Linux distributions. User impact: On some Linux distributions, this resulted in crash-consistent backups instead of filesystem-consistent backups. Resolution: Stork now properly detects the absence of the guest agent and fails the backup as expected. Affected versions: 25.4.1 and earlier | Minor |
25.4.1
September 22, 2025
Improvements
| Improvement Number | Improvement Description |
|---|---|
| PWX-46812 | Added the --skip-validation flag to the storkctl create clusterpair command to skip ObjectStoreLocation credential validation. For more information, see storkctl create clusterpair command reference. |
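A hedged usage sketch of the new flag; the ClusterPair name, namespace, and kubeconfig paths below are placeholders, and the remaining flags reflect typical storkctl clusterpair creation rather than this release's exact syntax.

```sh
# Create a ClusterPair while skipping ObjectStoreLocation credential validation.
# "remotecluster", "kube-system", and the kubeconfig paths are placeholders.
storkctl create clusterpair remotecluster \
  -n kube-system \
  --src-kube-file /path/to/source-kubeconfig \
  --dest-kube-file /path/to/destination-kubeconfig \
  --skip-validation
```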
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-46831 | When creating a ClusterPair with the portworx-api Service of type NodePort and specifying the --src-ep and --dest-ep flags, ObjectStoreLocation validation failed. User impact: ClusterPair creation failed, impacting the workflow to set up disaster recovery (DR). Resolution: ObjectStoreLocation validation is now skipped if custom endpoints are specified in the storkctl create clusterpair command. Affected versions: 25.4.0 | Minor |
| PWX-46943 | During ClusterPair creation in an OpenShift cluster with the portworx-api Service of type LoadBalancer, the step that fetched the Portworx cluster pairing token from the destination cluster failed. User impact: ClusterPair creation failed, impacting the workflow to set up disaster recovery (DR). Resolution: When the portworx-api Service is of type LoadBalancer, ClusterPair creation now uses the Service port instead of the targetPort from the Service specification. Affected versions: 25.4.0 | Minor |
25.4.0
September 1, 2025
Improvements
| Improvement Number | Improvement Description |
|---|---|
| PWX-39882 | Added a new storkctl validate command with the backuplocation subcommand to verify backup location credentials and connectivity using the Portworx API. You can now validate a backup location by running storkctl validate backuplocation <name> -n <namespace>. Replace <name> with the backup location name and <namespace> with its namespace. A usage example follows this table. For more information, see storkctl validate command reference. |
| PWX-43926 | Added a user confirmation prompt when creating a failover or failback action if the last migration in the referenced MigrationSchedule is in the PartialSuccess state. You can bypass this prompt by using the new --force flag. |
| PWX-38076 | Added support for skipping only the source cluster domain deactivation during Synchronous (Metro) DR failover. Introduced a new custom resource field and the --skip-source-cluster-domain-deactivation flag in the storkctl failover command. This allows scaling down apps on the source cluster without deactivating the source cluster domain. For more information, see the Controlled Failover tab in Perform failover. |
| PWX-33029 | You can now improve pod startup performance by enabling Stork to set fsGroupChangePolicy=OnRootMismatch for pods that use ReadWriteMany or ReadOnlyMany volumes with fsGroup configured. To enable this optimization, set spec.stork.args.enable-fsgroup-optimization to true in the StorageCluster custom resource; see the sketch after this table. |
| PWX-43928 | Stork now supports array indexing in ResourceTransformation paths to modify fields within specific elements of arrays. For example, to modify the image of the first container in a Pod object, use the path spec.containers[0].image. For more information, see ResourceTransformation. |
| PWX-44332 | Improved handling of pre-exec and post-exec rules for pods and virtual machines to minimize application freeze time while ensuring data consistency during disaster recovery (DR). |
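A usage sketch for PWX-39882 (see the improvements table above); the backup location name and namespace are placeholders.

```sh
# Validate the credentials and connectivity of a backup location.
# "my-backuplocation" and "kube-system" are placeholder values.
storkctl validate backuplocation my-backuplocation -n kube-system
```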
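For PWX-33029, a minimal sketch of enabling the optimization on the StorageCluster; the cluster name and namespace are placeholders, and kubectl patch is only one way to set the field.

```sh
# Set spec.stork.args.enable-fsgroup-optimization to "true" on the StorageCluster.
# "px-cluster" and "portworx" are placeholder values.
kubectl -n portworx patch storagecluster px-cluster --type merge \
  -p '{"spec":{"stork":{"args":{"enable-fsgroup-optimization":"true"}}}}'
```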
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-43835 | Failover failed for apps using the boolean suspend option when no value was provided. User Impact: Failover operations failed due to parsing an empty string in the suspend field of app configurations. Resolution: Failover logic now defaults empty suspend values to true, avoiding failures caused by invalid boolean parsing. Affected Versions: 25.3.2 and earlier | Minor |
| PWX-45513 | ImageStreamTag resources were not included in migrations, causing failures when associated with ImageStream resources. User Impact: Migrations failed when apps used ImageStream resources that referenced ImageStreamTag resources that were not migrated. Resolution: Stork now collects and migrates both ImageStream and ImageStreamTag resources on OpenShift. Affected Versions: 25.3.2 and earlier | Minor |
| PWX-45601 | KubeVirt virtual machine resources such as VirtualMachine, DataVolume, and associated PersistentVolumeClaim objects were re-created on the target cluster for each migration. User Impact: These resources were redundantly re-created on every migration, even when unchanged, increasing migration time and resource usage. Resolution: Fixed the hash check logic to prevent re-creation of KubeVirt resources when they have not changed since the last migration. Affected Versions: 25.3.2 and earlier | Minor |
| PWX-44830 | Reverse migrations were incorrectly marked as PartialSuccess when resources already existed at the source cluster. User Impact: Migrations appeared partially successful when certain resources failed to apply because they were already present. This commonly occurred when operator-managed resources were created on the destination cluster before Stork attempted to migrate them, leading to confusion about the actual resource state. Resolution: Stork now treats already existing resources on the destination as successfully migrated and excludes them from failure evaluation during reverse migration. Affected Versions: 25.3.2 and earlier | Minor |
| PWX-44535 | PVC creation failed on the destination cluster during asynchronous disaster recovery (DR) operations for forklifted virtual machines. User Impact: These operations failed because PVCs owned by the virtual machine were not handled correctly during migration. Resolution: Updated the DR workflow to support cases where the virtual machine is the owner of the PVCs. Affected Versions: 25.3.2 and earlier | Minor |
| PWX-44460 | Migration of forklifted Windows virtual machines failed because the associated ControllerRevision for VirtualMachineClusterPreference was not migrated to the destination cluster. User Impact: Windows VMs failed to start after asynchronous DR. Resolution: Stork now migrates the ControllerRevision associated with VirtualMachineClusterPreference, allowing Windows VMs to start successfully on the destination. Affected Versions: 25.3.2 and earlier | Minor |
| PWX-44593 | Unnecessary GET calls to PersistentVolumes were made when the StorageClass had no snapshot policy and the PVC lacked a StorageClass reference. User Impact: Excessive GET PV calls could cause Kubernetes API throttling, leading to delays in backup operations. Resolution: Stork now skips GET PV calls when the StorageClass does not define a snapshot policy. Affected Versions: 25.3.2 and earlier | Minor |
| PB-11551 | At scale, KDMP offload backups and restores hit PVC binding issues because Kubernetes or the storage layer could not keep up with many simultaneous volume claims, causing PVC bound timeouts. User Impact: Users experienced backup and restore failures due to PVC bound timeout issues. Resolution: Fixed the PVC bound timeouts that occurred during large-scale operations to ensure KDMP backups and restores complete successfully at scale. Affected Versions: 25.3.2 and earlier | Major |
| PB-11723 | FADA volume backups failed during snapshot creation for CSI+Offload backups due to transient errors in FA environments with high IOPS. User Impact: Because FADA volume backups failed, users were unable to offload the CSI backup to the backup location. Resolution: Stork now ignores such transient errors and completes snapshot creation successfully; these backups are then uploaded to the chosen backup location. Affected Versions: 25.3.2 and earlier | Major |
| PWX-45854 | The command executor Pod failed to run when the command targeted a specific container in a multi-container Pod object. User Impact: Rules with background actions failed to run during volume snapshots. Resolution: Volume snapshots with rules targeting a specific container in a multi-container Pod object now run successfully. Affected Versions: 25.3.2 and earlier | Minor |
| PWX-45820 | When more than one Portworx SharedV4 volume is used by a KubeVirt VM, a race condition can occur between the termination of the CDI uploader pod and the startup of the VM’s virt-launcher pod. This can cause the root disk volume to remain attached to a node while the virt-launcher pod is being scheduled. If Kubernetes determines that this node is unsuitable for scheduling the pod, the pod starts on a different node, resulting in non-optimal scheduling. A VM scheduled this way may shut down unexpectedly during a future planned node maintenance event. This issue affects the VM only once and does not recur after it has been restarted. Resolution: Fixed the race condition. VMs with multiple SharedV4 volumes now start correctly without suboptimal scheduling. Affected Versions: 25.3.2 and earlier | Minor |
| PWX-39881 | ClusterPair creation succeeds even when the object store BackupLocation credentials are invalid. As a result, users might assume that migrations will succeed. However, migrations later fail because Portworx cannot connect to the object store due to incorrect credentials or configuration. Resolution: Added validation to storkctl to check BackupLocation credentials before creating a ClusterPair. If validation fails, the ClusterPair is not created. This validation is supported in Portworx version 3.4.0 and later. Affected Versions: 25.3.2 and earlier | Minor |
| PWX-42891 | If the webhook is enabled (which it is by default) and the user later disables it, the mutating webhook remains in the cluster. User Impact: This may confuse users, as the webhook remains in the cluster even when it is disabled in the Portworx StorageCluster specification. Resolution: The mutating webhook is now automatically removed when it is disabled in the Portworx StorageCluster specification. Affected Versions: 25.3.2 and earlier | Minor |
Known issues (Errata)
| Issue Number | Issue Description |
|---|---|
| PWX-46831 | When creating a ClusterPair that uses a portworx-api Service of type NodePort and includes the --src-ep and --dest-ep flags, ObjectStoreLocation validation fails. This causes ClusterPair creation to fail, which prevents disaster recovery (DR) setup. Affected versions: 25.4.0 |
| PWX-46943 | When creating a ClusterPair in an OpenShift cluster that uses a portworx-api Service of type LoadBalancer, the step to fetch the Portworx cluster pairing token from the destination cluster fails. This causes ClusterPair creation to fail, which prevents disaster recovery (DR) setup. Affected versions: 25.4.0 |
25.3.2
August 13, 2025
Fix
| Issue Number | Issue Description | Severity |
|---|---|---|
| PB-11492 | Issue: When backing up Portworx volumes from a PX-Security enabled cluster with Stork version 25.3.1 or 25.3.0, PXB backed up only the resources, leaving out the volumes. User Impact: Users could back up only Portworx volume resources, but not volume data, in this environment. Resolution: Upgrade to Stork version 25.3.2 on your PX-Security enabled application cluster to ensure successful backups of Portworx volumes (and resources). Affected Versions: 25.3.0, 25.3.1 | Major |
25.3.1
July 22, 2025
Fix
| Issue Number | Issue Description | Severity |
|---|---|---|
| PB-11673 | Issue: Case 1 (Stork < 25.3.0): Deleting the associated StorageClass (SC) causes backups of FADA and FBDA volumes to fail. Without the SC, Stork cannot identify the volume as FADA and incorrectly sends the snapshot request to PXE (which does not support cloud snapshots), resulting in a partially successful backup. Non-FADA volume backups succeed in this scenario. Case 2 (Stork = 25.3.0): FADA, FBDA, and PXE volume backups fail if the SC is not present. User Impact: Users experience backup failures of FADA, FBDA, and/or PXE volumes on Stork version 25.3.0 and earlier. Resolution: Upgrade to Stork version 25.3.1 for your backups of PXE volumes (that are not FADA/FBDA) to succeed even if the corresponding SC is deleted. Note: Stork 25.3.1 does not resolve failures that occur with FADA volumes lacking an associated SC. This issue will be resolved when PXE supports FADA volume backups. Affected Versions: 25.3.0 and earlier | Major |
Known issues (Errata)
| Issue Number | Issue Description |
|---|---|
| PWX-45820 | When more than one Portworx SharedV4 volume is used by a KubeVirt VM, a race condition can occur between the termination of the CDI uploader pod and the startup of the VM’s virt-launcher pod. This can cause the root disk volume to remain attached to a node while the virt-launcher pod is being scheduled. If Kubernetes determines that this node is unsuitable for scheduling the pod, the pod starts on a different node, resulting in non-optimal scheduling. A VM scheduled this way may shut down unexpectedly during a future planned node maintenance event. This issue affects the VM only once and does not recur after it has been restarted. Workaround: Restart the VM once after creation and after the uploader pod has terminated. This typically completes within a few minutes. Components: SharedV4, KubeVirt Affected Versions: All |
25.3.0
July 8, 2025
This release addresses security vulnerabilities.
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-44567 | Live migration failed for virtual machines (VMs) with volumes that had device minor numbers greater than 255. User Impact: Operations such as Portworx upgrades failed when KubeVirt VMs were in use. Resolution: Stork now correctly identifies Portworx devices with minor numbers greater than 255, allowing live migration to succeed in these cases. Affected Versions: 25.2.2 and earlier | Minor |
| PWX-44456 | The webhook controller failed to find the PVC in the cache. User Impact: This could result in a Stork panic in the mutating webhook path if PVC caching is delayed. Resolution: Stork now handles missing PVCs in the cache by adding a nil check, allowing pod scheduling to proceed using the default scheduling path. Affected Versions: 25.2.2, 25.2.1, and 25.2.0 | Minor |
| PWX-43584 | DataVolumes were not being migrated along with KubeVirt VirtualMachines. User Impact: The OpenShift UI showed DataVolumesReady=False for the VirtualMachines on the destination cluster. Reverse migration was delayed when storage.usePopulators was enabled in the DataVolume. Resolution: Stork now migrates DataVolumes and uses the DataVolume claim adoption utility to adopt PVCs created during migration, instead of creating new ones. Affected Versions: 25.2.2 and earlier | Minor |
| PB-11415 | Issue: KDMP backups with a large number of volumes and the Offload option enabled failed. User Impact: Such backups failed with a timeout error. Resolution: KDMP backups with a large number of volumes and the Offload option enabled now succeed. Affected versions: 25.2.2 and 25.2.1 | Minor |
| PB-11212 | Issue: During a CSI backup with offload to S3 and an NFS backup location, a failure to back up a single volume caused the entire backup to be marked as Failed, making it unusable for restore, even though snapshots of other volumes were successfully backed up. This process also left behind uncleaned snapshot copies. User Impact: Backups with only a single failed volume were rendered completely unusable, preventing users from restoring any successfully backed-up data. Resolution: Such backups are now marked as Partial Success, allowing users to restore the successfully backed-up volumes. In addition, snapshot copies created during the backup process are cleaned up. Affected versions: 25.2.2 and earlier | Minor |
| PB-11077 | Issue: When you triggered parallel scheduled backups for namespaces that contain secrets, PXB also backed up its internal secrets and those associated with NFS backup jobs. User Impact: Unintended system-generated secrets were backed up along with user-created secrets. Resolution: PXB now backs up only user-created secrets. Affected versions: 25.2.1 | Minor |
| PB-10584 | Issue: Backup uploads to the backup location failed immediately upon encountering transient issues such as network interruptions or backup location unavailability, due to the absence of retry logic. User Impact: Users faced backup failures in situations where a retry might have resolved the issue, resulting in reduced reliability of the backup process. Resolution: Stork has introduced configurable retry options for backup uploads using DoRetryWithTimeout. The retry behavior can be customized through the stork-config ConfigMap by setting uploadMaxRetries and uploadRetryInterval. Transient upload errors are now automatically retried based on these settings. Affected versions: 25.2.2 and earlier | |
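For PB-10584, a hedged sketch of tuning the retry behavior; the uploadMaxRetries and uploadRetryInterval keys come from the fix description, while the namespace and the example values are assumptions.

```sh
# Add or update the upload retry settings in the stork-config ConfigMap.
# "kube-system" and the values "5" and "30" are placeholder assumptions.
kubectl -n kube-system patch configmap stork-config --type merge \
  -p '{"data":{"uploadMaxRetries":"5","uploadRetryInterval":"30"}}'
```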
Known issues (Errata)
| Issue Number | Issue Description | Severity |
|---|---|---|
| PB-11551 | Backups that use CSI with offload and an NFS location might fail with the error PVC restore timed out after 30m0s. This occurs because the binding job for WaitForFirstConsumer PVCs is created only in the SnapshotRestoreInProgress stage. In large-scale setups, delays in reaching this stage can cause the backup to time out. Workaround: Disable the PVC watcher by setting spec.stork.args.pvc-watcher to false in the StorageCluster custom resource object. Affected versions: 25.3.0 | Minor |
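A sketch of the documented workaround for PB-11551; the StorageCluster name and namespace are placeholders.

```sh
# Disable the Stork PVC watcher to avoid the backup timeout at scale.
# "px-cluster" and "portworx" are placeholder values.
kubectl -n portworx patch storagecluster px-cluster --type merge \
  -p '{"spec":{"stork":{"args":{"pvc-watcher":"false"}}}}'
```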
25.2.2
May 14, 2025
Improvements
| Improvement Number | Improvement Description |
|---|---|
| PWX-42977 | We have optimized Stork to reduce memory consumption by removing caching of certain resource types. |
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-43167 | When multiple AppRegistration (appreg) objects existed for the same CRD GVK, those with empty suspendOptions could overwrite valid configurations on others. This caused KubeVirt virtual machines (VMs) to fail to scale automatically during disaster recovery (DR) failover or failback. User Impact: VMs did not scale down or up automatically during DR events, requiring manual intervention by operators. Resolution: Stork now populates default suspendOptions only when missing, and only for supported GVKs. Existing custom settings are preserved, and unused GVKs are excluded from registration. This update also resolves a PVC migration race condition that previously led to recreated PVCs being incorrectly treated as errors. Affected Versions: 25.2.1 and earlier | Minor |
Known issues (Errata)
| Issue Number | Issue Description | Severity |
|---|---|---|
| PWX-43584 | Stork currently does not support KubeVirt VM Disaster Recovery workflows when the cdi.kubevirt.io/storage.usePopulator annotation is set to "true" in the DataVolume spec. During failback, the CDI controller may recreate the PVC before Stork can, leading to the new PVC pointing to a fresh volume instead of the migrated one. This can cause the VM to use an unintended volume after failback. The DR workflow functions as expected when storage.usePopulator is explicitly set to "false". Affected versions: All | Minor |
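A minimal DataVolume sketch for PWX-43584 with the populator annotation explicitly set to "false"; the name, blank source, and storage size are hypothetical and not tied to any specific VM.

```sh
# Create a DataVolume with the populator behavior disabled.
# "vm-rootdisk-example" and the 20Gi blank source are placeholders.
cat <<'EOF' | kubectl apply -f -
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: vm-rootdisk-example
  annotations:
    cdi.kubevirt.io/storage.usePopulator: "false"
spec:
  source:
    blank: {}
  storage:
    resources:
      requests:
        storage: 20Gi
EOF
```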
25.2.1
April 14, 2025
Improvements
| Improvement Number | Improvement Description |
|---|---|
| PB-9573 | Portworx Backup now handles CSI snapshot reconciliation more robustly. It ignores revision mismatch errors that occur when the external snapshotter updates the persistent volume claim (PVC) spec during finalizer updates. This prevents snapshot failures and ensures backup success for CSI, KDMP, and local snapshot volumes. |
| PB-9989 | Portworx Backup now supports scheduling backups using the time zone configured on the Stork instance running in the application cluster. This change reduces confusion and helps ensure that backup schedules align with the cluster’s local time settings. |
| PB-9990 | Portworx Backup improves resilience in virtual machine (VM) backup workflows. If a preExec rule fails for one VM, backups for other VMs in the same schedule can still continue. Recommendation: If you're using Stork versions earlier than 25.2.1 and Portworx Backup versions earlier than 2.8.4, configure a separate schedule for each VM to help isolate backup failures. |
| PB-9988 | Previously, restoring a VM with a static IP in the same cluster could fail due to network conflicts. This enhancement adds support for such workflows through two annotations. |
| PB-10126 | Added the advancedResourceLabelSelector parameter to all resourceCollector APIs. This new string-based field allows users to define advanced label selector expressions compatible with kubectl filtering, including support for logical operators such as in, notin, and set-based matching. This enhancement provides greater flexibility and precision in resource selection by aligning behavior with native Kubernetes label filtering. Example: advancedResourceLabelSelector: "env in (dev, prod), tier != frontend" |
25.2.0
April 3, 2025
- If you are using Portworx Backup and planning to install or upgrade your Stork version to 25.2.0 in your air-gapped environment, ensure that you pull the kopiaexecutor and nfsexecutor images from the following Docker image paths and push them to your custom image registry, as shown in the sketch after this list:
  - docker.io/openstorage/kopiaexecutor:master-latest
  - docker.io/openstorage/nfsexecutor:master-latest
- The following status field names in the GroupVolumeSnapshots custom resource definition (CRD) are updated from PascalCase to camelCase. Update these field names in automation scripts, if applicable. For more information, see GroupVolumeSnapshot.
  - dataSource
  - conditions
  - parentVolumeID
  - taskID
  - volumeSnapshotName
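A sketch of mirroring these images with a Docker-compatible CLI; registry.example.com/openstorage is a placeholder for your custom image registry path.

```sh
# Pull the executor images, retag them for your registry, and push them.
# "registry.example.com/openstorage" is a placeholder registry path.
for img in kopiaexecutor nfsexecutor; do
  docker pull docker.io/openstorage/${img}:master-latest
  docker tag docker.io/openstorage/${img}:master-latest registry.example.com/openstorage/${img}:master-latest
  docker push registry.example.com/openstorage/${img}:master-latest
done
```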
New Features
Migration Preview: Added support for migration previews in storkctl create migration and storkctl create migrationschedule.
- For storkctl create migration: Use the --preview flag to display the resources that would be included in the migration without running it. Use --previewFile <filename> to save the preview output to a file. For more information, see Preview resources before starting migration.
- For storkctl create migrationschedule: Similarly, use the --preview and --preview-file flags to view and save the resources that would be migrated as part of the schedule. For more information, see Preview resources before creating migration schedule.
This feature helps confirm which resources will be migrated before execution, including non-namespaced resources such as ClusterRoles and ClusterRoleBindings.
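A hedged example of previewing a migration; the migration name, ClusterPair, namespaces, and output file are placeholders, and the flags other than --preview and --previewFile reflect typical storkctl usage.

```sh
# Preview the resources a migration would include without running it,
# and save the preview output to a file. All names below are placeholders.
storkctl create migration app-migration \
  -n app-team-ns \
  --clusterpair remotecluster \
  --namespaces app-team-ns \
  --preview \
  --previewFile migration-preview.txt
```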
Improvements
| Improvement Number | Improvement Description |
|---|---|
| PWX-35523 | The MigrationSchedule controller now prunes old failed Migration custom resources, retaining only the five most recent failures. This prevents accumulation and reduces unnecessary Kubernetes API calls during reconciliation. |
| PWX-37807 | Stork health monitoring now includes Pods using Portworx volumes as ephemeral volumes, ensuring they are properly evicted if the underlying Portworx node becomes unavailable. |
| PWX-41385 | Stork scheduling now considers Pods that use Portworx volumes as ephemeral volumes. This enables placement decisions based on storage availability. |
| PWX-41617 | Improved performance of storkctl get <resource> --all-namespaces by replacing per-namespace iteration with a single API call that retrieves resources across all namespaces, significantly reducing wait time in clusters with many namespaces. |
| PWX-42338 | Pod scheduling is now faster when multiple pods are scheduled in parallel on a heavily loaded cluster. The volume listing process has been optimized, reducing delays and improving workload deployment times. |
25.1.0
February 12, 2025
Improvements
| Improvement Number | Improvement Description |
|---|---|
| PB-9039 | Portworx Backup now supports storing and restoring the status of Custom Resources (CR) during backup and restore operations. This feature ensures that CRs retain their status upon restoration, improving consistency and recovery reliability. To enable this feature in the UI, you must set the corresponding flag in the ConfigMap. Note: Only create and delete operations are supported for this flag in both manual and scheduled backups. |
| PB-8121 | Portworx Backup now includes VirtualMachineInstancetype and VirtualMachinePreference resources when backing up KubeVirt VMs. This ensures that these resources can be restored successfully. |
| PB-8466 | Portworx Backup now captures NetworkAttachmentDefinition resources during KubeVirt VM backups. This ensures that NetworkAttachmentDefinition resources can be restored successfully. |
Fixes
| Issue Number | Issue Description | Severity |
|---|---|---|
| PB-6688 | Backup failed for Portworx volumes encrypted at the storage class level using CSI and Kubernetes Secrets on non-secure clusters. User Impact: Users had to rely on alternative encryption methods to secure their Portworx volumes. Resolution: Portworx Backup now successfully backs up volumes encrypted at the storage class level using CSI and Kubernetes Secrets on non-secure clusters. Affected Versions: 24.4.0 and earlier | Minor |
| PB-9267 | In PX-Security enabled clusters, if the required user token is unavailable or misconfigured, backups for encrypted volumes fail. However, the system incorrectly marks the partial backup status as successful. User Impact: Users might assume that backups completed successfully, even if one or more volumes failed to back up. Resolution: Now, volumeInfo correctly marks volumes as failed when token retrieval fails and skips reconciliation for these volumes. This ensures that the partial backup status accurately reflects the backup state. Affected Versions: 24.4.0 and earlier | Minor |