Portworx Backup Release Notes
Portworx Backup 2.9.1
August 26, 2025
Refer to these topics before you start install or upgrade tasks:
Fix
| Issue Number | Description | Severity |
|---|---|---|
| PB-11756 | Issue: When backups were deleted from an NFS backup location, Portworx Backup initiated a separate delete job for each PVC, and these jobs deleted the volume data. There was no upper limit on the number of concurrent delete jobs, so deleting several backups at once resulted in a sudden spike in active jobs. User Impact: In clusters with a strict job quota, this behavior caused resource pressure on the cluster nodes. Resolution: Portworx Backup now supports configurable limits for NFS backups with the Portworx, CSI, and KDMP drivers. This feature allows users to define the maximum number of concurrent delete jobs, helping to prevent quota exhaustion and providing greater control over backup deletion management. Affected Versions: 2.5.0 ~ 2.9.0 | Major |
Portworx Backup 2.9.0
July 8, 2025
The compatible Stork version for Portworx Backup 2.9.0 is 25.3.1. Make sure you upgrade Stork to version 25.3.1.
Refer to these topics before you start install or upgrade tasks:
Features
Portworx Backup now lets you retry failed VM backups from an existing backup or schedule that includes multiple VMs. The retry operation targets only the VMs that failed in the previous run, creating a new backup and skipping the ones that were already backed up successfully, which optimizes performance and resource usage.
Portworx Backup deployment on proxy-enabled clusters
Portworx Backup can be deployed in proxy-enabled or network-restricted Kubernetes cluster environments across all supported platforms (except IKS). Proxy settings can be provided directly through Helm values or through a Kubernetes Secret whose name is passed as a Helm parameter, including setups that require authentication and custom CA certificates, ensuring secure and compliant communication.
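As a rough illustration of the Secret-based approach, the sketch below creates a proxy Secret with the standard proxy environment-variable keys using the Kubernetes Python client. The secret name, key names, and namespace are illustrative assumptions, not the chart's actual values; refer to the Portworx Backup install documentation for the exact keys and the Helm parameter that references the Secret.

```python
# Minimal sketch, assuming the kubernetes Python client and cluster access.
# The secret name, key names, and namespace below are hypothetical examples;
# the Helm parameter that consumes this Secret is documented in the install guide.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod
v1 = client.CoreV1Api()

proxy_secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="px-backup-proxy-config"),  # hypothetical name
    string_data={
        "HTTP_PROXY": "http://proxy.example.com:3128",
        "HTTPS_PROXY": "http://proxy.example.com:3128",
        "NO_PROXY": "localhost,127.0.0.1,.svc,.cluster.local",
    },
)
v1.create_namespaced_secret(namespace="central", body=proxy_secret)  # hypothetical namespace
```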
Portworx Backup allows you to suspend, resume, or delete multiple backup schedules in bulk, simplifying backup management across large environments. You can filter schedules by their names, schedule policies, or a combination of both to precisely target the desired schedules.
Portworx Backup can also back up only the Kubernetes PVC and PV specifications without including the actual volume data. This helps reduce storage usage and speeds up backups by letting you exclude specific volumes based on labels such as volume type, driver, or NFS server. It is ideal when volume data is already protected elsewhere or when the same volumes will be reused during restore.
Enhancements
- The Portworx Backup web console now provides more in-depth visibility, with improved status indicators and detailed VM backup data to simplify monitoring and streamline backup management.
- The web console introduces a redesigned, user-friendly left navigation pane that simplifies access to essential features and enhances overall usability.
- You can sort select columns on the dashboard, cluster overview, backup, restore, and all backups pages in ascending or descending order by clicking the arrows near the column headers.
Fixes
| Issue Number | Description | Severity |
|---|---|---|
| PB-11673 | Issue: Case 1 (Stork < 25.3.0): Deleting the associated StorageClass (SC) causes backups of FADA and FBDA volumes to fail. Without the SC, Stork cannot identify the volume as FADA and incorrectly sends the snapshot request to PXE (which does not support cloud snapshots), resulting in a partial success backup. Non-FADA volume backups succeed in this scenario. Case 2 (Stork 25.3.0): FADA, FBDA, and PXE volume backups fail if the SC is not present. User Impact: Users experience backup failures of FADA, FBDA, and/or PXE volumes if the Stork version is 25.3.0 or earlier. Resolution: Upgrade to Stork version 25.3.1 so that backups of PXE volumes (that are not FADA/FBDA) succeed even if the corresponding SC is deleted. Note: Stork 25.3.1 does not resolve failures that occur with FADA volumes lacking an associated SC. This issue will be resolved when PXE supports FADA volume backups. Affected Versions: All | Major |
| PB-11231 | Issue: ResourceExport CRs for scheduled backups were not deleted because they were stuck in the Initial or Failed state before the backup job was created. As a result, the cleanup process failed to retrieve job-related resources by ID, as the job did not exist. User Impact: Orphaned ResourceExport CRs may persist indefinitely, leading to namespace clutter and potential resource management issues within the cluster over time. Resolution: Portworx Backup detects ResourceExport CRs in the specified state without a job ID. In these cases, it skips cleanup steps that depend on the job ID and gracefully deletes the ResourceExport CR. Affected versions: 2.7.x, 2.8.x | Major |
| PB-11230 | Issue: The controller incorrectly triggered an export job for a ResourceExport CR marked for deletion. No jobs should be initiated once a delete request is issued. User Impact: The controller operated on a stale ResourceExport CR instance copy. As a result, it failed to detect that the CR had already been marked for deletion. Resolution: The controller now pulls the latest CR state from the API server rather than using a cached copy, ensuring accurate status checks and avoiding export job creation for CRs marked for deletion. Affected versions: 2.7.x, 2.8.x | Minor |
| PB-11212 | Issue: During a CSI backup with offload to S3 and an NFS backup location, failure in backing up a single volume caused the entire backup to be marked as Failed, making it unusable for restore even though snapshots of other volumes were successfully backed up. The process also left behind uncleaned snapshot copies. User Impact: Backups with only a single failed volume were rendered completely unusable, preventing users from restoring any successful data. Resolution: Such backups are now marked as Partial Success, allowing users to restore the successfully backed-up volumes. In addition, snapshot copies created during the backup process are cleaned up. Affected versions: 2.7.x, 2.8.x | Major |
| PB-11191 | Issue: The post-install-hook relied on an outdated Kubernetes Python client (v17.17.0), which raised a ValueError from v1.list_node() when the names field in the node specification was null, as seen on Anthos clusters. User Impact: Users experienced failures when attempting to list nodes if the names field was not provided, leading to disruptions during the upgrade. Resolution: The Kubernetes Python client is upgraded to a stable version that treats the names field as optional, preventing the ValueError and improving post-install-hook stability (a minimal sketch of the affected call appears after this table). Affected versions: 2.7.x, 2.8.x | Minor |
| PB-11158 | Issue: Backups with a large number of VMs hung when the ApplicationBackup CR exceeded etcd's 1.5 MB limit, primarily due to numerous PVCs and their lengthy names. This triggered a "request is too large" error, causing the backup process to get stuck indefinitely instead of failing gracefully. User Impact: Such backups hung indefinitely without displaying any clear error messages in the web console. As a result, users were unable to complete backups or determine the cause of the failure. Resolution: The backup process now fails gracefully with an appropriate error message when the ApplicationBackup CR exceeds the etcd size limit. Affected versions: 2.7.x, 2.8.x | Major |
| PB-11152 | Issue: The px-backup pod entered a CrashLoopBackOff state if an invalid or non-existent platform credential UID was provided while adding a Rancher cluster via the API. User Impact: Users encountered repeated px-backup pod crashes, affecting backup reliability in Rancher clusters and requiring manual intervention. Resolution: Portworx Backup now handles validation logic for nil platform credentials to ensure stable px-backup pod operation. Affected versions: 2.7.x, 2.8.x | Major |
| PB-11052 | Issue: An extra secret was created but not cleaned up during CSI-based offload-to-S3 backups, leading to unnecessary resource usage and potential security risks from leftover secrets. User Impact: Users experienced increased resource usage in their environments, which could affect performance and lead to potential security vulnerabilities. Resolution: Upgrade to Portworx Backup version 2.9.0 or later and Stork version 25.3.0 or later to ensure proper cleanup of unused secrets. Affected versions: 2.2.x ~ 2.8.x | Minor |
| PB-10648 | Issue: During SMTP configuration validation, Portworx Backup was unable to validate multiple email addresses. User Impact: The "Validation failed" prompt was incorrectly displayed even though the emails were successfully delivered to the added recipients. Resolution: The Portworx Backup web console now displays a success notification when multiple email addresses are added to receive email alerts. Affected version: 2.8.4 | Minor |
| PB-10354 | Issue: Before version 2.9.0, if a VM backup had a failed PVC volume, it was marked as Partial Success. However, the restore would fail because all VM volumes must be backed up successfully for a valid restore. User Impact: A VM backup was not restorable if even a single volume failed. Resolution: Starting with Portworx Backup 2.9.0, if any volume in a VM backup fails, the VM is marked as failed. For multi-VM backups, the status is marked as Partial Success, and only VMs with all volumes successfully backed up can be restored. Affected version: 2.8.4 | Major |
| PB-10352 | Issue: In a scheduled VM backup, if any of the VMs selected for the backup were deleted before the next scheduled run, Portworx Backup did not report that the deleted VM was skipped in the backup. User Impact: Because the backup was marked successful, users might assume that all the selected VMs were backed up, including the deleted ones. Resolution: Starting from 2.9.0, such backups are marked as Partial Success. Affected version: 2.8.4 | Minor |
| PB-3492 | Issue: The application became temporarily unresponsive when the user attempted to back up application data using the CSI with offload-to-S3 option. User Impact: Users experienced application freezes during CSI with offload-to-S3 backups, potentially causing delays in backup completion and impacting overall system performance. Resolution: The freeze during CSI with offload-to-S3 backups is fixed. Affected versions: 2.3.x ~ 2.8.x | Minor |
| PB-3481 | Issue: MongoDB pods started in an incorrect order or before all peers were resolvable, which caused a replica set split and prevented the election of a primary node (isWritablePrimary: false). As a result, Portworx Backup crashed during installation or upgrade. User Impact: Portworx Backup failed to start or operate correctly when MongoDB pods did not initialize properly, particularly in resource-constrained environments or after node restarts. Resolution: The MongoDB startup process now waits until all peer pods are DNS-resolvable before proceeding (a sketch of such a wait appears after this table). Additionally, the podManagementPolicy is set to OrderedReady to ensure safe, sequential pod initialization and reliable replica set formation. Affected versions: 2.3.x ~ 2.8.x | Major |
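For PB-11191, the affected call is the Kubernetes Python client's list_node(); a minimal sketch of that call, assuming kubeconfig or in-cluster access and a recent client release, is shown below.

```python
# Minimal sketch, assuming a reachable cluster and a recent kubernetes Python client.
# Older releases such as v17.17.0 could raise ValueError from list_node() when
# optional fields in the returned node list were null, as seen on some Anthos clusters.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

nodes = v1.list_node()
for node in nodes.items:
    print(node.metadata.name)
```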
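For PB-3481, the startup change essentially blocks until every MongoDB peer's DNS name resolves before the replica set is formed. The sketch below shows the general idea of such a wait; the pod DNS names, timeout, and interval are illustrative assumptions, not the actual chart values.

```python
# Minimal sketch of a peer-readiness wait: block until all replica-set members'
# headless-service DNS names resolve. Hostnames, timeout, and interval are
# hypothetical; actual names depend on the StatefulSet and headless Service.
import socket
import time

PEERS = [
    "pxc-backup-mongodb-0.pxc-backup-mongodb-headless",
    "pxc-backup-mongodb-1.pxc-backup-mongodb-headless",
    "pxc-backup-mongodb-2.pxc-backup-mongodb-headless",
]

def wait_for_peers(peers, timeout=600, interval=5):
    """Return once every peer resolves; raise TimeoutError otherwise."""
    deadline = time.time() + timeout
    pending = set(peers)
    while pending:
        for host in list(pending):
            try:
                socket.gethostbyname(host)
                pending.discard(host)
            except socket.gaierror:
                pass
        if pending:
            if time.time() > deadline:
                raise TimeoutError(f"MongoDB peers not resolvable: {sorted(pending)}")
            time.sleep(interval)

wait_for_peers(PEERS)
```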
Known Issue
| Issue Number | Description |
|---|---|
| PB-11395 | Issue: When a VM backup includes a large number of VMs and all of them fail due to freezeTimeout, the backup operation takes an unusually long time to be marked as completely failed. Workaround: Identify the cause of the snapshot delays. If freezeTimeout is set lower than the default value of 300 seconds, increase it to a higher value or reset it to the default. |