Portworx Backup Release Notes
Portworx Backup 2.10.1
December 15, 2025
Refer to these topics before you start install or upgrade tasks:
Features
Edit label selectors on backup schedules
Portworx Backup now allows you to modify label selectors on existing backup schedules without recreating them. You can change the resource, namespace, or VM label selectors associated with any backup schedule without having to delete and recreate it.
Enhanced Prometheus metrics for backup operations
Portworx Backup now provides comprehensive operational metrics via its metrics endpoint for consumption by external monitoring tools. These metrics expose detailed information about backup operations, enabling better observability and troubleshooting in external monitoring systems.
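If you scrape these metrics with your own Prometheus instance, a scrape job along the lines of the following sketch can be used. The job name, target address, port, and /metrics path are hypothetical placeholders; substitute the endpoint exposed in your Portworx Backup deployment.

```yaml
# Minimal Prometheus scrape-config sketch for the Portworx Backup metrics endpoint.
# The job name, target address, port, and /metrics path below are assumptions;
# replace them with the endpoint exposed in your deployment.
scrape_configs:
  - job_name: pxb-backup-metrics                       # hypothetical job name
    metrics_path: /metrics                             # assumed metrics path
    static_configs:
      - targets:
          - px-backup.central.svc.cluster.local:10002  # hypothetical host:port
```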
Fixes
| Issue Number | Description | Severity |
|---|---|---|
| PB-13531 | Issue: Pre-upgrade validation could fail in environments where the Keycloak PostgreSQL database username was customized from the default, resulting in authentication errors during upgrade. User Impact: Upgrades to Portworx Backup 2.10.0 could be blocked at the pre-upgrade step on clusters using a non-default Keycloak database username. Resolution: The pre-upgrade workflow now correctly reads and uses the configured Keycloak PostgreSQL username from deployment settings, ensuring validation succeeds in customized environments. This fix is included in Portworx Backup 2.10.1. Affected Versions: Portworx Backup 2.5.x and above (environments with a customized Keycloak database username) | Minor |
Portworx Backup 2.10.0
November 24, 2025
Refer to these topics before you start install or upgrade tasks:
Features
Integration with SUSE Rancher projects access control
Portworx Backup now provides seamless integration with SUSE Rancher's project-based access control. Portworx Backup users can now view and access Kubernetes namespaces mapped to their Rancher project(s) based on their LDAP/SAML group membership obtained via popular providers such as OpenLDAP or Ping Identity. This feature extends your access control from SUSE Rancher into Portworx Backup, preventing unauthorized or unintended data exposure by enforcing namespace-level filtering based on Rancher's Projects configuration.
Portworx Backup introduces flexible namespace management capabilities that allow scheduled backups to gracefully handle missing namespaces, along with the ability to edit backup schedules to add or remove namespaces. If some of the specified namespaces are missing, the backup now proceeds with the available namespaces and is marked as Partial Success. This feature improves backup resilience when namespace availability changes and provides greater usability by allowing users to remove namespaces from, or append namespaces to, a schedule.
Password customization for internal databases
Portworx Backup now provides an easier, Kubernetes secret-based mechanism to supply and rotate credentials for its internal databases. You can also enable optional encryption of the internal databases and specify encryption keys through a Kubernetes secret, making Portworx Backup's security and data protection capabilities more usable.
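As an illustration only, such credentials are typically supplied through a standard Kubernetes secret similar to the sketch below; the secret name and keys here are hypothetical placeholders, so use the names documented for your Portworx Backup release.

```yaml
# Hypothetical sketch of a Kubernetes secret carrying internal database credentials.
# The secret name and keys are placeholders; refer to the Portworx Backup
# documentation for the exact names expected by your release.
apiVersion: v1
kind: Secret
metadata:
  name: pxc-db-credentials        # hypothetical secret name
  namespace: central              # namespace where Portworx Backup is installed
type: Opaque
stringData:
  username: pxcentral             # hypothetical key and value
  password: <strong-password>     # rotate by updating this value and re-applying
```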
mTLS support for Portworx Backup
Portworx Backup can now run with mTLS when deployed into a customer‑managed service mesh, enabling encrypted, mutually authenticated traffic across Portworx Backup microservices. This release adds Helm-managed integration for Istio and Linkerd and allows UI access over HTTPS, aligning with enterprise security policies and preventing unauthorized access and man-in-the-middle attacks.
Batch alerting for Backup schedules
Portworx Backup adds batch alerting for schedule operations, aggregating failures from pause, resume, and delete actions across multiple schedules into a single consolidated alert. Each alert lists the affected schedule objects with per‑object error reasons captured during each update cycle, so you can quickly pinpoint what failed and why. Alerts are grouped and continuously updated to reflect the current state - new failures are added and resolved items are removed - reducing email noise while preserving real‑time visibility.
Capture notes for backup schedule changes
Portworx Backup now lets you add a note when suspending, resuming, or editing schedules, including during bulk actions where a single note applies to all selected schedules. The latest note is shown when viewing schedule details, so you can understand why the backup schedule status was changed.
Portworx Backup now extends its existing backup sharing functionality by allowing the sharer to specify whether a backup can be used with restore-only or full access rights by a user or group that has access to the backup location and its cloud credential.
Backup and Restore FADA Volumes
The FADA volumes backup and restore feature has graduated from early access to general availability. It allows Portworx Backup to support FADA volume types effectively by using native Portworx snapshots (PXD-based) for both block and file system PersistentVolumeClaim (PVC) modes.
Enhancements
Enhanced SSL/TLS Certificate Management for Ansible Module
The Portworx Backup Ansible collection now provides comprehensive SSL/TLS certificate management with support for custom CA certificates, mutual TLS authentication, and flexible certificate validation options. This enterprise-ready feature enables unified SSL configuration through inventory variables that are automatically applied across all modules, eliminating the need for per-module certificate setup. It supports self-signed certificates, private certificate authorities, and corporate PKI deployments for enhanced security in production environments.
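As a hypothetical sketch, the unified configuration could look like the following inventory-level variables (for example in group_vars) that all modules then pick up; the variable names are placeholders rather than the collection's documented option names.

```yaml
# group_vars/all.yml -- hypothetical inventory-level SSL/TLS settings applied to
# all Portworx Backup Ansible modules; variable names are placeholders.
pxbackup_api_url: https://px-backup.example.com      # hypothetical endpoint
pxbackup_validate_certs: true                        # enforce certificate validation
pxbackup_ca_cert: /etc/pki/tls/certs/corp-ca.pem     # custom or private CA bundle
pxbackup_client_cert: /etc/pki/tls/certs/client.pem  # mutual TLS client certificate
pxbackup_client_key: /etc/pki/tls/private/client.key # mutual TLS client key
```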
Optional UUID for backup resources
Portworx Backup now supports optional UUID parameters across all interfaces (REST API, gRPC API, Ansible collection, and CLI), allowing users to reference backup resources using either human-readable names or UUIDs. This enhancement simplifies resource management by enabling users to work with user-defined names instead of complex UUID strings, while still maintaining UUID support for programmatic integrations that require guaranteed unique identifiers. The flexible approach improves user experience and scriptability while preserving backward compatibility with existing UUID-based workflows.
Generic backup repositories now leaner
Scheduled backups now use split backup repositories for generic backups, creating smaller schedule‑bound repositories per PVC. A new full backup is started in a new repository only when the incremental threshold is reached, keeping repositories small and stable and reducing maintenance time on large datasets. Portworx Backup also employs improved cleanup mechanisms that remove zero-size snapshots and periodically prune stale folders, preserving active data and improving space reclamation in your backup locations. This enhancement applies when Portworx Backup 2.9.x or later is used along with Stork 25.2.x or later. In such environments, schedules are migrated on Portworx Backup upgrade to enable the new capabilities, while existing backups continue to work in their current layout.
Fixes
| Issue Number | Description | Severity |
|---|---|---|
| PB-11135 | Issue: Helm charts required individual registry and repository configuration for each image in the px-central-values.yaml file, forcing users to manually edit multiple fields when redirecting images to different registries or repositories. User Impact: Managing registry and repository settings across multiple images was repetitive, time-consuming, and error-prone. Users deploying in environments with private or mirrored registries had to manually update numerous entries, significantly increasing the risk of misconfiguration and deployment failures. Resolution: Introduced global registry (images.registry) and repository (images.repo) parameters in the px-central-values.yaml file that automatically apply to all images. Users can now configure registry and repository settings once at the global level (see the values sketch after this table), eliminating redundant configuration and reducing the potential for errors during deployment. Affected Versions: 2.9.0 and below | Minor |
| PB-10193 | Issue: NFS/KDMP job pod crashes caused by memory-related issues (container OOM-Killed events or node-level OutOfMemory conditions) lacked clear diagnostic information in logs and the web console, making it difficult to identify the root cause of failures. User Impact: Troubleshooting NFS/KDMP job pod crashes was challenging due to insufficient visibility into memory-related failure causes. Users could not easily determine whether crashes resulted from OOM-Killed events or node-level OutOfMemory conditions, leading to prolonged resolution times and inefficient debugging processes. Resolution: Enhanced logging and web console reporting now provide clear identification of memory-related crash causes for NFS/KDMP job pods. The system explicitly indicates whether failures are due to OOM-Killed events or OutOfMemory conditions, enabling faster diagnosis and more efficient troubleshooting workflows. Affected Versions: 2.9.0 and below | Minor |
| PB-11861 | Issue: Backup owners could not grant Restore or Full access for a backup to users who already had access to the associated BackupLocation and CloudCredentials resources. User Impact: Limited sharing capability for Backup Share. Resolution: In px-backup 2.10.0, backup sharing is expanded. Backup owners can now share backups with Restore or Full access to users who already have access to the corresponding BackupLocation and CloudCredentials. The backup owner does not need to be the owner of these resources to grant Restore/Full access. Affected Versions: All versions | Minor |
| PB-11908 | Issue: Users could previously specify both cloud credentials and platform credentials for the same cluster, leading to potential conflicts during cluster operations. User Impact: Providing both credential types could cause configuration conflicts and unreliable cluster operations. Resolution: Added credential exclusivity validation with clear error messages. Cluster create and update operations now enforce that only one credential type, Cloud or Platform, is provided, preventing configuration conflicts and improving operation reliability. Affected Versions: 2.9.0 and below | Minor |
| PB-11755 | Issue: Maintenance pods for non‑S3 BackupLocations failed to run after an S3 BackupLocation was used with DisableSSL and a self‑signed TLS certificate. User Impact: For KDMP backups using a non‑S3 BackupLocation, maintenance did not run and repositories were not cleaned up. Resolution: Self‑signed certificate handling now applies only to S3 BackupLocations; non‑S3 maintenance runs as expected. Affected Versions: 2.9.0 and below | Minor |
| PB-12674 | Issue: With NFS BackupLocations, PX volume backup deletion issued excessive READDIR calls, increasing load on the NFS server. User Impact: Occasional slowness during PX volume backup deletion when using NFS BackupLocations. Resolution: The deletion now avoids extra reads by removing the PX volume snapshot folder directly. Affected Versions: 2.9.1 | Minor |
| PB-11202 | Issue: The Kopia repository connect timeout was fixed at 1 minute for backup, restore, maintenance, and delete operations; jobs failed if the repository connection exceeded 60 seconds. User Impact: In higher-latency environments, repository connect attempts could time out and abort jobs before a successful connection. Resolution: Introduced the configurable KDMP_KOPIA_CONNECT_TIMEOUT setting. Set it in the kdmp-config ConfigMap for backup/restore (KDMP) jobs, and in the px-backup-config ConfigMap for maintenance/delete jobs (see the ConfigMap sketch after this table). If not set, the default remains 1m. Affected Versions: 2.8.4 | Minor |
| PB-12202 | Issue: When PX‑Central was installed in a namespace other than px-backup, the proxy configuration did not include the namespace‑qualified px‑backup service, causing the px‑backup component to appear “Down” on the Service Status page. User Impact: Health status for px‑backup could be reported as “Down,” and related operations could fail through the proxy. Resolution: Helm now automatically includes the correct namespace‑qualified px‑backup service in the proxy bypass list, so health checks and internal calls succeed after install or upgrade. Affected Versions: 2.9.0 | Minor |
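The following sketch shows the global parameters referenced in PB-11135; only the relevant keys of px-central-values.yaml are shown, so treat it as an illustration rather than a complete values file.

```yaml
# Excerpt-style sketch of px-central-values.yaml using the global parameters
# from PB-11135; all other values are omitted.
images:
  registry: registry.example.com   # applied to every Portworx Backup image
  repo: mirrored/portworx          # applied to every Portworx Backup image
```

For PB-11202, the timeout is set in the ConfigMaps named in the fix. The value and the namespace shown below are illustrative assumptions; use the namespace where the ConfigMap exists in your deployment.

```yaml
# Illustrative entry for the Kopia connect timeout (PB-11202) in the kdmp-config
# ConfigMap (backup/restore jobs); set the same key in px-backup-config for
# maintenance/delete jobs. Namespace and value are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kdmp-config
  namespace: kube-system            # assumed namespace; adjust to your deployment
data:
  KDMP_KOPIA_CONNECT_TIMEOUT: "5m"  # example value; the default is 1m when unset
```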
Known Issues
| Issue Number | Description |
|---|---|
| PB-12045 | Issue: During CloudSnap backups with FADA volumes, attachment failures may occur when a node reaches its maximum attachment limit of 256 volumes. This happens because detach operations after backups are asynchronous, temporarily increasing the total number of active attachments. Workaround: Keep the number of FADA volume attachments per node well below the 256 limit, or reduce the number of FADA volumes included in each backup to ensure CloudSnap operations complete successfully |
| PB-6901 | Issue: Restores for namespaces with Istio enabled can enter a partial success state because both Istio and Stork attempt to create the istio-ca-root-cert (and sometimes istio-ca-crl) ConfigMaps at the same time, resulting in a resource conflict. Workaround: Exclude the istio-ca-root-cert and istio-ca-crl ConfigMaps from the restore. Restores with the Replace policy will fail for Istio-enabled namespaces because these ConfigMaps are pre-created by the Istio control plane and cannot be replaced. |
PXB 2.9.1
August 26, 2025
Refer to these topics before you start install or upgrade tasks:
Fix
| Issue Number | Description | Severity |
|---|---|---|
| PB-11756 | Issue: When backups were deleted from an NFS backup location, Portworx Backup initiated a separate delete job for each PVC, and these jobs deleted the volume data. There was no upper limit on the number of concurrent delete jobs, so deleting several backups at once resulted in a sudden spike in active jobs. User Impact: In clusters with a strict job quota, this behavior caused resource pressure on the cluster nodes. Resolution: PXB now supports configurable limits for NFS backups with the Portworx, CSI, and KDMP drivers. This feature allows users to define the maximum number of concurrent delete jobs, helping to prevent quota exhaustion and providing greater control over backup deletion management. Affected Versions: 2.5.0 ~ 2.9.0 | Major |
PXB 2.9.0
July 8, 2025
The compatible Stork version for Portworx Backup 2.9.0 is 25.3.1. Make sure you have upgraded your Stork version to 25.3.1.
Refer to these topics before you start install or upgrade tasks:
Features
This feature allows users to initiate a retry of failed VM backups from an existing backup or schedule that included multiple VMs. The retry operation targets only the VMs that failed in the previous run, creating a new backup while skipping the ones that were successfully backed up, thus optimizing performance and resource usage.
PXB deployment on proxy-enabled clusters
PXB can be deployed in proxy-enabled or network-restricted Kubernetes cluster environments across all supported platforms (except IKS). It supports proxy configuration either directly through Helm values or via a Kubernetes Secret whose name is provided through a Helm parameter, including setups that require authentication and custom CA certificates, to ensure secure and compliant communication.
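As a sketch only, a proxy secret of this kind usually carries the standard proxy environment variables; the secret name and key names below are assumptions, and the Helm parameter that references the secret is described in the install topics.

```yaml
# Hypothetical proxy secret referenced through a Helm parameter at install time.
# The secret name and key names are assumptions; check the install documentation
# for the exact format expected by your Portworx Backup release.
apiVersion: v1
kind: Secret
metadata:
  name: px-proxy-config             # hypothetical secret name
  namespace: central                # namespace where Portworx Backup is installed
type: Opaque
stringData:
  HTTP_PROXY: http://user:password@proxy.example.com:3128
  HTTPS_PROXY: http://user:password@proxy.example.com:3128
  NO_PROXY: localhost,127.0.0.1,.svc,.cluster.local
```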
Portworx Backup (PXB) allows you to suspend, resume, or delete multiple backup schedules in bulk, simplifying backup management across large environments. You can filter schedules by their names, schedule policies, or a combination of both to precisely target the desired schedules.
This feature lets you back up only the Kubernetes PVC and PV specifications without including the actual volume data. It helps reduce storage usage and speeds up backups by allowing you to exclude specific volumes based on labels like volume type, driver, or NFS server. This is ideal when volume data is already protected elsewhere or when the same volumes will be reused during restore.
Enhancements
- The PXB web console now provides more in-depth visibility, with improved status indicators and detailed VM backup data to simplify monitoring and streamline backup management.
- The web console introduces a redesigned, user-friendly left navigation pane that simplifies access to essential features and enhances overall usability.
- You can sort select columns on the dashboard, cluster overview, backup, restore, and all backups pages in ascending or descending order by clicking the arrows near the column headers.
Fixes
| Issue Number | Description | Severity |
|---|---|---|
| PB-11673 | Issue: Case 1: Stork < 25.3.0 Deleting the associated StorageClass (SC) causes backups of FADA and FBDA volumes to fail. Without the SC, Stork cannot identify the volume as FADA and incorrectly sends the snapshot request to PXE (which does not support cloud snapshots), resulting in a partial success backup. Non-FADA volume backups succeed in this scenario. Case 2: Stork = 25.3.0 FADA, FBDA, and PXE volume backups fail if the SC is not present. User Impact: Users experience backup failures of FADA, FBDA, and/or PXE volumes if the Stork version is 25.3.0 or below. Resolution: Upgrade to Stork version 25.3.1 for your backups of PXE volumes (that are not FADA/FBDA) to succeed even if the corresponding SC is deleted. Note: Stork 25.3.1 does not resolve failures that occur with FADA volumes lacking an associated SC. This issue will be resolved when PXE supports FADA volume backups. Affected Versions: All | Major |
| PB-11231 | Issue: ResourceExport CRs for scheduled backups were not deleted because they were stuck in the Initial or Failed state before the backup job was created. As a result, the cleanup process failed to retrieve job-related resources by ID, as the job did not exist. User Impact: Orphaned ResourceExport CRs may persist indefinitely, leading to namespace clutter and potential resource management issues within the cluster over time. Resolution: PXB detects ResourceExport CRs in the specified state without a job ID. In these cases, it skips cleanup steps that depend on the job ID and gracefully deletes the ResourceExport CR. Affected versions: 2.7.x, 2.8.x | Major |
| PB-11230 | Issue: The controller incorrectly triggered an export job for a ResourceExport CR marked for deletion. No jobs should be initiated once a delete request is issued. User Impact: The controller operated on a stale ResourceExport CR instance copy. As a result, it failed to detect that the CR had already been marked for deletion. Resolution: The controller now pulls the latest CR state from the API server rather than using a cached copy, ensuring accurate status checks and avoiding export job creation for CRs marked for deletion. Affected versions: 2.7.x, 2.8.x | Minor |
| PB-11212 | Issue: During a CSI backup with offload to S3 and an NFS backup location, failure in backing up a single volume caused the entire backup to be marked as Failed, making it unusable for restore—even though snapshots of other volumes were successfully backed up. Also, this process left uncleaned snapshot copies. User Impact: Backups with only a single failed volume were rendered completely unusable, preventing users from restoring any successful data. Resolution: Such backups are now marked as Partial Success, allowing users to restore the successfully backed-up volumes. In addition, snapshot copies created during the backup process are cleaned up. Affected versions: 2.7.x, 2.8.x | Major |
| PB-11191 | Issue: The post-install-hook relied on an outdated Kubernetes Python client (v17.17.0), which raised a ValueError from v1.list_node() when the names field in the node specification was null, as seen in Anthos clusters. User Impact: Users experienced failures when attempting to list nodes if the names field was not provided, leading to disruptions during the upgrade. Resolution: The Kubernetes Python client is upgraded to a stable version that treats the names field as optional to prevent the ValueError and to improve post-install-hook stability. Affected versions: 2.7.x, 2.8.x | Minor |
| PB-11158 | Issue: Backups with a large number of VMs hung when the ApplicationBackup CR exceeded etcd’s 1.5 MB limit, primarily due to numerous PVCs and their lengthy names. This triggered a request is too large error, causing the backup process to get stuck indefinitely instead of failing gracefully. User Impact: Such backups hung indefinitely without displaying any clear error messages in the web console. As a result, users were unable to complete backups or determine the cause of the failure. Resolution: The backup process now fails gracefully with an appropriate error message when the ApplicationBackup CR exceeds the etcd size limit. Affected versions: 2.7.x, 2.8.x | Major |
| PB-11152 | Issue: The px-backup pod used to enter a CrashLoopBackOff state if an invalid or non-existent platform credential UID was provided while adding a Rancher cluster via the API. User Impact: Users were encountering repeated px-backup pod crashes, affecting backup reliability in Rancher clusters and requiring manual intervention. Resolution: PXB now handles validation logic for nil platform credentials to ensure stable px-backup pod operation. Affected versions: 2.7.x, 2.8.x | Major |
| PB-11052 | Issue: An extra secret was created but not cleaned up while taking CSI-based offload to S3 backups, leading to unnecessary resource usage and potential security risks from leftover secrets. User Impact: Users had experienced increased resource usage in their environments, which could have affected performance and led to potential security vulnerabilities. Resolution: Upgrade to PXB version 2.9.0 or later and Stork version 25.3.0 or later to ensure proper cleanup of unused secrets. Affected versions: 2.2.x~2.8.x | Minor |
| PB-10648 | Issue: During SMTP configuration validation, PXB was unable to validate multiple email addresses. User Impact: The Validation failed prompt was incorrectly displayed even though the emails were successfully delivered to the added recipients. Resolution: The PXB web console now displays a success notification when multiple email addresses are added to receive email alerts. Affected version: 2.8.4 | Minor |
| PB-10354 | Issue: Before version 2.9.0, if a VM backup had a failed PVC volume, it was marked as partial success. However, restore would fail since all VM volumes must be backed up successfully for a valid restore. User Impact: A VM backup is not restorable if even a single volume fails. Resolution: Starting from PXB 2.9.0, if any volume in a VM backup fails, the VM is marked as failed. For multi-VM backups, the status is marked as partial success, and only VMs with all volumes successfully backed up can be restored. Affected version: 2.8.4 | Major |
| PB-10352 | Issue: In a scheduled VM backup, if any of the VMs selected for the backup were deleted before the next scheduled run, PXB did not report that the deleted VM was skipped in the backup. User Impact: Because the backup was marked successful, users might assume that all the selected VMs were backed up, including the deleted ones. Resolution: Starting from 2.9.0, such backups are marked as partial success. Affected version: 2.8.4 | Minor |
| PB-3492 | Issue: The application used to become temporarily unresponsive if the user attempted to back up application data using the CSI with offload-to-S3 option. User Impact: Users experienced application freezes during the CSI with offload backup process, potentially causing delays in backup completion and impacting overall system performance. Resolution: The freeze during CSI with offload-to-S3 backups is fixed. Affected versions: 2.3.x ~ 2.8.x | Minor |
| PB-3481 | Issue: MongoDB pods started in an incorrect order or before all peers were resolvable, which caused a replica set split and prevented the election of a primary node (isWritablePrimary: false). As a result, PXB crashed during installation or upgrade. User Impact: PXB failed to start or operate correctly when MongoDB pods did not initialize properly, particularly in resource-constrained environments or after node restarts. Resolution: The MongoDB startup process now waits until all peer pods are DNS-resolvable before proceeding. Additionally, the podManagementPolicy is set to OrderedReady to ensure safe, sequential pod initialization and reliable replica set formation. Affected versions: 2.3.x~2.8.x | Major |
Known Issue
| Issue Number | Description |
|---|---|
| PB-11395 | Issue: When a VM backup includes a large number of VMs and all of them fail due to freezeTimeout, the backup operation takes an unusually long time to be marked as completely failed. Workaround: Identify the cause of the snapshot delays. If the freezeTimeout is set lower than the default value of 300 seconds, increase it to a higher value or reset it to the default value. |
PXB 2.8.4
April 14, 2025
Refer to these topics before you start install or upgrade tasks:
Portworx Backup offers the following features and enhancements in this release:
Features
Advanced filters
The Advanced Filter Option for Label Filtering in PXB extends label-based filtering to support VM resource labels, enabling more granular selection of VM resources during backup operations. It introduces the ability to include or exclude resources based on label criteria, following the standard Kubernetes label selector syntax. For more information, refer to backup namespaces and VMs by labels.
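For reference, the standard Kubernetes label selector syntax mentioned above supports equality-based and set-based expressions; how you supply the selector (web console field or API payload) depends on your workflow, and the keys and values below are purely illustrative.

```yaml
# Illustrative label selector in standard Kubernetes syntax; keys and values
# are examples only.
selector:
  matchLabels:
    app: database                 # equality-based: include resources with this label
  matchExpressions:
    - key: environment
      operator: In                # set-based: match any of the listed values
      values: ["prod", "staging"]
    - key: tier
      operator: NotIn             # exclude resources with these label values
      values: ["cache"]
```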
Partial Success for VM Backups
VM backups are now more resilient and efficient with per-VM execution of PreExec and PostExec rules. This ensures that if one VM encounters an issue, the backup can still proceed smoothly for the others. Volume-level status is now accurately tracked per VM, offering better visibility and reporting. The partial success for VM approach also reduces freeze times, resulting in faster and more reliable backups.
Enhancements
Retry failed/partially successful VM/namespace backups
This enhancement allows users to retry failed or partially successful Virtual Machine (VM) or namespace backups within a multi-VM or multi-namespace backup job. When certain VMs or namespaces in a backup fail, users can trigger a retry operation to reattempt backups only for those failed or partially succeeded VMs or namespaces without impacting the ones that succeeded.
Static IP-safe VM Restore
Two new annotations are introduced in Stork to provide users greater control and flexibility during VM restore, especially in environments with static IP configurations, improving restore reliability and reducing manual intervention (see the sketch after this list):
- px-backup/skip-vm-start: prevents the restored VM from starting automatically and allows users to manually configure a new static IP before VM startup, avoiding conflicts.
- px-backup/skip-mac-masking: retains the original MAC address instead of applying a masked one during restore. This annotation is also useful for scenarios where MAC consistency is required.
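A minimal sketch of how these annotations might be applied is shown below; the placement on the object's metadata and the annotation values are assumptions, so confirm the exact target resource (restore object versus VM resource) in the Portworx Backup and Stork documentation for your version.

```yaml
# Hypothetical placement of the Stork annotations described above; the target
# object and the annotation values shown are assumptions.
metadata:
  annotations:
    px-backup/skip-vm-start: "true"      # do not auto-start the restored VM
    px-backup/skip-mac-masking: "true"   # keep the original MAC address after restore
```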
Fixes
| Issue Number | Description | Severity |
|---|---|---|
| PB-10122 | Issue: When you edited an existing backup schedule through the Schedules tab and enabled or disabled custom rules, the updated rules were not propagated to the backend. User Impact: Upcoming scheduled backups used to fail. Resolution: Modifications to existing schedules work as expected and the scheduled backups succeed. Affected Versions: 2.8.0, 2.8.1, 2.8.2, 2.8.3 | Major |
| PB-9989 | Issue: Backup schedules did not consider the time zone of the application cluster and always used the UTC time zone. User Impact: Users were encountering difficulties during creation of backup schedules. Resolution: All backup schedules now use the time zone configured in the Stork spec on the application cluster. Affected Versions: 2.8.0, 2.8.1, 2.8.2, 2.8.3 | Major |
| PB-9988 | Issue: Restoring KubeVirt VMs with static IPs can lead to IP conflicts, requiring manual shutdown of the source VM. User Impact: Restoring a VM with a static IP on the same cluster used to fail. Resolution: PXB provides the px-backup/skip-vm-start annotation to skip automatic VM start after restore for static IP configuration, and px-backup/skip-mac-masking to retain the original MAC address. Affected Versions: 2.8.0, 2.8.1, 2.8.2, 2.8.3 | Major |
| PB-9573 | Issue: Sometimes a CSI snapshot failed with a revision mismatch error when the external snapshotter tried to update the PVC spec for a finalizer update. User Impact: Native CSI or KDMP/local snapshot backups failed randomly. Resolution: PXB ignores the revision mismatch error from the volume snapshot and allows it to reconcile so that the specified variants of backups succeed. Affected Versions: 2.8.0, 2.8.1, 2.8.2, 2.8.3 | Major |
2.8.3
February 12, 2025
Refer to these topics before you start install or upgrade tasks:
Portworx Backup offers the following features and enhancements in this release:
Features
Health Check
Starting with version 2.8.3, Portworx Backup runs a health check before installation or upgrade to confirm that the deployment environment meets all prerequisites. This proactive validation helps ensure a seamless installation or upgrade. For more information, refer to health check and health check matrix.
Parallel Backup Schedules
This feature ensures that scheduled backup jobs are triggered on time with uninterrupted scheduling, even if a previous backup is still in progress. This prevents delays caused by large volumes or bandwidth constraints, maintaining consistent data protection. For more information, refer to parallel backup schedule.
Enhancements
Email alerts with SMTP TLS
This enhancement allows configuring custom certificates for SMTP connections, ensuring compliance with security standards. It also enables STARTTLS support during email validation for secure encryption upgrades. These improvements enhance the security and flexibility of SMTP communication. For more information, refer to Email alerts with SMTP TLS.
Backup and restore of VM instance type and preference CRs
You can now back up VirtualMachineInstancetype, VirtualMachinePreference, and NetworkAttachmentDefinition VM resources and restore them successfully. For more information, refer to Virtual machine backup.
PXB deployment through Argo CD
Portworx Backup can now be deployed through Argo CD by updating minimal configuration parameters in a few clicks. This enhancement ensures automated, consistent, and scalable deployments across Kubernetes clusters. For more information on parameters and their values for this deployment mode, refer to the install and upgrade topics.
Ansible Parameters
Two new parameters are introduced in the Ansible modules to enhance backup and restore efficiency and reliability (see the sketch after this list):
- keep_cr_status: enables users to preserve CR status during backup and restore with the status sub-resource.
- parallel_backup: allows scheduled backups to run in parallel and is applicable only to backups that contain only Portworx volumes.
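The sketch below shows where these parameters could appear in a playbook task; the module name and the other option names are placeholders rather than the collection's documented interface.

```yaml
# Hypothetical Ansible task illustrating the two new parameters; the module name
# and the remaining options are placeholders.
- name: Create a backup that preserves CR status and allows parallel scheduling
  purepx.px_backup.backup:          # hypothetical module name
    name: daily-app-backup
    backup_location: s3-primary     # hypothetical option
    keep_cr_status: true            # preserve CR status (status sub-resource)
    parallel_backup: true           # run scheduled backups in parallel (Portworx volumes only)
```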
Fixes
| Issue Number | Description | Severity |
|---|---|---|
| PB-9209 | Issue: Multi-namespace backup and restore used to fail with a Namespace Not Found error if the admin namespace (admin-ns) was configured in Stork on the application cluster. User Impact: Users could not back up and restore on the application cluster in this scenario. Resolution: PXB now validates that the admin namespace configured in Stork (on the application cluster) exists, and the backup and restore operations then succeed. Affected Versions: 2.7.x, 2.8.0, 2.8.1, 2.8.2 | Major |
| PB-8743 | Issue: There was inadequate information in px-backup pod logs in case of backup failures. User Impact: Debugging backup failures was challenging for the support team due to insufficient information. Resolution: Backup failure reasons are now logged in px-backup pods from the applicationbackup CR, providing users with accurate information to facilitate resolution. Affected Versions: 2.8.0, 2.8.1, 2.8.2 | Major |
| PB-6688 | Issue: Backups failed for Portworx volumes encrypted at the storage class level using CSI and Kubernetes secrets when running on a non-secure cluster. User Impact: Data was not backed up and users had to encrypt their Portworx volumes using alternative Portworx encryption methods. Resolution: Backups are now successful in the above scenario for encrypted Portworx volumes. Affected Versions: 2.6.x, 2.7.x, 2.8.0, 2.8.1, 2.8.2 | Major |
| PB-6114 | Issue: For NFS backups, transient errors (network-related or NFS server issues) could make the backup content unreadable, and the backup was marked with the cloud backup missing state in the show details (view JSON) output. User Impact: Users could not restore such backups even though there was no backup error in this scenario. Resolution: PXB now waits for the errors to be resolved, fetches the required cloud files, and the backup succeeds. Affected Versions: 2.6.x, 2.7.x, 2.8.0, 2.8.1, 2.8.2 | Major |
| PB-3604 | Issue: Image pulls during installation on the Mirantis Kubernetes Engine (MKE) platform failed with an ImagePullBackOff error. User Impact: This error prevented users from proceeding with the installation. Resolution: Users must now include the images.pullSecrets[0]=docregistry-secret parameter in the Helm command during installation or upgrade if they were using docregistry-secret to pull Portworx Backup images into the px-backup namespace from a private registry. Affected Versions: 2.6.x, 2.7.x, 2.8.0, 2.8.1, 2.8.2 | Major |
Known Issues
The Portworx Backup team is aware of the following known issues, which will be fixed in upcoming releases:
| Issue Number | Description |
|---|---|
| PB-9754 | Issue: Upgrading Portworx Backup to version 2.8.3 fails when using an external Prometheus and Alertmanager due to a health check validation, displaying the error: prometheus-operator not found. User Impact: The upgrade process is blocked when external Prometheus and Alertmanager components are in use. Workaround: To proceed with the upgrade, bypass the health check by adding --set pxbackup.skipValidations=true to the Helm upgrade command. |
| PB-8413 | Issue: If a user takes a backup in a cloud container/blob and then deletes the container, the deletion of that backup currently does not go through. User Impact: When the user deletes the backup location container and the backup moves to the cloud file missing status, the user will not be able to delete that particular backup and it will remain in the deleting state forever. |
2.8.2
January 24, 2025
Refer to these topics before you start install or upgrade tasks:
Portworx Backup offers the following fix in this release:
Fix
| Issue Number | Description | Severity |
|---|---|---|
| PB-9312 | Issue: Possible exposure of certain sensitive information in logs. Resolution: Logs are now sanitized to prevent any exposure of confidential data to ensure enhanced security and compliance. If you are on PXB version 2.6.0 or 2.7.x or 2.8.x, upgrade to version 2.6.1 or 2.7.4 or 2.8.2 respectively. If you are on versions earlier than 2.6.0, please contact the Pure Storage support team for upgrade assistance. Affected Versions: 2.6.0, 2.7.x, 2.8.0, 2.8.1 | Major |
2.8.1
December 13, 2024
Refer to these topics before you start install or upgrade tasks:
Portworx Backup offers the following enhancement and fixes in this release: