Portworx Backup Release Notes
Portworx Backup 2.8.4
April 14, 2025
Refer to these topics before you start install or upgrade tasks:
Portworx Backup offers the following features and enhancements in this release:
Features
Advanced filters
The Advanced Filter Option for Label Filtering in Portworx Backup extends label-based filtering to support VM resource labels, enabling more granular selection of VM resources during backup operations. It introduces the ability to include or exclude resources based on label criteria, following the standard Kubernetes label selector syntax. For more information, refer to backup namespaces and VMs by labels.
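For reference, the standard Kubernetes label selector syntax that these filters follow supports both equality-based and set-based expressions; the selectors below are generic Kubernetes examples, not Portworx Backup-specific fields:

```text
# Equality-based: select resources labeled app=web but not env=staging
app=web,env!=staging

# Set-based: select resources whose tier label is frontend or backend
tier in (frontend, backend)
```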
Partial Success for VM Backups
VM backups are now more resilient and efficient with per-VM execution of PreExec and PostExec rules. This ensures that if one VM encounters an issue, the backup can still proceed smoothly for the others. Volume-level status is now accurately tracked per VM, offering better visibility and reporting. The partial success for VM approach also reduces freeze times, resulting in faster and more reliable backups.
Enhancements
Retry failed/partially successful VM/namespace backups
This enhancement allows users to retry failed or partially successful virtual machine (VM) or namespace backups within a multi-VM or multi-namespace backup job. When certain VMs or namespaces in a backup fail, users can trigger a retry operation to reattempt backups only for those failed or partially successful VMs or namespaces without impacting the ones that succeeded.
Static IP-safe VM Restore
Two new annotations are introduced in Stork to give users greater control and flexibility during VM restore, especially in environments with static IP configurations, improving restore reliability and reducing manual intervention (a usage sketch follows the list):
- px-backup/skip-vm-start: prevents the restored VM from starting automatically, allowing users to manually configure a new static IP before VM startup and avoid conflicts.
- px-backup/skip-mac-masking: retains the original MAC address instead of applying a masked one during restore. This annotation is useful in scenarios where MAC consistency is required.
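A minimal sketch of applying these annotations, assuming they are set on the KubeVirt VirtualMachine object being restored; the resource kind and the "true" values are assumptions, and only the annotation keys come from this release:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: db-vm
  annotations:
    px-backup/skip-vm-start: "true"     # restored VM stays powered off for manual IP configuration
    px-backup/skip-mac-masking: "true"  # original MAC address is retained
spec:
  # ... VM spec unchanged ...
```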
Fixes
| Issue Number | Description | Severity |
|---|---|---|
| PB-10122 | Issue: When you edited an existing backup schedule through the Schedules tab and enabled or disabled custom rules, the latest rules were not propagated to the backend. User Impact: Upcoming scheduled backups used to fail. Resolution: Modifications to existing schedules now work as expected and the scheduled backups succeed. Affected Versions: 2.8.0, 2.8.1, 2.8.2, 2.8.3 | Major |
| PB-9989 | Issue: Backup schedules did not consider the timezone of the application cluster and always used the UTC time zone. User Impact: Users encountered difficulties during creation of backup schedules. Resolution: All backup schedules now use the timezone configured in the Stork spec on the application cluster. Affected Versions: 2.8.0, 2.8.1, 2.8.2, 2.8.3 | Major |
| PB-9988 | Issue: Restoring KubeVirt VMs with static IPs could lead to IP conflicts, requiring manual shutdown of the source VM. User Impact: Restoring a VM with a static IP on the same cluster used to fail. Resolution: Portworx Backup provides the px-backup/skip-vm-start annotation to skip automatic VM start after restore for static IP configurations, and px-backup/skip-mac-masking to retain the original MAC address. Affected Versions: 2.8.0, 2.8.1, 2.8.2, 2.8.3 | Major |
| PB-9573 | Issue: Sometimes a CSI snapshot failed with a revision mismatch error when the external snapshotter tried to update the PVC spec for a finalizer update. User Impact: Native CSI or KDMP/local snapshot backups failed randomly. Resolution: Portworx Backup now ignores the revision mismatch error from the volume snapshot and allows it to reconcile so that these backup variants succeed. Affected Versions: 2.8.0, 2.8.1, 2.8.2, 2.8.3 | Major |
Portworx Backup 2.8.3
February 12, 2025
Refer to these topics before you start install or upgrade tasks:
Portworx Backup offers the following features and enhancements in this release:
Features
Health Check
Starting with version 2.8.3, Portworx Backup runs a health check before installation or upgrade to confirm that the deployment environment meets all prerequisites. This proactive validation helps ensure a seamless installation or upgrade. For more information, refer to health check and health check matrix.
Parallel Backup Schedules
This feature ensures that scheduled backup jobs are triggered on time with uninterrupted scheduling, even if a previous backup is still in progress. This prevents delays caused by large volumes or bandwidth constraints, maintaining consistent data protection. For more information, refer to parallel backup schedule.
Enhancements
Email alerts with SMTP TLS
This enhancement allows configuring custom certificates for SMTP connections, ensuring compliance with security standards. It also enables STARTTLS support during email validation for secure encryption upgrades. These improvements enhance the security and flexibility of SMTP communication. For more information, refer to Email alerts with SMTP TLS.
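As a quick pre-check that your mail server supports the STARTTLS upgrade before you configure alerts, you can use a generic OpenSSL probe (not a Portworx Backup command; substitute your own SMTP host and port):

```shell
# Opens an SMTP session and attempts the STARTTLS upgrade, printing the certificate chain
openssl s_client -starttls smtp -connect smtp.example.com:587
```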
Backup and restore of VM instance type and preference CRs
You can now back up VirtualMachineInstancetype, VirtualMachinePreference, and NetworkAttachmentDefinition VM resources and restore them successfully. For more information, refer to Virtual machine backup.
Portworx Backup deployment through Argo CD
Portworx Backup can now be deployed through Argo CD by updating minimal configuration parameters in a few clicks. This enhancement ensures automated, consistent, and scalable deployments across Kubernetes clusters. For more information on parameters and their values for this deployment mode, refer to the install and upgrade topics.
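A minimal Argo CD Application sketch for this deployment mode; the chart repository URL, chart name, version, and values shown here are assumptions, so take the authoritative parameters from the install and upgrade topics:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: px-backup
  namespace: argocd
spec:
  project: default
  source:
    repoURL: http://charts.portworx.io/   # assumed Portworx Helm chart repository
    chart: px-central                     # assumed chart name
    targetRevision: 2.8.3                 # assumed chart version
    helm:
      parameters:
        - name: pxbackup.enabled
          value: "true"
  destination:
    server: https://kubernetes.default.svc
    namespace: central                    # assumed target namespace
  syncPolicy:
    automated: {}
```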
Ansible Parameters
Two new parameters are introduced in the Ansible modules to enhance backup and restore efficiency and reliability (a task sketch follows the list):
- keep_cr_status: enables users to preserve CR status during backup and restore with the status sub-resource.
- parallel_backup: allows scheduled backups to run in parallel; applicable only to backups that contain only Portworx volumes.
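A hypothetical playbook task showing where these parameters might sit; the module path and all other fields are illustrative assumptions, and only keep_cr_status and parallel_backup come from this release:

```yaml
- name: Create a backup schedule that preserves CR status and runs in parallel
  purepx.px_backup.schedule:        # hypothetical module path; check the collection docs
    name: daily-app-schedule
    backup_location: s3-bucket-east
    keep_cr_status: true            # preserve CR status via the status sub-resource
    parallel_backup: true           # only for backups containing Portworx volumes alone
```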
Fixes
| Issue Number | Description | Severity |
|---|---|---|
| PB-9209 | Issue: Multi-namespace backup and restore used to fail with a Namespace Not Found error if the admin namespace (admin-ns) was configured in Stork on the application cluster. User Impact: Users could not back up and restore on the application cluster in this scenario. Resolution: Portworx Backup now validates that the admin namespace configured in Stork (on the application cluster) exists, allowing backup and restore operations to succeed. Affected Versions: 2.7.x, 2.8.0, 2.8.1, 2.8.2 | Major |
| PB-8743 | Issue: There was inadequate information in px-backup pod logs in case of backup failures. User Impact: Debugging backup failures was challenging for the support team due to insufficient information. Resolution: Backup failure reasons are now logged in px-backup pods from the applicationbackup CR, providing users with accurate information to facilitate resolution. Affected Versions: 2.8.0, 2.8.1, 2.8.2 | Major |
| PB-6688 | Issue: Backups failed for Portworx volumes encrypted at the storage class level using CSI and Kubernetes secrets when running on a non-secure cluster. User Impact: Data was not backed up and users had to encrypt their Portworx volumes using alternative Portworx encryption methods. Resolution: Backups are now successful for such encrypted Portworx volumes. Affected Versions: 2.6.x, 2.7.x, 2.8.0, 2.8.1, 2.8.2 | Major |
| PB-6114 | Issue: For NFS backups, transient errors (network-related or NFS server issues) could make the backup content unreadable, and the backup was marked with the cloud backup missing state in the show details (View JSON) output. User Impact: Users could not restore such backups even though there was no backup error in this scenario. Resolution: Portworx Backup now waits for the errors to be resolved, fetches the required cloud files, and allows the backup to succeed. Affected Versions: 2.6.x, 2.7.x, 2.8.0, 2.8.1, 2.8.2 | Major |
| PB-3604 | Issue: Image pulls during installation on the Mirantis Kubernetes Engine (MKE) platform failed with an ImagePullBackOff error. User Impact: This error prevented users from proceeding with the installation. Resolution: Users must now include the images.pullSecrets[0]=docregistry-secret parameter in the Helm command during installation or upgrade if they were using docregistry-secret to pull Portworx Backup images into the px-backup namespace from a private registry. Affected Versions: 2.6.x, 2.7.x, 2.8.0, 2.8.1, 2.8.2 | Major |
Known Issues
The Portworx Backup team is aware of the following known issues; they will be fixed in upcoming releases:
| Issue Number | Description |
|---|---|
| PB-9754 | Issue: Upgrading Portworx Backup to version 2.8.3 fails when using an external Prometheus and Alertmanager due to a health check validation, displaying the error: prometheus-operator not found. User Impact: The upgrade process is blocked when external Prometheus and Alertmanager components are in use. Workaround: To proceed with the upgrade, bypass the health check by adding --set pxbackup.skipValidations=true to the Helm upgrade command. |
| PB-8413 | Issue: If a user takes a backup to a cloud container/blob and then deletes the container, deletion of that backup does not go through. User Impact: When the user deletes the backup location container and the backup moves to the cloud file missing status, the user cannot delete that particular backup, and it remains in the deleting state indefinitely. |
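For PB-9754, the workaround amounts to adding one flag to your existing upgrade command; the release name, chart reference, and namespace below are assumptions, while --set pxbackup.skipValidations=true comes from the table above:

```shell
helm upgrade px-central portworx/px-central --namespace central \
  --set pxbackup.skipValidations=true \
  --reuse-values
```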
Portworx Backup 2.8.2
January 24, 2025
Refer to these topics before you start install or upgrade tasks:
Portworx Backup offers the following fix in this release:
Fix
| Issue Number | Description | Severity |
|---|---|---|
| PB-9312 | Issue: Possible exposure of certain sensitive information in logs. Resolution: Logs are now sanitized to prevent any exposure of confidential data, ensuring enhanced security and compliance. If you are on Portworx Backup version 2.6.0, 2.7.x, or 2.8.x, upgrade to version 2.6.1, 2.7.4, or 2.8.2 respectively. If you are on versions earlier than 2.6.0, contact the Pure Storage support team for upgrade assistance. Affected Versions: 2.6.0, 2.7.x, 2.8.0, 2.8.1 | Major |
Portworx Backup 2.8.1
December 13, 2024
Refer to these topics before you start install or upgrade tasks:
Portworx Backup offers the following enhancements and fixes in this release:
Enhancements
Kubelogin
Portworx Backup now supports Azure kubelogin, enabling secure, token-based authentication via Azure AD for AKS clusters. This update enhances security by eliminating static credentials and supports Service Principal and Managed Service Identity login modes for streamlined access.
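For reference, kubelogin's non-interactive modes map to the two login modes called out above; these are standard kubelogin commands rather than Portworx Backup-specific ones:

```shell
# Convert the kubeconfig to use Service Principal login
kubelogin convert-kubeconfig -l spn

# Convert the kubeconfig to use Managed Service Identity login
kubelogin convert-kubeconfig -l msi
```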
Ansible Collection
The Portworx Backup Ansible collection simplifies the automation of Portworx Backup tasks, allowing you to efficiently manage backup locations, backup schedules, configure cloud credentials, and perform cluster operations using Ansible.
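Installation follows the usual Ansible Galaxy flow; the collection name below is an assumption, so confirm the published name on Ansible Galaxy before installing:

```shell
ansible-galaxy collection install purepx.px_backup
```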
API Improvements
Portworx Backup introduces the following enhancements in the Backup API for backups and backup schedules (a call sketch follows the list):
- ResourceCollector: a gRPC service that exposes the resource collector API
  - GetResourceTypes: this API returns the list of supported resource types of a cluster that can be backed up.
- ExcludeResourceType: this API field helps exclude resource types from backups and backup schedules through create API calls.
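A hypothetical call sketch using grpcurl; the endpoint, fully qualified service path, and request fields are all assumptions, and only the ResourceCollector service, the GetResourceTypes API, and the ExcludeResourceType field are named in this release:

```shell
# List resource types on a cluster that can be backed up (hypothetical endpoint and payload)
grpcurl -d '{"org_id": "default", "cluster_ref": {"name": "app-cluster"}}' \
  px-backup.central.svc.cluster.local:10002 \
  ResourceCollector/GetResourceTypes
```

The type names returned can then be passed in the ExcludeResourceType field of backup and backup schedule create API calls to skip those resources.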
Fixes
| Issue Number | Description | Severity |
|---|---|---|
| PB-8993 | Issue: If you attempted an in-place restore of backed-up DB2 application data, the restore operation for the PVC often got blocked because the PVC remained actively mounted to pods. User Impact: The restore operation failed, preventing the database or application state from being restored from the backup. Resolution: Scale down the deployment/statefulset and delete the job pods consuming the mounted PVCs for the restores to succeed. Component: Restore Affected Versions: 2.7.x | Major |
| PB-8658 | Issue: The Create API did not return the created object, making it difficult for users to obtain the object's UID for subsequent operations like the inspect call. User Impact: Users had to make an enumerate call and manually search for the object in the results to retrieve the UID. Resolution: The Create API now returns the created object along with the UID in the response for subsequent tasks. Component: Portworx Backup API Affected Versions: 2.7.2, 2.7.3, 2.8.0 | Major |
| PB-8630 | Issue: PVCs failed to honor the pod's fsGroup setting during backup or restore when using a CSI provisioner (IBM file). As a result, Kopia could not snapshot the files due to missing read permissions. User Impact: Using IBM file provisioners caused generic/KDMP backup failures due to missing read permissions on files. Resolution: KDMP job pods now run with the anyuid annotation, enabling them to operate with any user ID (UID), including elevated privileges. For more information, refer to Configure KDMP job pods. Component: KDMP backup Affected Versions: 2.7.x, 2.8.0 | Major |
| PB-8410 | Issue: In generic/direct KDMP backups and NFS backup locations, PVC mounts may fail transiently. Portworx Backup previously reported an error on the first failure, causing backup and restore operations to fail. User Impact: Backups and restores with NFS backup locations or direct KDMP backups could fail. Resolution: Upgrade to Portworx Backup version 2.8.1 and then configure MOUNT_FAILURE_RETRY_TIMEOUT in the kdmp-config ConfigMap to enable Portworx Backup to retry mounts before returning an error. For more information, refer to Configure mount timeout. Component: Backups and restores Affected Versions: 2.7.x, 2.8.0 | Major |
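For PB-8410, the retry window is set through the kdmp-config ConfigMap; the namespace and the value format below are assumptions, and only the MOUNT_FAILURE_RETRY_TIMEOUT key comes from the fix above:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kdmp-config
  namespace: kube-system               # assumed namespace for the Stork/KDMP ConfigMap
data:
  MOUNT_FAILURE_RETRY_TIMEOUT: "300"   # assumed value: retry window before an error is returned
```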
Portworx Backup 2.8.0
November 22, 2024
Refer to these topics before you start install or upgrade tasks:
Portworx Backup offers the following features, enhancements, and fixes in this release:
Features
Cluster Share
The new cluster share feature in Portworx Backup enables central management and sharing of Kubernetes clusters for backup and restore operations, addressing limitations around self-service and security. Administrators can now add clusters and share access with specific users or groups without exposing kubeconfigs or cloud credentials. The feature supports scalability, allowing clusters to be shared with multiple users while ensuring high security and performance. For more information, refer to share clusters.
Super Administrator
The Portworx Backup super administrator role introduces a powerful, centralized role with comprehensive access to clusters, namespaces, cloud accounts, and backup objects. Super admins can view, manage, and share backups and restores across all clusters within a deployment, regardless of the cluster ownership. With this role, organizations gain streamlined backup management and control over both RBAC and non-RBAC resources of Portworx Backup. For more information, refer to super administrator.
Enhancements
Concurrent Deletion of Backups
The Portworx Backup 2.8.0 release introduces a concurrent deletion enhancement that significantly speeds up the deletion of backups and their associated volumes. By default, Portworx Backup processes up to five backups concurrently, with each backup supporting up to five threads for volume deletion; both values can be customized for your environment. This enhancement optimizes large-scale backup management, reducing deletion time and improving resource utilization. For more information, refer to concurrent deletion of backups.
Azure Proxy Parameters
Portworx Backup introduces advanced proxy configuration options to manage Azure proxy exclusions and inclusions, allowing precise control over service-specific proxy behavior. This enhancement is particularly useful in complex Azure deployments where some Portworx Backup micro-services need to bypass the proxy while others must adhere to it. This proxy support applies only when an Azure AKS cluster serves as the Portworx Backup cluster. For more information, refer to Azure proxy parameters.
Fixes
| Issue Number | Description | Severity |
|---|---|---|
| PB-8648 | Issue: An updateOwnership API call with an invalid ownership struct used to crash the px-backup pod. User Impact: Users were not able to get a relevant error message to determine the actual cause of the pod crash. Resolution: Portworx Backup now validates the user input in the updateOwnership API call for ownership details and throws an appropriate error message. Component: Portworx Backup API Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
| PB-8394 | Issue: When you took a backup of two Portworx volumes, one small (a few MB) and one large (around 500 GB), the smaller one used to display failed status. User Impact: The backup of the smaller volume used to display failed status with a key not found error even though the backup was successfully uploaded to the cloud. Resolution: Portworx Backup no longer checks the status of a successful backup of Portworx volumes. Component: Portworx volume backup Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
| PB-8360 | Issue: IBM COS backup location addition used to fail with the error UnsupportedOperation for an unlocked bucket. User Impact: Users were unable to add an IBM COS backup location for an unlocked bucket. Resolution: Portworx Backup now ignores the UnsupportedOperation error for unlocked IBM COS buckets so that users can successfully add them as backup locations. Component: Backup location Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
| PB-8344 | Issue: If a user deleted backups and the associated backup location simultaneously, the px-backup pod sometimes went into the crashLoopBackOff state. User Impact: The px-backup pod used to go into the crashLoopBackOff state, halting the backup service. Resolution: Portworx Backup now retains associated backups even if the backup location is deleted, provided the backups are in either the deleting or deletePending state. Component: Backup location Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
| PB-8327 | Issue: In environments with scheduled backups using namespace labels on an NFS backup location, if no namespaces matched the labels during backup creation, Portworx Backup would create empty folders. User Impact: Users were unable to delete these empty folders even after deleting the backups. Resolution: Portworx Backup now deletes the empty backup folder while deleting the associated backups, preventing the accumulation of such empty folders. Component: NFS backup location Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
| PB-8140 | Issue: The pxcentral-post-install-hook job would fail if it was re-run after a failure or an error. User Impact: Portworx Backup installation or upgrade used to fail sometimes. Resolution: If pxcentral-post-install-hook re-runs after an error, it now completes successfully for smooth install and upgrade tasks without any interruptions. Component: Install/Upgrade Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
| PB-7967 | Issue: Unsuccessful manual backups (because of deletion of dependent backups) belonging to an unknown or stranded user could not be deleted. User Impact: Such backups remained in the system, consuming system resources. Resolution: Users can now delete such stranded backups and clear up the storage space. Component: Manual backup Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
| PB-7944 | Issue: Large resource backups were taking a long time due to underlying resource fetch issues. User Impact: Creation of large resource backups took a long time to complete. Resolution: Large resource backups now take less time to complete in S3-based backup locations. Component: Large resource backups Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Moderate |
| PB-7859 | Issue: PVCs with ReadWriteMany or ReadOnlyMany access modes (excluding ReadWriteOnce) that are already mounted by a pod did not support node affinity. User Impact: Users were unable to configure node affinity for generic or KDMP backup job pods when PVCs were mountable on multiple nodes. Resolution: Portworx Backup now supports node affinity for PVCs that are already in use by pods. Component: Generic backups Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Moderate |
| PB-7774 | Issue: Portworx Backup web console was not displaying the time stamp for all stages of backup creation. User Impact: User was not able to get accurate data on various phases of backup and the time taken for each phase. Resolution: The Portworx Backup web console provides detailed information on the time taken for each stage of the backup and the total backup completion time, accessible through the View JSON option. Component: Backups Affected Versions: 2.7.3 | Major |
| PB-7553 | Issue: In multi-namespace backups, namespaces where backup of all PVCs failed were included in the protected app count. User Impact: Portworx Backup web console dashboard used to display an unprotected namespace as protected. Resolution: Portworx Backup does not include such namespaces in the dashboard and gives accurate data to the users now. Component: Namespace backups Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
| PB-7476 | Issue: The Kopia/NFS backup and restore job pods could not be configured with node affinity, leading these pods to be scheduled on nodes where the applications (to be backed up) were not running. User Impact: Backup job pods were scheduled on nodes where applications were not running due to network restrictions, resulting in backup failures. Resolution: Portworx Backup now enables you to configure node affinity for Kopia/NFS backup and restore job pods, ensuring they run on the specified nodes. Component: NFS backup Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
| PB-7469 | Issue: API calls failed to fetch resources if the response time exceeded 90 seconds, resulting in a time-out error. User Impact: The user was unable to retrieve the required resources via API calls. Resolution: Users can now configure the timeout value based on their environment, allowing the API sufficient time to load and fetch the resources. Component: Portworx Backup API Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
| PB-6928 | Issue: The px-backup pod encountered unhandled crashes when invalid JSON was included in the API request body. User Impact: Service disruptions and pod restarts occurred due to malformed JSON payloads. Resolution: Portworx Backup now validates REST API requests and returns appropriate errors for malformed JSON instead of crashing or restarting the pod. Component: Portworx Backup API Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
| PB-6916 | Issue: No alerts were triggered when the backup cluster could not reach the application cluster. User Impact: Users did not receive email notifications or alerts in the Portworx Backup web console, leaving them unaware of the issue. Resolution: Portworx Backup now sends alerts to the configured email and also displays the alerts on the web console when an application cluster goes offline. Component: Alerts Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
| PB-6665 | Issue: No alerts were triggered when backups transitioned to the Cloud Backup Missing state for NFS backup locations. User Impact: Users were not notified when backup files were missing from NFS backup locations. Resolution: Portworx Backup now notifies the user through alerts if cloud backups go missing in NFS backup locations. Component: NFS backups Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
| PB-6152 | Issue: In a proxy-enabled cluster, adding a filter to bypass the proxy for px-backup gRPC traffic was failing. User Impact: Users were unable to deploy Portworx Backup in a proxy-enabled cluster. Resolution: You can now add the localhost:<px-backup-grpc-port-number> endpoint to the no-proxy environment variable to bypass the proxy for internal communication. Component: Install Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
| PB-6034 | Issue: For synced backups, if the logged-in user was not the cluster owner, an attempt to create a duplicate backup resulted in the error: Duplication of backup failed because cluster <cluster-name> not found. User Impact: Users could see the duplicate backup option in the Portworx Backup web console but were unable to create a duplicate backup successfully. Resolution: The Duplicate option is now disabled for synced backups in the Activity timeline > Backup tab. Component: Portworx Backup Dashboard Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
Known Issue
The Portworx Backup team is aware of the following known issue; it will be fixed in an upcoming release:
| Issue Number | Description |
|---|---|
| PB-8102 | Issue: If user 1 creates a manual backup and user 2 creates an incremental backup of the same namespace of a cluster (shared or non-shared) on the same backup location (shared or non-shared), when user 1 tries to delete their backup, it gets stuck in the delete pending state due to its dependency on user 2's incremental backup. This issue is seen only for backups of Portworx volumes. Workaround: User 1 must contact the super administrator or user 2 to delete the dependent backup and then proceed with deletion of their own backup. |