Portworx Backup Release Notes
Portworx Backup 2.6.1
January 21, 2025
Refer to the relevant topics before you start install or upgrade tasks.
Fix
Portworx Backup offers the following fix in this release:
| Issue Number | Description | Severity |
|---|---|---|
| PB-9312 | Issue: Possible exposure of certain sensitive information in logs. Resolution: Logs are now sanitized to prevent exposure of confidential data, ensuring enhanced security and compliance. If you are on Portworx Backup version 2.6.0, upgrade to version 2.6.1. If you are on a version earlier than 2.6.0, contact the Pure Storage support team for upgrade assistance. Affected Versions: 2.6.0 | Major |
Portworx Backup 2.6.0
December 7, 2023
Features
Backup and restore VMs with KubeVirt
Portworx Backup now lets you back up and restore Virtual Machines (VMs) using the KubeVirt add-on. This feature also allows you to back up and restore VMs migrated from VMware environments and VMs running on OpenShift Virtualization. KubeVirt runs VMs as Kubernetes resources, and Kubernetes manages these resources with its built-in tools.
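Because a KubeVirt VM is a namespaced Kubernetes resource, it is captured along with the rest of its namespace during a backup. The following is a minimal sketch of a KubeVirt VirtualMachine manifest for illustration only; the name, namespace, and container disk image are hypothetical examples, not values required by Portworx Backup.

```yaml
# Minimal KubeVirt VirtualMachine sketch (hypothetical name, namespace, and image).
# The VM is an ordinary namespaced Kubernetes resource, so backing up the "vms"
# namespace also captures this object and its volumes.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: vms
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest
```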
Deletion of stranded resources
Portworx Backup default administrator users can now view the key Portworx Backup resources created by non-admin users: clusters, backups, restores, and backup schedules. The default administrator can filter these resources by username and delete them if they are obsolete or no longer required. For more information, refer to Deletion of stranded resources.
Starting with Portworx Backup 2.6.0 and Stork 23.9.0 or later, users cannot back up the kube-system, kube-node-lease, and kube-public namespaces, or the namespace where Portworx is installed, in any cluster environment.
Enhancement
Portworx Backup now provides improved integrity and reliability for synced backups. Starting with 2.6.0, backups created in a shared backup location are visible only to the backup owner, not to the owner of the shared backup location. For more information, refer to Synced backups.
Fixes
| Issue Number | Description |
|---|---|
| PB-4687 | Issue: If a user tried to update an AWS S3 backup location through the API without the validation flag, the backup location update task got stuck in the validation in progress state. Resolution: Users can now update the AWS S3 backup location successfully with API requests. |
| PB-4445 | Issue: When a user upgrades Portworx Backup from version 2.4.2 to 2.5.1 with OpenShift v4 as the external auth provider (with a custom CA certificate), the user cannot log in to the Portworx Backup or Keycloak user interface through OpenShift v4. Resolution: Upgrade to Portworx Backup version 2.6.0. |
| PB-4365 | Issue: KDMP backups and restores sometimes fail if the DataExport CR already exists. Resolution: Portworx Backup now checks for the existence of the DataExport CR and then continues with the other volumes, which resolves this issue for KDMP backups and restores. |
| PB-4354 | Issue: Users were unable to take a KDMP backup of a PVC used by a completed job pod. Resolution: This issue is now fixed. |
| PB-4274 | Issue: Maintenance job pods got stuck in the ContainerCreating state when the NFS mount was unreachable or invalid after an NFS backup location was created. Resolution: Portworx Backup now checks hourly for maintenance job pods stuck in the ContainerCreating state and force deletes them. |
| PB-4259 | Issue: A backup did not move to the failed state when the backup location and NFS subpath were deleted or did not exist. Resolution: Portworx Backup now validates this condition and moves the backup to the failed state with a CloudBackup objects are missing error. |
| PB-4070 | Issue: A restore from an NFS backup location went into a perpetual Pending state if the destination cluster had an older Stork version. Resolution: Portworx Backup now checks the Stork version and displays an appropriate error message in the user interface. |
| PB-4067 | Issue: KDMP or local snapshot backup delete jobs were not cleaned up after backup deletion completed, leading to an accumulation of stale backup delete job entries. Resolution: This issue is now fixed. |
Known Issues
| Issue Number | Description |
|---|---|
| PB-4875 | Issue: In an on-premises OCP cluster with a custom CA certificate and Active Directory as the external auth provider, adding an AWS S3 backup location fails. Workaround: Disable SSL when creating the AWS S3 backup location. |
| PB-4872 | Issue: If you install Portworx Enterprise with FlashBlade direct attach NFS volumes, the backup fails if you select a volume snapshot class in the Create Backup window. Workaround: Deselect the volume snapshot class option in the Create Backup window. |
| PB-4869 | Issue: When you restore a backup with FlashBlade direct-attached NFS volumes on a Portworx cluster, the restore might fail if you have enabled the Snapshot option in the FlashBlade user interface. Workaround: Refer to the Restore with FlashBlade section for the workaround. |
| PB-4805 | Issue: If a user updates the email, first name, or last name details in Keycloak, the updated details are not reflected in the All Backups page of the Portworx Backup user interface. Workaround: Check the All Shared Clusters page to view the latest changes to the email, first name, or last name. |
| PB-4628 | Issue: You cannot restore backups of Portworx RWX volumes from a source cluster to a different cluster using a new namespace with a different name. Also, KubeVirt VMs are paused during a KDMP backup with RWO mode. Workaround: Create an empty namespace with the same name in the source cluster. |
| PB-4116 | Issue: Restoring KubeVirt VMs from a source cluster running RHEL 9 to a destination cluster running RHEL 8.x fails. Workaround: Update the VM template to spec.domain.machine.type: q35 to make it version neutral (see the sketch following this table). |
| PB-3175 | Issue: Users cannot delete a cluster that is in the Not available state while a backup schedule is running. |
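For the PB-4116 workaround, the following is a minimal sketch of where the machine type setting sits in a KubeVirt VirtualMachine manifest; the VM name and namespace are hypothetical, and only the spec.domain.machine.type value inside the template is relevant to the workaround.

```yaml
# Hypothetical VM name and namespace; only the machine type setting matters here.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: restored-vm
  namespace: vms
spec:
  template:
    spec:
      domain:
        machine:
          type: q35   # version-neutral machine type, per the PB-4116 workaround
```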