2.8.1
December 13, 2024
Refer to these topics before you start install or upgrade tasks:
Portworx Backup offers the following enhancements and fixes in this release:
Enhancements
Kubelogin
Portworx Backup (PXB) now supports Azure kubelogin, enabling secure, token-based authentication via Azure AD for AKS clusters. This update enhances security by eliminating static credentials and supports Service Principal and Managed Service Identity login modes for streamlined access.
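For context, switching an AKS kubeconfig to kubelogin-based authentication typically looks like the following sketch. The login modes match those named above, but the exact flow your PXB deployment expects may differ, so treat this as an illustration of general kubelogin usage rather than PXB-specific steps:

```shell
# Illustrative kubelogin usage (assumes kubelogin is installed and a
# kubeconfig was already fetched, e.g. with `az aks get-credentials`);
# this is not a PXB-specific procedure.

# Service Principal login mode:
kubelogin convert-kubeconfig -l spn
export AAD_SERVICE_PRINCIPAL_CLIENT_ID=<client-id>
export AAD_SERVICE_PRINCIPAL_CLIENT_SECRET=<client-secret>

# Managed Service Identity login mode (on Azure-hosted machines):
kubelogin convert-kubeconfig -l msi
```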
Ansible collection
The PXB Ansible collection simplifies the automation of Portworx Backup tasks, allowing you to efficiently manage backup locations, backup schedules, configure cloud credentials, and perform cluster operations using Ansible.
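As an illustration only, a playbook using such a collection might look like the sketch below. The collection, module, and parameter names here are hypothetical placeholders, so check the published PXB Ansible collection documentation for the real ones:

```yaml
# Hypothetical sketch: the module and parameter names below are
# placeholders, not the actual PXB Ansible collection interface.
- name: Manage PXB backup locations with Ansible
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Add an S3 backup location
      purestorage.pxbackup.backup_location:   # hypothetical module name
        name: s3-east
        credential: aws-cred                  # hypothetical cloud credential
        bucket: pxb-backups
        state: present
```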
API improvements
PXB has introduced the following API enhancements in Backup API for backups and backup schedules:
- ResourceCollector: a gRPC service that exposes the resource collector API.
- GetResourceTypes: this API returns the list of supported resource types of a cluster that can be backed up.
- ExcludeResourceType: this API field lets you exclude specific resource types from backups and backup schedules through create API calls.
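To illustrate, a backup-create request that uses the new exclusion field might look like the following sketch. The field casing, request shape, and resource-type names are assumptions based on the description above, so verify them against the published Backup API reference:

```yaml
# Assumed request-body shape for a create backup call; the field names
# and structure are illustrative, not taken from the actual API schema.
backup:
  name: nightly-app-backup
  namespaces:
    - demo-app                  # hypothetical namespace
  exclude_resource_types:       # assumed wire form of ExcludeResourceType
    - Secret
    - ConfigMap
```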
Fixes
Issue Number | Description | Severity |
---|---|---|
PB-8993 | Issue: If you attempt to perform an in-place restore of backed-up DB2 application data, the restore operation for the PVC often gets blocked because the PVC remains actively mounted to pods. User Impact: The restore operation failed, preventing the database or application state from being restored from the backup. Resolution: Scale down the deployment/statefulset and delete the job pods consuming the mounted PVCs for the restores to succeed. Component: Restore Affected Versions: 2.7.x | Major |
PB-8658 | Issue: The Create API did not return the created object, making it difficult for users to obtain the object's UID for subsequent operations like the inspect call. User Impact: Users had to make an enumerate call and manually search for the object in the call results to retrieve the UID. Resolution: The Create API now returns the created object along with its UID in the response for subsequent tasks. Component: PXB API Affected Versions: 2.7.2, 2.7.3, 2.8.0 | Major |
PB-8630 | Issue: PVCs fail to honor the pod's fsGroup setting during backup or restore when using a CSI provisioner (IBM file). As a result, Kopia cannot snapshot the files due to missing read permissions. User Impact: Using IBM file provisioners causes generic/KDMP backup failures due to missing read permissions on files. Resolution: KDMP job pods now run with the anyuid annotation, enabling them to operate with any user ID (UID), including elevated privileges. For more information, refer to Configure KDMP job pods. Component: KDMP backup Affected Versions: 2.7.x, 2.8.0 | Major |
PB-8410 | Issue: In generic/direct KDMP backups and NFS backup locations, PVC mounts may fail transiently. PXB previously reported an error on the first failure, causing backup and restore operations to fail. User Impact: Backups and restores with NFS backup locations or direct KDMP backups may fail. Resolution: Upgrade to PXB version 2.8.1 and then configure MOUNT_FAILURE_RETRY_TIMEOUT in the kdmp-config ConfigMap to enable PXB to retry mounts before returning an error. For more information, refer to Configure mount timeout. Component: Backups and restores Affected Versions: 2.7.x, 2.8.0 | Major |
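The MOUNT_FAILURE_RETRY_TIMEOUT setting mentioned in PB-8410 above would be added to the kdmp-config ConfigMap roughly as follows. The namespace and value format are assumptions, so confirm them in the Configure mount timeout documentation:

```yaml
# Sketch only: the key name comes from the release note; the ConfigMap
# namespace and the value's unit are assumptions for this illustration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kdmp-config
  namespace: kube-system               # assumed namespace
data:
  MOUNT_FAILURE_RETRY_TIMEOUT: "300"   # assumed to be seconds
```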
2.8.0
November 22, 2024
Refer to these topics before you start install or upgrade tasks:
Portworx Backup offers the following features, enhancements, and fixes in this release:
Features
Cluster share
The new cluster share feature in Portworx Backup (PXB) enables central management and sharing of Kubernetes clusters for backup and restore operations, addressing limitations around self-service and security. Administrators can now add clusters and share access with specific users or groups without exposing kubeconfigs or cloud credentials. The feature supports scalability, allowing clusters to be shared with multiple users while ensuring high security and performance. For more information, refer to share clusters.
Super administrator
The PXB super administrator role introduces a powerful, centralized role with comprehensive access to clusters, namespaces, cloud accounts, and backup objects. Super admins can view, manage, and share backups and restores across all clusters within a deployment, regardless of the cluster ownership. With this role, organizations gain streamlined backup management and control over both RBAC and non-RBAC resources of PXB. For more information, refer to super administrator.
Enhancements
Concurrent deletion of backups
The Portworx Backup 2.8.0 release introduces a concurrent deletion enhancement that significantly speeds up the deletion of backups and their associated volumes. By default, PXB processes up to five backups concurrently, with each backup supporting up to five threads for volume deletion; both values can be customized for your environment. This enhancement optimizes large-scale backup management, reducing deletion time and improving resource utilization. For more information, refer to concurrent deletion of backups.
Azure proxy parameters
Portworx Backup introduces advanced proxy configuration options to manage Azure proxy exclusions and inclusions, allowing for precise control over service-specific proxy behavior. This enhancement is particularly useful in complex deployments where PXB micro-services need to bypass the proxy while others must adhere to it. For more information, refer to Azure proxy parameters.
Fixes
Issue Number | Description | Severity |
---|---|---|
PB-8648 | Issue: An updateOwnership API call with an invalid ownership struct used to crash the px-backup pod. User Impact: Users could not get a relevant error message to know the actual cause of the pod crash. Resolution: PXB now validates the user input for ownership details in the updateOwnership API call and throws an appropriate error message. Component: PXB API Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
PB-8394 | Issue: When you take a backup of two Portworx volumes, one small (a few MB in size) and the other large (around 500 GB), the former used to display failed status. User Impact: The backup operation of the smaller volume used to display failed status with a key not found error even though the backup was successfully uploaded to the cloud. Resolution: PXB no longer checks the status of a successful backup of Portworx volumes. Component: Portworx volume backup Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
PB-8360 | Issue: IBM COS backup location addition used to fail with the error UnsupportedOperation for unlocked buckets. User Impact: Users were unable to add an IBM COS backup location for unlocked buckets. Resolution: PXB now ignores the UnsupportedOperation error for unlocked IBM COS buckets so that users can successfully add them as backup locations. Component: Backup location Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
PB-8344 | Issue: If a user deleted backups and the associated backup location simultaneously, the px-backup pod sometimes used to go into CrashLoopBackOff state. User Impact: The px-backup pod used to go into CrashLoopBackOff state, halting the backup service. Resolution: PX-Backup now retains associated backups even if the backup location is deleted, provided the backups are in either the deleting or deletePending state. Component: Backup location Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
PB-8327 | Issue: In environments with scheduled backups that use namespace labels on an NFS backup location, if no namespaces matched the labels during backup creation, PXB would create empty folders. User Impact: Users were unable to delete these empty folders even after deleting the backups. Resolution: PXB now deletes the empty backup folder while deleting the associated backups, preventing the accumulation of such empty folders. Component: NFS backup location Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
PB-8140 | Issue: The pxcentral-post-install-hook job would fail if it was re-run after a failure or an error. User Impact: Portworx Backup installation or upgrade used to fail sometimes. Resolution: If pxcentral-post-install-hook re-runs after an error, it now completes successfully, allowing install and upgrade tasks to proceed without interruption. Component: Install/Upgrade Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
PB-7967 | Issue: Unsuccessful manual backups (because of deletion of dependent backups) of an unknown or stranded user could not be deleted. User Impact: Such backups used to remain in the system utilizing the system resources. Resolution: User can now delete such stranded backups and clear up the storage space. Component: Manual backup Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
PB-7944 | Issue: Large resource backups were taking a long time due to underlying resource fetch issues. User Impact: Creation of large resource backups took a long time to complete. Resolution: Large resource backups now take less time to complete in S3-based backup locations. Component: Large resource backups Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Moderate |
PB-7859 | Issue: PVCs with ReadWriteMany or ReadOnlyMany access modes (excluding ReadWriteOnce) that are already mounted by a pod did not support node affinity. User Impact: Users were unable to configure node affinity for generic or KDMP backup job pods when PVCs were mountable on multiple nodes. Resolution: PXB now supports node affinity for PVCs that are already in use by pods. Component: Generic backups Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Moderate |
PB-7774 | Issue: PXB web console was not displaying the time stamp for all stages of backup creation. User Impact: User was not able to get accurate data on various phases of backup and the time taken for each phase. Resolution: The PXB web console provides detailed information on the time taken for each stage of the backup and the total backup completion time, accessible through the View JSON option. Component: Backups Affected Versions: 2.7.3 | Major |
PB-7553 | Issue: In multi-namespace backups, namespaces where backup of all PVCs failed were included in the protected app count. User Impact: PXB web console dashboard used to display an unprotected namespace as protected. Resolution: PXB does not include such namespaces in the dashboard and gives accurate data to the users now. Component: Namespace backups Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
PB-7476 | Issue: The Kopia/NFS backup and restore job pods could not be configured with node affinity, leading these pods to be scheduled on nodes where the applications (to be backed up) were not running. User Impact: Backup job pods were scheduled on nodes where applications were not running due to network restrictions, resulting in backup failures. Resolution: PXB now enables you to configure node affinity for Kopia/NFS backup and restore job pods, ensuring they run on the specified nodes. Component: NFS backup Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
PB-7469 | Issue: API calls failed to fetch resources if the response time exceeded 90 seconds, resulting in a time-out error. User Impact: The user was unable to retrieve the required resources via API calls. Resolution: Users can now configure the timeout value based on their environment, allowing the API sufficient time to load and fetch the resources. Component: PXB API Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
PB-6928 | Issue: The px-backup pod encountered unhandled crashes when invalid JSON was included in the API request body. User Impact: Service disruptions and pod restarts occurred due to malformed JSON payloads. Resolution: PXB now validates REST API requests and returns appropriate errors for malformed JSON instead of crashing or restarting pods. Component: PXB API Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
PB-6916 | Issue: No alerts were triggered when the backup cluster could not reach the application cluster. User Impact: Users did not receive email notifications or alerts in the PXB web console, leaving them unaware of the issue. Resolution: PXB now sends alerts to the configured email and also displays the alerts on the web console when an application cluster goes offline. Component: Alerts Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
PB-6665 | Issue: No alerts were triggered when backups transitioned to the Cloud Backup Missing state for NFS backup locations. User Impact: Users were not notified when backup files were missing from NFS backup locations. Resolution: PXB now notifies the user through alerts if cloud backups go missing in NFS backup locations. Component: NFS backups Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
PB-6152 | Issue: In a proxy-enabled cluster, adding a filter to bypass the proxy for px-backup gRPC traffic was failing. User Impact: Users were unable to deploy PXB in a proxy-enabled cluster. Resolution: You can now add the localhost:<px-backup-grpc-port-number> endpoint to the no-proxy environment variable to bypass the proxy for internal communication. Component: Install Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
PB-6034 | Issue: For synced backups, if the logged-in user was not the cluster owner, an attempt to create a duplicate backup resulted in an error: Duplication of backup failed because cluster <cluster-name> not found. User Impact: Users could see the duplicate backup option in the PXB web console but were unable to create a duplicate backup successfully. Resolution: The Duplicate option is disabled for synced backups in the Activity timeline > Backup tab. Component: PXB Dashboard Affected Versions: 2.7.1, 2.7.2, 2.7.3 | Major |
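The no-proxy workaround described in PB-6152 above could be applied to the px-backup deployment roughly as in this sketch. The variable name, its placement, and the container name are assumptions, and the port placeholder is intentionally left as-is:

```yaml
# Illustrative deployment fragment; the env var name and container name
# are assumptions. Replace <px-backup-grpc-port-number> with your port.
spec:
  template:
    spec:
      containers:
        - name: px-backup
          env:
            - name: NO_PROXY    # assumed variable name
              value: "localhost:<px-backup-grpc-port-number>"
```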
Known Issue
The Portworx Backup team is aware of the following known issue; it will be fixed in an upcoming release:
Issue Number | Description |
---|---|
PB-8102 | Issue: If user 1 creates a manual backup and user 2 creates an incremental backup of the same namespace of a cluster (shared or non-shared) on the same backup location (shared or non-shared), when user 1 tries to delete their backup, it gets stuck in the delete pending state due to its dependency on user 2's incremental backup. This issue is seen only for backups of Portworx volumes. Workaround: User 1 must contact the super administrator or user 2 to delete the dependent backup and then proceed with deleting their own backup. |
2.7.3
October 15, 2024
Refer to these topics before you start install or upgrade tasks:
Portworx Backup offers the following features and fixes in this release:
Feature
Portworx Backup now offers support for Azure Blob immutable storage containers as backup targets. This feature expands the existing S3 object lock functionality to Azure Blob Storage, providing data protection with immutability. It enforces a version-level Write Once Read Many (WORM) policy, applicable at both the storage account and container levels. For more information, refer to Azure immutability backup location.
Fixes
Issue Number | Description | Severity |
---|---|---|
PB-8360 | Issue: In certain versions of IBM Cloud Object Storage (COS), addition of a backup location failed for unlocked buckets. User Impact: The backup location addition failed with an UnsupportedOperation error for unlocked IBM COS buckets. Resolution: Portworx Backup now ignores this error for unlocked IBM COS buckets and the user no longer encounters this error. Component: Backup location Affected Versions: 2.7.0, 2.7.1, 2.7.2 | Major |
PB-8316 | Issue: The backup status was incorrectly marked as successful instead of partial success despite some PVCs failing during the backup process. User Impact: Users might incorrectly assume that the backup was successful even though not all data was backed up due to failed PVCs. Resolution: Portworx Backup now updates the in-memory value, accurately displays the number of failed volumes, and reflects the correct backup status. Component: Partial success backups Affected Versions: 2.7.0, 2.7.1, 2.7.2 | Major |
PB-7938 | Issue: During Portworx Backup installation or upgrade, if the PostgreSQL service was in Not ready state, the post-install job used to fail. User Impact: Users could not access the Portworx Backup web console. Resolution: The post-install job now waits for the PostgreSQL service pod to reach the Ready state and then starts the dependent operations. Component: PostgreSQL Affected Versions: 2.7.0, 2.7.1, 2.7.2 | Minor |
PB-7868 | Issue: Mounting of the NFS backup location PVC used to fail when the NFS version was not passed in the mount options. User Impact: KDMP restore with an NFSv4 backup location used to fail. Resolution: You can now restore a KDMP backup taken on an NFSv4 backup location. Component: KDMP restore Affected Versions: 2.7.0, 2.7.1, 2.7.2 | Major |
PB-7726 | Issue: KubeVirt VM backups used to fail if there was a VM whose migration had failed. User Impact: VM backups with default exec rules were failing if their virt-launcher pods were not in Running state. Resolution: Default exec rules are now executed only on virt-launcher pods that are in Running state, allowing the VM backups to succeed. Component: KubeVirt Affected Versions: 2.7.0, 2.7.1, 2.7.2 | Major |
PB-4394 | Issue: KDMP restore of a PVC used to fail if the snapshot size was greater than the PVC size. User Impact: Users were not able to perform a KDMP restore of a PVC for file system-based storage provisioners. Resolution: By default, Portworx Backup now increases the restore PVC size for KDMP backups by 10% (based on snapshot size), and also provides an option to increase the size of the restore PVC. For more information, refer to KDMP restore. Component: KDMP restore Affected Versions: 2.7.0, 2.7.1, 2.7.2 | Major |
2.7.2
Sep 12, 2024
Refer to these topics before you start install or upgrade tasks:
Portworx Backup provides the following enhancement and fix in this release:
Enhancement
Partial success backup
This enhancement ensures that backups with some failed volumes are still considered successful for the volumes that were successfully backed up, improving the overall utility and reliability of our backup solution. Users can now rely on partial backups, ensuring that successfully backed-up volumes are still usable even if some volumes fail. For more information, refer to partial success backups.
Fix
Portworx Backup 2.7.2 provides the following fix:
Issue Number | Description | Severity |
---|---|---|
PB-7726 | Issue: VM backups used to fail during auto-execution of default backup rules (both pre-exec and post-exec) if the virt-launcher pods of the VMs were not in Running state. User Impact: You could not back up your VMs successfully. Resolution: Default backup rules are now auto-executed on the virt-launcher pods that are in Running state for successful creation of VM backups. Component: KubeVirt Affected Versions: 2.7.0, 2.7.1 | Major |
2.7.1
June 6, 2024
Refer to these topics before you install or upgrade to learn about the installation prerequisites and compatible versions:
Portworx Backup provides the following enhancement and fixes in this release:
Enhancement
Portworx Backup enables you to add an Azure backup location based in China through the Portworx Backup web console. For more information, refer to Add Azure backup location.
Fixes
Portworx Backup 2.7.1 comes with the following fixes:
Issue Number | Description |
---|---|
PB-6908 | Issue: On Portworx Backup version 2.6.0, if you upgraded the Stork version to 24.1.0 and then opted for the default VSC in the Create Backup window, a VSC Not Found error was seen. Resolution: You can now choose the default VSC and create successful backups in this environment. |
PB-6687 | Issue: If you deploy Portworx Enterprise with PX-Security enabled and take a backup on NFS backup location and then restore, restore used to fail. Resolution: This issue is now fixed. |
Known Issues
The Portworx Backup team is aware of the following known issues; they will be fixed in upcoming releases:
Issue Number | Description |
---|---|
PB-7159 | Issue: For default CSI restores with the replace existing resources option on cloud platforms, restores will be successful but the application pods get stuck in ContainerCreating state. Workaround: Perform a custom restore to a different namespace to bring the application pods up. |
PB-7158 | Issue: When you restore a CSI backup based on any cloud platform supported by Portworx Backup, Show Details window of restore displays duplicate entries for PV and PVC resources. |
2.7.0
May 20, 2024
Refer to these topics before you install or upgrade to learn about the installation prerequisites and compatible versions:
Portworx Backup proudly introduces a series of new features in this release.
New Features
New web console
Portworx Backup provides an intuitive web console with a light theme designed to offer a holistic view, centralized management, and detailed insights, simplifying management of your backups and restores. Along with real-time insights and streamlined management capabilities, the user interface provides easy navigation and usability. For more information, refer to Web console.
Portworx Backup Dashboard
Portworx Backup introduces an extensive dashboard that provides enhanced visibility and control over your backup and restore operations across all clusters. The dashboard enables you to track the health and status of all your backup and restore operations in real time. You can also view metrics like the total number of protected applications, successful and failed operations, and ongoing activities, with breakdowns by hour, day, and cluster. You can export detailed backup and restore data in CSV format for in-depth analysis and reporting to make informed decisions based on historical data trends. Refer to Web console for more information.
Email Alerts
From Portworx Backup version 2.7.0, you can configure dashboard alerts and receive instant notifications for critical events such as failures in backups, restores, backup locations, and connectivity to clusters. Administrators can set up a dedicated SMTP server to enable email notifications and keep the intended audience informed of the key events that arise in the Portworx Backup environment. For more information, refer to Email alerts and Configure email alerts.
Refined CSI and KDMP backups
This feature provides significant refinements to the backup options available in Portworx Backup utilizing the CSI (Container Storage Interface) and KDMP (Kubernetes Data Management Platform) drivers. Portworx Backup now facilitates cross-cloud backups and restores through the web console and the new approach gives the users more control to map their storage provisioners with the VSCs more accurately to create the required type of backup as per their needs. Refer to CSI backups and KDMP backups for more information.
Native monitoring with external or internal Prometheus stack
From the 2.7.0 release, you can opt to use your own Prometheus stack or the one provided by Portworx Backup as your monitoring component with minimal configuration. With this, Portworx Backup offers the reliability of Prometheus and Alertmanager for metrics collection and alert management. You can deploy Portworx Backup's Prometheus stack through Helm for a robust framework that supports a secure and scalable alerting and monitoring solution. For more information, refer to Configure Prometheus.
Backup and restore VMs with KubeVirt
Portworx Backup now lets you back up and restore your VMs through a streamlined workflow in the new web console. In addition, Portworx Backup internally applies default or built-in (pre-exec and post-exec) rules in case you do not apply any rules during creation of VM-based backups. For more information on how this new feature functions, refer to Backup and restore KubeVirt VMs.
Fixes
Portworx Backup 2.7.0 comes with the following fixes:
Issue Number | Description |
---|---|
PB-6880 | Issue: Previously, deleting KDMP backups failed if the "CSI + Offload to backup location" option was selected in the Create Backup window of Portworx Backup version 2.6.0 and below. Resolution: This issue has been fixed. |
PB-6631 | Issue: Quick and full maintenance job pods were using certificates and TLS secrets (even with SSL disabled) and got stuck in the ContainerCreating state, failing to mount a non-existent secret. Resolution: These pods now operate smoothly without getting stuck. |
PB-6578 | Issue: Portworx Backup pod would crash with a nil pointer error when handling a large number of backups. Resolution: This issue is resolved now. |
PB-6435 | Issue: A Kubernetes version call would fail when more than 100 application clusters were added to a single backup deployment. Resolution: This issue has been fixed. |
PB-5824 | Issue: In the Restore Backup window, the Destination Namespace drop-down was not scrollable when the list of namespaces was too large. Resolution: Scrolling through a large list of destination namespaces is now possible. |
PB-5438 | Issue: Restoring backups of static PVCs with labels previously failed. Resolution: Backups of static PVCs with labels can now be restored without issues. |
PB-4623 | Issue: The Portworx Backup web console displayed the Remove option from the vertical ellipsis even when a backup was in the Deleting or Delete pending state. Resolution: The backups in these states appear disabled now and display only the View Json and Show Details options in the vertical ellipsis. |
PB-4443 | Issue: The Portworx Backup web console experienced slow performance when multiple users logged into a single instance. Resolution: This issue has been resolved. |
PB-3852 | Issue: The Portworx Backup web console previously allowed users to delete a backup while its restore was in progress. Resolution: From version 2.7.0 of Portworx Backup, deleting a backup during its restore is no longer possible. |
PB-3175 | Issue: Users were unable to delete a cluster that was in a Not available state. Resolution: Clusters in this state can now be deleted. |
Known Issues
The Portworx Backup team is aware of the following known issues; these issues will be addressed in upcoming releases:
Issue Number | Description |
---|---|
PB-6999 | Issue: Users will not be able to create a duplicate of VM backup. |
PB-6908 | Issue: If you upgrade to Stork version 24.1.0 while you are on Portworx Backup 2.6.0, selecting the default VSC in the Create Backup window may result in a VSC Not Found error. |
PB-6901 | Issue: When you try to restore a backup of a namespace with Istio enabled, the restore goes into a partial success state. Workaround: While restoring the backup of an application namespace with Istio enabled, ensure that you deselect the Istio ConfigMap istio-ca-root-cert resource under Resource Selector during restore. |
PB-6896 | Issue: Gateway time-out errors occur in SMTP configurations lacking user authorization because Portworx Backup does not support email alert configurations for unauthorized or non-authenticated SMTP servers. |
PB-6819 | Issue: Portworx Backup does not support backing up and restoring VMs in Microsoft Windows environments. |
PB-6817 | Issue: Portworx Backup does not support snapshot class mapping backup (CSI or CSI with KDMP) with CephFS. |
PB-6758 | Issue: When navigating away from the Portworx Backup home page during a report download, the download task is aborted without notification if there are a large number of backups involved. |
PB-6747 | Issue: Scheduled backups on OCP clusters may occasionally fail when creating a snapshot class mapping backup with the Offload to backup location option if the cluster's Kubernetes version is below 1.29. This failure is due to a faulty condition in the snapshot controller. |
PB-6612 | Issue: Portworx Backup does not support KDMP backups for Kubevirt VMs. |
PB-6046 | Issue: The activity timeline chart on the Portworx Backup Dashboard page displays all backup data instead of only showing filtered backup data for the specified time. |
2.6.0
December 7, 2023
Features
Backup and restore VMs with KubeVirt
Portworx Backup now lets you back up and restore Virtual Machines (VMs) using the KubeVirt add-on. This feature also allows you to back up and restore VMs migrated from VMware environments and VMs running on OpenShift Virtualization. KubeVirt runs VMs as Kubernetes resources, and Kubernetes manages these resources with its built-in tools. For more information, refer to Backup and restore KubeVirt VMs.
Deletion of stranded resources
Portworx Backup default administrator users can now view the key resources of Portworx Backup created by non-admin users. Default administrator can now view the clusters, backups, restores, and backup schedules created by these users, filter them by username, and delete them if those resources are obsolete or not required any more. For more information, refer to Deletion of stranded resources.
Starting from Portworx Backup 2.6.0 with Stork 23.9.0 and above, users cannot take a backup of the kube-system, kube-node-lease, and kube-public namespaces, or the namespace where Portworx is installed, in any cluster environment.
Enhancement
Portworx Backup has enhanced its functionality to provide integrity and reliability for synced backups. From 2.6.0, backups created on a shared backup location are visible only to the backup owner and not to the owner of the shared backup location. For more information, refer to Synced backups.
Fixes
Issue Number | Description |
---|---|
PB-4687 | Issue: If the user tried to update an AWS S3 backup location through the API without the validation flag, the backup location update task used to get stuck in the validation in progress state. Resolution: Users can now update the AWS S3 backup location successfully with API requests. |
PB-4445 | Issue: When the user upgraded Portworx Backup from version 2.4.2 to 2.5.1 with OpenShift v4 as the external auth provider (with a custom CA certificate), the user could not log in to Portworx Backup or the Keycloak user interface through OpenShift v4. Resolution: Upgrade to Portworx Backup version 2.6.0. |
PB-4365 | Issue: Sometimes KDMP backups and restores fail if the Dataexport CR already exists. Resolution: Portworx Backup checks for the existence of the Dataexport CR and then continues with other volumes to resolve this issue for KDMP backups and restores. |
PB-4354 | Issue: Users were unable to take a KDMP backup of the PVC used by completed job pod. Resolution: This issue is now fixed. |
PB-4274 | Issue: Maintenance job pods used to get stuck in ContainerCreating state when the NFS mount was not reachable or invalid after creating an NFS backup location. Resolution: Maintenance job pods stuck in ContainerCreating state are now force deleted, with an hourly check for such pods. |
PB-4259 | Issue: A backup did not move to failed state when the backup location and NFS subpath were deleted or did not exist. Resolution: Portworx Backup now validates this condition and moves the backup to failed state with a CloudBackup objects are missing error. |
PB-4070 | Issue: Restore from an NFS backup location used to go into a perpetual Pending state if the destination cluster had an older Stork version. Resolution: Portworx Backup now checks the Stork version and displays an appropriate error message in the user interface. |
PB-4067 | Issue: KDMP or local snapshot backup delete jobs were not getting cleaned up after completion of backup deletion, leading to accumulation of stale backup delete job entries. Resolution: This issue is fixed now. |
Known Issues
Issue Number | Description |
---|---|
PB-4875 | Issue: In an on-premises OCP cluster with a custom CA certificate and Active Directory as the external auth provider, addition of an AWS S3 backup location fails. Workaround: Disable SSL during creation of the AWS S3 backup location. |
PB-4872 | Issue: If you install Portworx Enterprise with FlashBlade direct attach NFS volumes, backup fails if you select the volume snapshot class during backup creation in the Create Backup window. Workaround: Deselect the volume snapshot class option in the Create Backup window. |
PB-4869 | Issue: When you restore a backup with FlashBlade direct-attached NFS volumes on Portworx cluster, restore might fail if you have enabled Snapshot option in FlashBlade user interface. Workaround: Refer to Restore with FlashBlade section for the workaround. |
PB-4805 | Issue: If the user updates email, first name or last name details in the Keycloak, updated details do not reflect in the All Backups page of Portworx Backup user interface. Workaround: Check All Shared Clusters page to view the latest changes made to email, first name, or last name. |
PB-4628 | Issue: You cannot restore backups of Portworx RWX volumes to a new namespace with different name from source cluster to a different cluster. Also, KubeVirt VMs are paused during a KDMP backup with RWO mode. Workaround: Create an empty namespace with same name in the source cluster. |
PB-4116 | Issue: KubeVirt VMs restore from source cluster (with RHEL9) to destination cluster with RHEL 8.x fails. Workaround: Update the VM template to spec.domain.machine.type: q35 to make it version neutral. |
PB-3175 | Issue: User cannot delete a cluster that goes to Not available state with a backup schedule running. |
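For PB-4116 above, the workaround of pinning the machine type can be sketched as a KubeVirt manifest fragment. This is a minimal sketch: the VM name and surrounding fields are hypothetical, and only the spec.domain.machine.type: q35 value comes from the release note.

```yaml
# Hypothetical KubeVirt VirtualMachine fragment illustrating the PB-4116 workaround:
# pin the machine type to the version-neutral q35 instead of a RHEL-versioned type.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm          # hypothetical name
spec:
  template:
    spec:
      domain:
        machine:
          type: q35         # version neutral, per the PB-4116 workaround
```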
2.5.1
September 14, 2023
Portworx by Pure Storage recommends upgrading your Portworx Backup version from 2.5.0 to 2.5.1. Ensure that you go through the instructions in the Configure access to user interface section before you upgrade.
Fixes
In this release, Portworx Backup has the following issues fixed:
Issue Number | Description |
---|---|
PB-4253 | Issue: Users had to access the Portworx Backup 2.5.0 user interface with some workarounds due to the Keycloak version upgrade. Resolution: From Portworx Backup 2.5.1 onwards, users can access the user interface without any additional configuration. |
PB-4164 | Issue: In an Ingress-Nginx controller environment, creation of a new user in Keycloak failed with a gateway error. Resolution: Update the proxy buffer configuration option to proxy-buffer-size: 128k in the Ingress-Nginx controller's ConfigMap. For more information, refer to Ingress ConfigMap. |
PB-4023 | Issue: In the case of failed KDMP or local snapshot backups, the VolumeSnapshot or VolumeSnapshotContent objects were not deleted from the cluster during cleanup. Resolution: With Stork 23.7.0 and above, Portworx Backup successfully deletes failed KDMP or local snapshot backups and clears up the storage space. |
PB-4000 | Issue: In some scenarios, deletion of the bound pod created to bind the volume with WaitForFirstConsumer takes more time than expected, and consequently the corresponding backup fails. Resolution: From Stork version 23.7.0 onwards, the timeout value for bound pod deletion is increased to 5 minutes. |
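The PB-4164 resolution above can be sketched as an Ingress-Nginx controller ConfigMap fragment. The ConfigMap name and namespace are assumptions that vary by installation method; only the proxy-buffer-size: 128k setting comes from the release note.

```yaml
# Sketch of the PB-4164 fix: raise the proxy buffer size in the
# Ingress-Nginx controller ConfigMap so Keycloak user creation succeeds.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumption: name depends on how the controller was installed
  namespace: ingress-nginx         # assumption: namespace depends on the install
data:
  proxy-buffer-size: "128k"        # value from the PB-4164 resolution
```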
Known Issue
The Portworx Backup team wants you to be aware of the following issue; check future release notes for a fix:
Issue Number | Description |
---|---|
PB-4166 | Issue: Users get logged out of Keycloak every time the page is refreshed. Workaround: Refer to Keycloak browser settings for the workaround. |
2.5.0
June 27, 2023
In this release, Portworx Backup offers the following features along with some fixes:
Features
- You can now leverage your NFS server-based infrastructure, associate it with Portworx Backup, and utilize NFS shares as your backup location. To get started with this feature, refer to NFS as backup target.
- Portworx Backup now enables you to take CSI-native backups and restores on IKS clusters using IBM Cloud Block Storage VPC CSI Driver 5.0.0 or higher. For more information, refer to Create CSI-native backups.
Portworx Backup version 2.5.0 now uses Keycloak version 21.1.1 to provide enhanced security:
- To access the Portworx Backup user interface, refer to Access the Portworx Backup UI using a node IP.
- To configure access to Portworx Backup over HTTPS and HTTP, refer to Access Portworx Backup UI.
- To configure Keycloak for your environment, refer to Keycloak configuration settings.
Fixes
Portworx Backup 2.5.0 comes with the following fixes:
Issue number | Description |
---|---|
PB-3765 | Issue: Deletion of backup locations that are shared publicly was restarting the px-backup pod. Resolution: The px-backup pod no longer restarts while publicly shared backup locations are deleted. |
PB-3823 | Issue: When a user tries to perform backup and restore operations with kopiaexecutor:1.2.5-cve-fix , those operations succeed but the application pod becomes unresponsive. Resolution: Application pod remains stable in this scenario now. |
PB-3855 | Issue: A null image secret name was getting passed to a KDMP job. Resolution: Portworx Backup validates and passes only an image secret with a value to a KDMP job. |
PB-3863 | Issue: Portworx Backup used to throw an error when the user tries to delete a KDMP backup with a name that exceeds 63 characters as the name was used in the job pod label. Resolution: Added validation check to truncate the backup name in the label, if it exceeds 63 characters. |
PB-3878 | Issue: Users were unable to restore CSI volumes from a backup that is taken from a restored backup. Resolution: Users can successfully restore CSI volumes from a backup taken from the restored backup. |
PB-3888 | Issue: Portworx Backup user interface displays incorrect backup size for the compressed individual backups that are based on object-store or NFS backup locations. Resolution: User interface displays correct backup size data from Portworx Enterprise 3.0.0 and above. |
PB-3948 | Issue: Stork fails when a user attempts multiple parallel restores either to the same namespace or to a different namespace. Resolution: Stork is now resilient to multiple parallel restores to the same or a different namespace. |
Known Issue
The Portworx Backup team is aware of the following known issue; check future release notes for a fix:
Issue Number | Description |
---|---|
PB-3810 | Users with admin roles are not able to delete backups created by users who have left the organization. |
2.4.3
August 4, 2023
Fix
In this release, Portworx Backup offers the following fix:
Issue number | Description |
---|---|
PB-4069 | Issue: The mongodb pod used to consume more memory than expected, resulting in pod restarts with an OOM error and affecting the functionality of the px-backup pod. Resolution: The mongodb pod memory consumption is now within the configured cache size of 4G or less. For more information on recovering the memory, contact the Portworx support team. |
2.4.2
April 26, 2023
In this release, Portworx Backup offers the following enhancement:
Enhancement
You can now take seamless backups and restores of namespaces that contain a large number of Kubernetes resources. For more information on how to perform such backups and restores, refer to Backup and restore with large number of resources.
2.4.1
April 7, 2023
In this release, Portworx Backup provides the following feature, enhancement, and fixes:
Feature
Portworx Backup is now available in GCP Marketplace for deployment. End users can independently install Portworx Backup on their GKE environments from this central location in a few simple steps. For information related to installation, refer to Install Portworx Backup from GCP.
Enhancement
With Portworx Backup, you can configure a custom admin namespace to store all Kubernetes resources related to your multi-namespace backups and restores. For more information, refer to Admin namespace in Portworx Backup.
Fixes
Portworx Backup 2.4.1 comes with the following fixes:
Issue Number | Description |
---|---|
PB-3689 | Issue: Rule cmd executor pods were getting created in the kube-system namespace. Resolution: Rule cmd executor pods now run in the namespace where Stork is deployed, and not in the kube-system namespace. |
PB-3733 | Issue: The kdmp backups of non-AWS cloud service providers like GCP and Azure failed as the correct cloud provider parameter was not sent to Portworx Backup. Resolution: Portworx Backup now takes the correct cloud provider parameter and creates kdmp backups successfully for all cloud service providers. |
PB-3735 | Issue: The Portworx Backup pod restarted and interrupted UI access when you added an EKS cluster through the cluster discovery option. Resolution: You can now add an EKS cluster with the cluster discovery option without any interruption to UI access. |
OPERATOR-516 | Issue: When Stork is deployed using Portworx Operator, backups used to fail with the below error message:Error running PreExecRule: error executing PreExecRule for namespace csi-mysql: pods "pod-cmd-executor-09ca32f1-efe8-4b6c-89aa-0c00d741c1db" is forbidden: error looking up service account kube-system/stork-account: serviceaccount "stork-account" not found. Resolution: Upgrade to Portworx Backup 2.4.1 for backups to be successful. |
Known Issues
Portworx Backup 2.4.1 has the following known issues; they will be addressed in upcoming releases:
Issue Number | Description |
---|---|
PB-3728 | Issue: Restores fail on the GKE platform if resources already exist in the restore namespace and the Replace existing resources option is not selected in the Restore Backup window. Workaround: Delete the failed restore in the user interface and restore again with Replace existing resources selected in the Restore Backup window for successful in-place restores. |
PB-3503 | Issue: Portworx Backup does not currently support gke-gcloud-auth-plugin; as a result, addition of GKE clusters to Portworx Backup can fail. Workaround: Refer to Generate kubeconfig for GKE clusters for the workaround. |
2.4.0
March 14, 2023
In this release, Portworx Backup provides the following new features and fixes.
New Features
Rancher Cluster
Portworx Backup extends its support to Rancher clusters, allowing mapping of Rancher projects across clusters during backup and restore workflows. For more information, refer to Rancher cluster in Portworx Backup.
Labels in Portworx Backup
You can now retrieve namespaces and resources with preset labels in the Portworx Backup UI and create a backup of those namespaces and resources in a single click. In addition, this feature lets you automate scheduled backups to include future namespaces and resources. For more information, refer to Labels in Portworx Backup.
Fixes
Portworx Backup 2.4.0 provides the following fixes:
Issue Number | Description |
---|---|
PB-3116 | Issue: Mutating and validating webhook types were not displayed in the Resource Type list in Portworx Backup user interface. Resolution: Both the validating and mutating webhook types are listed in the Resource Type field now allowing you to include them for the backup. |
PB-3298 | Issue: Unable to restore KDMP backups on a destination cluster with Kubernetes 1.24 and newer versions, due to a missing secret token on the service account. Resolution: You can now restore KDMP backups if the destination cluster runs Kubernetes version 1.25 or older. |
PB-3317 | Issue: If an application contains non-live PVCs, then the job pod mounts it for backup and if another backup is triggered in parallel, the former assumes that the PVC is live (as it is used by backup job pods). Backups may sometimes fail in such a scenario. Resolution: Portworx Backup validates parallel backups with non-live PVC scenarios and ensures that backups are successful. |
PB-3338 | Issue: When mapping a storage class on the source cluster to the default storage class of the destination cluster during a custom restore, the storage class mapping option was ambiguous. Resolution: You can now choose the use-default-storage-class option if the custom restore needs to pick up the default storage class configured on the destination cluster. |
PB-3465 | Issue: License page timed out and failed to display license information when no application cluster is associated with Portworx Backup. Resolution: Portworx Backup now displays accurate license information even if no application cluster is associated with Portworx Backup. |
PB-3490 | Issue: The dependent backups were getting deleted for unsupported snapshot versions in Portworx Backup making some backups unrestorable. Resolution: Upgrade to Portworx Backup 2.4.0 to ensure that the Portworx snapshot version is supported. The upgrade secures and retains the dependent backups. |
PB-3494 | Issue: KDMP restores failed on EKS clusters if the cluster's Kubernetes version was 1.23 or newer, because of unsupported in-tree CSI drivers in those Kubernetes versions. Resolution: Portworx Backup 2.4.0 takes the correct volume mount path for successful KDMP restores on EKS clusters with Kubernetes version 1.23 or newer. |
PB-3637 | Issue: Region was auto-selected by Portworx Backup for AWS S3 endpoint, due to which KDMP backups failed without region data. Resolution: Portworx Backup now provides the correct region value to AWS S3 object locations for KDMP backups to be successful. |
2.3.3
February 15, 2023
In this release, Portworx Backup offers the following enhancement:
Enhancement | Description |
---|---|
PB-3516 | Portworx Backup user interface provides additional options for cluster kubeconfig edit workflows. |
2.3.2
December 14, 2022
In this release, Portworx Backup provides enhanced security along with the following fix:
Fix
Issue Number | Issue Description |
---|---|
PB-3165 | Issue: Custom selection of namespaces from different pages was not supported in the Portworx Backup user interface. Resolution: Users can now select the required namespaces across different pages and then back up the chosen namespaces. |
2.3.1
October 6, 2022
In this release, Portworx Backup provides the following fixes:
Fixes
Issue Number | Issue Description |
---|---|
PB-3137 | Issue: Usage-based billing report fails with subscription not active error after upgrading to 2.3.0. Resolution: Usage-based license billing report works as expected with the upgrade. |
PB-3139 | Issue: The Portworx Backup pod sometimes used to crash while fetching the backup location during a backup share, because of database issues. Resolution: This issue is now fixed. |
2.3.0
September 15, 2022
In this release, Portworx Backup provides the following new features and enhancement.
New Features
Share backups with users and groups
Portworx Backup enables you to share your backups with other users and groups. You can either share a single backup or all backups of your cluster with the intended users. For more information, refer to Share backups with users and groups.
Usage-based license
Users can now benefit from usage-based licensing, which depends on node count and node-hours, with these new license types:
- Air-gapped metering
- Node-label licensing
For more information, refer to Portworx Backup licenses.
Enhancement
Enhancement Number | Issue Description |
---|---|
PB-2279 | Portworx Backup provides encryption support for different types of backups with either a user-provided or default encryption key. For more information, refer to Encryption matrix. |
Known Issues (Errata)
Issue Number | Issue Description |
---|---|
PB-3078 | Issue: Shared backups are not listed in the All Backups page after a user shares all backups in a cluster with a Portworx Backup admin with full access. Also, after upgrading to Portworx Backup 2.3.0, some users may not see the All Shared Clusters button to view the shared backups of a cluster. Workaround: Clear the browser cache to view the shared backups in the All Backups page and to see the All Shared Clusters button in the user interface. |
PB-3094 | Issue: Restores fail on AKS cluster if its Kubernetes version is 1.23.x and above. Workaround: Downgrade the Kubernetes version of the cluster to 1.22.x for the restores to work seamlessly. |
2.2.3
January 30, 2023
Portworx Backup 2.2.3 offers the following fix:
Fix
Issue Number | Issue Description |
---|---|
PB-3490 | Issue: If you are running Portworx Enterprise 2.11.2 or above with Portworx Backup 2.2.2 or below, a mismatch in CloudSnap version can impact scheduled backups. Resolution: Upgrade your Portworx Backup version to 2.2.3. |
2.2.2
August 19, 2022
Portworx Backup currently supports Portworx Enterprise version 2.11.1 and below.
In this release, Portworx Backup comes with the following fixes:
Fixes
Issue Number | Issue Description |
---|---|
PB-3015 | Issue: If addition of application clusters failed due to network issues, Stork, firewall ports, or other unforeseen glitches, users had to manually retry the addition of each failed cluster. Resolution: Portworx Backup now automatically retries the addition of application clusters that failed for the above reasons, without user intervention. |
PB-3016 | Issue: License information was displayed incorrectly in the Portworx Backup user interface, when addition of a large number of clusters failed. Resolution: License page now displays accurate information. |
PB-3017 | Issue: Whenever cluster addition failed, the Retry option in the user interface redirected the user to edit the cluster details before retrying. Resolution: The Retry option now automatically retries adding a failed cluster with the existing cluster details. Portworx Backup continues to provide the Edit option to modify the cluster details. |
2.2.1
July 22, 2022
In this release, Portworx Backup provides the following fixes:
Fixes
Issue Number | Issue Description |
---|---|
PB-2328 | Issue: CSI backups failed in application clusters running VolumeSnapshot version v1 and Kubernetes version 1.20 or above. Resolution: Portworx Backup now supports CSI backups for the application clusters with the above specified combination. |
PB-2373 | Issue: In the Portworx Backup 2.2.0 version, creation of backups failed on backup locations with the disableSSL flag set. Resolution: Users can back up and restore data successfully if new or existing backup locations have the disableSSL flag set. |
PB-2375 | Issue: Portworx Backup displayed an error if the user tried to add a Cloudian, IBM Cloud, or FlashBlade object storage account with an unlocked bucket as a backup location target. Resolution: Users can now add an unlocked bucket hosted on a Cloudian, IBM Cloud, or FlashBlade object storage account. |
PB-2377 | Issue: kdmp (generic) backups were failing in air-gapped environments because the custom registry secret was not created in every PVC namespace. Resolution: Creation of kdmp (generic) backups with a custom registry secret now succeeds because the custom registry secret is created in every PVC namespace. |
PB-2904 | Issue: Generic backups with a self-signed certificate object store as the backup location used to fail in Portworx Backup. Resolution: Users can now take generic backups with a self-signed certificate object store and restore them later. |
2.2.0
May 04, 2022
In this release, Portworx Backup offers the following new features and enhancements.
New Features
Object Lock Support
Portworx Backup supports object lock for all S3 object store compliant backup location targets and allows object lock with a bucket-level locking mechanism to secure the objects placed in a bucket. For more information, refer S3 object lock in Portworx Backup.
Cluster Discovery
Portworx Backup enables you to discover EKS clusters that you have added in your AWS cloud account, lists them, and allows you to add the discovered clusters to your Portworx Backup clusters page. For more information, refer to Discover EKS clusters.
Enhancements
Portworx Backup 2.2.0 supports the following enhancements:
Enhancement Number | Description |
---|---|
PB-2192 | For air-gapped cluster environments, you can install Portworx Backup without updating the config map with custom image registry details for kopia executor image. For more information, refer to Prepare air-gapped environment. |
PB-2196 | Portworx Backup uses privileged containers only for file system level backups to access PVC data on the host with Kubernetes hostpath. For more information, refer to Backup NFS shares. |
PB-2227 | During backup operations, the cloud credentials are stored securely as Kubernetes secret in the backup location custom resource on the application cluster. |
Fixes
Portworx Backup 2.2.0 comes with the following fixes:
Issue Number | Issue Description |
---|---|
PB-2081 | Issue: PVC restore of a backup with a datasource pointing to a volume snapshot fails if the volume snapshot does not exist. User Impact: The PVC restore fails if the datasource contains a reference to a volume snapshot. Resolution: The datasource field is reset before the restore so that the volume details get updated during the restore. |
PB-2194 | Issue: Backup, restore, and backup schedule operations were using the NodeRoleInstance in place of IAM user credentials. User Impact: The intended credentials were not used for backups and restores. Resolution: In Portworx Backup 2.2 with Stork version 2.10 and above, backup, restore, and backup schedule operations use the IAM User or Service Accounts associated with the cloud credential instead of the NodeRoleInstance in EKS and GKE clusters. |
Known Issues (Errata)
Portworx Backup 2.2.0 has the following known issues; they will be addressed in upcoming releases:
Issue Number | Issue Description |
---|---|
PB-1928 | Issue: If you create a role using the Portworx Backup API (/role/default ), it does not reflect in the UI.Workaround: Create roles in the PX-Backup Security -> Roles page. |
PB-2279 | Issue: Portworx Backup fails to pass encryption key from backup location object to Stork resulting in creation of unencrypted backups. |
PB-2336 | Issue: The Portworx Backup user interface becomes unresponsive when you try to delete a synced CSI or kdmp backup with a local snapshot. Workaround: Contact support for assistance. |
PB-2372 | Issue: Backups created in Azure clusters prior to upgrading to the Portworx Backup 2.2.0 version will not have cloud credentials associated with them. Deletion of such backups does not happen automatically and requires cloud credentials every time. In addition, the Azure cluster is displayed as an on-prem cluster with an incorrect icon. Workaround: All the Azure clusters created before upgrading to the Portworx Backup 2.2.0 version need to be updated manually with the cloud credentials. |
PB-2374 | Issue: After performing a manual or scheduled backup of an EKS cluster, with the EKS backup location, when you add or update a backup label with a new label and delete the backup, Portworx Backup displays an error. Workaround: Navigate to Edit Backup -> Cloud Account, select a cloud credential and then add or update the label. |
2.1.1
January 19, 2022
In this release, Portworx Backup includes enhancements in addition to the AWS Marketplace support and other backup and restore features available in the Portworx Backup 2.1.0 release.
Improvement
Pure Storage has the following upgraded or enhanced functionality:
Improvement Number | Improvement Description |
---|---|
PB-2118 | Portworx Backup now supports cross-region backup in the native GKE driver, by default. |
PB-2147 | On the AWS and GKE cloud providers, Portworx Backup supports provisioning of all related PVCs consumed by the pod in the same zone, enabling the pod to come up after the restore. |
PB-2148 | Portworx Backup now handles backup and restore of PVCs that use WaitForFirstConsumer binding mode in their respective storage classes. |
PB-2156 | Added support to back up and restore the OCP rbd and cephfs provisioners. |
Known Issues (Errata)
Pure Storage is aware of the following issues; check future release notes for fixes:
Issue Number | Issue Description |
---|---|
PB-2045 | Portworx Backup deployment fails in the cloud while using a file-based storage class. Workaround: The internal database used for Portworx Backup requires block-based storage. Use a block-based storage class for deploying Portworx Backup. |
PB-2118 | When you restore a backup created in a different region, Portworx Backup displays the restore operation as successful, but the pods do not run. Workaround: Cloud snapshots are region specific. To perform a cross-region restore, add the BACKUP_TYPE="Generic" parameter in the kdmp-config configmap on the application clusters. |
PB-2124 | Cannot resize a PVC volume on a restored application. Workaround: To resize a PVC volume, set the allowVolumeExpansion parameter in the destination storage class. |
PB-2129 | In the Azure cloud, if you restore a namespace with CouchDB to an alternate location, Portworx Backup restores are partially complete because the restore also includes the azure-storage-account-xx secret as part of the PVC restore. This issue occurs if you do not select the Replace policy. Workaround: Select the Replace policy during restore, or deselect the azure-storage-account-xx secret when you back up or restore. |
PB-2142 | Backup fails on an S3 backup location after installing the OpenShift Container Platform (OCP) on AWS with a Portworx setup. Workaround: When you deploy Stork, add hostNetwork: true . |
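The PB-2118 workaround above can be sketched as a ConfigMap fragment. This is a minimal sketch: the namespace is an assumption that depends on where Stork/KDMP components run in your install, and only the kdmp-config name and the BACKUP_TYPE="Generic" parameter come from the release note.

```yaml
# Sketch of the PB-2118 workaround: force generic (KDMP) backups so that
# region-specific cloud snapshots do not block cross-region restores.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kdmp-config
  namespace: kube-system    # assumption: namespace may differ per installation
data:
  BACKUP_TYPE: "Generic"    # parameter from the PB-2118 workaround
```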
Fixes
Pure Storage has fixed the following issues:
Issue Number | Issue Description |
---|---|
PB-1017 | Issue: RoleBindings that had Subjects with no namespaces in them were not restored. User Impact: An application using such a RoleBinding would fail to start after restoring on the destination cluster. Resolution: Stork now retains the Subject in a RoleBinding if it does not have a namespace in it. |
PB-1021 | Issue: RoleBindings that had a system prefix in OCP environments were not backed up by Stork. User Impact: An application using a RoleBinding with an associated system SCC in OpenShift would not start up after restore, since the associated RoleBinding was not backed up on the source side. Resolution: RoleBindings that have the prefix system:openshift:scc are now collected and backed up by Stork. |
PB-1903 | Issue: The metrics API was missing in the swagger-ui API listing. User Impact: Unable to trigger metrics-related APIs from the swagger-ui. Resolution: The metrics API is now listed in the swagger-ui. |
PB-2040 | Issue: The metrics API summary line was not present in the swagger-ui. User Impact: Documentation for the metrics API was missing in the swagger-ui. Resolution: Added the API summary for the metrics API. |
PB-2050 | Issue: Portworx Backup pod logs were flooded with the following message: constructing many client instances from the same exec auth config can cause performance problems during cert rotation, exhausting available network connections; clients constructed calling "aws-iam-authenticator" continuously on AWS installations. User Impact: Portworx Backup pod logs filled up with log messages. Resolution: Moved this log message to tracef as part of the client-go module. |
PB-2083 | Issue: The backup failure timeout message was confusing, since it also included the Stork installation failure information. User Impact: The error message was misleading, with Stork installation failure information. Resolution: Removed the Stork installation failure message from the timeout error message. |
PB-2158 | Issue: PayG billing information was not being captured in the Zuora production server due to missing clusterID and Cluster UID fields. User Impact: The Portworx Backup license usage page fails to load. Resolution: Fixed this issue by populating the Cluster ID with the subscription ID and the Cluster name as the Org name. |
2.1.0
November 25, 2021
This release is specific for deploying Portworx Backup on AWS Marketplace. To deploy Portworx Backup outside of AWS Marketplace, deploy Portworx Backup version 2.1.1.
New features
Portworx Backup includes the following new features and enhancements:
- Offload cloud provider snapshots to a backup location: You can now offload your cloud (for example, EBS) snapshots, orchestrated by Portworx Backup, to any S3-compliant storage or another region in the public cloud. This enables you to maintain another recovery point in addition to the snapshots created by public cloud providers.
- Back up persistent data on file shares: Application owners can now build Kubernetes applications on scale-out file systems and apply protection policies. This supports file services from FlashBlade, cloud-based file services (for example, AWS EFS), or any NAS device that runs the containerized applications.
- Back up or recover data using user roles from PX-Secure: You can now back up or recover data using the native user roles configured in the PX-Secure feature.
Known Issues (Errata)
Portworx is aware of the following issues; check future release notes for fixes:
Issue Number | Issue Description |
---|---|
OPERATOR-516 | Install Portworx using the operator, and install Portworx Backup. When you back up a cluster with the Stork ServiceAccount not present, the backup fails displaying the following error:Error running PreExecRule: error executing PreExecRule for namespace csi-mysql: pods "pod-cmd-executor-09ca32f1-efe8-4b6c-89aa-0c00d741c1db" is forbidden: error looking up service account kube-system/stork-account: serviceaccount "stork-account" not found .Workaround: Create the missing ServiceAccount :apiVersion: v1 kind: ServiceAccount metadata: name: stork-account namespace: kube-system --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: stork-role rules: - apiGroups: ["*"] resources: ["*"] verbs: ["*"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: stork-role-binding subjects: - kind: ServiceAccount name: stork-account namespace: kube-system roleRef: kind: ClusterRole name: stork-role apiGroup: rbac.authorization.k8s.io |
PB-1976 | You cannot add the TKGi cluster to Portworx Backup. This issue occurs because Portworx Backup cannot resolve the server: https://system-test:8443 parameter in the kubeconfig file.Workaround: To add the TKGi cluster to Portworx Backup, change the server parameter value in the kubeconfig file to https://10.100.200.1:443 . |
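The PB-1976 workaround above can be sketched as a kubeconfig fragment. The cluster name and certificate field are hypothetical; the before and after server values come from the release note.

```yaml
# kubeconfig fragment: replace the unresolvable hostname with the reachable address,
# per the PB-1976 workaround for adding TKGi clusters.
clusters:
- cluster:
    certificate-authority-data: <redacted>      # hypothetical placeholder
    # before: server: https://system-test:8443  (hostname Portworx Backup cannot resolve)
    server: https://10.100.200.1:443
  name: tkgi-cluster                            # hypothetical cluster name
```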
2.0.1
August 20, 2021
Improvement
Pure Storage has the following upgraded or enhanced functionality:
Improvement Number | Improvement Description |
---|---|
OC-933 | Keycloak is upgraded from version 9.0.2 to 14.0.0, and additional changes are implemented in Portworx Backup to accommodate the newer Keycloak version. This provides a seamless experience with reduced number of vulnerabilities reported in the earlier versions. |
Known Issues (Errata)
Portworx is aware of the following issue; check future release notes for a fix:
Issue Number | Issue Description |
---|---|
PB-1840 | Issue: After upgrading from the earlier Portworx Backup version to 2.0.1 with Keycloak or an external OIDC, sign in to OIDC fails. Workaround: Clear your browser cache, and then sign in. |
2.0.0
July 30, 2021
New features
Portworx Backup includes the following new features and enhancements:
- MongoDB: replaced the existing etcd backup store with MongoDB to improve the scalability and performance of Portworx Backup.
- PX-Backup Security: enables the infrastructure administrators and application owners to control the level of access to certain Portworx Backup resources by setting governance policies and managing permissions on the platform. See Portworx Backup Security for more information.
- Backup Timeline: provides a daily and monthly graphical view of backup activity and status. See Portworx Backup Timeline for more information.
- Application grouping: provides an optimized view for Portworx Backup applications, instead of individual resources. Improved scalability to handle large numbers of enterprise resources and namespaces. See Create a backup topic for the updated Applications UI.
- Guided onboarding: provides a step-by-step guided experience for configuring and managing Portworx Backup.
Improvements
Pure Storage has upgraded or enhanced functionality in the following areas:
Improvement Number | Improvement Description |
---|---|
PB-1092 | The new Portworx Backup Applications page UI allows users to choose resources at each namespace level, as needed. |
PB-1233 | IBM cloud is now available as a cloud provider in the Portworx Backup Add Cloud Credentials page. |
PB-1539 | Reduced vulnerabilities in the updated Portworx Backup base image. |
PB-1560 | The Default cloud credentials, which were public and accessible by all Portworx Backup users, have been removed from the Cloud Credentials page. |
PB-1673 and PB-1748 | Various improvements around Restore progress bars |
PB-1688 | A warning message now appears on the Portworx Backup Applications page when a Stork version earlier than 2.6.4 is detected. Portworx Backup enables certain features (for example, custom resource selection) only if the Kubernetes cluster is installed with Stork version 2.6.4 or later. |
Fixes
Pure Storage has fixed the following issues:
Issue Number | Issue Description |
---|---|
OC-569 | Cannot specify nodes while installing or enabling Portworx Central using Helm. User Impact: Unable to select nodes using the nodeSelector flag while installing Portworx Central using Helm. Resolution: While enabling or installing Portworx Backup and the monitoring service, you can select nodes using the nodeAffinityLabel flag. If the nodes are labelled with the px/central key, then you can set the following Helm parameter: --set nodeAffinityLabel=px/central |
OC-588 | Portworx Central services are automatically configured with LoadBalancer and/or NodePort while deploying Portworx Central or Portworx Backup. User Impact: Cannot configure all Portworx Central services explicitly as ClusterIP. Resolution: While installing or upgrading the Portworx Central chart, you can set up the services according to your environment using the following command: --set service.pxBackupUIServiceType=ClusterIP,service.grafanaServiceType=ClusterIP,service.cortexNginxServiceType=ClusterIP |
OC-742 | When you install Portworx Backup, the pxcentral-backend and pxcentral-frontend pods do not start. User Impact: The Portworx Backup chart fails on Kubernetes version 1.21. Resolution: The PX-Central installation now supports Kubernetes versions from 1.16 to 1.21. |
PB-1227 | Portworx Backup should not prompt IKS users to manually update the cluster kubeconfig whenever the access token expires. User Impact: Users couldn't use the IBM cluster kubeconfig when the access token expired. Resolution: Portworx Backup now automatically refreshes the IBM cluster kubeconfig when users provide IBM credentials with an API key. |
PB-1228 | If a deleted namespace is in a terminating state, then the Portworx Backup Applications page displays its status as active. User Impact: Users may have seen that the status of deleted namespaces was not updated on the Portworx Backup Applications page. Resolution: The Applications page no longer displays a deleted namespace if it is in a terminating state. |
PB-1478 | When Keycloak is unreachable, Portworx Backup does not display a proper error message. User Impact: When Keycloak was down, users saw an improper connection timeout message. Resolution: Portworx Backup now displays a meaningful error message when Keycloak is down or the endpoint is unreachable. |
PB-1697 | While changing the Portworx Backup token lifespan, the post-install hook pod fails and does not update the Portworx Backup secrets with the required OIDC tokens. User Impact: Portworx Backup installation fails because of intermittent Keycloak API failures. Resolution: Portworx Backup now retries when Keycloak API calls fail. |
PB-1794 | In some cases, the initialization time for the MySQL pod is longer than expected. During this prolonged initialization, the liveness probe fails and Kubernetes deletes the pod. This causes MySQL to go into an inconsistent state. User Impact: The MySQL pod never becomes healthy. Resolution: The initialDelaySeconds parameter is increased to accommodate the longer initialization time of the MySQL pod. |
PB-1800 | An incorrect check for comparing includeResource in the backupschedule reconciler causes the Portworx Backup pod to dump failed-comparison logs for the ApplicationBackupSchedule object. User Impact: The following messages appear in the Portworx Backup logs: time="2021-07-27T16:19:01Z" level=info msg="backupScheduleStatusUpdateReconciler: updated backup schedule CR mysql-csi-sch-46c06bb on cluster ocp-cluster" time="2021-07-27T16:19:02Z" level=debug msg="compareBackupSchedule: backup schedule mysql-csi-sch - orgID default IncludeResources differ" Resolution: The incorrect comparison is resolved and the PX-Backup pod no longer writes these messages to the log. |
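The Helm flags described in OC-569 and OC-588 above can be combined in a single install command. The sketch below assumes the chart reference (portworx/px-central), release name, namespace, and node names, which are placeholders and not taken from these release notes:

```shell
# Label the nodes that should run Portworx Central components
# (worker-1 and worker-2 are placeholder node names).
kubectl label nodes worker-1 worker-2 px/central=true

# Install Portworx Central, pinning pods to the labelled nodes (OC-569)
# and configuring all services as ClusterIP instead of
# LoadBalancer/NodePort (OC-588).
helm install px-central portworx/px-central \
  --namespace central --create-namespace \
  --set nodeAffinityLabel=px/central \
  --set service.pxBackupUIServiceType=ClusterIP,service.grafanaServiceType=ClusterIP,service.cortexNginxServiceType=ClusterIP
```

The same `--set` overrides apply to `helm upgrade` when adjusting an existing installation.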
Known Issues (Errata)
Portworx is aware of the following issues; check future release notes for fixes on these issues:
Issue Number | Issue Description |
---|---|
OC-748 | The px-backup-ui service will be deprecated in future PX-Central versions. User Impact: No user impact in the PX-Central 2.0.0 version; the deprecation takes effect in a future version. Recommendation: In future Portworx Central versions, use the px-central-ui service to access the UI. |
PD-889 | If a cluster contains a large number of nodes, removing the cluster from the Portworx Backup UI and re-adding it displays a warning. User Impact: The UI indicates that the cluster is not added, even though the cluster has been added internally. Recommendation: Check the cluster details page to verify whether the cluster is added. If the cluster is still not added, try adding it again. |
PB-1308 | If the IP address of a server points to a loopback address (for example, 127.0.0.1:6443) instead of the actual IP address in the kubeconfig, adding an application cluster using this kubeconfig will fail. User Impact: Cannot add an application cluster when using the loopback IP address in kubeconfig. Recommendation: Replace the loopback IP address with the IP address of the Kubernetes master. |
PB-1382 | While installing Portworx Backup 2.0.0 using Helm, the "CustomResourceDefinition is deprecated" warning appears. User Impact: Portworx Backup installation fails because the apiextensions.k8s.io/v1 API version is not supported with Kubernetes versions prior to 1.16. Recommendation: To install Portworx Backup with Kubernetes versions prior to 1.16, users should:
|
PB-1785 | To add a backup location using IBM Cloud, the Secret Key and Access Key are mandatory. User Impact: Without the Access Key and Secret Key, users cannot add a backup location using the IBM Cloud account. Recommendation: Provide both the Access Key and the Secret Key when adding a backup location using the IBM Cloud Object Store. |
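For PB-1308 above, the loopback server address can be corrected in the kubeconfig before the cluster is added. A minimal sketch using standard kubectl commands; the kubeconfig path, cluster entry name, and master address are placeholders:

```shell
# Inspect the server entry of the first cluster in the kubeconfig;
# a loopback value such as https://127.0.0.1:6443 will cause
# adding the application cluster to fail.
kubectl config view --kubeconfig ./app-cluster.kubeconfig \
  -o jsonpath='{.clusters[0].cluster.server}'

# Point the cluster entry at the actual Kubernetes master address instead.
kubectl config set-cluster app-cluster \
  --kubeconfig ./app-cluster.kubeconfig \
  --server=https://203.0.113.10:6443
```

After updating the server entry, re-upload the kubeconfig when adding the application cluster.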
1.2.4
May 10, 2021
Improvements
Pure Storage has upgraded or enhanced functionality in the following areas:
Improvement Number | Improvement Description |
---|---|
PB-1419 | Portworx Backup now handles license activation for clusters deployed from the IBM Cloud Catalog and running in air-gapped mode or on a VPC private-only network. |
1.2.3
April 2, 2021
Improvements
Pure Storage has upgraded or enhanced functionality in the following areas:
Improvement Number | Improvement Description |
---|---|
PB-1097 | Improved drop-downs in Safari. |
PB-1125 | If you delete multiple backups, the delete confirmation modal now displays the number of backups being deleted instead of a list of backups. |
PB-1149 | If you are an IBM Cloud Pay-As-You-Go or subscription account user, now you cannot import a different type of license. |
PB-1160 | The Protected Data field now displays a sum of sizes of all backups from all clusters, including deleted clusters. |
PB-1145 | Portworx Backup now properly displays the start date of your IBM Cloud Pay-As-You-Go or subscription account license. |
Fixes
Pure Storage has fixed the following issues:
Issue Number | Issue Description |
---|---|
PB-1077 | Sometimes, Portworx Backup needed more than one iteration to delete a backup, even if there were no dependent backups. User Impact: Portworx Backup took a long time to delete a backup. Resolution: Portworx Backup now deletes backups faster, because it processes all backups in a single iteration. |
PB-1078 | On the Backups tab, selecting a date range had no effect. User Impact: Portworx Backup always displayed all backups. Resolution: Portworx Backup now displays backups taken during a specific date range. |
PB-1090 | Sometimes, when the number of resources being enumerated was very large, background workers took too much memory when they iterated over backups. User Impact: The Portworx Backup containers were evicted. Resolution: Background workers no longer take too much memory when they iterate over backups. |
PB-1123 | When a user who did not have permission to create a Kubernetes namespace tried to create a backup schedule, the operation failed even when the namespace was already present. User Impact: Users who didn't have permission to create a Kubernetes namespace could not create a backup schedule. Resolution: When a user who doesn't have permission to create a Kubernetes namespace tries to create a backup schedule, and if the namespace already exists, then Portworx Backup will create that backup schedule. If the namespace does not exist and the user does not have permission to create a namespace, then the operation will fail. |
1.2.2
Jan 26, 2021
New features
- Portworx Backup now allows you to delete multiple backup and restore jobs.
- Users can recreate a backup or restore job from an existing job by duplicating a successful or failed backup and restore job.
- If you are using a CSI driver and the original cluster is no longer available, you can now choose any other CSI cluster to delete your CSI backup.
- You can now specify the CSI snapshot class that Portworx Backup will use to back up a CSI volume.
- Portworx Backup now supports cross-cluster restores on clusters running Pure Service Orchestrator (PSO) v6.0.5.
Improvements
Pure Storage has upgraded or enhanced functionality in the following areas:
Improvement Number | Improvement Description |
---|---|
PB-773 | Portworx Backup now displays an improved error message when users choose the "Include any namespace created" option on a cluster running a version of Stork older than 2.5.0. |
PB-981 | Portworx Backup now displays different icons for partially successful restores. These icons help to distinguish between partially successful restores and successful ones. |
PB-1050 | When the number of resources that are backed up is very large, the Portworx Backup Details modal now displays a message indicating that resources are being loaded. |
Fixes
Pure Storage has fixed the following issues:
Issue Number | Issue Description |
---|---|
PB-1003 | The view JSON output for a backup schedule did not show all the successful backups taken by that backup schedule. User Impact: The users could not see all their successful backups. Resolution: The view JSON output for a backup schedule now shows all the successful backups that were taken by that backup schedule. |
PB-1015 | Sometimes, when a cluster came back online, Portworx Backup did not update the status of that cluster immediately. User Impact: There was a delay in updating the cluster status during which users saw their online clusters marked as offline. Resolution: Portworx Backup now correctly reflects the status of your cluster. |
PB-1036 | Users could not apply a new license. User Impact: Portworx Backup displayed the following error message: "Can't update license as current license type is invalid." Resolution: Users can now apply new licenses. |
PB-1039 | When the number of namespaces being backed up was very large, Portworx Backup took a long time to load all resources, and backups would fail with a timeout error. User Impact: Portworx Backup marked the backup as failed. Resolution: Portworx Backup no longer times out and marks backups as failed when the number of namespaces being backed up is very large. |
PB-1047 | On the Applications page, if the user selected a particular resource type to back up, Portworx Backup enabled the Backup button before all the resources were loaded. User Impact: Sometimes, Portworx Backup backed up only a subset of the resources the user had selected. Resolution: Portworx Backup now enables the Backup button only after it loads all resources, and backs up all selected resources. |
PB-1056 | Backup jobs became stuck in the "In progress" state when the application cluster on which you triggered the backup was shut down or terminated. User Impact: Users saw these backup jobs sit in the "In progress" state in the Portworx Backup UI and never converge to the "Failed" state. Resolution: Portworx Backup now correctly marks backup jobs as "Failed" when the application cluster on which you triggered the backup has been shut down or terminated. |
PB-1063 | If Portworx Backup failed to create a backup location, the objects created to validate the cloud credentials were not cleaned up. User Impact: Stale validation objects accumulated after failed backup location creation. Resolution: When Portworx Backup fails to create a backup location, it now removes all objects created to validate the cloud credentials. |
PB-1068 | Portworx Backup did not verify the license immediately after the Portworx Backup pod was restarted. User Impact: If the license was expired and the users restarted the Portworx Backup pod, there was a ten-minute period during which users could create backups. Resolution: Portworx Backup now verifies the license immediately after the Portworx Backup pod is restarted. If the license is expired, the backups fail, and Portworx Backup displays an error saying that the license is expired. |
PB-1069 | Sometimes, Portworx Backup marked a backup job as "Done" even if a volume backup was still in progress. User Impact: Sometimes, users saw their jobs being marked as "Done" even if a volume backup was still in progress. Resolution: Portworx Backup now marks a backup job as "Done" only after all volume backups are successfully completed. |
1.2.1
Jan 4, 2021
Fixes
Pure Storage has fixed the following issues:
Issue Number | Issue Description |
---|---|
PB-1034 | The users could not use a license file to activate a license. User Impact: Portworx Backup displayed the following error message: "no license provided for activation." Resolution: The users can now use a license file to activate a license. |
1.2.0
Dec 3, 2020
New features
- Introducing usage-based pricing for remote cluster nodes.
- Portworx Backup now supports generic CSI driver backup and restore.
- Introducing cluster-level aggregated metrics for backup and restore with Prometheus metrics and Grafana dashboards.
Improvements
Pure Storage has upgraded or enhanced functionality in the following areas:
Improvement Number | Improvement Description |
---|---|
PB-553 | On the Restores page, Portworx Backup now displays a different icon for partially successful restores. This icon helps to distinguish between a partially successful restore and a successful one. |
PB-894 | On the All Backups page, Portworx Backup now displays the name of the cluster for each backup. |
PB-932 | On the Schedules page, you can now hover over a paused backup schedule to see the reason for that backup being paused. |
PB-934 | On the Applications page, Portworx Backup now properly displays namespaces with long names in the namespace drop-down list. |
PB-948 | Portworx Backup now displays the date and the time when the next backup will run in the modal containing details about your backup schedule. |
PB-946 | Portworx Backup now uses the same format to display dates for backups, restores, and backup schedules. |
PB-947 | On the Scheduled Backup Details pane, the NEXT SCHEDULED BACKUP ON field now shows the date and the time when the next backup will run. When a backup is running, the NEXT SCHEDULED BACKUP ON field shows In Progress. |
PB-950 | If you select the name of a backup, restore, or backup schedule from the table view, Portworx Backup now displays a modal containing details about your backup, restore, or backup schedule. |
PB-958 | On the Applications page, the list of resource types is now sorted alphabetically. |
PB-966 | On the Schedules page, Portworx Backup now displays the namespaces included in a backup. |
Fixes
Pure Storage has fixed the following issues:
Issue Number | Issue Description |
---|---|
PB-659 | Portworx Backup did not notify the users when their license was about to expire. User Impact: The users would know about expired licenses only when backups and restores started failing due to expired licenses. Resolution: Portworx Backup now displays a warning seven days before license expiration. |
PB-862 | If the OIDC server was not reachable during startup, Portworx Backup failed to start. User Impact: Portworx Backup did not start. Resolution: If the OIDC server is not reachable during startup, Portworx Backup now starts and tries to connect to the OIDC server before a gRPC call is performed. |
PB-892 | Portworx Backup incorrectly reflected the size of your AWS backup. User Impact: Portworx Backup displayed "B" instead of "GiB". Resolution: Portworx Backup now accurately displays the size of your AWS backup. |
1.1.1
Nov 2, 2020
Improvements
Portworx by Pure Storage has upgraded or enhanced functionality in the following areas:
Improvement Number | Improvement Description |
---|---|
PB-868 | On the Application page, you can now select the Backup button to perform a new backup operation, even if not all resources are loaded yet. |
PB-809 | When an API call times out, Portworx Backup now displays a more descriptive error message containing the full URL of the API call. |
PB-871 | Portworx Backup now displays an error message when a user that does not have adequate permissions to add a new cluster to PX-Central tries to add one. |
PB-901 | On the Add Backup Location page, the Endpoint field can now have a maximum of 512 characters. |
Fixes
Portworx by Pure Storage has fixed the following issues:
Issue Number | Issue Description |
---|---|
PB-739 | If you used Swagger to query a backupLocation, Portworx Backup returned an empty response. User Impact: Swagger displayed the following error message: "Method Not Allowed /v1/backuplocation". Resolution: If you use Swagger to query a backuplocation, Portworx Backup no longer returns an empty response. |
PB-776 | If a user wished to restore a backup, Portworx Backup selected all namespaces included in that backup for restore. User impact: A restore could cause unwanted configuration changes on the destination cluster. Resolution: Portworx Backup no longer selects all namespaces for restore. The users choose which namespaces they want to restore. |
PB-856 | If you logged in for the first time and no clusters were added to Portworx Backup, the dashboard indicated that Portworx Backup must still load the stats. User impact: The stats section of the dashboard displayed three dots. Resolution: The dashboard now clearly shows that no clusters are added to Portworx Backup, by setting all stats to zero. |
PB-875 | Backups sometimes became orphaned, losing the association with their objects in a data store. User impact: When the user tried to delete a backup that depended on an orphaned backup, that backup became stuck in the "Delete Pending" state. Resolution: Portworx Backup now deletes scheduled backups that become orphaned. If a manual backup becomes orphaned, then you must remove its corresponding object from the data store. |
PB-857 | If two users added the same cluster to Portworx Backup, and one user did not have adequate permissions to list the nodes in the cluster, the status of the cluster was incorrectly reflected in Portworx Backup. User Impact: For both users, the status of the cluster changed continuously from "Active" to "Inactive". Resolution: Portworx Backup now accurately displays the status of the cluster. |
PB-655 | Portworx Backup failed to create a backuplocation in the AWS us-west-2 region when the user provided the default endpoint (s3.amazonaws.com). User Impact: Portworx Backup displayed the following error message: "backup location [awsl1] creation failed as provided cloud credential [awscc] is not valid: cloud credential [awscc] doesn't have permission to upload object: BucketRegionError: incorrect region, the bucket is not in 'us-east-2' region at endpoint 's3.amazonaws.com' status code: 301, request id: , host id:" Resolution: Portworx Backup now creates a backuplocation in the AWS us-west-2 region, even if the user provides the default endpoint (s3.amazonaws.com). |
1.1.0
Sep 28, 2020
New features
- If you add a new cluster using the CLI or API, Portworx Backup now displays your cluster in the UI.
- Added a separate Lighthouse view.
- The new PX-Backup dashboard provides insights into your protected applications. You can view the amount of data backed up, and the policies enforced both at the individual cluster level and the multi-cluster level.
- Portworx Backup now features resource-level backups, allowing you to perform granular backup operations by resource type and also at the individual resource level.
- Portworx Backup now features selective restores, allowing you to selectively restore specific resource types or resources from any selected backup.
- Introducing default backup policies: administrators can now use wildcards to specify backup policies. Portworx Backup will add all newly created namespaces to that backup schedule, without requiring a policy update.
- Administrators can now share the default cloud account and backup location with other users.
- Added additional metrics for backups, including the size of backups per PVC, namespace, and cluster.
- To help improve user experience, Portworx Backup now uses telemetry to collect information about your use cases, backup metrics, and deployment environments.
- Portworx Backup licenses are node-based, and you can check the node count when you import a license.
Improvements
Improvement Number | Improvement Description |
---|---|
PB-783 | Portworx Backup now validates the bucket name field in the Add Backup Location view. |
PB-762 | When a backup schedule is in delete pending state, Portworx Backup no longer displays the remove, suspend, or edit options. |
PB-837 | When a backup location is in delete pending state, Portworx Backup no longer displays the remove option. |
PB-640 | The Backups view now features a progress bar indicating the progress of your backup operations. |
PB-682 | The Backup Rules page now includes a help message explaining pre and post backup rules. |
PB-680 | Persistent volumes no longer appear in the Restore Backup view. |
PB-706 | Improved validation rules for the field that specifies the number of scheduled backups that Portworx Backup retains. |
PB-699 | The Restores view now features a progress bar indicating the progress of your restore operations. |
PB-671 | Portworx Backup now displays the backup size for each namespace in the Restore Backup view. |
PB-478 | When a backup is in delete or delete pending state, Portworx Backup no longer displays the View json and Show Details options. |
PB-302 | Portworx Backup now automatically validates backup locations when the users add them. |
PB-634 | Users can now filter resources by resource type. |
PB-768 | Users can now delete a resource without being prompted for the name of that resource. |
PB-636 | Improved clarity around the OrgID field in the Add License view. |
PB-645 | In the Edit Backup Schedule View, you can now use a navigation link to go to the Schedule Policy view. |
PB-710 | Added a tooltip showing whether a backup schedule is being deleted. |
PB-509 | When your Keycloak token expires, Portworx Backup now redirects you to the login page. |
PB-664 | Portworx Backup now displays the status of a cluster as Inactive , even if the cluster has been deleted or is not reachable. |
PB-712 | Every time you update the cluster configuration, Portworx Backup validates whether the cluster is accessible. |
PB-637 | If your license expires, Portworx Backup pauses all scheduled backups until you apply a new license. |
PB-654 | The users are no longer required to provide the org name when they generate new license files. |
PB-745 | The Backup Rules view now features a new Container field that allows the users to specify the container to which Portworx Backup will apply the rule. |
PB-831 | Portworx Backup now displays the resources in alphabetical order in the Create Backup view. |
Fixes
Portworx by Pure Storage has fixed the following issues:
Issue Number | Issue Description |
---|---|
PB-629 | Users were unable to log out and log in as a different user. User impact: They were seeing an error message saying "You are already authenticated as different user <username> in this session. Please log out first." Resolution: Users can now log out and log in as a different user. |
PB-686 | If you provide an HTTP endpoint in the backup location field, sync backup fails. User impact: Portworx Backup displays the following error: "Access Denied." Resolution: Sync backup now works, even if you provide an HTTP endpoint. |
PB-608 | The middleware was not able to establish a connection with Portworx Backup because the gRPC connection was not closed. User impact: Because of this issue, Portworx Backup was marked as offline. Resolution: Users will no longer see Portworx Backup marked as offline due to the middleware not being able to establish a connection with Portworx Backup. |
PB-664 | If a backup schedule was associated with an inactive cluster, users could not remove the cluster from Portworx Backup. User impact: Portworx Backup displayed an error message saying that the user cannot delete the cluster. Resolution: The users can now select the inactive cluster and perform all operations except triggering a new backup. |
PB-744 | When the users created a large number of backups, and each backup contained a large number of resources, the All backups view did not show any backups. User impact: They could not see their backups in the All backups view. Resolution: Portworx Backup always shows all backups in the All backups view, even if the users create a large number of backups and resources. |
1.0.2
July 28, 2020
Improvements
Portworx by Pure Storage has upgraded or enhanced functionality in the following areas:
Improvement Number | Improvement Description |
---|---|
PB-621 | Generic CRD support: Portworx Backup now shows CRs in the application view. You must use Stork 2.4.3 or greater on the application cluster. |
PB-574 | Added support for backing up namespace quotas |
PB-573 | Added support for the kubectl oidc authenticator |
PB-565 | Provided an option to copy the json output from the Inspect Data pane |
PB-539 | Portworx Backup now displays the orgID in the user's profile page |
PB-464 | The scheduled backups settings now use a 12-hour clock |
PB-609 | The tooltip now shows the reason for Portworx Backup being marked offline when you hover over it |
PB-584 | The restore view now features a progress bar |
PB-576 | The backup view now features a progress bar |
PB-575 | Added a help message to explain the Path / Bucket field in the backup location screen |
PB-572 | Portworx Backup now reads the OIDC admin secret into a user-provided namespace instead of the Portworx Backup namespace. |
Fixes
Portworx by Pure Storage has fixed the following issues:
Issue Number | Issue Description |
---|---|
PB-627 | Backup location, schedules, pre, and post rule dropdowns showed only 10 entries, even if there were more. User impact: If they had more than 10 entries, users couldn't access them from the dropdowns. Resolution: Portworx Backup now shows all results in these dropdowns. |
PB-623 | Users were unable to delete restore jobs that were in the pending state. Resolution: Users can now delete pending restore jobs. |
PB-610 | Due to a race condition between the schedule delete and reconciler status updates, Portworx Backup did not delete backup schedules when prompted to. Resolution: Portworx Backup now properly deletes backup schedules. |
PB-608 | The middleware was not able to establish a connection with Portworx Backup because the gRPC connection was not closed. User impact: Because of this issue, Portworx Backup was marked as offline. Resolution: Users will no longer see Portworx Backup marked as offline due to the middleware not being able to establish a connection with Portworx Backup. |
PB-599 | Stork continuously retried to update the backup/restore resources when Portworx Backup marked a job as failed. User impact: In some cases, Stork would eventually mark the backup CR as successful, but Portworx Backup would continue to show it as failed. Resolution: Portworx Backup now accurately reflects the backup CR's status. |
PB-590 | Cloud credential information was displayed in plain text in the logs and in the View JSON option. Resolution: Portworx Backup no longer displays credential information in these places. |
PB-579 | Restore jobs sometimes became stuck in the pending state. User impact: Users would see these restore jobs sit in the pending state in the Portworx Backup UI and never converge to a failed state. Resolution: If the restore job is stuck in a pending state, it will eventually be marked as failed after the timeout period. |
PB-578 | Backup entries were not deleted from the Portworx Backup user interface when backup sync was in progress and the backup location was deleted. User impact: Users would see backup entries from a removed backup location in the Portworx Backup user interface. Resolution: Portworx Backup now deletes these backup entries. |
PB-569 | "Successfully" is no longer misspelled in the Restore status dialog. |
PB-552 | Portworx Backup failed to indicate that users must have admin privileges when adding a Portworx cluster. User impact: Users may not have known why they could not add a Portworx cluster. Resolution: In the Portworx endpoint section, a message now indicates that admin account privileges are needed to add a Portworx cluster for monitoring. |
PB-541 | Clusters with Portworx Backup disabled were listed on the dropdown in the Portworx Backup dashboard. User impact: Users may have been confused by these erroneous listings. Resolution: Portworx Backup no longer lists clusters with Portworx Backup disabled. |
PB-384 | Portworx Backup picked up the existing token when users signed out and attempted to sign in. User impact: Users would be directly signed in when they attempted to log back in and could not switch users if desired. Resolution: Portworx Backup now redirects users to the sign-in page after logging out. |
PB-607 | It was possible to enter decimal numbers into the schedule dialog. Resolution: The schedule dialog no longer accepts decimal numbers as input. |
1.0.1
June 5, 2020
Improvements
Portworx by Pure Storage has upgraded or enhanced functionality in the following areas:
Improvement Number | Improvement Description |
---|---|
PB-547 | Portworx Backup now allows more than 12 backups to be retained while creating a schedule. |
PB-515 | Users can now add a cluster to Lighthouse and edit it independently from Portworx Backup. |
PB-389 | The App View page now includes a refresh button, allowing you to refresh the list. |
PB-485 | The credential settings page now includes info icons explaining what should be entered into the input fields. |
PB-480 | The Remove button no longer appears when there is no entry in the pod selector during Rule creation. |
PB-479 | When a backup deletion is pending, Portworx Backup no longer shows a restore option. |
PB-455 | An improved error message now displays when Stork is not installed on the application cluster. |
PB-453 | Selections on the namespace selection list now persist when you switch between tabs. |
PB-451 | When adding a Google Cloud account, you can now upload your JSON key using the file browser. |
PB-444 | Improved clarity around options for pasting or uploading your kubeconfig on the Add Cluster page. |
PB-435 | A new warning message now indicates that any backups that belong to a deleted backup location will also be deleted. |
PB-507 | A cluster with a status of Inactive is now highlighted for improved visibility when the cluster is down. |
PB-500 | The field labels in the Add cloud account page have been improved. |