Version: 2.10

Portworx Backup Release Notes

Portworx Backup 2.10.2

January 22, 2026

Refer to these topics before you start install or upgrade tasks:

Fixes

Issue Number | Description | Severity
PB-13813

Issue: When zlib decompression is enabled in MongoDB's network protocol, a heap memory leak vulnerability (CVE-2025-14847) allows unauthenticated attackers to read sensitive server memory contents, including credentials, session tokens, and other confidential data.

User Impact: In affected Portworx Backup versions, an unauthenticated attacker with network access to the Portworx Backup–managed MongoDB instance could exploit CVE-2025-14847 to read sensitive data stored in MongoDB, including Portworx Backup credentials, configuration data, session information, and other confidential metadata associated with backup and restore operations.

Resolution: Upgrade Portworx Backup to version 2.10.2. This release updates the embedded MongoDB components in Portworx Backup to versions that contain the vendor fix for CVE-2025-14847, removing this vulnerability from Portworx Backup–managed MongoDB instances. For supported upgrade paths and detailed steps, follow the Upgrade Portworx Backup guide.

Affected Versions: 2.10.1 and earlier.
Severity: Major

Portworx Backup 2.10.1

December 15, 2025

Refer to these topics before you start install or upgrade tasks:

Features

Edit label selectors on backup schedules

Portworx Backup now allows you to modify label selectors on existing backup schedules without recreating them. You can change the resource, namespace, or VM label selectors on any backup schedule that uses them, with no need to delete and recreate the schedule.

Enhanced Prometheus metrics for backup operations

Portworx Backup now provides comprehensive operational metrics via its metrics endpoint for consumption by external monitoring tools. These metrics expose detailed information about backup operations, enabling better observability and troubleshooting in external monitoring systems.
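
For example, an external monitoring tool can scrape and parse these metrics with standard Prometheus tooling. The Python sketch below uses the prometheus_client parser; the endpoint URL and the assumption that backup-related metric names contain "backup" are illustrative placeholders, not documented values.

    # Minimal sketch, assuming the requests and prometheus_client packages.
    # The metrics URL and the "backup" name filter are assumptions for
    # illustration only; point this at the metrics endpoint exposed by your
    # Portworx Backup deployment.
    import requests
    from prometheus_client.parser import text_string_to_metric_families

    METRICS_URL = "http://px-backup.example.internal:10001/metrics"  # hypothetical URL

    resp = requests.get(METRICS_URL, timeout=10)
    resp.raise_for_status()

    # Print metric families whose names mention backups, with labels and values.
    for family in text_string_to_metric_families(resp.text):
        if "backup" in family.name:
            for sample in family.samples:
                print(sample.name, sample.labels, sample.value)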

Fixes

Issue Number | Description | Severity
PB-13531

Issue: Pre-upgrade validation could fail in environments where the Keycloak PostgreSQL database username was customized from the default, resulting in authentication errors during upgrade.

User Impact: Upgrades to Portworx Backup 2.10.0 could be blocked at the pre-upgrade step on clusters using a non-default Keycloak database username.

Resolution: The pre-upgrade workflow now correctly reads and uses the configured Keycloak PostgreSQL username from deployment settings, ensuring validation succeeds in customized environments. This fix is included in Portworx Backup 2.10.1.

Affected Versions: Portworx Backup 2.5.x and later (environments with a customized Keycloak database username)
Severity: Minor

Portworx Backup 2.10.0

November 24, 2025

Refer to these topics before you start install or upgrade tasks:

Features

Integration with SUSE Rancher projects access control

Portworx Backup now provides seamless integration with SUSE Rancher's project-based access control. Portworx Backup users can now view and access the Kubernetes namespaces mapped to their Rancher project(s), based on their LDAP/SAML group membership obtained through providers such as OpenLDAP or Ping Identity. This feature extends your access control from SUSE Rancher into Portworx Backup, preventing unauthorized or unintended data exposure by enforcing namespace-level filtering based on Rancher's Projects configuration.

Flexible namespace management

Portworx Backup introduces flexible namespace management capabilities that allow scheduled backups to gracefully handle missing namespaces, along with the ability to edit backup schedules to add or remove namespaces. If some of the specified namespaces are missing, the backup now proceeds with the available namespaces and is marked as Partial Success. This feature improves backup resilience when namespace availability changes and improves usability by letting you add namespaces to or remove them from a schedule.

Password customization for internal databases

Portworx Backup now offers an easier mechanism, based on Kubernetes secrets, to provide and rotate credentials for its internal databases. You can also enable optional encryption of the internal databases and supply encryption keys through a Kubernetes secret, making Portworx Backup's security and data-protection capabilities easier to use.
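
As an illustration, the sketch below creates such a credentials secret with the Kubernetes Python client. The secret name, key names, values, and namespace are hypothetical placeholders for this example; use the secret format and namespace expected by your Portworx Backup deployment.

    # Minimal sketch, assuming the kubernetes Python client and a reachable cluster.
    # All names below (secret name, keys, namespace) are hypothetical placeholders.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    secret = client.V1Secret(
        metadata=client.V1ObjectMeta(name="pxb-internal-db-credentials"),  # hypothetical name
        string_data={
            "username": "pxb-db-user",            # hypothetical key/value
            "password": "use-a-strong-password",  # hypothetical key/value
        },
        type="Opaque",
    )

    core.create_namespaced_secret(namespace="central", body=secret)  # namespace is an assumption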

mTLS support for Portworx Backup

Portworx Backup can now run with mTLS when deployed into a customer‑managed service mesh, enabling encrypted, mutually authenticated traffic across Portworx Backup microservices. This release adds Helm-managed integration for Istio and Linkerd and allows UI access over HTTPS, aligning with enterprise security policies and preventing unauthorized access and man-in-the-middle attacks.

Batch alerting for Backup schedules

Portworx Backup adds batch alerting for schedule operations, aggregating failures from pause, resume, and delete actions across multiple schedules into a single consolidated alert. Each alert lists the affected schedule objects with per‑object error reasons captured during each update cycle, so you can quickly pinpoint what failed and why. Alerts are grouped and continuously updated to reflect the current state - new failures are added and resolved items are removed - reducing email noise while preserving real‑time visibility.

Capture notes for backup schedule changes

Portworx Backup now lets you add a note when suspending, resuming, or editing schedules, including during bulk actions where a single note applies to all selected schedules. The latest note is shown when viewing schedule details, so you can understand why a backup schedule's status was changed.

Granular backup sharing

Portworx Backup now extends its existing backup sharing functionality by allowing the sharer to specify whether a backup can be used for restore only or with full access rights by a user or group that has access to the Backup Location and its Cloud Credential.

Backup and Restore FADA Volumes

The FADA volumes backup and restore feature has graduated from early access to general availability. It allows Portworx Backup to support FADA volume types effectively by using native Portworx snapshots (PXD-based) for both block and file system PersistentVolumeClaim (PVC) modes.

Enhancements

Enhanced SSL/TLS Certificate Management for Ansible Module

The Portworx Backup Ansible collection now provides comprehensive SSL/TLS certificate management with support for custom CA certificates, mutual TLS authentication, and flexible certificate validation options. This enterprise-ready feature enables unified SSL configuration through inventory variables that are automatically applied across all modules, eliminating the need for per-module certificate setup. It supports self-signed certificates, private certificate authorities, and corporate PKI deployments for enhanced security in production environments.

Optional UUID for backup resources

Portworx Backup now supports optional UUID parameters across all interfaces (REST API, gRPC API, Ansible collection, and CLI), allowing users to reference backup resources using either human-readable names or UUIDs. This enhancement simplifies resource management by enabling users to work with user-defined names instead of complex UUID strings, while still maintaining UUID support for programmatic integrations that require guaranteed unique identifiers. The flexible approach improves user experience and scriptability while preserving backward compatibility with existing UUID-based workflows.

Generic backup repositories now leaner

Scheduled backups now use split backup repositories for generic backups, creating smaller schedule‑bound repositories per PVC. A new full backup is started in a new repository only when the incremental threshold is reached, which keeps repositories small and stable and reduces maintenance time on large datasets. Portworx Backup also employs improved cleanup mechanisms that remove zero-size snapshots and periodically prune stale folders, preserving active data and improving space reclamation in your backup locations. This enhancement requires Portworx Backup 2.9.x or later along with Stork 25.2.x or later. In such environments, upgrading Portworx Backup migrates schedules to enable the new capabilities, while existing backups continue to work in their current layout.

Fixes

Issue Number | Description | Severity
PB-11135

Issue: Helm charts required individual registry and repository configuration for each image in the px-central-values.yaml file, forcing users to manually edit multiple fields when redirecting images to different registries or repositories.

User Impact: Managing registry and repository settings across multiple images was repetitive, time-consuming, and error-prone. Users deploying in environments with private or mirrored registries had to manually update numerous entries, significantly increasing the risk of misconfiguration and deployment failures.

Resolution: Introduced global registry (images.registry) and repository (images.repo) parameters in the px-central-values.yaml file that automatically apply to all images. Users can now configure registry and repository settings once at the global level, eliminating redundant configuration and reducing potential errors during deployment.

Affected Versions: 2.9.0 and below
Severity: Minor

PB-10193

Issue: NFS/KDMP job pod crashes caused by memory-related issues (container OOM-Killed events or node-level OutOfMemory conditions) lacked clear diagnostic information in logs and the web console, making it difficult to identify the root cause of failures.

User Impact: Troubleshooting NFS/KDMP job pod crashes was challenging due to insufficient visibility into memory-related failure causes. Users could not easily determine whether crashes resulted from OOM-Killed events or node-level OutOfMemory conditions, leading to prolonged resolution times and inefficient debugging processes.

Resolution: Enhanced logging and web console reporting now provide clear identification of memory-related crash causes for NFS/KDMP job pods. The system explicitly indicates whether failures are due to OOM-Killed events or OutOfMemory conditions, enabling faster diagnosis and more efficient troubleshooting workflows.

Affected Versions: 2.9.0 and below
Severity: Minor

PB-11861

Issue: Backup owners could not grant Restore or Full access for a backup to users who already had access to the associated BackupLocation and CloudCredentials resources.

User Impact: Backup sharing capability was limited.

Resolution: In px-backup 2.10.0, backup sharing is expanded. Backup owners can now share backups with Restore or Full access to users who already have access to the corresponding BackupLocation and CloudCredentials. The backup owner does not need to be the owner of these resources to grant Restore/Full access.

Affected Versions: All versions
Severity: Minor

PB-11908

Issue: Users could previously specify both cloud credentials and platform credentials for the same cluster, leading to potential conflicts during cluster operations.

User Impact: Specifying both credential types for the same cluster could lead to configuration conflicts and unreliable cluster operations.

Resolution: Added credential exclusivity validation with clear error messages. Cluster create and update now enforce that only one credential type - Cloud or Platform - is provided.

Affected Versions: 2.9.0 and below
Severity: Minor

PB-11755

Issue: Maintenance pods for non‑S3 BackupLocations failed to run after an S3 BackupLocation was used with DisableSSL and a self‑signed TLS certificate.

User Impact: For KDMP backups using a non‑S3 BackupLocation, maintenance did not run and repositories were not cleaned up.

Resolution: Self‑signed certificate handling now applies only to S3 BackupLocations; non‑S3 maintenance runs as expected.

Affected Versions: 2.9.0 and below
Severity: Minor

PB-12674

Issue: With NFS BackupLocations, PX volume backup deletion was issuing excessive READDIR calls, increasing load on the NFS server.

User Impact: Occasional slowness during PX volume backup deletion when using NFS BackupLocations.

Resolution: Backup deletion now removes the PX volume snapshot folder directly, avoiding the extra read operations.

Affected Versions: 2.9.1
Severity: Minor

PB-11202

Issue: Kopia repository connect timeout was fixed at 1 minute for backup, restore, maintenance, and delete operations; jobs failed if repository connection exceeded 60 seconds.

User Impact: In higher-latency environments, repository connect attempts could time out and abort jobs before a successful connection.

Resolution: Introduced the configurable KDMP_KOPIA_CONNECT_TIMEOUT setting. Set it in the kdmp-config ConfigMap for backup/restore (KDMP) jobs, and in the px-backup-config ConfigMap for maintenance/delete jobs. If not set, the default remains 1m. For an example of applying this setting, see the sketch after this table.

Affected Versions: 2.8.4
Severity: Minor

PB-12202

Issue: When PX‑Central was installed in a namespace other than px-backup, the proxy configuration did not include the namespace‑qualified px‑backup service, causing the px‑backup component to appear “Down” on the Service Status page.

User Impact: Health status for px‑backup could be reported as “Down,” and related operations could fail through the proxy.

Resolution: Helm now automatically includes the correct namespace‑qualified px‑backup service in the proxy bypass list, so health checks and internal calls succeed after install or upgrade.

Affected Versions: 2.9.0
Severity: Minor
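
The Python sketch below shows one way to apply the KDMP_KOPIA_CONNECT_TIMEOUT setting referenced in the PB-11202 fix above, using the Kubernetes Python client. The ConfigMap names (kdmp-config and px-backup-config) and the 1m default come from the fix description; the namespaces and the 5m value are assumptions for illustration.

    # Minimal sketch, assuming the kubernetes Python client and a reachable cluster.
    # The ConfigMap names and the KDMP_KOPIA_CONNECT_TIMEOUT key come from the
    # PB-11202 fix; the namespaces and the "5m" value are assumptions.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    # Raise the Kopia repository connect timeout for backup/restore (KDMP) jobs.
    core.patch_namespaced_config_map(
        name="kdmp-config",
        namespace="kube-system",   # assumption: adjust to where kdmp-config lives
        body={"data": {"KDMP_KOPIA_CONNECT_TIMEOUT": "5m"}},
    )

    # Maintenance/delete jobs read the same setting from px-backup-config.
    core.patch_namespaced_config_map(
        name="px-backup-config",
        namespace="central",       # assumption: adjust to your Portworx Backup namespace
        body={"data": {"KDMP_KOPIA_CONNECT_TIMEOUT": "5m"}},
    )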

Known Issues

Issue Number | Description
PB-12045

Issue: During CloudSnap backups with FADA volumes, attachment failures may occur when a node reaches its maximum attachment limit of 256 volumes. This happens because detach operations after backups are asynchronous, temporarily increasing the total number of active attachments.

Workaround: Keep the number of FADA volume attachments per node well below the 256 limit, or reduce the number of FADA volumes included in each backup to ensure CloudSnap operations complete successfully.

PB-6901

Issue: Restores for namespaces with Istio enabled can enter a partial success state because both Istio and Stork attempt to create the istio-ca-root-cert (and sometimes istio-ca-crl) ConfigMaps at the same time, resulting in a resource conflict.

Workaround: Exclude the istio-ca-root-cert and istio-ca-crl ConfigMaps from the restore. Restores with the Replace policy will fail for Istio-enabled namespaces because these ConfigMaps are pre-created by the Istio control plane and cannot be replaced.