Portworx Backup Release Notes
Portworx Backup 2.11.0
March 18, 2026
Refer to these topics before you start installation or upgrade tasks:
Features
Structured Kubernetes Resource Layout for Backups
Starting with Portworx Backup 2.11.0, backup resources are organized using a structured layout to improve scalability and reliability in large Kubernetes and VM environments.
This layout reduces memory consumption during backup operations, improves reliability during network interruptions, and enables support for partial backups and selective resource restores.
During upgrade, Portworx Backup also performs a one-time cleanup of stale backup artifacts that may have accumulated during mixed-version upgrades of Stork and earlier Portworx Backup releases.
Restore Specific Namespaces or Resources from a Namespace Backup
Portworx Backup now enables more granular restore operations from namespace backups. You can selectively restore specific namespaces or individual resources within a namespace, rather than performing a full restore.
This enhancement provides greater flexibility and control when restoring to the same or a different cluster, supports configurable source-to-destination mapping (namespaces, storage classes, and Rancher projects), and includes defined concurrency limits to optimize restore performance. The feature also supports large namespace backups with improved metadata handling for resource-level restores.
File and Folder Restore for Virtual Machines
Portworx Backup now supports restoring specific files or folders from a VM backup directly to the same VM. This capability enables granular recovery without performing a full VM restore.
The feature supports configurable restore paths (original or alternate location), partition-aware restores, and is compatible with supported Linux distributions, file systems, and SELinux modes. Restore operations are processed sequentially (one file/folder restore at a time) to ensure stability and consistency.
Volume Resource-Only (VRO) Policy Configuration from the Portworx Backup Web Console
The Volume Resource-Only (VRO) policy can now be configured from the Portworx Backup web console. This policy backs up only the Kubernetes volume resource specifications (PVCs and PVs) without including the underlying volume data.
Editable Label Selectors for Backup Schedules
Portworx Backup now allows you to edit namespace and VM label selectors on both existing and new backup schedules using the UI, CLI, API, or Ansible. This enhancement enables dynamic adjustment of backup scope without requiring schedule recreation.
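Backup schedules match on standard Kubernetes labels. As an illustration only (the namespace, label key, and value below are hypothetical placeholders, and the schedule's own schema is not shown here), a namespace labeled for backup and the standard equality-based selector form that a schedule could target look like this:

```yaml
# Hypothetical example: label a namespace so a backup schedule
# can select it. The label key/value and namespace name are
# placeholders, not Portworx Backup conventions.
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    backup-tier: gold   # the label a schedule's selector matches on
```

With this release, the selector on the schedule side (for example, an equality-based expression such as `backup-tier=gold`) can be edited after the schedule is created, so relabeling namespaces or VMs changes the backup scope without recreating the schedule.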
Enhancements
Portworx Backup now supports restoring an individual VM (including all associated VM resources) using either Default Restore (no mapping required) or Custom Restore (with namespace, StorageClass, and Rancher project mapping). It also supports file/folder restore from VM backups with configurable restore paths.
The Cloud Settings page in the Portworx Backup web console has been enhanced to provide a more streamlined and intuitive configuration experience for Cloud Accounts and Backup Locations. Existing workflows are functionally unchanged.
Update Resource Filters for Existing Backup Schedules
Portworx Backup now enables you to modify resource label filters for existing backup schedules. Previously, you could configure resource filters only when creating a backup schedule and could not change them afterward.
With this enhancement, you can add resource label filters to schedules that were created without them, update existing filters to control which resources are included in future backups, or remove filters altogether to include all resources in the namespace.
These updates apply to subsequent scheduled backups and allow you to adjust backup scope without recreating the schedule.
Fixes
| Issue Number | Description | Severity |
|---|---|---|
| PB-14153 | Issue: Jobs (kopiaexecutor, nfsexecutor, filesystemctl) spawned by Stork and Px-Backup had imagePullPolicy: Always hard-coded, causing image pull errors in air-gapped environments where the images were already cached. User Impact: Backup and restore operations failed in air-gapped clusters because Kubernetes attempted to pull images from external registries even though the images were available locally on the nodes. Resolution: Removed the hard-coded imagePullPolicy: Always so that jobs inherit the imagePullPolicy from their respective parent deployments (kopiaexecutor and nfsexecutor from the Stork deployment; filesystemctl from the Px-Backup deployment). This ensures backup and restore operations succeed in air-gapped environments. Affected Versions: Legacy issue (prior to v2.11.0) | Minor |
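Conceptually, the fix means spawned executor jobs now follow whatever `imagePullPolicy` their parent deployment declares rather than forcing `Always`. A minimal sketch of the Kubernetes field involved (the deployment name and image below are placeholders, not the actual Portworx specs):

```yaml
# Illustrative only: the imagePullPolicy set on a parent deployment,
# which spawned executor jobs now inherit per this fix.
# Name and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stork                          # placeholder name
spec:
  selector:
    matchLabels:
      app: stork
  template:
    metadata:
      labels:
        app: stork
    spec:
      containers:
        - name: stork
          image: registry.local/stork:latest   # placeholder image
          imagePullPolicy: IfNotPresent        # inherited by spawned jobs
```

With `IfNotPresent` (or `Never`) inherited from the parent, Kubernetes uses the locally cached image instead of attempting a pull from an unreachable external registry.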
Known Issues
| Issue Number | Description |
|---|---|
| PB-14749 | Issue: In large-scale restore scenarios, the restore operation may complete successfully but be marked as Failed in the Portworx Backup UI. This occurs when the ApplicationRestore custom resource (CR) exceeds the default Kubernetes/etcd size limit during the final status update, resulting in an etcdserver: request is too large error. Although the UI displays a failed status, the resources are successfully restored. Workaround: Reduce the large-resource-size-limit (for example, to 800 KB) so the restore is treated as a large-resource restore. Note that in this mode, detailed restored resource information is not displayed in the Restore Details view. |
| PB-14716 | Issue: In certain partial-success VM backups, if a failure occurs before the PVC specification is uploaded, the PVC metadata may be missing from the backup location. During a Full Restore, the system repeatedly retries to locate the missing PVC spec, which may cause the restore to time out and fail. Workaround: Use the VirtualMachine Restore option, which allows restoring only successfully backed-up VMs. |
| PB-14321 | Issue: After a restore with the Replace option enabled, the running application may transition to a Pending state because the existing volume references are no longer available. Workaround: Increase the ephemeral storage on the worker nodes and retry the restore. |