
Portworx Stork Release Notes

24.3.4

December 12, 2024

Fixes

  • PB-8630 (Severity: Minor)
    Issue: In IBM environments, backups or restores of Persistent Volume Claims (PVCs) fail when using certain CSI provisioners (e.g., IBM File). These provisioners do not invoke the kubelet ownership and permission change functions, causing kopia to fail due to insufficient read permissions.
    User Impact: Users with IBM File provisioners cannot perform KDMP backups because KDMP job pods lack the required read permissions.
    Resolution: KDMP job pods now support the anyuid annotation to address permission issues. To enable this:
    1. Add the PROVISIONERS_TO_USE_ANYUID: openshift-storage.cephfs.csi.ceph.com,provisioner2 entry to the KDMP ConfigMap.
    2. Apply this ConfigMap on both the backup and restore clusters.
    This ensures KDMP job pods run with the necessary permissions, resolving backup and restore failures.
    Affected Versions: 24.3.3 and earlier
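
    The following is a minimal sketch of the resulting ConfigMap entry, assuming the KDMP ConfigMap is named kdmp-config and lives in the kube-system namespace (verify both against your deployment); the provisioner list is the example value from the resolution above:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: kdmp-config        # assumed KDMP ConfigMap name; verify in your deployment
        namespace: kube-system   # assumed namespace
      data:
        # Comma-separated list of CSI provisioners whose KDMP job pods should run with anyuid
        PROVISIONERS_TO_USE_ANYUID: "openshift-storage.cephfs.csi.ceph.com,provisioner2"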

24.3.3.1

December 11, 2024

Fixes

  • PB-7868 (Severity: Major)
    Issue: The NFS backup location PVC failed when the NFS version was not specified in the mount option.
    User Impact: KDMP restore failed with an NFSv4 backup location.
    Resolution: You can now restore KDMP backups taken on an NFSv4 backup location.
    Affected Versions: 24.3.3 and earlier

  • PWX-40984 (Severity: Major)
    Issue: Previously, in Stork version 24.3.3, live migration of VMs with bind-mounted Portworx volumes failed with the following error: Unsafe migration: Migration without shared storage is unsafe.
    User Impact: Users encountered migration failures for such VMs.
    Resolution: This issue has been fixed in Stork version 24.3.3.1. To successfully live migrate the impacted VMs, restart them after upgrading.
    Affected Versions: 24.3.3

  • PB-7944 (Severity: Minor)
    Issue: Large resources were fetched twice, increasing the backup completion time.
    User Impact: Large resource backups take longer to complete.
    Resolution: Resources are now fetched only once and uploaded in the same reconciler context. Note: This fix applies only to S3 object stores.
    Affected Versions: 24.3.3 and earlier

24.3.3

November 22, 2024

Fixes

  • PB-8394 (Severity: Minor)
    Issue: If a small pxd volume of a few MB is backed up along with a much larger volume (for example, 500 GB), the status of the smaller backup reports a false error: Backup failed for volume: rpc error: code = Internal desc = Failed to get status of backup: Key not found.
    User Impact: The backup appears to be in a failed state with the Key not found error, but the backup was actually successful and uploaded to the cloud.
    Resolution: Once the backup reaches a successful state, Stork stops checking its status.
    Affected Versions: 24.3.2 and earlier

  • PB-7476 (Severity: Minor)
    Issue: Node affinity was not set up for the kopia/NFS backup and restore job pods, allowing these job pods to be scheduled on nodes where the applications to be backed up were not running. This led to backup failures.
    User Impact: If there are network restrictions on certain nodes and applications are not running on those nodes, the backup job pods may get scheduled on these restricted nodes, causing the backup to fail.
    Resolution: Added node affinity for the job pods to ensure they are scheduled on the nodes where the application pods are running.
    Affected Versions: 24.3.2 and earlier

Known issues (Errata)

  • PWX-38905 (Severity: Major)
    Issue: If a StorageClass is deleted on the source and an asynchronous DR operation is performed for resources (PV/PVC) that use the deleted StorageClass, the migration fails with the following error:
    Error updating StorageClass on PV: StorageClass.storage.k8s.io <storageClassName> not found
    Workaround: You need to recreate the deleted StorageClass to proceed with the DR operation.
    Affected Versions: 24.3.0, 24.3.1, 24.3.2, and 24.3.3

24.3.2

October 14, 2024

Fixes

  • PB-4394 (Severity: Major)
    Issue: KDMP restore failed when the snapshot size exceeded the PVC size.
    User Impact: Users experienced failures during KDMP restore for filesystem-based storage provisioners when the PVC content was larger than the PVC size.
    Resolution: The PVC size is now modified to match the snapshot size.
    Affected Versions: 24.3.1 and earlier

  • PB-8316 (Severity: Major)
    Issue: Backups were incorrectly marked as successful, rather than partial, even when some volume backups failed.
    User Impact: Users were led to assume that all PVCs were successfully backed up, even when some had failed.
    Resolution: Updated the in-memory value of failedVolCount and the backup object to accurately reflect the number of failed backups.
    Affected Versions: 24.3.0 and 24.3.1

  • PB-8360 (Severity: Major)
    Issue: Adding an IBM COS backup location failed with an UnsupportedOperation error when the bucket was unlocked.
    User Impact: Users could not add an IBM COS backup location if it was unlocked.
    Resolution: The UnsupportedOperation error, which indicates that the bucket is not locked, is now ignored for unlocked IBM COS buckets.
    Affected Versions: 24.3.1

  • PB-7726 (Severity: Major)
    Issue: VM backup failed while executing auto exec rules if the virt-launcher pod of the VM was not in a running state.
    User Impact: VM backups failed when auto exec rules were applied.
    Resolution: Auto exec rules are now only executed on running virt-launcher pods.
    Affected Versions: 24.3.0 and 24.3.1

Known issues (Errata)

  • PWX-38905 (Severity: Major)
    Issue: If a StorageClass is deleted on the source and an asynchronous DR operation is performed for resources (PV/PVC) that use the deleted StorageClass, the migration fails with the following error:
    Error updating StorageClass on PV: StorageClass.storage.k8s.io <storageClassName> not found
    Workaround: You need to recreate the deleted StorageClass to proceed with the DR operation.
    Affected Versions: 24.3.0, 24.3.1, and 24.3.2

24.3.1

October 07, 2024

Improvements

  • PWX-39128: If the AWS driver initialization gets stuck for a long time and prevents Stork from starting up, you can skip the AWS driver init from the Stork imports by adding the environment variable SKIP_AWS_DRIVER_INIT="true" to the Stork pod.
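
    For example, a minimal sketch assuming Stork runs as a Deployment named stork in the kube-system namespace (operator-managed installations typically expose an equivalent Stork environment setting):

      # Add the environment variable to the Stork Deployment and let the pods restart
      kubectl -n kube-system set env deployment/stork SKIP_AWS_DRIVER_INIT="true"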

Fixes

  • PWX-38383 (Severity: Minor)
    Issue: In certain scenarios, Kubernetes etcd was overloaded, and Stork pods went into a CrashLoopBackOff state with the following error:
    Controller manager: failed to wait for snapshot-schedule-controller caches to sync: timed out waiting for cache to be synced.
    User Impact: Stork failed and restarted multiple times due to the overloading of Kubernetes etcd.
    Resolution: A --controller-cache-sync-timeout flag has been added so that you can tune the cache sync timeout based on your requirements. The default value is 2 minutes. For example, --controller-cache-sync-timeout=10 sets the controller cache sync timeout to 10 minutes instead of the default 2 minutes. A sketch of where to set this flag follows this list.
    Affected Versions: 24.3.0 and earlier

  • PWX-36167 (Severity: Minor)
    Issue: The Stork health monitor was incorrectly considering stale node entries with an offline status for pod eviction.
    User Impact: If a node was repaired and returned with a different IP address, pods were inadvertently evicted from this online node due to the presence of stale node entries.
    Resolution: If a node entry with an 'online' storage status shares the same scheduler ID as an 'offline' node entry, the system now disregards the offline node entry when considering pod evictions. This change ensures that pods are not inadvertently evicted from nodes that have been repaired and are now online.
    Affected Versions: 24.3.0 and earlier
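
For the PWX-38383 fix above, the following is a minimal sketch of where the flag goes, assuming Stork runs as a Deployment and you edit the stork container's arguments directly (operator-managed installations typically expose an equivalent Stork args setting):

    # Under the stork container in the Stork Deployment spec:
    args:
    - --controller-cache-sync-timeout=10   # cache sync timeout in minutes (default: 2)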

24.3.0

September 11, 2024

Improvements

  • Stork now supports partial backups. If backup fails for any of the PVCs, the successful backups of other PVCs are still saved, and the status is displayed as partial success.
    note

    A partial backup requires at least one successful PVC backup.

  • Updated golang, aws-iam-authenticator, google-cloud-cli, and google-cloud-sdk versions to resolve security vulnerabilities.

Fixes

  • Issue: In a Synchronous DR setup, when you perform a failover operation using the storkctl perform failover command, the witness node might be deactivated instead of the source cluster.
    User Impact: After failover, the source cluster might remain in an active state, and the PX volumes can still be mounted and used from the source cluster.
    Resolution: After failover, now the source cluster is deactivated by default, and the witness node remains unaffected.

24.2.5

August 05, 2024

Fixes

  • Issue: Strong hyperconvergence for pods was not working when using the stork.libopenstorage.org/preferLocalNodeOnly annotation.
    User Impact: Pods remained in a pending state.
    Resolution: When the stork.libopenstorage.org/preferLocalNodeOnly annotation is used, the pods are now scheduled on the node where the volume replica resides, and strong hyperconvergence works as expected.
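
    For example, a minimal sketch of the annotation on an application's pod template, assuming the application pods are scheduled by Stork (names and image are hypothetical):

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: my-app                 # hypothetical application
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: my-app
        template:
          metadata:
            labels:
              app: my-app
            annotations:
              # Schedule the pod only on a node that holds a replica of its Portworx volume
              stork.libopenstorage.org/preferLocalNodeOnly: "true"
          spec:
            schedulerName: stork
            containers:
            - name: my-app
              image: nginx           # placeholder image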

24.2.4

July 17, 2024

Fixes

  • Issue: During an OCP upgrade in a 3-node cluster, the MutatingWebhookConfiguration stork-webhooks-cfg is deleted if the leader Stork pod is evicted.
    User Impact: Applications that require Stork as the scheduler will experience disruptions, and OCP upgrades will get stuck on a 3-node cluster.
    Resolution: The MutatingWebhookConfiguration is now created after the leader election, ensuring stork-webhooks-cfg is always available.
    Affected Versions: All

24.2.3

July 02, 2024

note

For users currently on Stork versions 24.2.0, 24.2.1, or 24.2.2, Portworx by Pure Storage recommends upgrading to Stork 24.2.3.

Fixes

  • Issue: If the VolumeSnapshotSchedule has more status entries than the retain policy limit, Stork may continue creating new VolumeSnapshots, ignoring the retain policy. This can happen if the retain limit was lowered or if there was an error during snapshot creation.
    User Impact: Users saw more VolumeSnapshots than their retain policy was configured to allow.
    Resolution: Upgrade to Stork version 24.2.3. #1800
    note

    This fix doesn’t clean up the snapshots that were created before the upgrade. If required, you need to delete the old snapshots manually.


Affected Versions: 24.2.0, 24.2.1, and 24.2.2.

24.2.2

June 14, 2024

Improvements

  • Stork now uses the shared informer cache event handling mechanism instead of the watch API to reschedule unhealthy pods that are using Portworx volumes.

24.2.1

June 06, 2024

Improvements

  • Stork now supports Azure China environment for Azure backup locations. For more information, see Add Azure backup location.

Fixes

  • Issue: If you were running Portworx Backup version 2.6.0 and upgraded the Stork version to 24.1.0, selecting the default VSC in the Create Backup window resulted in a VSC Not Found error.
    User Impact: Users experienced failures during backup operations.
    Resolution: You can now choose the default VSC in the Create Backup window and create successful backups.

  • Issue: If you deployed Portworx Enterprise with PX-Security enabled, took a backup to an NFS backup location, and then restored it, the restore failed.
    User Impact: Users were unable to restore backups from the NFS backup location for PX-Security-enabled Portworx volumes.
    Resolution: This issue is now fixed.

24.2.0

May 29, 2024

Improvements

  • Enhanced Disaster Recovery User Experience

    In this latest Stork release, the user experience has been improved significantly, with a particular focus on performing failover and failback operations. These improvements provide a smoother and more intuitive user experience by simplifying the process while ensuring efficiency and reliability.

    Now, you can perform a failover or failback operation using the following storkctl commands:

    • To perform a failover operation, use the following command: storkctl perform failover -m <migration-schedule> -n <migration-schedule-namespace>
    • To perform a failback operation, use the following command: storkctl perform failback -m <migration-schedule> -n <migration-schedule-namespace>

    For more information on the enhanced approach, refer to the documentation below.

  • The Portworx driver has been updated to optimize its API calls, reducing the time taken to schedule pods and to monitor pods that need rescheduling when Portworx is down on a node.

Fixes

  • Issue: Migration schedules in the admin namespace were updated with true or false for the applicationActivated field when activating or deactivating a namespace, even if they did not migrate the particular namespace.
    User Impact: Unrelated migration schedules were getting suspended.
    Resolution: Stork now updates the applicationActivated field only for migration schedules that are migrating at least one of the namespaces being activated or deactivated.

  • Issue: Updating the VolumeSnapshotSchedule resulted in a version mismatch error from Kubernetes when the update happened on a previous version of the resource.
    User Impact: When the number of VolumeSnapshotSchedules is high, Stork logs are flooded with these warning messages.
    Resolution: Fixed the VolumeSnapshotSchedule update with a patch to avoid the version mismatch error.

  • Issue: Duplicate volume snapshot names were created when VolumeSnapshotSchedule frequencies matched and trimming produced similar substrings.
    User Impact: For one volume, a snapshot may not be taken but can still be marked as successful.
    Resolution: A 4-digit random suffix is now added to the name to avoid name collisions between volumesnapshots resulting from different volumesnapshot schedules.

  • Issue: Stork relies on Kubernetes DNS to locate services, but it also assumes the .svc.cluster.local domain for Kubernetes services.
    User Impact: Clusters with a modified Kubernetes DNS domain were not able to use Stork.
    Resolution: Stork now works on clusters with a modified Kubernetes DNS domain.

  • Issue: Resource transformation for CRs was not supported.
    User Impact: This blocked some of the necessary transformations for resources that were required at the destination site.
    Resolution: Resource transformation for CRs is now supported.

Known issues (Errata)

  • Issue: If you use the storkctl perform failover command to perform a failover operation, Stork might not be able to scale down the KubeVirt pods, which could cause the operation to fail.
    Workaround: Perform the failover operation by following the procedure on the pages below:

24.1.0

May 20, 2024

Improvements

  • Stork now supports Kubevirt VMs for Portworx backup and restore operations. You can now initiate VM-specific backups by setting the backupObjectType to VirtualMachine (a sketch follows this list). Stork automatically includes associated resources, such as PVCs, secrets, and ConfigMaps used as volumes and user data in VM backups. Also, Stork applies default freeze/thaw rules during VM backup operations to ensure consistent filesystem backups.
  • Cloud Native backups will now automatically default to CSI or KDMP with LocalSnapshot, depending on the type of schedules they create.
  • Previously in Stork, for CSI backups, you were limited to selecting a single VSC from the dropdown under the CSISnapshotClassName field. Now you can select a VSC for each provisioner via the CSISnapshotClassMap.
  • Now, the creation of a default VSC from Stork is optional.
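
For the VM backup improvement above (first item), the following is a hedged sketch of a VM-scoped backup object, assuming backupObjectType is set directly in the ApplicationBackup spec; the backup location, namespace, and object names are hypothetical, not authoritative:

    apiVersion: stork.libopenstorage.org/v1alpha1
    kind: ApplicationBackup
    metadata:
      name: kubevirt-vm-backup            # hypothetical name
      namespace: vm-namespace             # hypothetical namespace containing the VMs
    spec:
      backupLocation: my-backup-location  # hypothetical BackupLocation object
      namespaces:
      - vm-namespace
      backupObjectType: VirtualMachine    # back up KubeVirt VMs and their associated resources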

Fixes

  • Issue: Canceling an ongoing backup initiated by PX-Backup halts the post-execution rule.
    User Impact: This interruption causes the I/O processes on the application to stop or the post-execution rule execution to cease.
    Resolution: To address this, Stork executes and removes the post-execution rule CR as part of the cleanup procedure for the application backup CR.

  • Issue: Generic KDMP backup/restore pods become unresponsive in environments where Istio is enabled.
    User Impact: Generic KDMP backups and restores fail in Istio-enabled environments.
    Resolution: Relaxed the Istio webhook checks for the Stork-created KDMP generic backup/restore pods. Additionally, the underlying issue causing job pod freezes has been resolved in Kubernetes version 1.28 and Istio version 1.19.

23.11.0

January 22, 2024

New Features

  • You can now create and delete schedule policies and migration schedules using the new storkctl CLI feature. This enables you to seamlessly create and delete SchedulePolicy and MigrationSchedule resources, enhancing the DR setup process. In addition to the existing support for clusterPairs, you can now efficiently manage all necessary resources through storkctl. This update ensures a faster and simpler setup process, with built-in validations. By eliminating the need for manual YAML file edits, the feature significantly reduces the likelihood of errors, providing a more robust and user-friendly experience for DR resource management in Kubernetes clusters.

  • The new Storage Class parameter preferRemoteNode enhances scheduling flexibility for SharedV4 Service Volumes. By setting this parameter to false, you can now disable anti-hyperconvergence during scheduling. This provides increased flexibility to tailor Stork's scheduling behavior according to your specific application needs.
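
    A minimal sketch of the parameter in a Portworx StorageClass for a sharedv4 service volume; the class name and the other parameters are illustrative:

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: px-sharedv4-svc        # hypothetical name
      provisioner: pxd.portworx.com
      parameters:
        repl: "2"
        sharedv4: "true"
        sharedv4_svc_type: "ClusterIP"
        preferRemoteNode: "false"    # disable anti-hyperconvergence during scheduling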

Improvements

  • Updated golang and google-cloud-sdk versions to resolve security vulnerabilities.

Fixes

  • Issue: Exclusion of Kubernetes resources such as Deployments, StatefulSets, and so on was not successful during migration.
    User Impact: Using labels as exclude selectors proved ineffective in scenarios where the resource was managed by an operator that reset user-defined labels.
    Resolution: The new excludeResourceTypes feature now allows users to exclude certain types of resources from migration, providing a more effective solution than using labels (a sketch follows this list).

  • Issue: The applicationrestore resource created using storkctl consistently restored to a namespace with the same name as the source, preventing users from restoring to a different namespace.
    User Impact: Users faced limitations as they were unable to restore applications to a namespace other than the one with the same name as the source.
    Resolution: storkctl has been updated to address this issue by introducing support for accepting namespace mapping as a parameter, allowing users to restore to a different namespace as needed.

  • Issue: The storkctl create clusterpair command was not functioning properly with HTTPS PX endpoints.
    User Impact: Migrations between clusters with SSL-enabled PX endpoints were not successful.
    Resolution: The issue has been addressed, and now both HTTPS and HTTP endpoints are accepted as source (src-ep) and destination (dest-ep) when using storkctl create clusterpair.

  • Issue: The PostgreSQL operator generates an error related to the pre-existence of service account, role, and role bindings following a migration.
    User Impact: Users are unable to scale up a PostgreSQL application installed via OpenShift Operator Hub after completing the migration.
    Resolution: Excluded migration of service account, role, and role bindings if they have owner reference set to allow PostgreSQL pods to come up successfully.

  • Issue: The ResourceTransformation Custom Resource (RT CR) enters a failed state when a transform rule includes either int or bool as a data type.
    User Impact: Migration involving resource transformation will not succeed.
    Resolution: Resolved the issue by addressing the parsing problem associated with int and bool types.

  • Issue: Continuous crashes occur in Stork pods when the cluster contains an RT CR with a rule type set as slice and the operation is add.
    User Impact: Stork service experiences ongoing disruptions.
    Resolution: Implemented a solution by using type assertion to prevent the panic. Additionally, the problematic SetNestedField method is replaced with SetNestedStringSlice to avoid panics in such scenarios. You can also temporarily resolve the problem by removing the RT CR from the application cluster.

  • Issue: Stork crashes when attempting to clone an application with CSI volumes using Portworx.
    User Impact: Users are unable to clone applications if PVCs in the namespaces utilize Portworx CSI volumes.
    Resolution: Now, a patch is included to manage CSI volumes with Portworx, which ensures the stability of application cloning functionality.

  • Issue: When setting up a migration schedule in the admin namespace with pre/post-execution rules, these rules must be established in both the admin namespace and every namespace undergoing migration.
    User Impact: The user experience is less intuitive as it requires creating identical rules across multiple namespaces.
    Resolution: The process is now simplified as rules only require addition within the migration schedule's namespace.

  • Issue: Stork was not honoring locator volume labels correctly when scheduling pods.
    User Impact: In cases where preferRemoteNodeOnly was initially set to true, pods sometimes failed to schedule. This issue was particularly noticeable when the Portworx volume setting preferRemoteNodeOnly was later changed to false, and there were no remote nodes available for scheduling.
    Resolution: Now, even in scenarios where remote nodes are not available for scheduling, pods can be successfully scheduled on a node that holds a replica of the volume.
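
For the excludeResourceTypes fix above, the following is a hedged MigrationSchedule sketch; the cluster pair, schedule policy, and namespace names are hypothetical, and the field is placed in the migration template spec on the assumption that it sits alongside the other migration options:

    apiVersion: stork.libopenstorage.org/v1alpha1
    kind: MigrationSchedule
    metadata:
      name: app-migration-schedule      # hypothetical name
      namespace: app-namespace          # hypothetical namespace
    spec:
      schedulePolicyName: daily-policy  # hypothetical SchedulePolicy
      template:
        spec:
          clusterPair: remote-cluster   # hypothetical ClusterPair
          namespaces:
          - app-namespace
          startApplications: false
          excludeResourceTypes:         # resource kinds to skip during migration
          - Deployment
          - StatefulSet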

Known issues (Errata)

  • Issue: In Portworx version 3.0.4, several migration tests fail in an auth-enabled environment. This issue occurs specifically when running these tests in environments where authentication is enabled.
    User Impact: You may experience failed migrations, which will impact data transfer and management processes.
    Resolution: The issue has been resolved in Portworx version 3.1.0. Users experiencing this problem are advised to upgrade to version 3.1.0 to ensure smooth migration operations and avoid permission-related errors during data migration processes.

  • Issue: When using the storkctl create clusterpair command, the HTTPS endpoints for Portworx were not functioning properly.
    User Impact: This issue affects when you attempt migrations between clusters where px endpoints were secured with SSL. As a result, migrations could not be carried out successfully in environments using secure HTTPS connections.
    Resolution: In the upcoming Portworx 3.1.0 release, the storkctl create clusterpair command will be updated to accept both HTTP and HTTPS endpoints, allowing the specification of either src-ep or dest-ep with the appropriate scheme. This update ensures successful cluster pairing and migration in environments with SSL-secured px endpoints.

23.9.1

December 15, 2023

Fixes

  • Issue: The generic backup of some PVCs in kdmp was failing due to the inclusion of certain read-only directories and files.
    User Impact: Users had difficulty restoring the snapshot because restoring these read-only directories and files resulted in permission denied errors.
    Resolution: Introduced the --ignore-file option in kdmp backup, enabling you to specify a list of files and directories to be excluded during snapshot creation. This ensures that during restoration, these excluded files and directories will not be restored.

    Format for adding the ignore file list:

    KDMP_EXCLUDE_FILE_LIST: |
    <storageClassName1>=<dir-list>,<file-list1>,....
    <storageClassName2>=<dir-list>,<file-list1>,....

    Sample for adding the ignore file list:

    KDMP_EXCLUDE_FILE_LIST: |
    px-db=dir1,file1,dir2
    mysql=dir1,file1,dir2
  • Issue: The backup process does not terminate when an invalid post-execution rule is applied, leading to occasional failures in updating the failure status to the application backup CR.
    User Impact: Backups with invalid post-execution rules were not failing as expected.
    Resolution: Implemented a thorough check to ensure that backups with invalid post-execution rules are appropriately marked as failed, accompanied by a clear error message.

23.9.0

December 07, 2023

New Features

Enhanced support for Kubevirt VMs in Portworx Backup

This feature facilitates the backup and restoration of Kubevirt VMs through Portworx Backup. When Kubevirt VMs are included in the backup object, the restoration process transforms the VMs, incorporating DataVolumeTemplate and adjusting masquerade interface configurations, to ensure the restore operation completes successfully.

Fixes

  • Issue: Occasionally, the restoration process encounters an error stating resourceBackup CR already exists when the reconciler attempts to re-enter.
    User Impact: The restore operation is unsuccessful due to this error.
    Resolution: The fix handles the already-exists error by ignoring it during the creation of the ResourceBackup CR. https://github.com/libopenstorage/stork/pull/1482

  • Issue: The attempt to add Tencent Cloud object storage fails during the objectlock support check due to a discrepancy in error reporting when object lock is not supported.
    User Impact: Users are unable to utilize Tencent Cloud object storage as a backup location for backup and restore operations.
    Resolution: A solution has been implemented by incorporating appropriate error handling checks during the verification of objectlock support for buckets in Tencent Cloud object storage. https://github.com/libopenstorage/stork/pull/1478

  • Issue: A warning event is recorded in the applicationbackup CR when the S3 bucket already exists.
    User Impact: The warning event is causing confusion for users.
    Resolution: To address this, the system now refrains from generating a warning event if the S3 object store indicates the ErrCodeBucketAlreadyExists code. https://github.com/libopenstorage/stork/pull/1481

  • Issue: Backing up the kube-system namespace is not a supported feature. However, in the case of all-namespace backups or backups based on namespace labels, the kube-system namespace is inadvertently included.
    User Impact: This inclusion of the kube-system namespace in backups causes complications during the restore process.
    Resolution: This issue has been resolved by excluding the kube-system namespace from all-namespace backups and backups based on namespace labels. https://github.com/libopenstorage/stork/pull/1506

  • Issue: The restore process based on CSI encounters failures in setups with the csi-snapshot-webhook admin webhook. This failure is attributed to a distinct error related to existing resources, specifically when creating the volumesnapshotclass resource.
    User Impact: Users are affected by the inability to perform CSI-based restores on setups featuring the csi-snapshot-webhook admin webhook.
    Resolution: The issue has been addressed by incorporating a pre-check through a get call before the create call. Now, the create call occurs only if the get call fails with a NotFound error, preventing conflicts related to existing resources. https://github.com/libopenstorage/stork/pull/1567

23.8.0

October 12, 2023

New Features

  • Stork now supports both asynchronous and synchronous DR migration for applications managed by operators. When startApplications is set to false for migrations, Stork ensures that application pods remain inactive in the destination cluster after migration. Additionally, Stork provides the flexibility to scale down applications by modifying Custom Resource (CR) specifications, using the suspend options feature. For applications controlled by clusterwide operators that do not support scaling down via CR spec modifications, Stork offers a "stash strategy" to prevent application pods from becoming active prematurely during migration, ensuring a seamless transition to the destination cluster. https://github.com/libopenstorage/stork/pull/1451

  • Stork now supports import workflows using the DataExport CRs. This means you can now seamlessly transfer data from one PVC to another within the same cluster using rsync.

Improvements

  • Stork now enables you to optimize resource utilization and ensure a more efficient and targeted restoration of applications. You can now specify resourceTypes during the application backup process, allowing for a more granular selection of resources to be included in the backup. Moreover, when initiating an application restore, you can also choose specific resources to restore, providing greater flexibility in the recovery process.

Fixes

  • Issue: ReplicaSets that had been migrated and were not under the management of deployments were unable to be activated or deactivated using storkctl.
    User Impact: Users had to manually scale the ReplicaSet up or down using kubectl.
    Resolution: As part of the activation or deactivation process, storkctl is now capable of scaling up or down the migrated ReplicaSet. https://github.com/libopenstorage/stork/pull/1471

  • Issue: When using Stork 23.7, KubeVirt pods encounter a CrashLoopBackoff state, accompanied by the following error messages within the pod logs:

    /usr/bin/virt-launcher-monitor: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /etc/px_statfs.so)
    /usr/bin/virt-launcher-monitor: /lib64/libc.so.6: version `GLIBC_2.33' not found (required by /etc/px_statfs.so)


    User Impact: KubeVirt VMs become unusable.
    Resolution: This issue has been resolved in Stork version 23.8. For any existing virt-launcher pods experiencing the CrashLoopBackOff state due to this bug, follow these steps after upgrading to Stork 23.8:

    1. Stop the KubeVirt VM.
    2. Restart the KubeVirt VM. https://github.com/libopenstorage/stork/pull/1493
  • Issue: Following migration, an ECK application installed using an operator demands all associated Custom Resource Definitions (CRDs) to be present to initiate successfully.
    User Impact: Users experienced difficulties in scaling up the Elasticsearch application due to the absence of essential CRDs after the migration.
    Resolution: To address this issue, the migration process will include the migration of associated CRDs for a Custom Resource, thereby preventing any obstacles in scaling up the ECK application post-migration. https://github.com/libopenstorage/stork/pull/1494

  • Issue: Following migration, applications controlled by Custom Resources (CRs) are automatically initiated if the operator is already operational in a distinct namespace.
    User Impact: This results in applications in the destination cluster starting unexpectedly, contrary to the desired behavior where they should remain inactive with startApplication: false.
    Resolution: To rectify this, a stashing strategy has been implemented for the CR content, storing it in a configmap. This ensures that the CR specification is applied only after activating the migration, allowing the applications to start as intended. https://github.com/libopenstorage/stork/pull/1451

23.7.3

September 26, 2023

Fixes

  • Issue: Migrations using a cluster pair created with the --unidirectional option fail due to the absence of object store information in the destination cluster.
    User Impact: Users couldn't run migrations with a unidirectional cluster pair.
    Resolution: The object store information is now created in the destination cluster so that migrations succeed. https://github.com/libopenstorage/stork/pull/1501, https://github.com/libopenstorage/stork/pull/1507, https://github.com/libopenstorage/stork/pull/1510

23.7.2

September 01, 2023

Fixes

  • Issue: If the status is larger than the maximum etcd request size (1.5 MB), the update of the KDMP generic backup status in the VolumeBackup CR fails.
    User Impact: At times, the failure to update the VolumeBackup CR causes the KDMP backup to fail as well.
    Resolution: If the status is large, Stork now refrains from updating the actual status in the VolumeBackup CR and instead records it in the log of the job pod.

23.7.1

August 22, 2023

Fixes

  • Issue: The aws-iam-authenticator binary size was zero in the Stork container.
    User Impact: If a kubeconfig that uses aws-iam-authenticator was used, cluster pair creation failed on Amazon EKS.
    Resolution: Updated the aws-iam-authenticator version and the curl options to make sure the binary gets downloaded successfully.

  • Issue: Updates made to the parameter associated with large resources in the stork-controller-config configmap are not preserved when the Stork pod is restarted.
    User Impact: Whenever Stork pods restarted, users needed to update the parameter related to large resources in the stork-controller-config configmap.
    Resolution: The updated value of the large-resource-related parameter in the stork-controller-config configmap now persists across Stork pod restarts. #1473

  • Issue: The backup object had an empty volume size for Portworx volumes when the Stork version was newer than 23.6.0 but the Portworx version was older than 3.0.0.
    User Impact: Portworx volumes on versions below 3.0.0 displayed a volume size of zero.
    Resolution: Volume size is now retrieved correctly regardless of the Stork and Portworx versions.

23.7.0

August 02, 2023

New Features

  • Stork has introduced an enhanced workflow for creating a ClusterPair, enabling you to create a ClusterPair using a single storkctl command in both synchronous and asynchronous Disaster Recovery (DR) scenarios. See Create a synchronous DR ClusterPair and Create an asynchronous DR ClusterPair for more information.
    note

    You need to update the storkctl binary for this change to take effect.

Improvements

  • Updated golang, aws-iam-authenticator and google-cloud-sdk versions to resolve 20 Critical and 105 High vulnerabilities reported by the JFrog scanner.

  • Added the kubelogin utility to the Stork container.

Fixes

  • Issue: Due to occasional delays in bound pod deletion, the previous timeout setting of 30 seconds proved insufficient, resulting in backup failures.
    User Impact: At times, the backup process for a volume, which is bound with the "WaitForFirstConsumer" mode, encounters timeout errors and fails.
    Resolution: The timeout value has been extended to five minutes to ensure that the deletion of the bound pod, created for binding the volume with "WaitForFirstConsumer," will not encounter timeout errors.

  • Issue: During the cleanup process, when a KDMP/localsnapshot backup failed, the volumesnapshot/volumesnapshotcontent objects were not being removed.
    User Impact: Stale volumesnapshot/volumesnapshotcontent objects accumulated unnecessarily.
    Resolution: Volumesnapshot/volumesnapshotcontent cleanup is now performed, even in the case of failed KDMP/localsnapshot backups.

  • Issue: When the native CSI backup fails with a timeout error, the volumesnapshotcontent is not being deleted.
    User Impact: In the event of native CSI backup failures, the volumesnapshotcontent will accumulate.
    Resolution: Proper handling includes deleting the volumesnapshotcontent in case of failure as well.

23.6.0

June 27, 2023

New Features

  • Added support for NFS share backup locations for applicationbackup and applicationrestore in Stork. This support is currently available only with the PX-Backup product.

Fixes

  • Issue: Update calls were made to the volumesnapshotschedule every 10 seconds even if there was no new update.
    User Impact: With many volumesnapshotschedules running, this put unnecessary load on the API server.
    Resolution: Updates are now avoided if there is no change to the volume snapshot list as part of pruning.

  • Issue: The restore size was taken from the volumesnapshot size; with some CSI drivers, the PVC did not get bound if the volumesnapshot size was less than the source volume size.
    User Impact: CSI restores failed with some storage provisioners.
    Resolution: The restore volume size is now updated only when the volumesnapshot size is greater than the source volume size.

  • Issue: A MySQL app could show inconsistent data after being restored.
    User Impact: MySQL backups may fail to run the MySQL application after a restore operation due to data inconsistency.
    Resolution: Fixed the data inconsistency by holding the table lock for a required interval while backup is in progress.

23.5.0

May 31, 2023

New Features

  • You can now provide namespace labels in the MigrationSchedule spec. This enables the specification of namespaces to be migrated.

Improvements

  • Stork now supports the ignoreOwnerReferences parameter in the Migration and MigrationSchedule objects. This parameter enables Stork to skip the owner reference check and migrate all resources, and it removes the ownerReference while applying the resource. This allows migrating all the Kubernetes resources managed and owned by an application Operator’s CR.
    note

    You need to update the storkctl binary for this change to take effect.
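
    A hedged sketch of the ignoreOwnerReferences parameter on a Migration object; the cluster pair and namespace names are hypothetical, and the placement assumes it sits alongside the other migration options:

      apiVersion: stork.libopenstorage.org/v1alpha1
      kind: Migration
      metadata:
        name: operator-app-migration   # hypothetical name
        namespace: app-namespace       # hypothetical namespace
      spec:
        clusterPair: remote-cluster    # hypothetical ClusterPair
        namespaces:
        - app-namespace
        includeResources: true
        startApplications: false
        ignoreOwnerReferences: true    # skip the owner-reference check and strip ownerReference on apply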

Fixes

  • Issue: Restoring from a backup, which was taken from a previously restored PVC, was failing in the CSI system.
    User Impact: Unable to restore a backup that had been taken from a previously restored CSI PVC.
    Resolution: You can now successfully perform CSI restores using backups taken from already restored PVCs.

  • Issue: You may encounter webhook errors when attempting to modify StatefulSets in order to update Stork as the scheduler.
    User Impact: Webhook related errors may give you the impression that the pod scheduler feature does not function properly.
    Resolution: Removed the webhook for StatefulSets and Deployments, as Stork already contains a webhook for pods that manages setting Stork as the scheduler.

  • Issue: Incorrect migration status for service account deletion on source cluster.
    User Impact: The expected behavior to delete the service account on the destination cluster, based on migration status, does not occur.
    Resolution: The purged status is no longer displayed for resources for which merging is supported on the destination cluster.

  • Issue: storkctl create clusterpair did not honor the port provided from the CLI.
    User Impact: Users could not create bidirectional clusterpairs.
    Resolution: The port provided from the CLI is now passed in the endpoints when creating a clusterpair.

    note

    You need to update the storkctl binary for this change to take effect.

  • Issue: Stork deleted pods running on Degraded or Portworx StorageDown nodes.
    User Impact: Applications were disrupted, even though drivers like Portworx support running applications on StorageDown nodes.
    Resolution: Stork no longer deletes pods running on StorageDown nodes.

23.4.0

May 05, 2023

New Features

  • You can now apply an exclude label in the MigrationSchedule spec to exclude specific resources from migration.

Improvements

  • Stork service is updated to not accept old TLS versions 1.0 and 1.1.

  • Stork now creates the default-migration-policy schedule policy, which is set to an interval of 30 minutes instead of 1 minute.

  • Stork now skips migrating the OCP specific (system:) ClusterRole and ClusterRoleBinding resources on OpenShift.

  • Stork now uses a default QPS of 1000 and a Burst of 2000 for its Kubernetes client.

  • Updated moby package to fix vulnerability CVE-2023-28840.

Fixes

  • Issue: Stork monitoring controller causes a high number of ListPods API calls to be executed repeatedly, resulting in a considerable consumption of memory.
    User Impact: When there is a considerable quantity of pods within the cluster, the Stork monitoring system triggers additional ListPods APIs, which leads to a substantial utilization of memory.
    Resolution: The health monitoring process now uses a cache for retrieving the pod list, reducing the overall memory usage of Stork.

  • Issue: Certain plurals of CRD do not follow pluralizing rules, which causes the APIs to fail to collect plurals during migration or backup.
    User Impact: These CRDs neither get migrated nor backed up properly, which affects disaster recovery, backup, and restore of applications that depend on the CRs.
    Resolution: Use API calls with correct CRD plurals, fetched from the cluster.

23.3.1

April 26, 2023

Improvements

  • Issue: Backing up a significant number of Kubernetes resources has resulted in failures due to limits on gRPC requests, Kubernetes custom resource sizes, or etcd payload sizes.
    User Impact: If the number of Kubernetes resources in a backup is large, then the backup process may fail to complete due to errors related to size limits.
    Resolution: To prevent errors related to gRPC size limits, etcd payload limits, and custom resource definition size, the resource information was removed from both the ApplicationBackup and ApplicationRestore CRs.

23.3.0

April 06, 2023

Improvements

  • Stork now supports cluster pairing for Oracle Kubernetes Engine (OKE) clusters.
  • Stork now supports bidirectional cluster pairing for Asynchronous DR migration.
  • Stork will update StorageClass on PV objects after the PVC migration.
  • Fixed CVE-2020-26160 vulnerability by updating the JWT package.

Fixes

  • Issue: Failback to primary cluster failed when an app used Portworx CSI volumes, as the volumeHandle pointed to the old volume ID.
    User Impact: App did not come up on the primary cluster after failback.
    Resolution: Recreate the PVs and specify the correct volume name in the volumeHandle field of the spec. As a result, the app will use properly bound PVCs and come up without any issue.

  • Issue: Could not find resources for the Watson Knowledge Catalog.
    User Impact: Migration failed for the Watson Knowledge Catalog.
    Resolution: The proper plurals for the CRDs are now used, enabling a successful migration process for the Watson Knowledge Catalog.

  • Issue: Service and service account updates did not reflect on the destination cluster.
    User Impact: Migration failed to keep updated resources on the destination cluster.
    Resolution: During migration, Stork now syncs service updates, merges secrets associated with the service account, and updates the AutomountServiceAccountToken parameter for the service account on the destination cluster.

23.2.1

March 29, 2023

Fixes

  • Issue: px-backup needed a way to know the custom admin namespace configured in the Stork deployment.
    User Impact: px-backup users were unable to use the custom admin namespace configured in the Stork deployment.
    Resolution: Added a configmap stork-controller-config in the kube-system namespace with the details of the admin namespace.

  • Issue: Rule cmd executor pods were always getting created and started in the kube-system namespace.
    User Impact: Users had concerns about running the Stork rule pods in the kube-system namespace.
    Resolution: The rule cmd executor pods now run in the namespace where Stork is deployed.

23.2.0

March 10, 2023

note
  • Starting with 23.2.0, the naming scheme for Stork releases has changed. Release numbers are now based on the year and month of the release.
  • Customers upgrading to stork 23.2.0 will need to update storkctl on their cluster. This is required to correctly set up migration with Auto Suspend.

Improvements

  • Stork now migrates all the CRDs under the same group for which a CR exists, even if they are of a different kind. #1269
  • Stork will update the owner reference for PVC objects on the destination cluster. #1269
  • Added support for gke-gcloud-auth-plugin required for authenticating with GKE.

Fixes

  • Issue: Users were unable to migrate Confluent Kafka resources during failback.
    User Impact: Confluent Kafka application failback was unable to bring up the application.
    Resolution: Stork will now remove any finalizers on the CRs when deleting the resource during migration so that a new version of the resource can be recreated.

  • Issue: Migration was suspended after failback if autosuspend was enabled.
    User Impact: After failback, the existing migration schedule was not being resumed, which caused the secondary cluster not to sync.
    Resolution: With the fix, the primary cluster's migration schedule will correctly detect if migration can be resumed to secondary.

  • Issue: The Confluent Kafka operator was not able to recognize service sub resources during Async-DR migration.
    User Impact: Application pods for Confluent Kafka were not able to start.
    Resolution: Stork will not migrate Service resources if the owner reference field is set.

  • Issue: Stork was throwing the error Error migrating volumes: Operation cannot be fulfilled on migrations.stork.libopenstorage.org: the object has been modified; please apply your changes to the latest version and try again.
    User Impact: This error caused unnecessary confusion during migration.
    Resolution: Stork no longer raises this event as it retries the failed operation.

  • Issue: Users were not allowed to take backups based on namespace labels.
    User Impact: Users had to manually select the static namespaces list for backup schedules. Dynamic selection of the namespace list based on the namespace label was not possible.
    Resolution: With namespace label support, users can specify a namespace label so that the list of namespaces with that label is selected dynamically for backups.

  • Issue: The Rancher project association for the Kubernetes resources in the Rancher environment was not backed up and restored.
    User Impact: Since the project configurations are not restored, some applications failed to come up.
    Resolution: Project settings are now backed up and applied during the restore. Users can also change to a different project with project mapping during restore.

  • Issue: Users were not able to specify the option to use the default storage class configured on the restore cluster in storage class mapping.
    User Impact: Users were not able to use the default storage class for restore.
    Resolution: Now users can specify use-default-storage-class as the destination storage class in the storage class mapping if they want to use the default configured storage class from the restore cluster.
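
    A hedged sketch of such a mapping on an ApplicationRestore object, assuming a storageClassMapping field that maps source storage classes to destination ones; except for use-default-storage-class, the object and field names are illustrative:

      apiVersion: stork.libopenstorage.org/v1alpha1
      kind: ApplicationRestore
      metadata:
        name: app-restore                 # hypothetical name
        namespace: restore-namespace      # hypothetical namespace
      spec:
        backupName: app-backup            # hypothetical ApplicationBackup to restore from
        backupLocation: my-backup-location
        storageClassMapping:
          px-db-sc: use-default-storage-class   # fall back to the restore cluster's default StorageClass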

note

If you are looking for release notes of older releases, refer to the GitHub page.