Version: 3.1

Failback an application

Failback is the process of moving the application and its data back to the source cluster once the source cluster is restored and operational again.

The following conventions are used in the examples on this page. Update them to the appropriate values for your environment:

  • Source Cluster is the Kubernetes cluster that went down and where your applications were originally running; it is now restored and operational.
  • Destination Cluster is the Kubernetes cluster to which the applications were failed over and where they are currently running.
  • The Zookeeper application is being failed back to the source cluster.

Create a reverse ClusterPair

note

Skip this section if you have created a bidirectional ClusterPair, and move to the next section.

You need to create a reverse ClusterPair if you initially paired your clusters unidirectionally (from source to destination) and now need to establish a pairing from the destination cluster back to the source cluster. The reverse ClusterPair enables communication in the reverse direction (from destination to source), allowing failback and synchronization of data back to the source cluster.

Run the following command from your destination cluster to create a reverse ClusterPair:

storkctl create clusterpair reverse-migration-cluster-pair \
--namespace <migrationnamespace> \
--src-kube-file <destination-kubeconfig-file> \
--dest-kube-file <source-kubeconfig-file> \
--use-existing-objectstorelocation \
--unidirectional
info

Ensure that you provide the destination cluster's kubeconfig file with src-kube-file and the source cluster's kubeconfig file with dest-kube-file, as shown in the above command.
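Before moving on, you can optionally confirm that the reverse pairing is healthy. Listing the ClusterPairs on the destination cluster should show the new pair with a Ready status (the pair and namespace names below are the ones used in this example):

```shell
# List ClusterPairs in the migration namespace on the destination cluster.
# reverse-migration-cluster-pair should report a Ready status before you proceed.
storkctl get clusterpair -n <migrationnamespace>
```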

Reverse sync your clusters

If the destination cluster has been running applications for some time, it is possible that the state of your application on the destination cluster differs from your source cluster. This is due to the creation of new resources or changes in data within stateful applications on the destination cluster.

It is recommended to perform one migration from the destination cluster to your source cluster before failing back your applications, so that your original source cluster has the most up-to-date application resources and data.

As both of your clusters are accessible, follow the instructions to configure a reverse migration schedule:

  1. Create a schedule policy on your destination cluster using the instructions in the Create a schedule policy section.

  2. Copy and paste the following spec into a file called reverse-migrationschedule.yaml on your destination cluster:

    apiVersion: stork.libopenstorage.org/v1alpha1
    kind: MigrationSchedule
    metadata:
      name: reversemigrationschedule
      namespace: <migrationnamespace>
    spec:
      template:
        spec:
          clusterPair: reverse-migration-cluster-pair
          includeResources: true
          startApplications: false
          includeVolumes: false
          namespaces:
          - <app-namespace1>
          - <app-namespace2>
      schedulePolicyName: <your-schedule-policy>
      suspend: false
  3. Apply your reverse-migrationschedule.yaml on your destination cluster:

    kubectl apply -f reverse-migrationschedule.yaml
  4. Verify that at least one migration cycle has completed successfully:

    storkctl get migration -n <migrationnamespace>
    NAME                                                  CLUSTERPAIR                      STAGE   STATUS       VOLUMES   RESOURCES   CREATED               ELAPSED                               TOTAL BYTES TRANSFERRED
    reversemigrationschedule-interval-2023-02-01-201747   reverse-migration-cluster-pair   Final   Successful   0/0       4/4         01 Feb 23 20:17 UTC   Volumes () Resources (21.71709746s)   0
  5. Suspend the reverse migration schedule:

    storkctl suspend migrationschedule reversemigrationschedule -n <migrationnamespace>
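To confirm that the schedule is paused, you can list the migration schedules again; a suspended schedule shows true in the SUSPEND column (the names below are the ones used in this example):

```shell
# Check that the reverse migration schedule is now suspended
# (the SUSPEND column should read "true").
storkctl get migrationschedule -n <migrationnamespace>
```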

Stop the application on the destination cluster

Stop the applications from running by changing the replica count of your deployments and statefulsets to 0:

storkctl deactivate migration -n <migrationnamespace>
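If you want to confirm that the deactivation took effect, you can check the workload replica counts on the destination cluster; after deactivation they should read 0. The zookeeper namespace below matches the example application on this page:

```shell
# Deployments and statefulsets in the application namespace should
# show 0 ready replicas once the migration is deactivated.
kubectl get deployments,statefulsets -n zookeeper
```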

Restart the application on the source cluster

  1. After you have stopped the applications on the destination cluster, start the applications on the source cluster by restoring their replica counts:

    storkctl activate migration -n <migrationnamespace>
  2. Verify that your application pods and associated resources have been migrated:

    kubectl get all -n zookeeper
    NAME                      READY   STATUS    RESTARTS   AGE
    pod/zk-544ffcc474-6gx64   1/1     Running   0          18h

    NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
    service/zk-service   ClusterIP   10.233.22.60   <none>        3306/TCP   18h

    NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/zk   1/1     1            1           18h

    NAME                            DESIRED   CURRENT   READY   AGE
    replicaset.apps/zk-544ffcc474   1         1         1       18h
  3. Run the following command to resume the migration schedule:

    storkctl resume migrationschedule migrationschedule -n <migrationnamespace>
    MigrationSchedule migrationschedule resumed successfully
  4. Verify that the migration schedule is active:

    storkctl get migrationschedule -n <migrationnamespace>
    NAME                POLICYNAME               CLUSTERPAIR              SUSPEND   LAST-SUCCESS-TIME     LAST-SUCCESS-DURATION
    migrationschedule   <your-schedule-policy>   migration-cluster-pair   false     01 Dec 23 22:25 UTC   10s

    The false value in the SUSPEND column shows that the migration schedule is active again on the source cluster. Your application has successfully failed back to your original source cluster.
