
Failover an application

In the event of a disaster, when one of your Kubernetes clusters becomes inaccessible, you can fail over the applications running on it to an operational Kubernetes cluster.

Follow the instructions on this page to fail over your applications to an active Kubernetes cluster. These instructions apply whether or not your source Kubernetes cluster is accessible.

The examples on this page use the following conventions. Replace them with the appropriate values for your environment.

  • Source Cluster is the Kubernetes cluster that is down and where your applications were originally running.
  • Destination Cluster is the Kubernetes cluster to which the applications will be failed over.
  • The Zookeeper application is being failed over to the destination cluster.
note

If the source cluster is inaccessible, skip the following two sections and proceed directly to Start the application on the destination cluster.

Suspend the migrations on the source cluster (if accessible)

If the source cluster is accessible and autoSuspend is not set on your migration schedule, suspend the schedule to prevent further migrations from updating the application's StatefulSets on the destination cluster:

  1. Run the following command to suspend the migration schedule:

    storkctl suspend migrationschedule migrationschedule -n <migrationnamespace>
  2. Verify that the schedule has been suspended:

    storkctl get migrationschedule -n <migrationnamespace>
    NAME                POLICYNAME               CLUSTERPAIR               SUSPEND   LAST-SUCCESS-TIME     LAST-SUCCESS-DURATION
    migrationschedule   <your-schedule-policy>   <your-clusterpair-name>   true      01 Dec 22 23:31 UTC   10s
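
If you prefer to manage suspension declaratively through the Kubernetes API, the same effect can be achieved by setting the suspend field on the MigrationSchedule object itself. The following is a minimal sketch, assuming a schedule named migrationschedule and a Stork version that exposes spec.suspend on the MigrationSchedule resource:

kubectl patch migrationschedule migrationschedule -n <migrationnamespace> \
  --type merge -p '{"spec":{"suspend":true}}'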

Stop the application on the source cluster (if accessible or applicable)

If your source Kubernetes cluster is still accessible, it is recommended that you stop the applications before failing them over to the destination cluster.

Run the following command to scale down the replica count of your application. This action scales down the application instances but retains the resources in Kubernetes, preparing them for migration:

kubectl scale --replicas 0 statefulset/zk -n zookeeper

The above command uses the zookeeper namespace, so it scales down the replica count for the Zookeeper application. Replace the namespace with the one your application runs in.
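
If you want a record of the original replica count before scaling down (for example, to cross-check it later against the stork.openstorage.org/migrationReplicas annotation on the destination cluster), you can read it with a standard kubectl query. A small sketch, again assuming the zk StatefulSet in the zookeeper namespace:

kubectl get statefulset zk -n zookeeper -o jsonpath='{.spec.replicas}'

You can then confirm that the application pods have terminated before proceeding:

kubectl get pods -n zookeeper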

Start the application on the destination cluster

You can allow Stork to activate migration either on all namespaces at once or on one namespace at a time. For performance reasons, if your migration schedule includes a large number of namespaces, it is recommended to activate them one namespace at a time.

  1. Each application spec carries the annotation stork.openstorage.org/migrationReplicas, which records the replica count on the source cluster. Run the following command to scale your application back up to that replica count:

    storkctl activate migration -n zookeeper

    Alternatively, run the following command to activate migration for all namespaces:

    storkctl activate migration --all-namespaces

    Stork looks for that annotation and scales the application to the correct replica count automatically. Once the replica count is updated, the application starts running and the failover is complete.

  2. Verify that your volumes and Kubernetes resources have been migrated:

    kubectl get all -n zookeeper
    NAME                     READY   STATUS    RESTARTS   AGE
    pod/zk-544ffcc474-6gx64 1/1 Running 0 18h

    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    service/zk-service ClusterIP 10.233.22.60 <none> 3306/TCP 18h

    NAME READY UP-TO-DATE AVAILABLE AGE
    deployment.apps/zk 1/1 1 1 18h

    NAME DESIRED CURRENT READY AGE
    replicaset.apps/zk-544ffcc474 1 1 1 18h

    The above output lists the Zookeeper resources and pods that have been migrated to the destination cluster.
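
As an optional extra check, you can compare the activated replica count against the stork.openstorage.org/migrationReplicas annotation and confirm that the migrated volumes are bound. A hedged sketch, assuming your application runs as the zk StatefulSet used earlier:

kubectl get statefulset zk -n zookeeper -o yaml | grep migrationReplicas
kubectl get statefulset zk -n zookeeper -o jsonpath='{.spec.replicas}'
kubectl get pvc -n zookeeper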
