
Failover an application

In the event of a disaster, when one of your Kubernetes clusters becomes inaccessible, you have the option to failover the applications running on it to an operational Kubernetes cluster.

Follow the instructions on this page to perform a failover of your applications to an active Kubernetes cluster. These instructions apply to both scenarios, whether your source Kubernetes cluster is accessible or not.

The following considerations are used in the examples on this page. Update them to the appropriate values for your environment:

  • Source Cluster is the Kubernetes cluster that is down and where your applications were originally running. The cluster domain for this source cluster is us-east-1a.
  • Destination Cluster is the Kubernetes cluster to which the applications will be failed over. The cluster domain for this destination cluster is us-east-1b.
  • The Zookeeper application is being failed over to the destination cluster.

Deactivate the source cluster domain

You should deactivate your source cluster domain regardless of the cluster's accessibility. Marking the source cluster as inactive notifies both Stork and Portworx, so the source cluster is excluded during quorum formation. Portworx will remain operational because the witness nodes establish the required quorum, if needed.

danger

If you are performing a partial failover while your source cluster is still active, skip this step: deactivating the cluster domain would stop the Portworx cluster and all workloads running on it.

  1. Run the following command on the destination cluster to deactivate the source cluster domain:

    storkctl deactivate clusterdomain us-east-1a
    Cluster Domain deactivate operation started successfully for us-east-1a
  2. Verify if your source cluster domain has been deactivated:

    storkctl get clusterdomainsstatus
    NAME            LOCAL-DOMAIN   ACTIVE                           INACTIVE                         CREATED
    px-dr-cluster   us-east-1a     us-east-1b (SyncStatusUnknown)   us-east-1a (SyncStatusUnknown)   29 Nov 22 22:09 UTC

    The cluster domain of your source cluster is listed under INACTIVE, indicating that it has been deactivated. A kubectl alternative for this check follows.
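
If you prefer to inspect the underlying custom resource directly, you can query it with kubectl. This is a minimal sketch; it assumes Stork has registered the ClusterDomainsStatus CRD and that the object is named px-dr-cluster, as in the output above:

    # Inspect the ClusterDomainsStatus object that storkctl reads (assumed name: px-dr-cluster)
    kubectl get clusterdomainsstatus px-dr-cluster -o yaml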

note

Skip the following two sections and proceed to Start the application on the destination cluster if the source cluster is not accessible.

Suspend the migrations on the source cluster (if accessible)

If the source cluster is accessible and autoSuspend is not set on the migration schedule, suspend the schedule so that further migrations do not update the application's stateful sets on the destination cluster (a sketch of the equivalent patch follows these steps):

  1. Run the following command from your source cluster to suspend the migration schedule:

    storkctl suspend migrationschedule migrationschedule -n <migrationnamespace>
  2. Verify if the schedule has been suspended:

    storkctl get migrationschedule -n <migrationnamespace>
    NAME                POLICYNAME               CLUSTERPAIR               SUSPEND   LAST-SUCCESS-TIME     LAST-SUCCESS-DURATION
    migrationschedule   <your-schedule-policy>   <your-clusterpair-name>   true      01 Dec 22 23:31 UTC   10s
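
Under the hood, storkctl suspend toggles the suspend field in the MigrationSchedule spec. If storkctl is unavailable, a direct patch is a reasonable fallback. This is a minimal sketch, assuming your schedule is named migrationschedule as above:

    # Equivalent of `storkctl suspend`: set spec.suspend to true on the MigrationSchedule
    kubectl patch migrationschedule migrationschedule -n <migrationnamespace> \
      --type merge -p '{"spec":{"suspend":true}}'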

Stop the application on the source cluster (if accessible or applicable)

If your source cluster is accessible, you must stop the applications before failing them over to the destination cluster.

Run the following command to scale down the replica count of your application. This action scales down the application instances but retains the resources in Kubernetes, preparing them for migration:

kubectl scale --replicas 0 statefulset/zk -n zookeeper

The above command uses the zookeeper namespace, so it scales down the replica count of the Zookeeper application. Update the namespace and the StatefulSet name to match your application.
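
If the namespace contains more than one workload, you can scale everything down in one pass. This is a minimal sketch, assuming all workloads in the namespace are Deployments or StatefulSets:

# Scale down every Deployment and StatefulSet in the namespace before failover
for kind in deployment statefulset; do
  kubectl scale --replicas 0 "$kind" --all -n zookeeper
done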

Start the application on the destination cluster

You can have Stork activate the migration either for all namespaces at once or for one namespace at a time. If your migration schedule includes a large number of namespaces, activating one namespace at a time is recommended for performance reasons.

  1. Each application spec will have the annotation stork.openstorage.org/migrationReplicas indicating the replica count on the source cluster. Run the following command to update the replica count of your app to the same number as on your source cluster:

    storkctl activate migration -n zookeeper

    Alternatively, run the following command to activate the migration for all namespaces:

    storkctl activate migration --all-namespaces

    Stork looks for that annotation and scales the application to the correct replica count automatically. Once the replica count is updated, the application starts running and the failover is complete.

  2. Verify that your volumes and Kubernetes resources have been migrated:

    kubectl get all -n zookeeper
    NAME                      READY   STATUS    RESTARTS   AGE
    pod/zk-544ffcc474-6gx64   1/1     Running   0          18h

    NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
    service/zk-service   ClusterIP   10.233.22.60   <none>        3306/TCP   18h

    NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/zk   1/1     1            1           18h

    NAME                            DESIRED   CURRENT   READY   AGE
    replicaset.apps/zk-544ffcc474   1         1         1       18h

    The above output shows the resources and pods associated with Zookeeper that were migrated to the destination cluster. A sketch for confirming the restored replica count follows.
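
To confirm that Stork restored the replica count recorded on the source cluster, you can read the migrationReplicas annotation directly. This is a minimal sketch; it assumes the migrated workload is the zk StatefulSet used in the scale-down step (the sample output above happens to show a Deployment, so adjust the resource kind and name as needed):

    # Read the replica count Stork recorded during migration; dots in the
    # annotation key must be escaped inside the JSONPath expression
    kubectl get statefulset zk -n zookeeper \
      -o jsonpath='{.metadata.annotations.stork\.openstorage\.org/migrationReplicas}'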
