Schedule migrations with asynchronous DR in OCP IBM Cloud
Once your source and destination clusters are paired, you need to create a schedule to migrate one or more namespaces periodically.
Create a schedule policy on your source cluster
You have two options for creating a schedule policy:
- Run the `storkctl create schedulepolicy` command-line tool along with the required flags. See the Create schedule policy with storkctl topic for more information and examples, or
- Perform the following steps on your source cluster:
To schedule the migration, you need to define a schedule policy. You can choose between two types:

- `SchedulePolicy`: This policy is cluster-scoped and applies to the entire cluster.
- `NamespacedSchedulePolicy`: This policy is namespace-scoped and applies only to the namespace that is specified in the policy spec.
- It is recommended to use an interval of at least 15 minutes.
- If a migration is not completed within the specified time interval, Portworx will not initiate the next migration until the previous one is finished.
Perform the following steps on your source cluster to create a schedule policy:
- Create a spec file using one of the following methods:

  - For `SchedulePolicy`, create the following spec file:

    ```yaml
    apiVersion: stork.libopenstorage.org/v1alpha1
    kind: SchedulePolicy
    metadata:
      name: <your-schedule-policy>
    policy:
      interval:
        intervalMinutes: 30
    ```

    For a list of parameters that you can use to create a schedule policy, see the Schedule Policy page.
  - For `NamespacedSchedulePolicy`, create the following spec file:

    ```yaml
    apiVersion: stork.libopenstorage.org/v1alpha1
    kind: NamespacedSchedulePolicy
    metadata:
      name: <your-schedule-policy>
      namespace: <migrationnamespace>
    policy:
      interval:
        intervalMinutes: 30
    ```

    For a list of parameters that you can use to create a namespaced policy, use the fields described on the Schedule Policy page, replacing `SchedulePolicy` with `NamespacedSchedulePolicy`.
  - Alternatively, you can use the schedule policies that are installed by default. Run the `storkctl get schedulepolicy` command to list these policies, then specify a policy name in the next section when creating a migration schedule. If you use a default policy, you can skip the next step, as the policy is already applied to your cluster.
- Apply your policy on the source cluster:

  ```shell
  oc apply -f <your-schedule-policy>.yaml
  ```
- Verify that the policy has been created:

  ```shell
  storkctl get schedulepolicy <your-schedule-policy>
  ```

  ```output
  NAME                     INTERVAL-MINUTES   DAILY   WEEKLY   MONTHLY
  <your-schedule-policy>   30                 N/A     N/A      N/A
  ```

  The output of this command provides information about the policy, including its name and interval duration.
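The DAILY, WEEKLY, and MONTHLY columns reflect the other trigger types a policy can define instead of an interval. As an illustration, here is a minimal sketch of a daily policy; the name is a placeholder, and the authoritative field reference is the Schedule Policy page:

```yaml
apiVersion: stork.libopenstorage.org/v1alpha1
kind: SchedulePolicy
metadata:
  name: daily-policy          # illustrative name
policy:
  daily:
    # Trigger one migration per day at this time.
    time: "10:14PM"
```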
Create a migration schedule on your source cluster
Once a policy has been created, you can use it to schedule migrations. This means specifying when and how migrations should occur between clusters.
For operators deployed from the OpenShift OperatorHub, create a migration schedule with the `--exclude-resource-types` flag to exclude operator-related resources. Example:

```shell
storkctl create migrationschedule -c <cluster-pair> --namespaces <namespace> <migration-schedule> --exclude-resource-types ClusterServiceVersion,operatorconditions,OperatorGroup,InstallPlan,Subscription --exclude-selectors olm.managed=true
```
You have two options for creating a migration schedule:

- Run the `storkctl create migrationschedule` command-line tool along with the required flags. See the Create migration schedule with storkctl topic for more information and examples, or
- Perform the following steps on your source cluster:
- Copy and paste the following spec into a file called `migrationschedule.yaml` to define the details of the migration schedule:

  - Modify the following spec to use a different migration schedule name and/or namespace.
  - Ensure that the `clusterPair` name is correct.
  ```yaml
  apiVersion: stork.libopenstorage.org/v1alpha1
  kind: MigrationSchedule
  metadata:
    name: migrationschedule
    namespace: <migrationnamespace>
    annotations:
      # Add the below annotations when PX-Security is enabled on both the clusters
      #openstorage.io/auth-secret-namespace: <the namespace where the kubernetes secret holding the auth token resides>
      #openstorage.io/auth-secret-name: <the name of the kubernetes secret which holds the auth token>
  spec:
    template:
      spec:
        clusterPair: migration-cluster-pair
        includeResources: true
        startApplications: false
        includeVolumes: true
        namespaces:
        - <app-namespace1>
        - <app-namespace2>
        schedulePolicyName: <your-schedule-policy>
        suspend: false
        autoSuspend: true
  ```

  Note:
  - The option `startApplications` must be set to `false` in the spec. Otherwise, the first migration will start the pods on the remote cluster, and all subsequent migrations will fail because the volumes will already be in use.
  - If you are running Stork version 23.2 or later, you can set `autoSuspend` to `true`, as shown in the above spec. In case of a disaster, this will automatically suspend the DR migration schedules on your source cluster, and you will be able to migrate your application to an active OCP cluster. If you are using an older version of Stork, refer to the Failover an application page to achieve failover for your application.
  - The auth annotations `openstorage.io/auth-secret-namespace` and `openstorage.io/auth-secret-name` must be set when PX-Security is enabled on both the source and destination clusters, as explained in the ClusterPair section.
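  You can also pause the schedule manually at any time by toggling `spec.suspend` on the object. A minimal sketch, assuming the schedule name `migrationschedule` and namespace from the spec above:

  ```shell
  # Suspend the schedule; patch spec.suspend back to false to resume it.
  oc -n <migrationnamespace> patch migrationschedule migrationschedule --type merge -p '{"spec":{"suspend":true}}'
  ```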
- Apply your migration schedule:

  ```shell
  oc apply -f migrationschedule.yaml
  ```

  If the policy name is missing or invalid, events are logged against the migration schedule object. Successes and failures of the migrations created by the schedule also result in events being logged against the object. You can see these events by running `oc describe` on the object.
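  For example, to view those events (using the schedule name and namespace from the spec above):

  ```shell
  # Events, including migration successes and failures, appear at the end of the output.
  oc describe migrationschedule migrationschedule -n <migrationnamespace>
  ```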
Check your migration status on your source cluster
After applying the migration schedule, verify its status and events.
- Run the following command on your source cluster to check the status of your migration:

  ```shell
  storkctl get migration -n <migrationnamespace>
  ```

  ```output
  NAMESPACE                      NAME                                                                 CLUSTERPAIR                  STAGE   STATUS       VOLUMES   RESOURCES   CREATED               ELAPSED                          TOTAL BYTES TRANSFERRED
  bidirectional-clusterpair-ns   <your-app>-migration-schedule-interval-interval-2023-08-28-015917   bidirectional-cluster-pair   Final   Successful   1/1       4/4         27 Aug 23 19:59 MDT   Volumes (33s) Resources (1m2s)   4096
  ```
- The output above indicates a successful migration in the `STATUS` column. It also provides a list of migration objects along with timestamps associated with your migration.
- You can also run `oc describe migration -n <migrationnamespace>` to check details about specific migrations.
- Verify that your application and associated resources have been migrated by running the following command on your destination cluster:

  ```shell
  oc get all -n zookeeper
  ```

  ```output
  NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
  service/zk-service   ClusterIP   10.xxx.xx.68   <none>        xxxx/TCP   2m13s

  NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
  deployment.apps/zk   0/0     0            0           2m13s

  NAME                            DESIRED   CURRENT   READY   AGE
  replicaset.apps/zk-59d878d987   0         0         0       2m13s
  ```
This confirms that the respective namespace has been created and the applications (for example, Zookeeper) are installed. However, the application pods will not be running on the destination cluster because they are still running on the source cluster.
As part of every migration, Stork also migrates the Kubernetes resources associated with your applications. For a migration to succeed, these applications need to be in a scaled-down state on the destination side so that subsequent migrations from the same schedule can run to completion. To achieve this, Stork leverages `spec.replicas` from most of the standard Kubernetes controllers, such as Deployments and StatefulSets. However, for applications managed by an Operator, you need to create an `ApplicationRegistration` CR, which provides Stork with the information required to scale down the application. Refer to the ApplicationRegistrations page for more details.
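For illustration, here is a minimal sketch of an `ApplicationRegistration` CR; the group, version, kind, and path below are hypothetical placeholders for your Operator's CRD, and the authoritative field reference is the ApplicationRegistrations page:

```yaml
apiVersion: stork.libopenstorage.org/v1alpha1
kind: ApplicationRegistration
metadata:
  name: <your-app-registration>
resources:
  # Hypothetical Operator-managed resource; replace with your Operator's CRD details.
- group: example.com
  version: v1
  kind: ExampleApp
  # Tells Stork which field in the CR controls the replica count,
  # so it can scale the application down on the destination cluster.
  suspendOptions:
    path: spec.replicas
    type: int
  keepStatus: false
```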