2. Pair Clusters
Understand cluster pairing
To fail over an application running on one Kubernetes cluster to another Kubernetes cluster, you need to migrate its resources between them. In Kubernetes, you define a trust object called a ClusterPair, which is required to communicate with the other Kubernetes cluster. This pairs the two schedulers (Kubernetes) so that all Kubernetes resources can be migrated between them. Throughout this section, the notions of source and destination cluster apply only at the Kubernetes level, not to storage, because you have a single Portworx storage fabric running on both clusters. Since Portworx is stretched across them, the volumes do not need to be migrated.
For reference:
- Source Cluster is the Kubernetes cluster where your applications are running.
- Destination Cluster is the Kubernetes cluster where the applications will be failed over, in case of a disaster in the source cluster.
Generate and Apply a ClusterPair Spec
In Kubernetes, you must define a trust object called ClusterPair. Portworx requires this object to communicate with the destination cluster. The ClusterPair object pairs the Portworx storage driver with the Kubernetes scheduler, allowing the volumes and resources to be migrated between clusters.
The ClusterPair is generated and used in the following way:
- The ClusterPair spec is generated on the destination cluster.
- The generated spec is then applied on the source cluster.
Perform the following steps to create a cluster pair:
You can run the pxctl commands in this document either on your Portworx nodes directly, or from inside the Portworx containers on your master Kubernetes node.
Create object store credentials for cloud clusters
If you are running Kubernetes on-premises, you may skip this section. If your Kubernetes clusters are on the cloud, you must create object credentials on both the destination and source clusters before you can create a cluster pair.
The options you use to create your object store credentials differ based on which object store you use:
Create Amazon S3 credentials
Find the UUID of your destination cluster
Enter the following command into your destination cluster to find its UUID:

```text
PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status | grep UUID | awk '{print $3}'
```

Enter the pxctl credentials create command, specifying the following:
- The --provider flag with the name of the cloud provider (s3)
- The --s3-access-key flag with your access key ID
- The --s3-secret-key flag with your secret access key
- The --s3-region flag with the name of the S3 region (us-east-1)
- The --s3-endpoint flag with the name of the endpoint (s3.amazonaws.com)
- The optional --s3-storage-class flag with either the STANDARD or STANDARD-IA value, depending on which storage class you prefer
- The credential name, which is clusterPair_ with the UUID of your destination cluster appended

```text
/opt/pwx/bin/pxctl credentials create \
  --provider s3 \
  --s3-access-key <aws_access_key> \
  --s3-secret-key <aws_secret_key> \
  --s3-region us-east-1 \
  --s3-endpoint s3.amazonaws.com \
  --s3-storage-class STANDARD \
  clusterPair_<UUID_of_destination_cluster>
```
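The credential name is simply the literal prefix clusterPair_ followed by the destination cluster's UUID. A minimal sketch of composing it in a script (the UUID below is a made-up placeholder, not one from a real cluster):

```shell
# Placeholder UUID for illustration only; in practice, capture it from the
# "pxctl status" command shown above
CLUSTER_UUID="5f9dac55-a4c1-4a2f-b7ad-1e3b9f0a2c11"

# Portworx expects the credential name to be clusterPair_<destination UUID>
CRED_NAME="clusterPair_${CLUSTER_UUID}"
echo "${CRED_NAME}"
```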
Create Microsoft Azure credentials
Find the UUID of your destination cluster
Enter the following command into your destination cluster to find its UUID:

```text
PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status | grep UUID | awk '{print $3}'
```

Enter the pxctl credentials create command, specifying the following:
- The --provider flag as azure
- The --azure-account-name flag with the name of your Azure account
- The --azure-account-key flag with your Azure account key
- The credential name, which is clusterPair_ with the UUID of your destination cluster appended

```text
/opt/pwx/bin/pxctl credentials create \
  --provider azure \
  --azure-account-name <your_azure_account_name> \
  --azure-account-key <your_azure_account_key> \
  clusterPair_<UUID_of_destination_cluster>
```
Create Google Cloud Platform credentials
Find the UUID of your destination cluster
Enter the following command into your destination cluster to find its UUID:

```text
PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status | grep UUID | awk '{print $3}'
```

Enter the pxctl credentials create command, specifying the following:
- The --provider flag as google
- The --google-project-id flag with the string of your Google project ID
- The --google-json-key-file flag with the filename of your GCP JSON key file
- The credential name, which is clusterPair_ with the UUID of your destination cluster appended

```text
/opt/pwx/bin/pxctl credentials create \
  --provider google \
  --google-project-id <your_google_project_ID> \
  --google-json-key-file <your_GCP_JSON_key_file> \
  clusterPair_<UUID_of_destination_cluster>
```
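Because the UUID is scraped from pxctl status output with grep and awk, a script can silently pick up an empty or mangled value if the output format ever changes. A small optional sanity check in pure shell (the UUID below is a made-up placeholder) before using the value in the credential name:

```shell
# Placeholder value for illustration; in practice this comes from the
# pxctl status pipeline shown above
CLUSTER_UUID="5f9dac55-a4c1-4a2f-b7ad-1e3b9f0a2c11"

# Verify the captured value matches the canonical 8-4-4-4-12 hex UUID layout
if printf '%s' "$CLUSTER_UUID" | grep -Eq '^[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$'; then
  UUID_OK=yes
else
  UUID_OK=no
fi
echo "$UUID_OK"
```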
Generate a ClusterPair on the destination cluster
To generate the ClusterPair spec, run the following command on the destination cluster:
```text
storkctl generate clusterpair -n migrationnamespace remotecluster
```
Here, the name (remotecluster) is the Kubernetes object that will be created on the source cluster representing the pair relationship.
During the actual migration, you will reference this name to identify the destination of your migration.
The command generates a ClusterPair spec similar to the following:

```text
apiVersion: stork.libopenstorage.org/v1alpha1
kind: ClusterPair
metadata:
  creationTimestamp: null
  name: remotecluster
  namespace: migrationnamespace
spec:
  config:
    clusters:
      kubernetes:
        LocationOfOrigin: /etc/kubernetes/admin.conf
        certificate-authority-data: <CA_DATA>
        server: https://192.168.56.74:6443
    contexts:
      kubernetes-admin@kubernetes:
        LocationOfOrigin: /etc/kubernetes/admin.conf
        cluster: kubernetes
        user: kubernetes-admin
    current-context: kubernetes-admin@kubernetes
    preferences: {}
    users:
      kubernetes-admin:
        LocationOfOrigin: /etc/kubernetes/admin.conf
        client-certificate-data: <CLIENT_CERT_DATA>
        client-key-data: <CLIENT_KEY_DATA>
  options:
    <insert_storage_options_here>: ""
status:
  remoteStorageId: ""
  schedulerStatus: ""
  storageStatus: ""
```
Make the following changes in the options section of your ClusterPair:
- This example uses a single storage fabric, so you must delete the <insert_storage_options_here>: "" line.
- By default, every seventh migration is a full migration. To make every migration incremental, specify mode: DisasterRecovery as follows:

```text
options:
  mode: DisasterRecovery
```
Once you’ve made the changes, save the resulting spec to a file named clusterpair.yaml.
Apply the generated ClusterPair on the source cluster
On the source cluster, create the ClusterPair by applying the generated spec:

```text
kubectl create -f clusterpair.yaml
```
Verify the Pair status
Once you apply the above spec on the source cluster, you can check the status of the pairing using storkctl on the source cluster:

```text
storkctl get clusterpair
```

```text
NAME            STORAGE-STATUS   SCHEDULER-STATUS   CREATED
remotecluster   NotProvided      Ready              09 Apr 19 18:16 PDT
```

On a successful pairing, you should see the Scheduler Status as "Ready" and the Storage Status as "NotProvided".
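If you want to script this check, for example to gate a failover runbook, a minimal sketch that parses the tabular output shown above could pull out the scheduler status column with awk. The captured sample text here stands in for a live storkctl get clusterpair call:

```shell
# Sample output used for illustration; in a real script this would be:
#   OUTPUT=$(storkctl get clusterpair)
OUTPUT='NAME            STORAGE-STATUS   SCHEDULER-STATUS   CREATED
remotecluster   NotProvided      Ready              09 Apr 19 18:16 PDT'

# Pull the SCHEDULER-STATUS column (third field) for the remotecluster pair
SCHED_STATUS=$(printf '%s\n' "$OUTPUT" | awk '$1 == "remotecluster" {print $3}')
echo "$SCHED_STATUS"   # prints "Ready" when the pairing succeeded
```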
Once the pairing is configured, applications can fail over from one cluster to another. To achieve that, you need to migrate the Kubernetes resources to the destination cluster. The next step will help you synchronize the Kubernetes resources between your clusters.