Prepare your Portworx cluster
In the Metro DR deployment model, you need a single Portworx cluster that spans your source and destination Kubernetes clusters. Follow the instructions on this page to set up such a cluster.
Install Portworx
Follow the instructions on this page to deploy a single Portworx cluster that spans multiple Kubernetes clusters. You must specify the same KVDB endpoints and the same cluster name for the Portworx cluster on your source and destination clusters.
Generate a Portworx spec
Navigate to Portworx Central to log in or create an account.
Select Portworx Enterprise from the product catalog and click Continue.
On the Product Line page, choose any option depending on which license you intend to use, then click Continue.
For Platform, choose your cloud provider, then click Customize at the bottom of the Summary section.
On the Basic page, select the Your etcd details option and enter your KVDB endpoints, one at a time using the + symbol, then click Next. The following is an example for a three node etcd cluster:
http://<your-etcd-endpoint1>:2379
http://<your-etcd-endpoint2>:2379
http://<your-etcd-endpoint3>:2379
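Before you enter the endpoints, you can optionally confirm that each one is reachable from your nodes. The following is a minimal sketch, assuming curl is available and the endpoints serve etcd's standard /health HTTP endpoint (replace the placeholder endpoints with your own):
# A healthy endpoint returns a small JSON document such as {"health":"true"}
for ep in http://<your-etcd-endpoint1>:2379 http://<your-etcd-endpoint2>:2379 http://<your-etcd-endpoint3>:2379; do
  curl -s "$ep/health"
done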
On the Storage page, select cloud as your environment and choose your cloud provider. Specify a number in the Max storage nodes per availability zone (Optional) field to match the total number of Kubernetes storage nodes in your stretch cluster, and click Next.
note: Specifying 0 in the Max storage nodes per availability zone (Optional) field will keep the number of storage nodes unbounded in a cluster. Every node that joins the cluster will be a storage node. Use this value with great caution.
Choose your network and click Next.
From the Customize page, specify your cluster name prefix in Advanced Settings. Click Finish to generate the spec.
After generating the spec file, copy it to the source and destination clusters. You must add domain names to each of these spec files, and the domain names must be different for each Kubernetes cluster. You can use several methods to specify a domain name for your clusters, such as your cloud provider's zone names (for example, us-east-1a and us-east-1b) or the datacenter names (for example, datacenter1 and datacenter2) that your clusters are using. A cluster domain identifies a subset of nodes from the stretch Portworx cluster that are part of the same failure domain. Each Kubernetes cluster and its nodes form a cluster domain.
Before applying the specs to your clusters, ensure that both spec files have the same metadata:name and KVDB endpoints, but different domain names.
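After you add the domain names in the next two sections, a quick way to confirm that the two files differ only in the domain is to diff them. This is a minimal sketch, assuming the file names used later on this page:
# The only difference reported should be the -cluster_domain value in the misc-args annotation
diff <your-px-source-spec>.yaml <your-px-destination-spec>.yaml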
Apply the spec on your source cluster
Once you have copied the spec to your source cluster, modify it to add the source cluster domain name.
Specify a domain name using the portworx.io/misc-args annotation with the -cluster_domain argument. The following example shows how you can specify your source cluster domain name using your cloud provider's zone name (for example, us-east-1a):
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  annotations:
    portworx.io/misc-args: "-cluster_domain us-east-1a"
Apply the edited spec:
kubectl apply -f <your-px-source-spec>.yaml
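To confirm that the spec was applied, you can check that the StorageCluster object exists and is being reconciled. This is a minimal check; <portworx-namespace> is a placeholder for the namespace you deployed Portworx into:
kubectl get storagecluster -n <portworx-namespace>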
Apply the spec on your destination cluster
Similarly, after you have copied the spec to your destination cluster, modify it to add the destination cluster domain name.
Specify the domain using the -cluster_domain argument with the portworx.io/misc-args annotation. You should use the same method as your source cluster for specifying the domain name of your destination cluster. The following example shows how you can specify the domain name using us-east-1b:
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  annotations:
    portworx.io/misc-args: "-cluster_domain us-east-1b"
Apply the edited spec:
kubectl apply -f <your-px-destination-spec>.yaml
A single Portworx cluster will span both Kubernetes clusters because the metadata:name and KVDB endpoints are the same.
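To verify this from either side, you can check the cluster status; the node list should include nodes from both cluster domains. The following sketch uses the same pod-lookup pattern as later sections of this page:
PX_POD=$(kubectl get pods -l name=portworx -n <portworx-namespace> -o jsonpath='{.items[0].metadata.name}')
kubectl exec $PX_POD -n <portworx-namespace> -- /opt/pwx/bin/pxctl status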
Deploying a Synchronous DR setup on a Tanzu cluster requires additional steps. For instructions, see the Tanzu cluster installation page.
Install storkctl
storkctl is a command-line tool for interacting with a set of scheduler extensions. Install storkctl on both Kubernetes clusters after installing Portworx. Always use the latest storkctl binary by downloading it from the currently running Stork container.
The examples below use the kube-system namespace; update this to the correct namespace for your environment.
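If you are not sure which namespace Stork runs in, you can locate its pods by the same name=stork label used in the commands below:
kubectl get pods -A -l name=stork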
Perform the following steps to download storkctl
from the Stork pod:
Linux:
STORK_POD=$(kubectl get pods -n <namespace> -l name=stork -o jsonpath='{.items[0].metadata.name}') &&
kubectl cp -n <namespace> $STORK_POD:/storkctl/linux/storkctl ./storkctl
sudo mv storkctl /usr/local/bin &&
sudo chmod +x /usr/local/bin/storkctl
OS X:
STORK_POD=$(kubectl get pods -n <namespace> -l name=stork -o jsonpath='{.items[0].metadata.name}') &&
kubectl cp -n <namespace> $STORK_POD:/storkctl/darwin/storkctl ./storkctl
sudo mv storkctl /usr/local/bin &&
sudo chmod +x /usr/local/bin/storkctl
Windows:
Copy storkctl.exe from the Stork pod:
STORK_POD=$(kubectl get pods -n <namespace> -l name=stork -o jsonpath='{.items[0].metadata.name}') &&
kubectl cp -n <namespace> $STORK_POD:/storkctl/windows/storkctl.exe ./storkctl.exe
Move storkctl.exe to a directory in your PATH.
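Once the binary is in your PATH, you can sanity-check the installation by printing the built-in help (a minimal check, assuming the binary is executable from your shell):
storkctl --help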
Check the cluster domain status
When Stork is deployed along with Portworx, it automatically creates a ClusterDomainsStatus object on your source and destination Kubernetes clusters.
Run the following command on each of your Kubernetes clusters to get the current status of the ClusterDomainsStatus object:
storkctl get clusterdomainsstatus
NAME LOCAL-DOMAIN ACTIVE INACTIVE CREATED
px-dr-cluster us-east-1a us-east-1a (InSync), us-east-1b (InSync) 29 Nov 22 22:09 UTC
Run the following command to get details of the ClusterDomainsStatus object:
kubectl describe clusterdomainsstatus
Name:          px-dr-cluster
Namespace:
Labels:        <none>
Annotations:   <none>
API Version:   stork.libopenstorage.org/v1alpha1
Kind:          ClusterDomainsStatus
Metadata:
  Creation Timestamp:  2022-11-28T23:13:32Z
  Generation:          3
  Managed Fields:
    API Version:  stork.libopenstorage.org/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:clusterDomainInfos:
        f:localDomain:
    Manager:      stork
    Operation:    Update
    Time:         2022-11-28T23:13:32Z
  Resource Version:  3383
  UID:               xxxxxxxx-2e81-4cbf-b4da-4b9c261793fd
Status:
  Cluster Domain Infos:
    Name:         us-east-1b
    State:        Active
    Sync Status:  InSync
    Name:         us-east-1a
    State:        Active
    Sync Status:  InSync
  Local Domain:   us-east-1a
Events:           <none>
You can see that the Status field shows that your Portworx stretch cluster is installed and active.
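If you want to script this check, the same information is available from the status fields shown above. This is a hedged sketch, using the object name px-dr-cluster from the example output:
kubectl get clusterdomainsstatus px-dr-cluster -o jsonpath='{.status.clusterDomainInfos[*].syncStatus}'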
Set up a witness node
Perform the following to set up a witness node:
Check your Portworx Enterprise version by running the following command on your source and destination clusters (both should have the same version):
kubectl get pods -A -o jsonpath="{.items[*].spec.containers[*].image}" | xargs -n1 | sort -u | grep oci-monitor
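If you manage both clusters from a single workstation, you can run the same check against each kubeconfig context and compare the reported tags (the context names below are placeholders):
for ctx in <source-context> <destination-context>; do
  kubectl --context "$ctx" get pods -A -o jsonpath="{.items[*].spec.containers[*].image}" | xargs -n1 | sort -u | grep oci-monitor
done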
Download the witness-install.sh script file on a designated VM.
Change the permissions for the downloaded file to -rwxr-xr-x:
chmod +x ./witness-install.sh
Update the Portworx version in the PX_DOCKER_IMAGE=portworx/px-enterprise:2.XX.X field of the file to match the version retrieved in Step 1 of this section (see the sketch after this list). Check the following and save the file:
- The --cluster_domain argument is set to witness, indicating that this is a witness node.
- The clusterID and the etcd details are the same as those provided to the two other Portworx installations in the Kubernetes clusters.
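For illustration, the edited image line in the script might look like the following; only the PX_DOCKER_IMAGE field is named in this guide, so treat anything beyond it as an assumption about the script's contents:
# Set the image tag to the exact version retrieved in Step 1
PX_DOCKER_IMAGE=portworx/px-enterprise:<your-px-version>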
Verify that the Docker engine service is running. This service is required to successfully run the witness node script:
systemctl status docker
docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2022-11-29 18:02:09 EST; 18h ago
     Docs: https://docs.docker.com
 Main PID: 1356 (dockerd)
    Tasks: 10
   Memory: 71.8M
   CGroup: /system.slice/docker.service
           └─1356 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
Install the witness node on a single storageless Portworx node on the designated VM. You need to specify the same Portworx Enterprise version that you retrieved in Step 1 along with the external etcd endpoints, as shown in the following example:
./witness-install.sh --cluster-id=px-cluster \
  --etcd="etcd:http://10.13.28.137:2379,etcd:http://10.13.28.138:2379,etcd:http://10.13.28.139:2379" \
  --docker-image=portworx/px-enterprise:<your-px-version>
Verify Portworx status on the witness node:
/opt/pwx/bin/pxctl --color status
The witness-install.sh script can take a couple of minutes to complete, as shown in the following example output:
Status: PX is operational
Telemetry: Disabled or Unhealthy
Metering: Disabled or Unhealthy
License: Trial (expires in 30 days)
..
....
You will see PX is operational, and once the script has successfully completed, you can quit the script by entering ctrl + c. Note that the witness node requires a valid Portworx license. To check the status of your license, use the /opt/pwx/bin/pxctl --color license list command.
In the Metro DR deployment model, sharedv4 service volumes are not supported in the Portworx cluster.
To upgrade the witness node, see Upgrade the Portworx OCI bundle.
To uninstall the witness node, see Uninstall the Portworx OCI bundle.
Placement of volumes
Once your Portworx cluster is operational, all repl 2 and repl 3 volumes will have their replicas placed across the two cluster domains. Repl 1 volumes are not allowed. This behavior is controlled by the Metro DR domain protection flag, which is turned on by default.
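For example, a StorageClass that provisions repl 2 volumes, whose replicas the protection flag will then place across the two cluster domains. This is a minimal sketch; the StorageClass name is a placeholder:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-metro-repl2         # placeholder name
provisioner: pxd.portworx.com  # Portworx CSI provisioner
parameters:
  repl: "2"                    # two replicas, spread across the cluster domains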
This Metro DR domain protection flag might conflict with any existing or new VolumePlacementStrategy (VPS) rules created for provisioning volumes. A VPS rule trying to place replicas in the same cluster domain will conflict with the Metro DR domain protection flag. In such cases, the protection flag takes precedence over VPS, preventing the placement of volumes on the same cluster domain and resulting in the failure of volume creation.
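The following is a hedged sketch for spotting and removing such a conflicting rule, assuming the VPS CRD is installed in your cluster; the rule name below is a placeholder:
# List existing VolumePlacementStrategy rules, then delete any that pin replicas to one cluster domain
kubectl get volumeplacementstrategies
kubectl delete volumeplacementstrategy <conflicting-vps-name>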
For the replica volumes to be placed in different cluster domains and to prevent volume creation failures, Portworx by Pure Storage recommends deleting such conflicting VPS rules and performing the following steps:
To check the status of the protection flag, run the following command from any master node of the Portworx cluster:
PX_POD=$(kubectl get pods -l name=portworx -n <portworx-namespace> -o jsonpath='{.items[0].metadata.name}')
kubectl exec $PX_POD -n <portworx-namespace> -- /opt/pwx/bin/pxctl cluster options list | grep Metro
If the flag is not enabled, you can enable it by running the following command:
PX_POD=$(kubectl get pods -l name=portworx -n <px-namespace> -o jsonpath='{.items[0].metadata.name}')
kubectl exec $PX_POD -n <px-namespace> -- /opt/pwx/bin/pxctl cluster options update --metro-dr-domain-protection on