Version: 3.1

Installing on a Tanzu cluster


If you are not using a Tanzu cluster, skip to the next section.

Label your drives

The existing Synchronous DR implementation assumes that cloud drives are available to each of the two clusters. In Tanzu, a cloud drive is a PVC, which is a cluster-scoped resource. To help each cluster distinguish its own drives, label them accordingly.

  1. Label your worker nodes on both clusters with the px-dr label, using your cluster-domain name as the value:

    kubectl label nodes <list-of-nodes or --all> px-dr=<cluster-domain-name>
  2. Add the `--node_pool_label` parameter to the spec with the key `px-dr`:

    apiVersion: core.libopenstorage.org/v1
    kind: StorageCluster
    metadata:
      annotations:
        portworx.io/misc-args: "--node_pool_label=px-dr"


    Only deploy Portworx in this mode when your Kubernetes clusters are separated by a metropolitan area network with a maximum latency of 10 ms. Pure Storage does not recommend running in this mode if your Kubernetes clusters are distributed across regions, such as the AWS us-east and us-west regions.

  3. Apply the modified Portworx spec on both clusters. Portworx will then form the stretch cluster.
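The steps above can be scripted for both clusters. This sketch only prints the commands so you can review them before running; the cluster-domain names (`dc-east`, `dc-west`) and the spec filename (`px-spec.yaml`) are hypothetical placeholders for your environment:

```shell
#!/bin/sh
# Hypothetical spec file -- substitute the file you generated.
SPEC_FILE="px-spec.yaml"

# Hypothetical cluster-domain names -- one per cluster.
for DOMAIN in dc-east dc-west; do
  # Step 1: label every worker node with its cluster-domain name.
  echo "kubectl label nodes --all px-dr=${DOMAIN}"
  # Step 3: apply the same modified spec on both clusters.
  echo "kubectl apply -f ${SPEC_FILE}"
done
```

Remove the `echo` wrappers (and target each cluster's kubeconfig context) to run the commands for real.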

Verify Portworx installation

Run the following commands to verify that Portworx is successfully installed as a stretch cluster.

  • Run the following command on your source cluster to see if the Portworx cluster is up and running:

    kubectl get pods -n kube-system
    NAME             READY   STATUS    RESTARTS   AGE
    portworx-d6rk7   1/1     Running   0          4m
    portworx-d6rk8   1/1     Running   0          5m
    portworx-d6rk9   1/1     Running   0          7m
  • Run the following command to verify that the stretch cluster is installed:

    kubectl exec portworx-d6rk7 -n kube-system -- /opt/pwx/bin/pxctl status
    Status: PX is operational
    License: Trial (expires in 31 days)
    Node ID: 04de0858-4081-47c3-a2ab-f0835b788738
    IP: X.X.X.230
    Local Storage Pool: 1 pool
    POOL  IO_PRIORITY  RAID_LEVEL  USABLE   USED     STATUS  ZONE        REGION
    0     LOW          raid0       150 GiB  9.0 GiB  Online  us-east-1b  us-east-1
    Local Storage Devices: 1 device
    Device  Path       Media Type          Size     Last-Scan
    0:1     /dev/xvdf  STORAGE_MEDIUM_SSD  150 GiB  09 Apr 19 22:57 UTC
    total - 150 GiB
    Cache Devices:
    No cache devices
    Cluster Summary
    Cluster ID: px-dr-cluster
    Cluster UUID: 27558ed9-7ddd-4424-92d4-5dfe2af572e0
    Scheduler: kubernetes
    Nodes: 6 node(s) with storage (6 online)
    IP  ID  SchedulerNodeName  StorageNode  Used  Capacity  Status  StorageStatus  Version  Kernel  OS
    c665fe35-57d9-4302-b6f7-a978f17cd020  ip-172-20-33-100.ec2.internal  Yes  0 B  150 GiB  Online  Up  4.9.0-7-amd64  Debian GNU/Linux 9 (stretch)
    bbb2f11d-c6ad-46e7-a52f-530f313869f3  ip-172-40-50-47.ec2.internal  Yes  0 B  150 GiB  Online  Up  4.9.0-7-amd64  Debian GNU/Linux 9 (stretch)
    a888a08e-0596-43a5-8d02-6faf19e8724c  ip-172-40-34-140.ec2.internal  Yes  0 B  150 GiB  Online  Up  4.9.0-7-amd64  Debian GNU/Linux 9 (stretch)
    7a83c652-ffaf-452f-978c-82b0da1d2580  ip-172-20-36-212.ec2.internal  Yes  0 B  150 GiB  Online  Up  4.9.0-7-amd64  Debian GNU/Linux 9 (stretch)
    11e0656a-45a5-4a5b-b4e6-51e130959644  ip-172-20-59-248.ec2.internal  Yes  0 B  150 GiB  Online  Up  4.9.0-7-amd64  Debian GNU/Linux 9 (stretch)
    X.X.X.230  04de0858-4081-47c3-a2ab-f0835b788738  ip-172-40-40-230.ec2.internal  Yes  0 B  150 GiB  Online  Up (This node)  4.9.0-7-amd64  Debian GNU/Linux 9 (stretch)
    Global Storage Pool
    Total Used : 0 B
    Total Capacity : 900 GiB

    You can see that the cluster summary lists all six nodes, spanning both clusters, confirming that the stretch cluster formed successfully.
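To check the node count without reading the whole output, you can extract it from the `Nodes:` line. This sketch runs the extraction on a sample line copied from the output above; in practice you would pipe in the real `pxctl status` output instead:

```shell
#!/bin/sh
# Sample line from the output above; in practice use:
#   kubectl exec portworx-d6rk7 -n kube-system -- /opt/pwx/bin/pxctl status
STATUS_LINE='Nodes: 6 node(s) with storage (6 online)'

# Pull out the number of online storage nodes.
ONLINE=$(printf '%s\n' "$STATUS_LINE" | sed -n 's/.*(\([0-9]*\) online).*/\1/p')
echo "online storage nodes: $ONLINE"   # prints: online storage nodes: 6
```

For a healthy stretch cluster, the count should equal the total number of storage nodes across both clusters (six in this example).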


For instructions on configuring disk provisioning for your Tanzu cluster, refer to the Tanzu Operations section.
