Version: 3.1

Install Portworx on Anthos bare metal or VMware cluster using the spec generator

This guide provides instructions for installing Portworx on Anthos.

Prerequisites

  • An Anthos cluster running on bare metal or VMware
  • Your cluster must meet the requirements for installing a Portworx cluster.

Install Portworx on an Anthos cluster

Perform the following steps to install Portworx.

Get your Kubernetes version

Run the following command to get the Kubernetes version installed on your Anthos cluster; you will need it in a later step:

(kubectl version --short 2>&1 || kubectl version) | awk -Fv '/Server Version: / {print $3}'
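
The awk invocation splits each line on the character v, so for a line like Server Version: v1.25.0 the third field is the bare version number. A quick local demo of the same filter (no cluster needed; the sample line is illustrative):

```shell
# Split on 'v': "Ser" | "er Version: " | "1.25.0" -> print the 3rd field.
echo 'Server Version: v1.25.0' | awk -Fv '/Server Version: / {print $3}'
# -> 1.25.0
```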

Generate the specs

  1. Navigate to Portworx Central and log in, or create an account.

  2. Select the Get Started button.

  3. In the Product Line page -> Portworx Enterprise section, select Continue.

  4. In the Generate Spec page -> Step 1: Select Your Platform section -> Platform dropdown menu, select Google Cloud from the CLOUD section, or select vSphere or Pure FlashArray from the ON-PREMISES section.

    • If you select vSphere, then specify the vCenter endpoint and datastore prefix.
  5. In the Step 2: Select Kubernetes Distribution section -> Distribution Name dropdown menu, select Anthos as your distribution.

  6. Enter other relevant details, such as Cluster Selector Label and Namespace.

    • Cluster Selector Label: By assigning this label to a cluster, you can restrict configurations or software installations to clusters that match the label criteria. For example, when installing Portworx on an Anthos cluster, you might want to target only those clusters designated for storage-intensive applications. To achieve this, label your target cluster with a specific selector:

      metadata:
        labels:
          configmanagement.gke.io/cluster-selector: storage-intensive

      In your Portworx installation configuration, specify that it should be applied only to clusters with the storage-intensive label. This ensures that Portworx is installed only on clusters designated for storage-heavy workloads, optimizing resource usage and deployment strategies across your Anthos environment.

  7. In the Step 3: Summary section -> K8S Version field, enter the complete Kubernetes version string that you retrieved in the previous section. Also, you can modify the cluster name prefix if required.

  8. Select Save Spec to generate the specs.

    note

    Review all default configurations in the Summary section. Select Customize and proceed to the subsequent pages if you need to modify any storage and network related configurations. After modifying the configurations, you can select Finish to generate the spec.
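
    If you manage configuration with Anthos Config Management, the Cluster Selector Label from step 6 can be wired up with a ClusterSelector object. A hedged sketch (the selector name is illustrative):

    ```yaml
    # Matches clusters labeled storage-intensive (see step 6).
    kind: ClusterSelector
    apiVersion: configmanagement.gke.io/v1
    metadata:
      name: storage-intensive-selector   # illustrative name
    spec:
      selector:
        matchLabels:
          configmanagement.gke.io/cluster-selector: storage-intensive
    ```

    Configs annotated with configmanagement.gke.io/cluster-selector: storage-intensive-selector are then synced only to the matching clusters.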

Apply the specs

Apply the Portworx Operator and StorageCluster specs you generated in the previous section using the following steps:

  1. Run the following wget command, replacing <cluster-name> with your cluster name:

    wget -O portworx-anthos-storage-intensive-2024-02-13-05-04-27.zip 'https://install.portworx.com/3.1?operator=true&mc=false&kbver=1.25.0&ns=portworx&b=true&iop=6&s=%22size%3D150%22&pureSanType=ISCSI&c=px-cluster-<cluster-name>&acr=storage-intensive&ctl=true&stork=true&csi=true&mon=true&tel=true&st=k8s&promop=true'
  2. Extract the .zip file downloaded by the wget command:

    unzip portworx-anthos-storage-intensive-2024-02-13-05-04-27.zip

    You will get the px-operator and storage-cluster YAML files.

  3. Run the following command to deploy the Portworx Operator:

    kubectl create -f px-operator-kube-system-local-px-int-2024-02-21-09-42-55.yaml
    serviceaccount/portworx-operator created
    clusterrole.rbac.authorization.k8s.io/portworx-operator created
    clusterrolebinding.rbac.authorization.k8s.io/portworx-operator created
    deployment.apps/portworx-operator created
  4. Run the following command to deploy the StorageCluster:

    kubectl create -f storage-cluster-kube-system-local-px-int-2024-02-21-09-42-55.yaml
    storagecluster.core.libopenstorage.org/px-cluster-5c5f507a-92aa-4ce5-a578-f50775cb041b created

    Verify your Portworx installation

    Once you've installed Portworx, you can perform the following tasks to verify that Portworx has installed correctly.

    Verify if all pods are running

    Enter the following kubectl get pods command to list and filter the results for Portworx pods:

    kubectl get pods -n <px-namespace> -o wide | grep -e portworx -e px
    portworx-api-774c2 1/1 Running 0 2m55s 192.168.121.196 username-k8s1-node0 <none> <none>
    portworx-api-t4lf9 1/1 Running 0 2m55s 192.168.121.99 username-k8s1-node1 <none> <none>
    portworx-api-dvw64 1/1 Running 0 2m55s 192.168.121.99 username-k8s1-node2 <none> <none>
    portworx-kvdb-94bpk 1/1 Running 0 4s 192.168.121.196 username-k8s1-node0 <none> <none>
    portworx-kvdb-8b67l 1/1 Running 0 10s 192.168.121.196 username-k8s1-node1 <none> <none>
    portworx-kvdb-fj72p 1/1 Running 0 30s 192.168.121.196 username-k8s1-node2 <none> <none>
    portworx-operator-58967ddd6d-kmz6c 1/1 Running 0 4m1s 10.244.1.99 username-k8s1-node0 <none> <none>
    prometheus-px-prometheus-0 2/2 Running 0 2m41s 10.244.1.105 username-k8s1-node0 <none> <none>
    px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d-9gs79 2/2 Running 0 2m55s 192.168.121.196 username-k8s1-node0 <none> <none>
    px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d-vpptx 2/2 Running 0 2m55s 192.168.121.99 username-k8s1-node1 <none> <none>
    px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d-bxmpn 2/2 Running 0 2m55s 192.168.121.191 username-k8s1-node2 <none> <none>
    px-csi-ext-868fcb9fc6-54bmc 4/4 Running 0 3m5s 10.244.1.103 username-k8s1-node0 <none> <none>
    px-csi-ext-868fcb9fc6-8tk79 4/4 Running 0 3m5s 10.244.1.102 username-k8s1-node2 <none> <none>
    px-csi-ext-868fcb9fc6-vbqzk 4/4 Running 0 3m5s 10.244.3.107 username-k8s1-node1 <none> <none>
    px-prometheus-operator-59b98b5897-9nwfv 1/1 Running 0 3m3s 10.244.1.104 username-k8s1-node0 <none> <none>

    Note the name of one of your px-cluster pods. You'll run pxctl commands from that pod in the following steps.
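
    The grep -e portworx -e px filter above keeps lines matching either pattern. A quick local demo (sample pod names are illustrative; no cluster needed):

    ```shell
    # Only lines matching 'portworx' or 'px' survive; 'coredns-abc' is dropped.
    printf 'portworx-api-774c2\ncoredns-abc\npx-csi-ext-1\n' | grep -e portworx -e px
    # -> portworx-api-774c2
    # -> px-csi-ext-1
    ```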

    Verify Portworx cluster status

    You can find the status of the Portworx cluster by running pxctl status commands from a pod. Enter the following kubectl exec command, specifying the pod name you retrieved in the previous section:

    kubectl exec <pod-name> -n <px-namespace> -- /opt/pwx/bin/pxctl status
    Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
    Status: PX is operational
    Telemetry: Disabled or Unhealthy
    Metering: Disabled or Unhealthy
    License: Trial (expires in 31 days)
    Node ID: 788bf810-57c4-4df1-9a5a-70c31d0f478e
    IP: 192.168.121.99
    Local Storage Pool: 1 pool
    POOL IO_PRIORITY RAID_LEVEL USABLE USED STATUS ZONE REGION
    0 HIGH raid0 3.0 TiB 10 GiB Online default default
    Local Storage Devices: 3 devices
    Device Path Media Type Size Last-Scan
    0:1 /dev/vdb STORAGE_MEDIUM_MAGNETIC 1.0 TiB 14 Jul 22 22:03 UTC
    0:2 /dev/vdc STORAGE_MEDIUM_MAGNETIC 1.0 TiB 14 Jul 22 22:03 UTC
    0:3 /dev/vdd STORAGE_MEDIUM_MAGNETIC 1.0 TiB 14 Jul 22 22:03 UTC
    * Internal kvdb on this node is sharing this storage device /dev/vdc to store its data.
    total - 3.0 TiB
    Cache Devices:
    * No cache devices
    Cluster Summary
    Cluster ID: px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d
    Cluster UUID: 33a82fe9-d93b-435b-943e-6f3fd5522eae
    Scheduler: kubernetes
    Nodes: 3 node(s) with storage (3 online)
    IP ID SchedulerNodeName Auth StorageNode Used Capacity Status StorageStatus Version Kernel OS
    192.168.121.196 f6d87392-81f4-459a-b3d4-fad8c65b8edc username-k8s1-node0 Disabled Yes 10 GiB 3.0 TiB Online Up 2.11.0-81faacc 3.10.0-1127.el7.x86_64 CentOS Linux 7 (Core)
    192.168.121.99 788bf810-57c4-4df1-9a5a-70c31d0f478e username-k8s1-node1 Disabled Yes 10 GiB 3.0 TiB Online Up (This node) 2.11.0-81faacc 3.10.0-1127.el7.x86_64 CentOS Linux 7 (Core)
    192.168.121.191 a8c76018-43d7-4a58-3d7b-19d45b4c541a username-k8s1-node2 Disabled Yes 10 GiB 3.0 TiB Online Up 2.11.0-81faacc 3.10.0-1127.el7.x86_64 CentOS Linux 7 (Core)
    Global Storage Pool
    Total Used : 30 GiB
    Total Capacity : 9.0 TiB

    The Portworx status will display PX is operational if your cluster is running as intended.
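
    If you want to script this check, one possible sketch (assumes the status line format shown above; in practice you would populate the variable from the kubectl exec command in the previous step):

    ```shell
    # Illustrative health check. In practice:
    #   status="$(kubectl exec <pod-name> -n <px-namespace> -- /opt/pwx/bin/pxctl status)"
    status='Status: PX is operational'   # sample value for this demo
    case "$status" in
      *"PX is operational"*) echo "Portworx healthy" ;;
      *) echo "Portworx not healthy" >&2; exit 1 ;;
    esac
    # -> Portworx healthy
    ```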

    Verify pxctl cluster provision status

    • Find the storage cluster; the status should show as Online:

      kubectl -n <px-namespace> get storagecluster
      NAME                                              CLUSTER UUID                           STATUS   VERSION   AGE
      px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d 33a82fe9-d93b-435b-943e-6f3fd5522eae Online 2.11.0 10m
    • Find the storage nodes; the statuses should show as Online:

      kubectl -n <px-namespace> get storagenodes
      NAME                  ID                                     STATUS   VERSION          AGE
      username-k8s1-node0 f6d87392-81f4-459a-b3d4-fad8c65b8edc Online 2.11.0-81faacc 11m
      username-k8s1-node1 788bf810-57c4-4df1-9a5a-70c31d0f478e Online 2.11.0-81faacc 11m
      username-k8s1-node2 a8c76018-43d7-4a58-3d7b-19d45b4c541a Online 2.11.0-81faacc 11m
    • Verify the Portworx cluster provision status. Enter the following kubectl exec command, specifying the pod name you retrieved in the previous section:

      kubectl exec <pod-name> -n <px-namespace> -- /opt/pwx/bin/pxctl cluster provision-status
      Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
      NODE NODE STATUS POOL POOL STATUS IO_PRIORITY SIZE AVAILABLE USED PROVISIONED ZONE REGION RACK
      788bf810-57c4-4df1-9a5a-70c31d0f478e Up 0 ( 96e7ff01-fcff-4715-b61b-4d74ecc7e159 ) Online HIGH 3.0 TiB 3.0 TiB 10 GiB 0 B default default default
      f6d87392-81f4-459a-b3d4-fad8c65b8edc Up 0 ( e06386e7-b769-4ce0-b674-97e4359e57c0 ) Online HIGH 3.0 TiB 3.0 TiB 10 GiB 0 B default default default
      a8c76018-43d7-4a58-3d7b-19d45b4c541a Up 0 ( a2e0af91-bb02-1574-611b-8904cab0e019 ) Online HIGH 3.0 TiB 3.0 TiB 10 GiB 0 B default default default

    Create your first PVC

    For your apps to use persistent volumes powered by Portworx, you must use a StorageClass that references Portworx as the provisioner. Portworx includes a number of default StorageClasses, which you can reference with PersistentVolumeClaims (PVCs) you create. For a more general overview of how storage works within Kubernetes, refer to the Persistent Volumes section of the Kubernetes documentation.

    Perform the following steps to create a PVC:

    1. Create a PVC referencing the px-csi-db default StorageClass and save the file:

      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: px-check-pvc
      spec:
        storageClassName: px-csi-db
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
    2. Run the kubectl apply command to create the PVC:

      kubectl apply -f <your-pvc-name>.yaml
      persistentvolumeclaim/px-check-pvc created
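
    To exercise the new volume, you can mount the PVC in a pod. A minimal sketch (the pod name, image, and mount path are illustrative):

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: px-check-pod            # illustrative name
    spec:
      containers:
        - name: app
          image: busybox:1.36       # illustrative image
          command: ["sh", "-c", "sleep 3600"]
          volumeMounts:
            - name: data
              mountPath: /data      # illustrative mount path
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: px-check-pvc # the PVC created above
    ```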

    Verify your StorageClass and PVC

    1. Enter the kubectl get storageclass command:

      kubectl get storageclass
      NAME                                 PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
      px-csi-db pxd.portworx.com Delete Immediate true 43d
      px-csi-db-cloud-snapshot pxd.portworx.com Delete Immediate true 43d
      px-csi-db-cloud-snapshot-encrypted pxd.portworx.com Delete Immediate true 43d
      px-csi-db-encrypted pxd.portworx.com Delete Immediate true 43d
      px-csi-db-local-snapshot pxd.portworx.com Delete Immediate true 43d
      px-csi-db-local-snapshot-encrypted pxd.portworx.com Delete Immediate true 43d
      px-csi-replicated pxd.portworx.com Delete Immediate true 43d
      px-csi-replicated-encrypted pxd.portworx.com Delete Immediate true 43d
      px-db kubernetes.io/portworx-volume Delete Immediate true 43d
      px-db-cloud-snapshot kubernetes.io/portworx-volume Delete Immediate true 43d
      px-db-cloud-snapshot-encrypted kubernetes.io/portworx-volume Delete Immediate true 43d
      px-db-encrypted kubernetes.io/portworx-volume Delete Immediate true 43d
      px-db-local-snapshot kubernetes.io/portworx-volume Delete Immediate true 43d
      px-db-local-snapshot-encrypted kubernetes.io/portworx-volume Delete Immediate true 43d
      px-replicated kubernetes.io/portworx-volume Delete Immediate true 43d
      px-replicated-encrypted kubernetes.io/portworx-volume Delete Immediate true 43d
      stork-snapshot-sc stork-snapshot Delete Immediate true 43d

      kubectl returns details about the StorageClasses available to you. Verify that px-csi-db appears in the list.

    2. Enter the kubectl get pvc command. If this is the only PVC that you've created, you should see only one entry in the output:

      kubectl get pvc <your-pvc-name>
      NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
      px-check-pvc Bound pvc-dce346e8-ff02-4dfb-935c-2377767c8ce0 2Gi RWO px-csi-db 3m7s

      kubectl returns details about your PVC if it was created correctly. Verify that the configuration details appear as you intended.
