Install and configure Portworx on VMware Tanzu Kubernetes Grid Service
This page provides instructions for installing Portworx in a VMware Tanzu Kubernetes Grid Service (TKGS) environment.
The following diagram shows how Portworx manages cloud drives:
When Portworx is deployed within Tanzu, the Cloud Drive (CD) module does the following:
- Initializes a Portworx node, which involves joining the Tanzu cluster as a storage-capable node and adding to the global capacity of the cluster.
- Executes a sequence of calls to check the following:
  - If a set of drives is already attached to the node, the CD module uses them. If the drives exist but are not yet attached to the node, the CD module attaches them.
  - If no drives are attached or available for that node, the CD module creates and attaches them.
- Makes calls to Kubernetes through the CSI implementation in cloud drives instead of talking to the underlying storage API.
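For reference, the cloud drives managed by the CD module are declared in the StorageCluster spec that drives the install. The following is a minimal sketch of what that section can look like; the StorageClass name, drive size, and metadata values are placeholders, and the spec you generate in Portworx Central (see the Install Portworx section below) is the authoritative version:
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster
  namespace: kube-system
spec:
  cloudStorage:
    deviceSpecs:
    # type names the StorageClass backing the cloud drives;
    # size is the per-drive capacity in GiB (both are placeholders here)
    - type=vsphere-immediate-sc,size=150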
- Portworx does not support Tanzu Community Edition.
- Do not enable Storage DRS on the underlying datastore for the PVs and VMs. The vSphere CSI driver and Cloud Native Storage do not currently support the Storage DRS feature in vSphere.
Prerequisites
- A running TKG or TKGS Kubernetes cluster.
- If you're using TKGS, ensure all prerequisites are met as described in the VMware documentation.
- Enable Workload Management according to VMware Best Practices.
- Ensure the required ports in the 9001:9020 range are open on all nodes.
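To spot-check that the ports are reachable, you can probe the range from another node with netcat. This is a quick sketch, assuming nc is installed; <node-ip> is a placeholder for one of your node addresses:
for port in $(seq 9001 9020); do
  # -z: scan without sending data, -w 2: two-second timeout
  nc -z -w 2 <node-ip> "$port" && echo "port $port open" || echo "port $port closed"
done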
Configure your environment
Follow the instructions in this section to configure your environment before installing Portworx.
(Optional) Install packages for encrypted volumes
If you use encrypted volumes on Tanzu clusters with PhotonOS, install the device-mapper and cryptsetup packages on the cluster nodes using the following commands:
- Update the package repositories:
yum update -y
- Install the device-mapper packages:
yum install device-mapper
- Install the cryptsetup packages:
yum install cryptsetup
Any nodes newly added to the cluster will not have these packages preinstalled, which means that volumes attached to those nodes cannot be encrypted.
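To confirm both packages are in place on a node, you can check their versions:
dmsetup version
cryptsetup --version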
Create the StorageClass
In a Kubernetes environment, CSI (Container Storage Interface) is a standardized interface that allows Portworx to be integrated with TKGS clusters. TKGS leverages CSI to interact with Portworx, enabling dynamic provisioning and management of persistent volumes for containerized applications.
Follow the instructions in this section to enable the CSI cloud drive configuration. To do this, you must create a StorageClass with the CSI driver specified as the provisioner and install it in your Kubernetes cluster.
- Get a CSI provisioner name by running the following command on your Tanzu Kubernetes cluster for use in the next step:
kubectl get csidriver
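The output lists the CSI drivers registered with the cluster. On a Tanzu Kubernetes cluster this is typically the vSphere CSI driver, so you should see output similar to the following; the exact columns and values depend on your Kubernetes version and environment:
NAME                     ATTACHREQUIRED   PODINFOONMOUNT   MODES        AGE
csi.vsphere.vmware.com   true             false            Persistent   23d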
- Create a StorageClass on your specified datastore with the following configuration. In the provisioner field, enter the CSI driver name you obtained in step 1. Provide the datastore URL in the datastoreurl field from your vSphere client:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-immediate-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: <csi_driver_name>
parameters:
  datastoreurl: "<your-datastore-url>"
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
Note: You can also specify the SPBM policies in the StorageClass with the storagepolicyname parameter.
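After saving the manifest, apply it and confirm the StorageClass registered. The filename vsphere-immediate-sc.yaml here is just a placeholder for wherever you saved the manifest above:
kubectl apply -f vsphere-immediate-sc.yaml
kubectl get storageclass vsphere-immediate-sc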
Install Portworx
Once you've configured your environment and ensured that you meet the prerequisites, you're ready to deploy Portworx.
Generate the specs
To install Portworx with Kubernetes, you must first generate Kubernetes manifests that you will deploy in your cluster:
- Sign in to the Portworx Central console. The system displays the Welcome to Portworx Central! page.
- In the Portworx Enterprise section, select Generate Cluster Spec. The system displays the Generate Spec page.
- From the Portworx Version dropdown menu, select the Portworx version to install.
- For Platform, select VMware Tanzu.
- Specify the same StorageClass name that you created in the previous section (vsphere-immediate-sc).
- Click Save Spec to generate the specs.
Apply the specs
Apply the generated specs to your cluster.
kubectl apply -f px-spec.yaml
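The deployment can take a few minutes to settle. To follow along as pods come up, you can watch them; this assumes the default name=portworx label that Portworx applies to its pods:
kubectl -n <px-namespace> get pods -l name=portworx -w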
Verify your Portworx installation
Once you've installed Portworx, you can perform the following tasks to verify that Portworx has installed correctly.
Verify if all pods are running
Enter the following kubectl get pods command to list and filter the results for Portworx pods:
kubectl get pods -n <px-namespace> -o wide | grep -e portworx -e px
portworx-api-774c2                                      1/1     Running   0                2m55s   192.168.121.196   username-k8s1-node0    <none>           <none>
portworx-api-t4lf9                                      1/1     Running   0                2m55s   192.168.121.99    username-k8s1-node1    <none>           <none>
portworx-api-dvw64                                      1/1     Running   0                2m55s   192.168.121.99    username-k8s1-node2    <none>           <none>
portworx-kvdb-94bpk                                     1/1     Running   0                4s      192.168.121.196   username-k8s1-node0    <none>           <none>
portworx-kvdb-8b67l                                     1/1     Running   0                10s     192.168.121.196   username-k8s1-node1    <none>           <none>
portworx-kvdb-fj72p                                     1/1     Running   0                30s     192.168.121.196   username-k8s1-node2    <none>           <none>
portworx-operator-58967ddd6d-kmz6c                      1/1     Running   0                4m1s    10.244.1.99       username-k8s1-node0    <none>           <none>
prometheus-px-prometheus-0                              2/2     Running   0                2m41s   10.244.1.105      username-k8s1-node0    <none>           <none>
px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-3e9bf3cd834d-9gs79   2/2     Running   0                2m55s   192.168.121.196   username-k8s1-node0    <none>           <none>
px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-3e9bf3cd834d-vpptx   2/2     Running   0                2m55s   192.168.121.99    username-k8s1-node1    <none>           <none>
px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-3e9bf3cd834d-bxmpn   2/2     Running   0                2m55s   192.168.121.191   username-k8s1-node2    <none>           <none>
px-csi-ext-868fcb9fc6-54bmc                             4/4     Running   0                3m5s    10.244.1.103      username-k8s1-node0    <none>           <none>
px-csi-ext-868fcb9fc6-8tk79                             4/4     Running   0                3m5s    10.244.1.102      username-k8s1-node2    <none>           <none>
px-csi-ext-868fcb9fc6-vbqzk                             4/4     Running   0                3m5s    10.244.3.107      username-k8s1-node1    <none>           <none>
px-prometheus-operator-59b98b5897-9nwfv                 1/1     Running   0                3m3s    10.244.1.104      username-k8s1-node0    <none>           <none>
Note the name of one of your px-cluster pods. You'll run pxctl commands from these pods in the following steps.
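As a convenience, you can capture a pod name in a shell variable instead of copying it by hand; this sketch assumes the pods carry the name=portworx label, as in a default install:
PX_POD=$(kubectl get pods -l name=portworx -n <px-namespace> -o jsonpath='{.items[0].metadata.name}')
echo $PX_POD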
Verify Portworx cluster status
You can find the status of the Portworx cluster by running pxctl status commands from a pod. Enter the following kubectl exec command, specifying the pod name you retrieved in the previous section:
kubectl exec <pod-name> -n <px-namespace> -- /opt/pwx/bin/pxctl status
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
Status: PX is operational
Telemetry: Disabled or Unhealthy
Metering: Disabled or Unhealthy
License: Trial (expires in 31 days)
Node ID: xxxxxxxx-xxxx-xxxx-xxxx-70c31d0f478e
        IP: 192.168.121.99 
        Local Storage Pool: 1 pool
        POOL    IO_PRIORITY     RAID_LEVEL      USABLE  USED    STATUS  ZONE    REGION
        0       HIGH            raid0           3.0 TiB 10 GiB  Online  default default
        Local Storage Devices: 3 devices
        Device  Path            Media Type              Size            Last-Scan
        0:1     /dev/vdb        STORAGE_MEDIUM_MAGNETIC 1.0 TiB         14 Jul 22 22:03 UTC
        0:2     /dev/vdc        STORAGE_MEDIUM_MAGNETIC 1.0 TiB         14 Jul 22 22:03 UTC
        0:3     /dev/vdd        STORAGE_MEDIUM_MAGNETIC 1.0 TiB         14 Jul 22 22:03 UTC
        * Internal kvdb on this node is sharing this storage device /dev/vdc  to store its data.
        total           -       3.0 TiB
        Cache Devices:
         * No cache devices
Cluster Summary
        Cluster ID: px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-3e9bf3cd834d
        Cluster UUID: xxxxxxxx-xxxx-xxxx-xxxx-6f3fd5522eae
        Scheduler: kubernetes
        Nodes: 3 node(s) with storage (3 online)
        IP              ID                                      SchedulerNodeName       Auth            StorageNode     Used    Capacity        Status  StorageStatus       Version         Kernel                  OS
        192.168.121.196 xxxxxxxx-xxxx-xxxx-xxxx-fad8c65b8edc    username-k8s1-node0      Disabled        Yes             10 GiB  3.0 TiB         Online  Up 2.11.0-81faacc   3.10.0-1127.el7.x86_64  CentOS Linux 7 (Core)
        192.168.121.99  xxxxxxxx-xxxx-xxxx-xxxx-70c31d0f478e    username-k8s1-node1      Disabled        Yes             10 GiB  3.0 TiB         Online  Up (This node)      2.11.0-81faacc  3.10.0-1127.el7.x86_64  CentOS Linux 7 (Core)
        192.168.121.191 xxxxxxxx-xxxx-xxxx-xxxx-19d45b4c541a    username-k8s1-node2      Disabled        Yes             10 GiB  3.0 TiB         Online  Up  2.11.0-81faacc  3.10.0-1127.el7.x86_64  CentOS Linux 7 (Core)
Global Storage Pool        
        Total Used      :  30 GiB
        Total Capacity  :  9.0 TiB
The Portworx status will display PX is operational if your cluster is running as intended.
Verify pxctl cluster provision status
- Find the storage cluster; the status should show as Online:
kubectl -n <px-namespace> get storagecluster
NAME                                              CLUSTER UUID                           STATUS   VERSION   AGE
px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-3e9bf3cd834d   xxxxxxxx-xxxx-xxxx-xxxx-6f3fd5522eae   Online   2.11.0    10m
- Find the storage nodes; the statuses should show as Online:
kubectl -n <px-namespace> get storagenodes
NAME                  ID                                     STATUS   VERSION          AGE
username-k8s1-node0   xxxxxxxx-xxxx-xxxx-xxxx-fad8c65b8edc   Online   2.11.0-81faacc   11m
username-k8s1-node1   xxxxxxxx-xxxx-xxxx-xxxx-70c31d0f478e   Online   2.11.0-81faacc   11m
username-k8s1-node2   xxxxxxxx-xxxx-xxxx-xxxx-19d45b4c541a   Online   2.11.0-81faacc   11m
- Verify the Portworx cluster provision status. Enter the following kubectl exec command, specifying the pod name you retrieved in the previous section:
kubectl exec <pod-name> -n <px-namespace> -- /opt/pwx/bin/pxctl cluster provision-status
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
NODE                                   NODE STATUS   POOL                                         POOL STATUS   IO_PRIORITY   SIZE      AVAILABLE   USED     PROVISIONED   ZONE      REGION    RACK
xxxxxxxx-xxxx-xxxx-xxxx-70c31d0f478e   Up            0 ( xxxxxxxx-xxxx-xxxx-xxxx-4d74ecc7e159 )   Online        HIGH          3.0 TiB   3.0 TiB     10 GiB   0 B           default   default   default
xxxxxxxx-xxxx-xxxx-xxxx-fad8c65b8edc   Up            0 ( xxxxxxxx-xxxx-xxxx-xxxx-97e4359e57c0 )   Online        HIGH          3.0 TiB   3.0 TiB     10 GiB   0 B           default   default   default
xxxxxxxx-xxxx-xxxx-xxxx-19d45b4c541a   Up            0 ( xxxxxxxx-xxxx-xxxx-xxxx-8904cab0e019 )   Online        HIGH          3.0 TiB   3.0 TiB     10 GiB   0 B           default   default   default
Create your first PVC
For your apps to use persistent volumes powered by Portworx, you must use a StorageClass that references Portworx as the provisioner. Portworx includes a number of default StorageClasses, which you can reference with PersistentVolumeClaims (PVCs) you create. For a more general overview of how storage works within Kubernetes, refer to the Persistent Volumes section of the Kubernetes documentation.
Perform the following steps to create a PVC:
- Create a PVC referencing the px-csi-db default StorageClass and save the file:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: px-check-pvc
spec:
  storageClassName: px-csi-db
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
- Run the kubectl apply command to create a PVC:
kubectl apply -f <your-pvc-name>.yaml
persistentvolumeclaim/px-check-pvc created
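To exercise the new claim, you can mount it in a throwaway pod. This is a minimal sketch; the pod name px-check-pod and the busybox image are illustrative only, while the claimName matches the PVC created above:
apiVersion: v1
kind: Pod
metadata:
  name: px-check-pod
spec:
  containers:
  - name: app
    image: busybox
    # Write a file to the Portworx-backed volume, then idle
    command: ["sh", "-c", "echo hello > /data/test && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: px-check-pvc
Save the manifest, apply it with kubectl apply, and confirm the pod reaches Running.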
Verify your StorageClass and PVC
- Enter the kubectl get storageclass command:
kubectl get storageclass
NAME                                 PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
px-csi-db                            pxd.portworx.com                Delete          Immediate           true                   43d
px-csi-db-cloud-snapshot             pxd.portworx.com                Delete          Immediate           true                   43d
px-csi-db-cloud-snapshot-encrypted   pxd.portworx.com                Delete          Immediate           true                   43d
px-csi-db-encrypted                  pxd.portworx.com                Delete          Immediate           true                   43d
px-csi-db-local-snapshot             pxd.portworx.com                Delete          Immediate           true                   43d
px-csi-db-local-snapshot-encrypted   pxd.portworx.com                Delete          Immediate           true                   43d
px-csi-replicated                    pxd.portworx.com                Delete          Immediate           true                   43d
px-csi-replicated-encrypted          pxd.portworx.com                Delete          Immediate           true                   43d
px-db                                kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
px-db-cloud-snapshot                 kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
px-db-cloud-snapshot-encrypted       kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
px-db-encrypted                      kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
px-db-local-snapshot                 kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
px-db-local-snapshot-encrypted       kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
px-replicated                        kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
px-replicated-encrypted              kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
stork-snapshot-sc                    stork-snapshot                  Delete          Immediate           true                   43d
kubectl returns details about the StorageClasses available to you. Verify that px-csi-db appears in the list.
- Enter the kubectl get pvc command. If this is the only StorageClass and PVC that you've created, you should see only one entry in the output:
kubectl get pvc <your-pvc-name>
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
px-check-pvc   Bound    pvc-xxxxxxxx-xxxx-xxxx-xxxx-2377767c8ce0   2Gi        RWO            px-csi-db      3m7s
kubectl returns details about your PVC if it was created correctly. Verify that the configuration details appear as you intended.
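If the PVC stays Pending instead of reaching Bound, the Events section of a describe usually explains why:
kubectl describe pvc px-check-pvc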