Install Portworx on IKS
This document provides instructions for installing Portworx with the Portworx Operator on IBM Cloud Kubernetes Service (IKS) through the IBM Cloud catalog. It describes a default installation configuration designed to get you up and running with a typical cluster that has the following properties:
- The cluster is located in a single availability zone
- Portworx is installed using an internal KVDB
- Kubernetes has access to the public network and gateway
 
Prerequisites
Before you start installing Portworx, ensure that you meet the following minimum prerequisites:
- You must have an IBM Cloud account with admin privileges. Portworx does not support using a service ID.
- You must have a Kubernetes cluster with at least 3 worker nodes deployed on IBM Cloud. The cluster must meet the Portworx minimum requirements, and each worker node must meet the following minimum specifications (you can spot-check node capacity with the command shown after this list):
  - CPU: 16 cores
  - Memory: 32 GB
  - Disk: 100 GB
- You must have the ability to provision cloud storage for each worker node.
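To confirm that your worker nodes expose enough CPU and memory, you can list node capacity with kubectl. This is a minimal sketch using standard custom-columns output; compare the values against the requirements above:

kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory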
 
(Optional) Create a StorageClass for encrypting cloud drives
If you want to encrypt your cloud drives, create a custom StorageClass in the cluster where you plan to deploy Portworx. Note that encryption may impact the performance of your environment. Follow the steps outlined in the IBM documentation.
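The exact StorageClass definition depends on your cluster type, and the IBM documentation is authoritative; for a VPC cluster using the IBM VPC Block CSI driver, a sketch might look like the following. The StorageClass name, profile, and root key CRN (<your-key-crn>) are placeholders to replace with your own values:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-encrypted-drives        # hypothetical name; choose your own
provisioner: vpc.block.csi.ibm.io
parameters:
  profile: "10iops-tier"           # assumption: pick a profile suited to your workload
  encrypted: "true"
  encryptionKey: "<your-key-crn>"  # CRN of your root key in Key Protect or Hyper Protect Crypto Services
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer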
Install Portworx
- Navigate to IBM Cloud. From the Catalog page, search for and select Portworx Enterprise.
- From the configuration page, make the following selections:
  - Under Select a location, specify the location in which your Kubernetes cluster is located.
  - Under Select a pricing plan, select either Portworx Enterprise with Disaster Recovery (DR) or Enterprise.
  - Under Configure your resource, do the following:
    - Choose a service name or accept the default.
    - Specify the resource group your Kubernetes cluster is in.
  - At IBM Cloud API key, enter your IBM Cloud API key. If you need to create one, see the example after these steps.
  - At Portworx cluster name, enter a valid Portworx cluster name.
  - At Cloud drives, select Use Cloud Drives from the drop-down menu. This reveals a number of new fields:
    - For Number of drives, select your desired number of cloud drives.
    - For Max storage nodes per zone, specify 3 storage nodes per zone.
    - For Storage Class name, retain the default storage class names. All cloud drives should show the same Storage Class name. To encrypt cloud drives, select the custom StorageClass you created earlier from the drop-down menu.
    - For Size, define your desired disk size in GB.
  - For Portworx metadata Key-value store, specify Portworx KVDB. This deploys Portworx with an internal KVDB cluster.
  - For Secret type, keep Kubernetes secret.
  - Leave Helm Parameters blank.
  - Enable CSI.
  - For Portworx version, specify your desired Portworx version.
- Agree to the terms and click Create to launch the Portworx cluster. This can take 20 minutes or more.
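If you do not already have an IBM Cloud API key to paste into the form, you can create one with the IBM Cloud CLI. This is a minimal sketch; the key name and description are placeholders, and --file saves the key to a local file so you can retrieve it later:

ibmcloud login
ibmcloud iam api-key-create portworx-install-key -d "API key for installing Portworx" --file portworx-key.json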
 
Verify your Portworx installation
Once you've installed Portworx, you can perform the following tasks to verify that Portworx has installed correctly.
Verify that all pods are running
Enter the following kubectl get pods command to list and filter the results for Portworx pods:
kubectl get pods -n <px-namespace> -o wide | grep -e portworx -e px
portworx-api-774c2                                      1/1     Running   0                2m55s   192.168.121.196   username-k8s1-node0    <none>           <none>
portworx-api-t4lf9                                      1/1     Running   0                2m55s   192.168.121.99    username-k8s1-node1    <none>           <none>
portworx-api-dvw64                                      1/1     Running   0                2m55s   192.168.121.191   username-k8s1-node2    <none>           <none>
portworx-kvdb-94bpk                                     1/1     Running   0                4s      192.168.121.196   username-k8s1-node0    <none>           <none>
portworx-kvdb-8b67l                                     1/1     Running   0                10s     192.168.121.99    username-k8s1-node1    <none>           <none>
portworx-kvdb-fj72p                                     1/1     Running   0                30s     192.168.121.191   username-k8s1-node2    <none>           <none>
portworx-operator-58967ddd6d-kmz6c                      1/1     Running   0                4m1s    10.244.1.99       username-k8s1-node0    <none>           <none>
prometheus-px-prometheus-0                              2/2     Running   0                2m41s   10.244.1.105      username-k8s1-node0    <none>           <none>
px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-3e9bf3cd834d-9gs79   2/2     Running   0                2m55s   192.168.121.196   username-k8s1-node0    <none>           <none>
px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-3e9bf3cd834d-vpptx   2/2     Running   0                2m55s   192.168.121.99    username-k8s1-node1    <none>           <none>
px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-3e9bf3cd834d-bxmpn   2/2     Running   0                2m55s   192.168.121.191   username-k8s1-node2    <none>           <none>
px-csi-ext-868fcb9fc6-54bmc                             4/4     Running   0                3m5s    10.244.1.103      username-k8s1-node0    <none>           <none>
px-csi-ext-868fcb9fc6-8tk79                             4/4     Running   0                3m5s    10.244.1.102      username-k8s1-node2    <none>           <none>
px-csi-ext-868fcb9fc6-vbqzk                             4/4     Running   0                3m5s    10.244.3.107      username-k8s1-node1    <none>           <none>
px-prometheus-operator-59b98b5897-9nwfv                 1/1     Running   0                3m3s    10.244.1.104      username-k8s1-node0    <none>           <none>
Note the name of one of your px-cluster pods. You'll run pxctl commands from these pods in the following steps.
Verify Portworx cluster status
You can find the status of the Portworx cluster by running pxctl status commands from a pod. Enter the following kubectl exec command, specifying the pod name you retrieved in the previous section:
kubectl exec <pod-name> -n <px-namespace> -- /opt/pwx/bin/pxctl status
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
Status: PX is operational
Telemetry: Disabled or Unhealthy
Metering: Disabled or Unhealthy
License: Trial (expires in 31 days)
Node ID: xxxxxxxx-xxxx-xxxx-xxxx-70c31d0f478e
        IP: 192.168.121.99 
        Local Storage Pool: 1 pool
        POOL    IO_PRIORITY     RAID_LEVEL      USABLE  USED    STATUS  ZONE    REGION
        0       HIGH            raid0           3.0 TiB 10 GiB  Online  default default
        Local Storage Devices: 3 devices
        Device  Path            Media Type              Size            Last-Scan
        0:1     /dev/vdb        STORAGE_MEDIUM_MAGNETIC 1.0 TiB         14 Jul 22 22:03 UTC
        0:2     /dev/vdc        STORAGE_MEDIUM_MAGNETIC 1.0 TiB         14 Jul 22 22:03 UTC
        0:3     /dev/vdd        STORAGE_MEDIUM_MAGNETIC 1.0 TiB         14 Jul 22 22:03 UTC
        * Internal kvdb on this node is sharing this storage device /dev/vdc  to store its data.
        total           -       3.0 TiB
        Cache Devices:
         * No cache devices
Cluster Summary
        Cluster ID: px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-3e9bf3cd834d
        Cluster UUID: xxxxxxxx-xxxx-xxxx-xxxx-6f3fd5522eae
        Scheduler: kubernetes
        Nodes: 3 node(s) with storage (3 online)
        IP              ID                                      SchedulerNodeName       Auth            StorageNode     Used    Capacity        Status  StorageStatus       Version         Kernel                  OS
        192.168.121.196 xxxxxxxx-xxxx-xxxx-xxxx-fad8c65b8edc    username-k8s1-node0      Disabled        Yes             10 GiB  3.0 TiB         Online  Up 2.11.0-81faacc   3.10.0-1127.el7.x86_64  CentOS Linux 7 (Core)
        192.168.121.99  xxxxxxxx-xxxx-xxxx-xxxx-70c31d0f478e    username-k8s1-node1      Disabled        Yes             10 GiB  3.0 TiB         Online  Up (This node)      2.11.0-81faacc  3.10.0-1127.el7.x86_64  CentOS Linux 7 (Core)
        192.168.121.191 xxxxxxxx-xxxx-xxxx-xxxx-19d45b4c541a    username-k8s1-node2      Disabled        Yes             10 GiB  3.0 TiB         Online  Up  2.11.0-81faacc  3.10.0-1127.el7.x86_64  CentOS Linux 7 (Core)
Global Storage Pool        
        Total Used      :  30 GiB
        Total Capacity  :  9.0 TiB
The Portworx status will display PX is operational if your cluster is running as intended.
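If you want to script this check, you can target the portworx container directly (which suppresses the "Defaulted container" notice) and grep for the operational line. This one-liner is a sketch that reuses the same pod and namespace placeholders as above:

kubectl exec <pod-name> -n <px-namespace> -c portworx -- /opt/pwx/bin/pxctl status | grep -q "PX is operational" && echo "Portworx is healthy"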
Verify pxctl cluster provision status
- Find the storage cluster; the status should show as Online:

  kubectl -n <px-namespace> get storagecluster
  NAME                                              CLUSTER UUID                           STATUS   VERSION   AGE
  px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-3e9bf3cd834d   xxxxxxxx-xxxx-xxxx-xxxx-6f3fd5522eae   Online   2.11.0    10m

- Find the storage nodes; the statuses should show as Online:

  kubectl -n <px-namespace> get storagenodes
  NAME                  ID                                     STATUS   VERSION          AGE
  username-k8s1-node0   xxxxxxxx-xxxx-xxxx-xxxx-fad8c65b8edc   Online   2.11.0-81faacc   11m
  username-k8s1-node1   xxxxxxxx-xxxx-xxxx-xxxx-70c31d0f478e   Online   2.11.0-81faacc   11m
  username-k8s1-node2   xxxxxxxx-xxxx-xxxx-xxxx-19d45b4c541a   Online   2.11.0-81faacc   11m

- Verify the Portworx cluster provision status. Enter the following kubectl exec command, specifying the pod name you retrieved in the previous section:

  kubectl exec <pod-name> -n <px-namespace> -- /opt/pwx/bin/pxctl cluster provision-status
  Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
  NODE                                   NODE STATUS   POOL                                         POOL STATUS   IO_PRIORITY   SIZE      AVAILABLE   USED     PROVISIONED   ZONE      REGION    RACK
  xxxxxxxx-xxxx-xxxx-xxxx-70c31d0f478e   Up            0 ( xxxxxxxx-xxxx-xxxx-xxxx-4d74ecc7e159 )   Online        HIGH          3.0 TiB   3.0 TiB     10 GiB   0 B           default   default   default
  xxxxxxxx-xxxx-xxxx-xxxx-fad8c65b8edc   Up            0 ( xxxxxxxx-xxxx-xxxx-xxxx-97e4359e57c0 )   Online        HIGH          3.0 TiB   3.0 TiB     10 GiB   0 B           default   default   default
  xxxxxxxx-xxxx-xxxx-xxxx-19d45b4c541a   Up            0 ( xxxxxxxx-xxxx-xxxx-xxxx-8904cab0e019 )   Online        HIGH          3.0 TiB   3.0 TiB     10 GiB   0 B           default   default   default
Create your first PVC
For your apps to use persistent volumes powered by Portworx, you must use a StorageClass that references Portworx as the provisioner. Portworx includes a number of default StorageClasses, which you can reference with PersistentVolumeClaims (PVCs) you create. For a more general overview of how storage works within Kubernetes, refer to the Persistent Volumes section of the Kubernetes documentation.
Perform the following steps to create a PVC:
- Create a PVC referencing the px-csi-db default StorageClass and save the file:

  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: px-check-pvc
  spec:
    storageClassName: px-csi-db
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 2Gi

- Run the kubectl apply command to create the PVC:

  kubectl apply -f <your-pvc-name>.yaml
  persistentvolumeclaim/px-check-pvc created
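To exercise the new volume, you can mount the PVC in a test pod. This standard Kubernetes manifest is a sketch, not part of the catalog install; the pod name and image are placeholders:

kind: Pod
apiVersion: v1
metadata:
  name: px-check-pod          # hypothetical test pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: px-vol
      mountPath: /data
  volumes:
  - name: px-vol
    persistentVolumeClaim:
      claimName: px-check-pvc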
Verify your StorageClass and PVC
- Enter the kubectl get storageclass command:

  kubectl get storageclass
  NAME                                 PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
  px-csi-db                            pxd.portworx.com                Delete          Immediate           true                   43d
  px-csi-db-cloud-snapshot             pxd.portworx.com                Delete          Immediate           true                   43d
  px-csi-db-cloud-snapshot-encrypted   pxd.portworx.com                Delete          Immediate           true                   43d
  px-csi-db-encrypted                  pxd.portworx.com                Delete          Immediate           true                   43d
  px-csi-db-local-snapshot             pxd.portworx.com                Delete          Immediate           true                   43d
  px-csi-db-local-snapshot-encrypted   pxd.portworx.com                Delete          Immediate           true                   43d
  px-csi-replicated                    pxd.portworx.com                Delete          Immediate           true                   43d
  px-csi-replicated-encrypted          pxd.portworx.com                Delete          Immediate           true                   43d
  px-db                                kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
  px-db-cloud-snapshot                 kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
  px-db-cloud-snapshot-encrypted       kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
  px-db-encrypted                      kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
  px-db-local-snapshot                 kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
  px-db-local-snapshot-encrypted       kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
  px-replicated                        kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
  px-replicated-encrypted              kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
  stork-snapshot-sc                    stork-snapshot                  Delete          Immediate           true                   43d

  kubectl returns details about the StorageClasses available to you. Verify that px-csi-db appears in the list.

- Enter the kubectl get pvc command. If this is the only PVC that you've created, you should see only one entry in the output:

  kubectl get pvc <your-pvc-name>
  NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
  px-check-pvc   Bound    pvc-xxxxxxxx-xxxx-xxxx-xxxx-2377767c8ce0   2Gi        RWO            px-csi-db      3m7s

  kubectl returns details about your PVC if it was created correctly. Verify that the configuration details appear as you intended.
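Once you've confirmed that the PVC is Bound, you can remove the test resources. This assumes the names used in this walkthrough, including the hypothetical test pod from the earlier sketch:

kubectl delete pod px-check-pod
kubectl delete pvc px-check-pvc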