Disk Provisioning on VMware vSphere


This guide explains how the Portworx Dynamic Disk Provisioning feature works within Kubernetes on VMware and the requirements for it. You do not need the VMware cloud provider plugin (VCP) or PKS for the setup below to work; a vanilla/upstream Kubernetes setup works as well.

Disk provisioning on VMware is supported only when running Portworx on Kubernetes.

Architecture

The diagram below gives an overview of the Portworx architecture on vSphere using shared datastores.

  • Portworx runs as a DaemonSet, so each Kubernetes worker node runs the Portworx daemon.
  • Based on the spec given by the end user, Portworx on each node creates its disks on the configured shared datastore(s) or datastore cluster(s).
  • Portworx aggregates all of the disks and forms a single storage cluster. End users can carve out PVCs (Persistent Volume Claims), PVs (Persistent Volumes), and Snapshots from this storage cluster.
  • Portworx tracks and manages the disks that it creates. In a failure event, if a new VM spins up, Portworx on the new VM will attach to the same disk that was previously created by the node on the failed VM.

Portworx architecture for PKS on vSphere using shared datastores or datastore clusters

Disk templates

A disk template defines the VMDK properties that Portworx uses as a reference for creating the actual disks, out of which Portworx creates the virtual volumes for your PVCs. These templates are given to Portworx during installation as arguments to the DaemonSet.

The template has the following format:

"type=<vmdk type>,size=<size of the vmdk>"
  • type: The following three types are supported
    • thin
    • zeroedthick
    • eagerzeroedthick
  • size: The size of the VMDK in GiB
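
For example, a minimal sketch of how a single template could appear among the Portworx container arguments in the DaemonSet spec (other required arguments are omitted here):

# Fragment of the Portworx container in the DaemonSet spec; other args omitted
args:
  ["-s", "type=zeroedthick,size=200"]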

Limiting storage nodes

PX allows you to create a heterogeneous cluster in which some of the nodes are storage nodes and the rest are storageless.

You can specify the number of storage nodes in your cluster by setting the max_storage_nodes_per_zone input argument. This instructs PX to limit the number of storage nodes in one zone to the specified value. The total number of storage nodes in your cluster will be:

Total Storage Nodes = (Num of Zones) * max_storage_nodes_per_zone.

When planning capacity for your auto-scaling cluster, make sure the minimum size of your cluster is equal to the total number of storage nodes in PX. This ensures that when you scale up your cluster, only storageless nodes are added, and when you scale down, the cluster shrinks to the minimum size, keeping all PX storage nodes online and available.

You can also omit the max_storage_nodes_per_zone argument. In that case, when you scale up the cluster, the new nodes will also be storage nodes, but when scaling down you will lose storage nodes, causing PX to lose quorum.

Examples:

  • "-s", "type=zeroedthick,size=200", "-max_storage_nodes_per_zone", "1"

For a cluster of 6 nodes spanning 3 zones (zone-1a, zone-1b, zone-1c), in the above example PX will have 3 storage nodes (one in each zone) and 3 storageless nodes. PX will create a total of 3 disks of size 200 each and attach one disk to each storage node.

  • "-s", "type=zeroedthick,size=200", "-s", "type=zeroedthick,size=100", "-max_storage_nodes_per_zone", "2"

For a cluster of 9 nodes spanning 2 zones (zone-1a, zone-1b), in the above example PX will have 4 storage nodes and 5 storageless nodes. PX will create a total of 8 disks (4 of size 200 and 4 of size 100). PX will attach a set of 2 disks (one of size 200 and one of size 100) to each of the 4 storage nodes.

Availability across failure domains

Since PX is a storage overlay that automatically replicates your data, we recommend using multiple availability zones when creating your VMware vSphere based cluster. Portworx automatically detects regions and zones that are populated using known Kubernetes node labels. You can also apply custom labels to nodes to inform Portworx about regions, zones, and racks. The Cluster Topology awareness page explains this in more detail.
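
As a sketch, if your nodes are not already labeled by a cloud provider, you could apply the well-known Kubernetes failure-domain labels manually (node, region, and zone names here are placeholders):

kubectl label nodes <node-name> failure-domain.beta.kubernetes.io/region=region-1
kubectl label nodes <node-name> failure-domain.beta.kubernetes.io/zone=zone-1a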

Installation

Step 1: Create a Kubernetes secret with your vCenter user and password

Update the following items in the Secret template below to match your environment:

  1. VSPHERE_USER: Use output of echo -n <vcenter-server-user> | base64
  2. VSPHERE_PASSWORD: Use output of echo -n <vcenter-server-password> | base64
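
For example, encoding the sample user shown in the template below:

echo -n 'administrator@vsphere.local' | base64
YWRtaW5pc3RyYXRvckB2c3BoZXJlLmxvY2Fs
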
apiVersion: v1
kind: Secret
metadata:
  name: px-vsphere-secret
  namespace: kube-system
type: Opaque
data:
  VSPHERE_USER: YWRtaW5pc3RyYXRvckB2c3BoZXJlLmxvY2Fs
  VSPHERE_PASSWORD: cHgxLjMuMEZUVw==

After updating the template above with your user and password, apply the spec with kubectl.
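
For example, assuming you saved the Secret above as px-vsphere-secret.yaml (a placeholder filename):

kubectl apply -f px-vsphere-secret.yaml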

Step 2: Download rest of the specs

Continue to Install on-premise for detailed instructions on installing Portworx.

Step 3: Add env variables in the DaemonSet spec

You will need to change (or add) the following sections in the Portworx DaemonSet section of the downloaded spec to match your environment (a sketch of the resulting env section follows the two lists below):

  1. VSPHERE_VCENTER: Hostname of the vCenter server.
  2. VSPHERE_DATASTORE_PREFIX: Prefix of the ESXi datastore(s) that Portworx will use for storage.
  3. VSPHERE_INSTALL_MODE: “shared”
  4. VSPHERE_USER: should be a valueFrom with a secretKeyRef of VSPHERE_USER (referencing the px-vsphere-secret defined above)
  5. VSPHERE_PASSWORD: should be a valueFrom with a secretKeyRef of VSPHERE_PASSWORD (referencing the px-vsphere-secret defined above)

Optionally, you may also need to specify these:

  1. VSPHERE_VCENTER_PORT: The port number, if your vSphere services are not on the default port 443
  2. VSPHERE_INSECURE: Set this if you are using self-signed certificates
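
Putting the above together, a minimal sketch of the env section of the Portworx container (the vCenter hostname and datastore prefix values are placeholders for your environment):

env:
  - name: VSPHERE_VCENTER
    value: "vcenter.example.com"
  - name: VSPHERE_DATASTORE_PREFIX
    value: "px-ds"
  - name: VSPHERE_INSTALL_MODE
    value: "shared"
  - name: VSPHERE_USER
    valueFrom:
      secretKeyRef:
        name: px-vsphere-secret
        key: VSPHERE_USER
  - name: VSPHERE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: px-vsphere-secret
        key: VSPHERE_PASSWORD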

Step 4: Grant your vcenter-server-user appropriate permissions

Your vcenter-server-user will need either the full Admin role or, for increased security, a custom-created role with the following minimum vSphere privileges:

  • Datastore.Browse
  • Datastore.FileManagement
  • Host.Local.ReconfigVM
  • VirtualMachine.Config.AddExistingDisk
  • VirtualMachine.Config.AddNewDisk
  • VirtualMachine.Config.AddRemoveDevice
  • VirtualMachine.Config.EditDevice
  • VirtualMachine.Config.RemoveDisk
  • VirtualMachine.Config.Settings

The above privileges are listed in the format returned by the govc utility, which is itself useful for troubleshooting your vcenter-server-user access.
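
As a quick sketch, assuming govc is installed, you can verify connectivity and inspect permissions as follows (the vCenter hostname is a placeholder; set GOVC_INSECURE only if you use self-signed certificates):

export GOVC_URL=vcenter.example.com
export GOVC_USERNAME=<vcenter-server-user>
export GOVC_PASSWORD=<vcenter-server-password>
export GOVC_INSECURE=1
govc about
govc permissions.ls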

Apply the specs

Apply the generated specs in your cluster.

kubectl apply -f px-spec.yaml

Monitor the Portworx pods

Wait until all Portworx pods show as ready in the output of the command below.

kubectl get pods -o wide -n kube-system -l name=portworx

Monitor Portworx cluster status
PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status

Post-Install

Once you have a running Portworx installation, refer to the post-install sections of the Portworx documentation for next steps.


