The steps below will help you enable dynamic provisioning of Portworx volumes in your Google Kubernetes Engine (GKE) cluster.

Prerequisites

Key-value store

Portworx uses a key-value store for its clustering metadata. Please have a clustered key-value database (etcd or consul) installed and ready. For etcd installation instructions, refer to this doc.
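As a quick sanity check, you can confirm that the key-value store is reachable from your nodes before installing Portworx. The command below is a minimal sketch assuming etcd is listening on its default client port; replace <etcd-host> with one of your own endpoints:

# should return a small JSON document with the etcd server and cluster versions
$ curl -s http://<etcd-host>:2379/version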

Shared mounts

Portworx 1.3 and higher automatically enables shared mounts.

If you are installing Portworx 1.2, you must configure Docker to allow shared mount propagation (see instructions); otherwise Portworx will fail to start.
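For reference, a minimal sketch of checking and enabling shared mount propagation on a host looks like the following; the exact steps for your Docker installation are in the linked instructions, and the unit file location may differ on your distribution:

# check whether the root mount is shared
$ findmnt -o TARGET,PROPAGATION /

# if it is not shared, make it shared
$ sudo mount --make-shared /

# also ensure the Docker systemd unit does not set MountFlags=slave, then restart Docker
$ sudo systemctl restart docker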

Firewall

Ensure ports 9001-9015 are open between the nodes that will run Portworx. Your nodes should also be able to reach the port your KVDB is running on (for example, etcd usually runs on port 2379).
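On GCP you can open these ports between the cluster nodes with a firewall rule similar to the sketch below; the rule name, network, and source range are placeholders for your environment:

# allow Portworx node-to-node traffic (9001-9015) and etcd client traffic (2379) within the cluster network
$ gcloud compute firewall-rules create portworx-ports \
    --network <your-network> \
    --source-ranges <your-node-cidr> \
    --allow tcp:9001-9015,tcp:2379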

NTP

Ensure all nodes running PX are time-synchronized and that the NTP service is configured and running.
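On Ubuntu node images, a quick way to spot-check time synchronization is timedatectl; this is only a sanity check, not a replacement for a properly configured NTP service:

# look for "NTP synchronized: yes" (or "System clock synchronized: yes" on newer systemd)
$ timedatectl status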

Note:
This deployment model where Portworx provisions storage drives is not supported with internal kvdb.

PX Version

Support for GKE is available only in portworx release version 1.4 and above.

Create a GKE cluster

The following points are important when creating your GKE cluster.

  1. Portworx is supported on GKE clusters provisioned with Ubuntu node images, so it is important to specify Ubuntu as the node image when creating clusters (see the example gcloud command below this list).

  2. To manage and auto-provision GCP disks, Portworx needs access to the GCP Compute Engine API. For GKE 1.10 and above, Compute Engine API access is disabled by default. This can be enabled in the “Project Access” section when creating the GKE cluster. You can either allow full access to all Cloud APIs or set access for each API. When setting access for each API, make sure to select Read Write for the Compute Engine dropdown.

  3. Portworx requires a ClusterRoleBinding for your user. Without this, the kubectl apply ... command fails with an error like clusterroles.rbac.authorization.k8s.io "portworx-pvc-controller-role" is forbidden.

Create a ClusterRoleBinding for your user using the following commands:

   # get current google identity
   $ gcloud info | grep Account
   Account: [myname@example.org]

   # grant cluster-admin to your current identity
   $ kubectl create clusterrolebinding myname-cluster-admin-binding \
      --clusterrole=cluster-admin --user=myname@example.org
   clusterrolebinding "myname-cluster-admin-binding" created

More information about creating GKE clusters can be found here.
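As an illustration, a cluster that satisfies the points above could be created with a gcloud command along these lines; the cluster name, zone, machine type, and node count are placeholders, and the scopes shown are one way to grant Compute Engine read/write access:

# create a GKE cluster with Ubuntu node images and Compute Engine read/write scope
$ gcloud container clusters create <cluster-name> \
    --zone <zone> \
    --machine-type n1-standard-4 \
    --num-nodes 3 \
    --image-type UBUNTU \
    --scopes compute-rw,storage-ro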

Install

Portworx is deployed as a Kubernetes DaemonSet. The following sections describe how to generate the spec files and apply them.

Disk template

Portworx takes a disk spec that is used to dynamically provision GCP persistent disks.

A GCP disk template defines the Google persistent disk properties that Portworx will use as a reference. There are two ways you can provide this template to Portworx.

1. Using a template specification

The spec uses the following format:

"type=<GCP disk type>,size=<size of disk>"
  • type: The following two types are supported:
    • pd-standard
    • pd-ssd
  • size: This is the size of the disk in GB

See GCP disk for more details on the above parameters.

Examples:

  • "type=pd-ssd,size=200"
  • "type=pd-standard,size=200", "type=pd-ssd,size=100"

2. Using existing GCP disks as templates

You can also reference an existing GCP disk as a template. On every node where PX is brought up as a storage node, new GCP disks identical to the template will be created.

For example, if you created a template GCP disk called px-disk-template-1, you can pass it to PX as a storage device parameter.

Ensure that these disks are created in the same zone as the GCP node group.
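For example, a template disk could be created ahead of time with a command like the following sketch; the size, type, and zone are placeholders:

# create a template disk in the same zone as the GKE node group
$ gcloud compute disks create px-disk-template-1 \
    --size 200GB \
    --type pd-ssd \
    --zone <zone>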

Limiting storage nodes

PX allows you to create a heterogeneous cluster where some of the nodes are storage nodes and the rest are storageless. You can specify the number of storage nodes in your cluster by setting the max_drive_set_count input argument. Modify the input arguments to PX as shown in the examples below.

Examples:

  • "-s", "type=pd-ssd,size=200", "-max_drive_set_count", "3"

In the above example, for a cluster of 5 nodes, PX will have 3 storage nodes and 2 storageless nodes. PX will create a total of 3 PDs of 200 GB each and attach one PD to each storage node.

  • "-s", "type=pd-standard,size=200", "-s", "type=pd-ssd,size=100", "-max_drive_set_count", "3"

In the above example, for a cluster of 5 nodes, PX will have 3 storage nodes and 2 storageless nodes. PX will create a total of 6 PDs (3 PDs of 200 GB and 3 PDs of 100 GB) and attach a set of 2 PDs (one of 200 GB and one of 100 GB) to each of the 3 storage nodes.

Generate the spec

We will supply the template(s) explained in the previous section when we create the Portworx spec.

To generate the spec file, go to the URLs below for the PX release you wish to use.

Alternately, you can use curl to generate the spec as described in Generating Portworx Kubernetes spec using curl.
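For reference, a curl invocation that passes the disk template from the previous section might look like the sketch below. The query parameters shown (c for the cluster name, k for the kvdb endpoint, kbver for the Kubernetes version, s for the drive spec) are assumptions here; confirm them against the spec generator documentation linked above:

# assumed query parameters: c=<cluster-name>, k=<kvdb-endpoint>, kbver=<k8s-version>, s=<disk template>
$ curl -L -o px-spec.yaml \
    "https://install.portworx.com/1.4/?c=<cluster-name>&k=etcd://<etcd-host>:2379&kbver=<k8s-version>&s=type=pd-ssd,size=200"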

Secure ETCD and Certificates

If you are using secure etcd, provide “https” in the URL and make sure all the certificates are in the /etc/pwx/ directory on each host, which is bind-mounted inside the PX container.

Using Kubernetes Secrets to Provision Certificates

Instead of manually copying the certificates to all the nodes, it is recommended to use Kubernetes Secrets to provide etcd certificates to Portworx. This way, the certificates will be automatically available to new nodes joining the cluster.
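A typical way to create such a secret from local certificate files is sketched below; the secret name, namespace keys, and file names are placeholders, and the generated spec must be configured to consume this secret:

# create a secret holding the etcd CA certificate, client certificate, and client key
$ kubectl create secret generic px-etcd-certs -n kube-system \
    --from-file=etcd-ca.crt=./ca.crt \
    --from-file=etcd-client.crt=./client.crt \
    --from-file=etcd-client.key=./client.key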

Installing behind an HTTP proxy

During installation, Portworx may require access to the Internet to fetch kernel headers if they are not available locally on the host system. If your cluster runs behind an HTTP proxy, you will need to expose the PX_HTTP_PROXY and/or PX_HTTPS_PROXY environment variables to point to your proxy when starting the DaemonSet.

Use the e=PX_HTTP_PROXY=<http-proxy>,PX_HTTPS_PROXY=<https-proxy> query parameter when generating the DaemonSet spec.
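For example, if you generate the spec with curl, the proxy settings could be appended as an extra query parameter; the proxy address below is a placeholder, and the other parameters follow the assumed generator URL from the earlier curl sketch:

# append the e= parameter to the spec generator URL used in the previous section
$ curl -L -o px-spec.yaml \
    "https://install.portworx.com/1.4/?c=<cluster-name>&k=etcd://<etcd-host>:2379&s=type=pd-ssd,size=200&e=PX_HTTP_PROXY=http://<proxy-host>:<proxy-port>"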

Applying the spec

Once you have generated the spec file, deploy Portworx.

$ kubectl apply -f px-spec.yaml

Monitor the Portworx pods

kubectl get pods -o wide -n kube-system -l name=portworx

Monitor Portworx cluster status

PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status

If you are still experiencing issues, please refer to Troubleshooting PX on Kubernetes and General FAQs.

Deploy a sample application

Now that you have Portworx installed, check out various examples of applications using Portworx on Kubernetes.

Troubleshooting Notes

  • GKE instances under certain scenarios do not automatically re-attach the persistent disks used by PX.
    • Under the following scenarios, GKE will spin up a new VM as a replacement for older VMs with the same node name:
      • Halting a VM in GKE cluster
      • Upgrading GKE between different Kubernetes versions
      • Increasing the size of the node pool
    • However, in these cases the previously attached persistent disks will not be re-attached automatically.
    • Currently you will have to manually re-attach the persistent disk to the new VM and then restart Portworx on that node (see the sketch below).
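One way to do this manually is sketched below: re-attach the disk with gcloud, then delete the Portworx pod on that node so the DaemonSet recreates it. The instance, disk, zone, and pod names are placeholders:

# re-attach the persistent disk to the replacement VM
$ gcloud compute instances attach-disk <instance-name> \
    --disk <disk-name> --zone <zone>

# restart Portworx on that node by deleting its pod; the DaemonSet will recreate it
$ kubectl delete pod <portworx-pod-name> -n kube-system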