
The steps below will help you enable dynamic provisioning of Portworx volumes in your Google Kubernetes Engine (GKE) cluster.


Key-value store

Portworx uses a key-value store for its clustering metadata. Please have a clustered key-value database (etcd or consul) installed and ready. For etcd installation instructions, refer to this doc.
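
If etcd is already running, you can quickly confirm it is reachable from your nodes by querying its version endpoint (a hedged example; my-etcd.example.com is a placeholder and 2379 is etcd's default client port):

$ curl -s http://my-etcd.example.com:2379/version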

Shared mounts

Portworx 1.3 and higher automatically enables shared mounts.

If you are installing Portworx 1.2, you must configure Docker to allow shared mount propagation (see instructions); otherwise Portworx will fail to start.
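
As a rough sketch of what those instructions cover on a systemd-managed Docker host (follow the linked steps for your distribution):

# Make the root mount shared so mounts can propagate into containers
$ sudo mount --make-shared /

# If docker.service sets MountFlags=slave, remove that line, then:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker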


Firewall

Ensure ports 9001-9015 are open between the nodes that will run Portworx. Your nodes should also be able to reach the port your KVDB is running on (for example, etcd typically runs on port 2379).
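
A hedged connectivity check from one of the nodes (the peer IP and etcd host below are placeholders):

$ nc -zv 10.0.0.12 9001
$ nc -zv my-etcd.example.com 2379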


NTP

Ensure all nodes running PX are time-synchronized and that an NTP service is configured and running.
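
On an Ubuntu node you can verify this, for example, with:

$ timedatectl status | grep -E 'NTP|synchronized'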

Create a GKE cluster

Portworx is supported on GKE clusters provisioned with Ubuntu node images.

You can create a 3-node GKE cluster with the gcloud CLI using the following command:

$ gcloud container clusters create [CLUSTER_NAME] --image-type=ubuntu --zone=[ZONE_NAME]
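
For instance, with illustrative values (a cluster named px-demo in the us-east1-b zone; both are placeholders):

$ gcloud container clusters create px-demo --image-type=ubuntu --zone=us-east1-b --num-nodes=3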

You can set the default cluster with the following command:

$ gcloud container clusters get-credentials [CLUSTER_NAME] --zone=[ZONE_NAME]
Fetching cluster endpoint and auth data.
kubeconfig entry generated for gke-cluster-01.
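
You can verify that kubectl now points at the new cluster, for example:

$ kubectl get nodes -o wide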

More information about the gcloud command for GKE can be found here.

Add disks to nodes

After your GKE cluster is up, you will need to add disks to each of the nodes. These disks will be used by Portworx to create a storage pool.

You can do this by using the gcloud compute disks create and gcloud compute instances attach-disk commands, as described at https://cloud.google.com/compute/docs/disks/add-persistent-disk#create_disk.

For example, after your GKE cluster is up, find the compute instances:

$ gcloud compute instances list
NAME                                   ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP  STATUS
gke-px-gke-default-pool-6a9f0154-gxfg  us-east1-b  n1-standard-1                                         RUNNING
gke-px-gke-default-pool-6a9f0154-tzj4  us-east1-b  n1-standard-1                                         RUNNING
gke-px-gke-default-pool-6a9f0154-vqpb  us-east1-b  n1-standard-1                                         RUNNING

Then for each instance create a persistent disk:

gcloud compute disks create [DISK_NAME] --size [DISK_SIZE] --type [DISK_TYPE]

Once the persistent disks have been created, attach a disk to each instance:

gcloud compute instances attach-disk [INSTANCE_NAME] --disk [DISK_NAME]
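
Putting the two commands together, the sketch below creates and attaches one disk per node; the 100 GB size, pd-standard type, us-east1-b zone, and the name filter are illustrative assumptions:

# Adjust the filter to match your cluster's node name prefix
for NODE in $(gcloud compute instances list --filter="name~gke-px" --format="value(name)"); do
  gcloud compute disks create "${NODE}-px-disk" --size=100GB --type=pd-standard --zone=us-east1-b
  gcloud compute instances attach-disk "${NODE}" --disk="${NODE}-px-disk" --zone=us-east1-b
done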


Install

Portworx is deployed as a Kubernetes DaemonSet. The following sections describe how to generate the spec files and apply them.

Generate the spec

To generate the spec file for the 1.2 release, go to the 1.2 install page.

To generate the spec file for the 1.3 release, go to the 1.3 install page.

Alternatively, you can use curl to generate the spec as described in Generating Portworx Kubernetes spec using curl.
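
As a hedged sketch of the curl approach (the URL path and the c, k, and kbver parameters follow that doc; the cluster name, etcd endpoint, and Kubernetes version below are placeholders you should adjust):

$ curl -o px-spec.yaml "https://install.portworx.com/1.3/?c=px-demo-cluster&k=etcd://my-etcd.example.com:2379&kbver=1.9.7"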

Secure etcd:
If you are using secure etcd, specify “https” in the URL and make sure all the certificates are in the /etc/pwx/ directory on each host, which is bind-mounted inside the PX container.
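
For example (a sketch; the certificate file names are placeholders for your own etcd certificates):

$ sudo mkdir -p /etc/pwx
$ sudo cp etcd-ca.crt etcd-client.crt etcd-client.key /etc/pwx/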

Installing behind an HTTP proxy

During the installation, Portworx may require access to the Internet to fetch kernel headers if they are not available locally on the host system. If your cluster runs behind an HTTP proxy, you will need to expose the PX_HTTP_PROXY and/or PX_HTTPS_PROXY environment variables, pointing to your HTTP proxy, when starting the DaemonSet.

Use the e=PX_HTTP_PROXY=<http-proxy>,PX_HTTPS_PROXY=<https-proxy> query parameter when generating the DaemonSet spec.
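
For example, appended to a spec-generation URL (the proxy address is a placeholder):

$ curl -o px-spec.yaml "https://install.portworx.com/1.3/?c=px-demo-cluster&k=etcd://my-etcd.example.com:2379&e=PX_HTTP_PROXY=http://proxy.example.com:3128,PX_HTTPS_PROXY=http://proxy.example.com:3128"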

Applying the spec

Once you have generated the spec file, deploy Portworx.

$ kubectl apply -f px-spec.yaml

Monitor the Portworx pods

kubectl get pods -o wide -n kube-system -l name=portworx

Monitor Portworx cluster status

PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status

If you are still experiencing issues, please refer to Troubleshooting PX on Kubernetes and General FAQs.

Deploy a sample application

Now that you have Portworx installed, check out various examples of applications using Portworx on Kubernetes.

Troubleshooting Notes

  • The kubectl apply ... command fails with a “forbidden” error:
    • If you encounter an error with the cluster role permission (clusterroles.rbac.authorization.k8s.io "portworx-pvc-controller-role" is forbidden), create a ClusterRoleBinding for your user using the following commands:
     # get current google identity
     $ gcloud info | grep Account
      Account: [myname@example.org]
     # grant cluster-admin to your current identity
     $ kubectl create clusterrolebinding myname-cluster-admin-binding \
        --clusterrole=cluster-admin --user=myname@example.org
      clusterrolebinding "myname-cluster-admin-binding" created
  • Under certain scenarios, GKE instances do not automatically re-attach the persistent disks used by PX.
    • Under the following scenarios, GKE will spin up a new VM as a replacement for older VMs with the same node name:
      • Halting a VM in the GKE cluster
      • Upgrading GKE between different Kubernetes versions
      • Increasing the size of the node pool
    • However, in these cases the previously attached persistent disks will not be re-attached automatically.
    • Currently, you will have to manually re-attach the persistent disk to the new VM and then restart Portworx on that node; a hedged sketch is shown after this list.
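
A hedged sketch of that manual recovery (the instance, disk, zone, and pod names are placeholders):

# Re-attach the original persistent disk to the replacement VM
$ gcloud compute instances attach-disk [INSTANCE_NAME] --disk [DISK_NAME] --zone [ZONE_NAME]

# Find the Portworx pod running on that node, then delete it so the DaemonSet re-creates it
$ kubectl get pods -o wide -n kube-system -l name=portworx
$ kubectl delete pod [POD_NAME] -n kube-system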