

Key-value store

Portworx uses a key-value store for its clustering metadata. Please have a clustered key-value database (etcd or consul) installed and ready. For etcd installation instructions, refer to this doc.
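As a quick sanity check before installing, you can query etcd's health endpoint from each node (the endpoint address below is a placeholder for one of your KVDB nodes):

```shell
# Placeholder etcd endpoint; substitute one of your KVDB nodes.
ETCD_ENDPOINT="http://192.168.0.10:2379"
# etcd exposes a /health endpoint over HTTP; a healthy member typically
# answers with {"health":"true"}.
curl -s --max-time 5 "${ETCD_ENDPOINT}/health" || echo "etcd not reachable at ${ETCD_ENDPOINT}"
```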

Shared mounts

Portworx 1.3 and higher automatically enables shared mounts.

If you are installing Portworx 1.2, you must configure Docker to allow shared mount propagation (see instructions); otherwise Portworx will fail to start.
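For Portworx 1.2 on systemd-based hosts, enabling shared mount propagation typically comes down to a systemd drop-in for the Docker unit. A minimal sketch (the drop-in path is the conventional systemd location; verify the exact steps against the linked instructions):

```shell
# Stage a systemd drop-in that runs Docker with shared mount propagation.
cat > docker-portworx.conf <<'EOF'
[Service]
MountFlags=shared
EOF
# Then install it and restart Docker:
# sudo mkdir -p /etc/systemd/system/docker.service.d
# sudo cp docker-portworx.conf /etc/systemd/system/docker.service.d/portworx.conf
# sudo systemctl daemon-reload && sudo systemctl restart docker
```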


Ensure ports 9001-9015 are open between the nodes that will run Portworx. Your nodes should also be able to reach the port the KVDB is running on (for example, etcd typically runs on port 2379).
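A quick reachability check between nodes can be done with `nc` (the peer IP is a placeholder for another node in your cluster; extend the port list as needed):

```shell
# Placeholder peer node; substitute another node's IP.
PEER="192.168.0.11"
# A sample of the Portworx port range plus the default etcd port.
PORTS="9001 9002 9015 2379"
for p in $PORTS; do
  # nc -z exits 0 when the TCP port is open
  nc -z -w 2 "$PEER" "$p" && echo "port $p open" || echo "port $p closed"
done
```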


Ensure all nodes running PX are time-synchronized and that an NTP service is configured and running.
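How you verify time sync depends on the distro's tooling; a small convenience sketch (not part of Portworx itself) that picks an available check:

```shell
# Pick whichever time-sync status command is available on this host.
if command -v timedatectl >/dev/null 2>&1; then
  SYNC_CHECK="timedatectl status"
elif command -v ntpstat >/dev/null 2>&1; then
  SYNC_CHECK="ntpstat"
else
  SYNC_CHECK="chronyc tracking"
fi
echo "Run '$SYNC_CHECK' on each PX node and confirm the clock is synchronized."
```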


Portworx supports Openshift 3.7 and above.


Portworx is deployed as a Kubernetes DaemonSet. The following sections describe how to generate the spec files and apply them.

Add Portworx service accounts to the privileged security context

oc adm policy add-scc-to-user privileged system:serviceaccount:kube-system:px-account
oc adm policy add-scc-to-user privileged system:serviceaccount:kube-system:portworx-pvc-controller-account

Generate the spec

Make sure you pass osft=true as one of the parameters while generating the spec.

To generate the spec file for the 1.2 release, see the 1.2 install page.

To generate the spec file for the 1.3 release, see the 1.3 install page.

Alternatively, you can use curl to generate the spec as described in Generating Portworx Kubernetes spec using curl.

Secure etcd:
If you are using secure etcd, use “https” in the URL and make sure all the certificates are in the /etc/pwx/ directory on each host, which is bind-mounted inside the PX container.
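For example (the file names and endpoint address below are illustrative; the essential points are the https scheme and the /etc/pwx/ location):

```shell
# Illustrative certificate layout on each host; /etc/pwx/ is bind-mounted
# into the PX container, so PX can read these paths.
CA_CERT="/etc/pwx/etcd-ca.crt"
CLIENT_CERT="/etc/pwx/etcd.crt"
CLIENT_KEY="/etc/pwx/etcd.key"
# The KVDB URL must use https for secure etcd (address is a placeholder).
KVDB_URL="https://192.168.0.10:2379"
echo "kvdb: $KVDB_URL (certs: $CA_CERT, $CLIENT_CERT, $CLIENT_KEY)"
```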

Installing behind an HTTP proxy

During installation, Portworx may need access to the Internet to fetch kernel headers if they are not available locally on the host system. If your cluster runs behind an HTTP proxy, you will need to set the PX_HTTP_PROXY and/or PX_HTTPS_PROXY environment variables to point to your proxy when starting the DaemonSet.

Use the e=PX_HTTP_PROXY=<http-proxy>,PX_HTTPS_PROXY=<https-proxy> query parameter when generating the DaemonSet spec.
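Appending the proxy variables when generating the spec with curl might look like this (the proxy address is a placeholder, and the base URL and any remaining parameters come from the curl-based spec generation page linked above):

```shell
# Placeholder proxy address; substitute your own.
PROXY="http://proxy.example.com:3128"
# Combine the Openshift flag with the proxy environment variables.
PARAMS="osft=true&e=PX_HTTP_PROXY=${PROXY},PX_HTTPS_PROXY=${PROXY}"
echo "$PARAMS"
# curl -o px-spec.yaml "<spec-generator-url>?${PARAMS}"   # URL per the linked page
```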

Apply the spec

Once you have generated the spec file, deploy Portworx.

oc apply -f px-spec.yaml

Monitor the Portworx pods

kubectl get pods -o wide -n kube-system -l name=portworx

Monitor Portworx cluster status

PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status

If you are still experiencing issues, please refer to Troubleshooting PX on Kubernetes and General FAQs.

Deploy a sample application

We will test whether the installation was successful using a persistent mysql deployment.

  • Create a Portworx StorageClass by applying the following spec:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: px-demo-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"
  • Log in to the Openshift console: https://MASTER-IP:8443/console

  • Create a new project “hello-world”.

  • Import and deploy this mysql application template.
    • For STORAGE_CLASS_NAME, use the px-demo-sc storage class created in the previous step.
  • Verify the mysql deployment is active.
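To exercise the new storage class directly, you could also create a standalone PVC against it (a sketch; the claim name and size are placeholders):

```shell
# Stage a PVC manifest that requests a Portworx-backed volume via px-demo-sc.
cat > mysql-pvc.yaml <<'EOF'
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-data
spec:
  storageClassName: px-demo-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
EOF
# oc apply -f mysql-pvc.yaml && oc get pvc mysql-data
```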

You can find other examples at applications using Portworx on Kubernetes.