
Portworx can run alongside Kubernetes and provide Persistent Volumes to other applications running on Kubernetes. This section describes how to deploy PX within a Kubernetes cluster and have PX provide highly available volumes to any application deployed via Kubernetes.

Since the Kubernetes v1.6 release, Kubernetes has included native Portworx driver support, which allows Dynamic Volume Provisioning.

The native Portworx driver in Kubernetes supports the following features (see the sketch after this list):

  1. Dynamic Volume Provisioning
  2. Storage Classes
  3. Persistent Volume Claims
  4. Persistent Volumes
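
For example, once Portworx is up, a volume can be provisioned dynamically through a StorageClass that uses the native driver. This is a minimal sketch, assuming Portworx is already installed; the StorageClass/PVC names and the repl value are illustrative.

# A minimal sketch of Dynamic Volume Provisioning with the native driver.
# The StorageClass/PVC names and the repl value are illustrative.
cat <<'EOF' | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: portworx-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: px-pvc-example
spec:
  storageClassName: portworx-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
EOF

Once the claim is bound, any pod that references px-pvc-example gets a highly available Portworx volume.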

Interactive Tutorial

Follow this interactive tutorial to get a high-level overview of installing Portworx on Kubernetes.

Prerequisites

  • VERSIONS: Portworx recommends running with Kubernetes 1.7.5 or newer
    • If your Kubernetes cluster version is between 1.6.0 and 1.6.4, you will need to set mas=true when creating the spec (see the Install section below) to allow Portworx to run on the Kubernetes master node.
  • SHARED MOUNTS: If you are running Docker v1.12, you must configure Docker to allow shared mount propagation (see instructions), as otherwise Kubernetes will not be able to install Portworx.
    Newer versions of Docker have shared mount propagation enabled by default, so no additional action is required.
  • FIREWALL: Ensure ports 9001-9015 are open between the Kubernetes nodes that will run Portworx.
  • NTP: Ensure all nodes running PX are time-synchronized and that an NTP service is configured and running.
  • KVDB: Have a clustered key-value database (etcd or consul) installed and ready. For etcd installation instructions, refer to this doc.
  • STORAGE: At least one of the PX nodes should have extra storage available, in the form of an unformatted partition or a disk drive.
    Note that storage devices explicitly given to Portworx (i.e. s=/dev/sdb,/dev/sdc3) will be automatically formatted by PX. A quick way to check several of these prerequisites is sketched after this list.
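
A minimal pre-flight sketch, assuming systemd-based hosts; the node name, etcd endpoint and device output are placeholders:

  # Check that a Portworx port is reachable on another node that will run PX
  nc -zv <other-node> 9001

  # Check that time synchronization is active
  timedatectl status | grep -i synchronized

  # Look for an unformatted partition or disk (empty FSTYPE column)
  lsblk -f

  # Check that the etcd cluster is healthy (endpoint is a placeholder)
  curl -s http://etcd.example.net:2379/health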

NOTE:
This page describes the procedure for installing and managing Portworx as an OCI container, which is the default and recommended method of installation. If you are looking for the legacy instructions for running Portworx as a Docker container, you can find them here.

Install

If you are installing on OpenShift, follow these instructions.

Portworx is deployed as a Kubernetes DaemonSet. The following sections describe how to generate the spec files and apply them.

Generating the spec

To generate the spec file, go to https://install.portworx.com and fill in the parameters. When filling in the kbver (Kubernetes version) field on the page, use the output of:

kubectl version --short | awk -Fv '/Server Version: /{print $3}'

Alternately, you can use curl to generate the spec as described in Generating Portworx Kubernetes spec using curl.
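
For example, combining the kbver output above with the c (cluster name) and k (kvdb) parameters; the values are placeholders:

  KBVER=$(kubectl version --short | awk -Fv '/Server Version: /{print $3}')
  curl -o px-spec.yaml \
    "https://install.portworx.com?c=mycluster&k=etcd://etcd.fake.net:2379&kbver=$KBVER"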

Secure ETCD:
If you are using secure etcd, provide “https” in the URL and make sure all the certificates are in the /etc/pwx/ directory on each host, which is bind-mounted inside the PX container.
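
As a sketch (the certificate file names below are illustrative), place the certificates on every host before applying the spec, and reference an https etcd endpoint in the k parameter when generating it:

  # Place the etcd client certificates in /etc/pwx/ on every node
  # (file names are illustrative)
  sudo mkdir -p /etc/pwx
  sudo cp etcd-ca.crt etcd-client.crt etcd-client.key /etc/pwx/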

Applying the spec

Once you have generated the spec file, deploy Portworx.

$ kubectl apply -f px-spec.yaml

You can monitor the status using the following commands.

# Monitor the portworx pods

$ kubectl get pods -o wide -n kube-system -l name=portworx


# Monitor Portworx cluster status

$ PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
$ kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status
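
To check the cluster status reported from every node rather than only the first pod, a small loop over the same commands works; this is just a convenience sketch:

# Run pxctl status in every Portworx pod
for pod in $(kubectl get pods -l name=portworx -n kube-system \
    -o jsonpath='{.items[*].metadata.name}'); do
    echo "=== $pod ==="
    kubectl exec $pod -n kube-system -- /opt/pwx/bin/pxctl status
done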

If you are still experiencing issues, please refer to Troubleshooting PX on Kubernetes and General FAQs.

Restricting PX to certain nodes

To restrict Portworx to run on only a subset of nodes in the Kubernetes cluster, apply the px/enabled Kubernetes label to the minion nodes on which you do not wish to install Portworx. Below are examples that prevent Portworx from installing and starting on the minion2 and minion5 nodes.

If Portworx Daemonset is not yet deployed in your cluster:

  $ kubectl label nodes minion2 minion5 px/enabled=false --overwrite

If Portworx has already been deployed in your cluster:

  $ kubectl label nodes minion2 minion5 px/enabled=remove --overwrite

The above label will remove the existing Portworx systemd service and also apply the px/enabled=false label to stop Portworx from running in the future.
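
To verify which nodes carry the label, kubectl can print it as a column (node names above are examples):

  # Show the value of the px/enabled label on each node
  kubectl get nodes -L px/enabled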

Scaling

Portworx is deployed as a DaemonSet. Therefore it automatically scales as you grow your Kubernetes cluster. There are no additional requirements to install Portworx on the new nodes in your Kubernetes cluster.

Installing behind an HTTP proxy

During installation, Portworx may require access to the Internet to fetch kernel headers if they are not available locally on the host system. If your cluster runs behind an HTTP proxy, you will need to set the PX_HTTP_PROXY and/or PX_HTTPS_PROXY environment variables to point to your proxy when starting the DaemonSet.

Use the e=PX_HTTP_PROXY=<http-proxy>,PX_HTTPS_PROXY=<https-proxy> query parameter when generating the DaemonSet spec. For example:

  $ curl -o px-spec.yaml \
    "https://install.portworx.com?c=mycluster&k=etcd://etcd.fake.net:2379&e=PX_HTTP_PROXY=<http-proxy>,PX_HTTPS_PROXY=<https-proxy>"

To view a list of all Portworx environment variables, go to passing environment variables.

Upgrade

For information about upgrading Portworx inside Kubernetes, please refer to the dedicated upgrade page.

Service control

You can control the Portworx systemd service using Kubernetes labels:

  • stop / start / restart the PX-OCI service
    • note: this is the equivalent of running systemctl stop portworx, systemctl start portworx … on the node
    kubectl label nodes minion2 px/service=start
    kubectl label nodes minion5 px/service=stop
    kubectl label nodes --all px/service=restart
    
  • enable / disable the PX-OCI service
    • note: this is the equivalent of running systemctl enable portworx, systemctl disable portworx on the node
    kubectl label nodes minion2 minion5 px/service=enable
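
To confirm that a px/service label took effect (node names above are examples), you can list the label per node and inspect the systemd unit on the node itself:

    # Show the px/service label applied to each node
    kubectl get nodes -L px/service

    # On the node itself, check the resulting service state
    systemctl status portworx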
    

Uninstall

Uninstalling or deleting the Portworx DaemonSet only removes the Portworx containers from the nodes. Since the configuration files which PX uses are persisted on the nodes, the storage devices and the data volumes remain intact. These Portworx volumes can be used again if the PX containers are started with the same configuration files.

You can uninstall Portworx from the cluster using:

  1. Remove the Portworx systemd service and terminate pods by labelling nodes as below. On each node, Portworx monitors this label and will start removing itself when the label is applied.
    kubectl label nodes --all px/enabled=remove --overwrite
    
  2. Monitor the PX pods until all of them are terminated
    kubectl get pods -o wide -n kube-system -l name=portworx
    
  3. Remove all PX Kubernetes Objects

    a. If you have a copy of the spec file you used to install Portworx:

     kubectl delete -f px-spec.yaml
    

    b. If you don’t, you can use the Web form:

     VER=$(kubectl version --short | awk -Fv '/Server Version: /{print $3}')
     kubectl delete -f "https://install.portworx.com?kbver=$VER"
    
  4. Remove the ‘px/enabled’ label from your nodes
    kubectl label nodes --all px/enabled-
    

Note:
During uninstall, the Portworx configuration files under /etc/pwx/ directory are preserved, and will not be deleted.

Delete PX Cluster configuration

The commands used in this section are DISRUPTIVE and will lead to loss of all your data volumes. Proceed with CAUTION.

You can remove the PX cluster configuration by deleting the configuration files under the /etc/pwx directory on all nodes:

  • If the portworx pods are running, you can run the following command:
  PX_PODS=$(kubectl get pods -n kube-system -l name=portworx | awk '/^portworx/{print $1}')
  for pod in $PX_PODS; do
      kubectl -n kube-system exec -it $pod -- rm -rf /etc/pwx/
  done
  • Otherwise, if the Portworx pods are not running, you can remove the PX cluster configuration by manually removing the contents of the /etc/pwx directory on each node.

Note
If you are wiping the cluster to reuse the nodes for a brand new PX cluster, make sure you use a different ClusterID in the DaemonSet spec file (i.e. -c myUpdatedClusterID).
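
For example, reusing the parameters from the install section (the etcd endpoint is a placeholder, and myUpdatedClusterID is the new cluster name), you could regenerate the spec with a new c parameter:

  curl -o px-spec.yaml \
    "https://install.portworx.com?c=myUpdatedClusterID&k=etcd://etcd.fake.net:2379"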