Install Portworx on PKS using the Operator

PKS preparation

Before installing Portworx, let’s ensure the PKS environment is prepared correctly.

Enable privileged containers and kubectl exec

Ensure that the following options are enabled on all plans on the PKS tile.

  • Enable Privileged Containers
  • Disable DenyEscalatingExec (this allows you to run pxctl commands through kubectl exec)

Enable zero downtime upgrades for Portworx PKS clusters

Use the following steps to add a runtime addon to the Bosh Director that stops the Portworx service.

Why is this needed? When stopping and upgrading instances, bosh attempts to unmount /var/vcap/store. Portworx has the root filesystem for its OCI container mounted at /var/vcap/store/opt/pwx/oci, and the runc container runs from it. Portworx must therefore be stopped and /var/vcap/store/opt/pwx/oci unmounted before bosh can proceed with stopping the instances. The addon does this automatically, enabling zero downtime upgrades.

Perform these steps on any machine where you have the bosh CLI.

  1. Create and upload the release.

    Replace director-environment below with the environment that points to the Bosh Director.

    git clone
    cd portworx-stop-bosh-release
    mkdir src
    bosh create-release --final --version=1.0.0
    bosh -e director-environment upload-release
  2. Add the addon to the Bosh Director.

    First, let’s fetch your current Bosh Director runtime config.

    bosh -e director-environment runtime-config

    If this is empty, you can simply use the runtime config at runtime-configs/director-runtime-config.yaml.

    If you already have an existing runtime config, add the release and addon in runtime-configs/director-runtime-config.yaml to your existing runtime config.
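    If you are merging by hand, a BOSH runtime-config addon entry generally takes the shape sketched below. This is an illustration only: the release and job names here are assumed, and the authoritative content is in runtime-configs/director-runtime-config.yaml.

    ```yaml
    # Sketch of a runtime config carrying the addon (names are illustrative;
    # use the release/job names from runtime-configs/director-runtime-config.yaml)
    releases:
    - name: portworx-stop
      version: 1.0.0
    addons:
    - name: portworx-stop
      jobs:
      - name: portworx-stop
        release: portworx-stop
    ```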

    Once we have the runtime config file prepared, let’s update it in the Director.

    bosh -e director-environment update-runtime-config runtime-configs/director-runtime-config.yaml
  3. Apply the changes

    After the runtime config is updated, go to your Operations Manager Installation Dashboard and click “Apply Changes”. This ensures bosh adds the addon to all new VM instances.

    If you already have an existing Portworx cluster, you will need to recreate the VM instances using the bosh recreate command.

Installing Portworx

For on-premises clusters, PKS (Pivotal Container Service) supports VMware vSphere.


The diagram below gives an overview of the Portworx architecture on vSphere using shared datastores.

  • Portworx runs on each Kubernetes minion/worker.
  • Based on the spec provided by the end user, Portworx on each node creates its disks on the configured shared datastore(s) or datastore cluster(s).
  • Portworx aggregates all of the disks into a single storage cluster. End users can carve PVCs (Persistent Volume Claims), PVs (Persistent Volumes), and Snapshots from this storage cluster.
  • Portworx tracks and manages the disks that it creates, so if a node fails and a new VM spins up, Portworx on the new VM can attach to the same disks that were previously created by the node on the failed VM.

Portworx architecture for PKS on vSphere using shared datastores or datastore clusters

Install the Operator

Enter the following kubectl create command to deploy the Operator:

kubectl create -f

ESXi datastore preparation

Create one or more shared datastores or datastore clusters dedicated to Portworx storage. Use a common prefix for their names; you will provide this prefix during Portworx installation later in this guide.
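The prefix works as ordinary string-prefix matching on datastore names. The following sketch, using hypothetical datastore names, shows which datastores a prefix such as px-datastore- would select:

```shell
# Illustration only, with hypothetical datastore names: Portworx uses the
# datastores whose names begin with the configured prefix.
VSPHERE_DATASTORE_PREFIX=px-datastore-
for ds in px-datastore-01 px-datastore-02 vm-datastore-01; do
  case "$ds" in
    "$VSPHERE_DATASTORE_PREFIX"*) echo "$ds: used by Portworx" ;;
    *) echo "$ds: ignored" ;;
  esac
done
# px-datastore-01: used by Portworx
# px-datastore-02: used by Portworx
# vm-datastore-01: ignored
```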

Step 1: vCenter user for Portworx

You will need to provide Portworx with a vCenter server user that either has the full Admin role or, for increased security, a custom-created role with the following minimum vSphere privileges:

  • Datastore
    • Allocate space
    • Browse datastore
    • Low level file operations
    • Remove file
  • Host
    • Local operations
    • Reconfigure virtual machine
  • Virtual machine
    • Change Configuration
    • Add existing disk
    • Add new disk
    • Add or remove device
    • Advanced configuration
    • Change Settings
    • Extend virtual disk
    • Modify device settings
    • Remove disk

If you create a custom role as above, make sure to select “Propagate to children” when assigning the role to the user.

All commands in the subsequent steps need to be run on a machine with kubectl access.

Step 2: Create a Kubernetes secret with your vCenter user and password

Update the following items in the Secret template below to match your environment:

  1. VSPHERE_USER: Use the output of printf '<vcenter-server-user>' | base64
  2. VSPHERE_PASSWORD: Use the output of printf '<vcenter-server-password>' | base64

    apiVersion: v1
    kind: Secret
    metadata:
      name: px-vsphere-secret
      namespace: kube-system
    type: Opaque
    data:
      VSPHERE_USER: YWRtaW5pc3RyYXRvckB2c3BoZXJlLmxvY2Fs
      VSPHERE_PASSWORD: <base64-encoded-password>

After you update the template above with your user and password, apply the spec with kubectl apply.
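For example, encoding a hypothetical user name (the same value shown for VSPHERE_USER in the template above) looks like this:

```shell
# Hypothetical credentials for illustration; substitute your own values.
VSPHERE_USER_B64=$(printf '%s' 'administrator@vsphere.local' | base64)
echo "$VSPHERE_USER_B64"
# YWRtaW5pc3RyYXRvckB2c3BoZXJlLmxvY2Fs
```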

Step 3: Generate the rest of the specs

vSphere environment details

Export the following environment variables based on your vSphere environment. These variables will be used in a later step when generating the yaml spec.

# Hostname or IP of your vCenter server
export VSPHERE_VCENTER=<vcenter-hostname-or-ip>

# Prefix of your shared ESXi datastore(s) names. Portworx will use datastores whose names match this prefix to create disks.
export VSPHERE_DATASTORE_PREFIX=mydatastore-

# Change this to the port number vSphere services are running on if you have changed the default port 443
export VSPHERE_VCENTER_PORT=443

Disk templates

A disk template defines the VMDK properties that Portworx will use as a reference for creating the actual disks out of which Portworx will create the virtual volumes for your PVCs.

The following example will create a 150GB zeroedthick VMDK on each VM.

export VSPHERE_DISK_TEMPLATE=type=zeroedthick,size=150

The template has the following format:

"type=<vmdk type>,size=<size of the vmdk>"
  • type: Supported types are thin, zeroedthick, eagerzeroedthick, and lazyzeroedthick
  • size: The size of the VMDK in GiB
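If you want to catch a malformed template before generating the spec, a small local check can help. The helper below is not part of Portworx; it is a sketch that validates a template string against the supported types listed above:

```shell
# Hypothetical helper (not part of Portworx): checks that a disk template
# names a supported VMDK type and a numeric size.
validate_disk_template() {
  case "$1" in
    type=thin,size=[0-9]*|type=zeroedthick,size=[0-9]*|\
    type=eagerzeroedthick,size=[0-9]*|type=lazyzeroedthick,size=[0-9]*)
      echo "valid" ;;
    *)
      echo "invalid" ;;
  esac
}
validate_disk_template "type=zeroedthick,size=150"   # valid
validate_disk_template "type=thick,size=150"         # invalid
```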

Now generate the spec with the following curl command.

Observe how the curl command below uses the environment variables set up above as query parameters.
VER=$(kubectl version --short | awk -Fv '/Server Version: /{print $3}')
curl -fsL -o px-spec.yaml "$VER&c=portworx-demo-cluster&b=true&st=k8s&pks=true&vsp=true&ds=$VSPHERE_DATASTORE_PREFIX&vc=$VSPHERE_VCENTER&s=%22$VSPHERE_DISK_TEMPLATE%22&operator=true"
The specs above use Portworx with an internal etcd. If you are using a dedicated etcd cluster, replace b=true with k=<YOUR-ETCD-ENDPOINTS>
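The VER line above extracts the server version by splitting on the letter v; you can sanity-check the awk filter locally against a sample line (the version shown is illustrative):

```shell
# Sample input mimicking `kubectl version --short` output (illustrative version)
echo 'Server Version: v1.21.4' | awk -Fv '/Server Version: /{print $3}'
# 1.21.4
```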

Apply the specs

Apply the generated specs to your cluster.

kubectl apply -f px-spec.yaml

Monitor the Portworx nodes

  1. Enter the following kubectl get command, waiting until all Portworx nodes show as ready in the output:

    kubectl -n kube-system get storagenodes -l name=portworx
  2. Enter the following kubectl describe command with the NAME of one of the Portworx nodes to show the current installation status for individual nodes:

    kubectl -n kube-system describe storagenode <portworx-node-name>
    Type     Reason                             Age                     From                  Message
    ----     ------                             ----                    ----                  -------
    Normal   PortworxMonitorImagePullInPrgress  7m48s                   portworx, k8s-node-2  Portworx image portworx/px-enterprise:2.5.0 pull and extraction in progress
    Warning  NodeStateChange                    5m26s                   portworx, k8s-node-2  Node is not in quorum. Waiting to connect to peer nodes on port 9002.
    Normal   NodeStartSuccess                   5m7s                    portworx, k8s-node-2  PX is ready on this node
    NOTE: In your output, the image pulled will differ based on your chosen Portworx license type and version.

Last edited: Wednesday, Oct 6, 2021