To upgrade to a 1.2 release, refer to Upgrading Portworx to a 1.2 release

For all other releases, follow this page.

We do not recommend upgrading Portworx using Kubernetes instructions (e.g. via kubectl set image ds/portworx portworx=portworx/XXXX:### -n kube-system).

Upgrading Portworx

This guide describes the procedure to upgrade Portworx running as an OCI container, using Talisman.

You are running Portworx as OCI if the Portworx DaemonSet image is portworx/oci-monitor. If not, you first need to migrate to OCI.
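To tell which mode a cluster is in, you can inspect the DaemonSet image. A minimal sketch (the DaemonSet name portworx and namespace kube-system are the defaults; is_oci_image is a hypothetical helper, not part of Portworx):

```shell
# Hypothetical helper: classify a Portworx image string as OCI or legacy.
is_oci_image() {
  case "$1" in
    portworx/oci-monitor*)   echo "oci" ;;
    portworx/px-enterprise*) echo "legacy" ;;
    *)                       echo "unknown" ;;
  esac
}

# On a live cluster you would feed it the DaemonSet image, e.g.:
#   IMG=$(kubectl get ds portworx -n kube-system \
#           -o jsonpath='{.spec.template.spec.containers[0].image}')
IMG="portworx/oci-monitor:1.3.0"   # sample value for illustration
is_oci_image "$IMG"                # prints "oci"
```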

To upgrade to the 1.3 stable release, run the curl command: curl -fsL | bash -s

To upgrade to the 1.4 tech preview release, run the curl command: curl -fsL | bash -s -- -t 1.4.0-rc1

This runs a script that will start a Kubernetes Job to perform the following operations:

  1. Runs a DaemonSet on the cluster which fetches the new Portworx image. This reduces the time Portworx is down between the old and new versions as the image is already pulled.

  2. If the upgrade is from version 1.2 to 1.3, it will scale down all Deployments and StatefulSets that use shared Portworx PersistentVolumeClaims.

  3. Triggers RollingUpdate of the Portworx DaemonSet to the default stable image.
    • If the upgrade is from version 1.2 to 1.3, all application pods using Portworx PersistentVolumeClaims will be rescheduled to other nodes in the cluster before the new Portworx version starts on that node.
  4. Restores any Deployments or StatefulSets that were scaled down in step 2 back to their original replica counts.

This script will also monitor the above operations.

Customizing the upgrade process

Specify a different Portworx upgrade image

You can invoke the upgrade script with the -t option to override the default Portworx image. For example, the command below upgrades Portworx to the portworx/oci-monitor:1.4.0-rc1 image.

curl -fsL | bash -s -- -t 1.4.0-rc1

Disable scaling down of shared Portworx applications during the upgrade

You can invoke the upgrade script with --scaledownsharedapps off to skip scaling down Deployments and StatefulSets that use shared Portworx PersistentVolumeClaims.

For example:

curl -fsL | bash -s -- --scaledownsharedapps off

Reboot requirement:
By default, the upgrade process scales down shared applications because doing so avoids a node reboot when upgrading between major Portworx versions. If you disable this behavior with --scaledownsharedapps off, each node will need a reboot before Portworx comes up with the new major version.


Find out status of Portworx pods

To get more information about the status of the Portworx DaemonSet across the nodes, run:

$ kubectl get pods -o wide -n kube-system -l name=portworx
NAME             READY   STATUS              RESTARTS   AGE   NODE
portworx-9njsl   1/1     Running             0          16d   minion4
portworx-fxjgw   1/1     Running             0          16d   minion5
portworx-fz2wf   1/1     Running             0          5m    minion3
portworx-x29h9   0/1     ContainerCreating   0          0s    minion2

As we can see in the example output above:

  • looking at the STATUS and READY, we can tell that the rollout-upgrade is currently creating the container on the “minion2” node
  • looking at AGE, we can tell that:
    • “minion4” and “minion5” have Portworx up for 16 days (likely still on old version, and to be upgraded), while the
    • “minion3” has Portworx up for only 5 minutes (likely just finished upgrade and restarted Portworx)
  • if we keep monitoring, we will observe that the upgrade does not move to the “next” node until STATUS is “Running” and READY is 1/1 (meaning the “readinessProbe” reports the Portworx service as operational).
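The check in the last bullet can be scripted. A sketch that filters the pod listing for nodes whose Portworx pod is not yet Running and READY (the here-document below stands in for live output; on a real cluster you would pipe the kubectl get pods command above into awk instead):

```shell
# List nodes whose Portworx pod is not yet "Running" with READY 1/1.
# The here-doc stands in for live `kubectl get pods -o wide` output.
pods="$(cat <<'EOF'
NAME             READY   STATUS              RESTARTS   AGE   NODE
portworx-9njsl   1/1     Running             0          16d   minion4
portworx-fz2wf   1/1     Running             0          5m    minion3
portworx-x29h9   0/1     ContainerCreating   0          0s    minion2
EOF
)"

# Skip the header (NR > 1); print the node for any pod that is not ready.
echo "$pods" | awk 'NR > 1 && !($2 == "1/1" && $3 == "Running") { print $NF }'
# prints: minion2
```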

Find out version of all nodes in Portworx cluster

Run the following commands to inspect the Portworx cluster:

$ PX_POD=$(kubectl get pods -n kube-system -l name=portworx -o jsonpath='{.items[0].metadata.name}')
$ kubectl exec -it $PX_POD -n kube-system -- /opt/pwx/bin/pxctl cluster list

Nodes in the cluster:
ID        CPU        MEM TOTAL   ...   STATUS
minion5   1.530612   4.0 GB      ...   Online
minion4   3.836317   4.0 GB      ...   Online
minion3   3.324808   4.1 GB      ...   Online
minion2   3.316327   4.1 GB      ...   Online
  • From the output above, we can confirm that:
    • “minion4” and “minion5” are still on the old Portworx version, while
    • “minion3” and “minion2” have already been upgraded to the latest version (in our case, v1.2.11.10).
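When comparing the reported versions against a target release, plain string comparison is unreliable (e.g. "1.2.9" sorts after "1.2.11" lexically). A small sketch using version-aware ordering via sort -V from GNU coreutils (version_at_least is a hypothetical helper, not a Portworx tool):

```shell
# Hypothetical helper: succeeds if version $1 is at least version $2.
# Relies on `sort -V` (GNU coreutils) for version-aware ordering.
version_at_least() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

version_at_least "1.2.11.10" "1.2.11" && echo "upgraded"       # prints "upgraded"
version_at_least "1.2.9"     "1.2.11" || echo "needs upgrade"  # prints "needs upgrade"
```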

Manually restoring scaled down shared applications

If the upgrade job crashes unexpectedly and fails to restore shared applications back to their original replica counts, you can run the following command to restore them.

curl -fsL | bash -s -- --scaledownsharedapps off

Migrating from Legacy Portworx to Portworx with OCI

If your Portworx DaemonSet image is portworx/oci-monitor, you are already running as OCI and this section is not relevant to your cluster.

Legacy Portworx installations deployed the core Portworx engine as Docker containers. Since then, we have changed deployments to run Portworx via OCI runC, which eliminates cyclic dependencies between Docker and the kubelet, speeds up service restarts, and brings other improvements.

When deployed as OCI, Portworx runs an OCI monitor as a DaemonSet. The OCI monitor pod on each node manages the lifecycle of a systemd service that actually runs the Portworx storage engine as a runc container.
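If you have shell access to a node, you can inspect that service directly. A sketch assuming the standard unit name portworx (the fallback echo simply keeps the snippet safe on machines without the unit):

```shell
# On a storage node, the OCI monitor manages a systemd unit (named
# "portworx" in standard installs). Inspect it directly on the host:
systemctl status portworx --no-pager 2>/dev/null \
  || echo "portworx.service not found on this machine"
```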

To migrate to OCI, please follow the migration instructions.

Upgrading Legacy Portworx running as Docker containers

Since Portworx v1.2.11, the recommended method of installing Portworx is using OCI. If your Portworx DaemonSet image is portworx/oci-monitor, you are already running as OCI and this section is not relevant to your cluster.

If your Portworx DaemonSet image is portworx/px-enterprise, you are running Portworx as Docker containers. It is recommended that you first migrate to OCI using these steps. Once migrated, you can use the OCI upgrade instructions above.