
Upgrade Portworx on Kubernetes using the DaemonSet

This guide describes the procedure to upgrade Portworx running as an OCI container using talisman.

note
  • If you're upgrading an auth-enabled Portworx cluster to Portworx 2.6.0, you must upgrade Stork to version 2.4.5.
  • Operator versions prior to 1.4 and Autopilot do not currently support auth-enabled clusters running Portworx 2.6.0. Support is planned for a future release.

Upgrade Portworx

note

When Portworx is managing the underlying storage devices in an Anthos deployment, add the following runtime argument to the DaemonSet spec:

containers:
  - name: portworx
    args:
      ["-c", "<cluster-name>", "-rt_opts", "wait-before-retry-period-in-secs=360"]

This runtime option ensures that during an Anthos or a Portworx upgrade, Portworx does not fail over the internal KVDB to a storageless node.
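
If Portworx is already installed, one way to add this option is to edit the DaemonSet directly. The following is a minimal sketch; it assumes a default install where the Portworx DaemonSet is named portworx and runs in the kube-system namespace:

# Open the DaemonSet for editing and append the runtime option to the
# portworx container's args list as shown above.
kubectl edit daemonset/portworx -n kube-system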

To upgrade to the 3.0 release, run the following command:

curl -fsSL "https://install.portworx.com/3.0/upgrade" | bash -s

This runs a script that starts a Kubernetes Job to perform the following operations:

  1. Updates the RBAC objects used by Portworx with the latest set of required permissions
  2. Triggers a rolling update of the Portworx DaemonSet to the default stable image and monitors it for completion
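
To follow the upgrade while it runs, you can watch the Job's logs. This is a sketch; the job name talisman and the kube-system namespace are assumptions based on a default install, so adjust them if your cluster differs:

kubectl get jobs -n kube-system
kubectl logs -n kube-system job/talisman -f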

If you see any issues, review the Troubleshooting section on this page.

note

If you're running Portworx 2.0.3.7, Portworx by Pure Storage recommends upgrading directly to 2.1.2 or later as this version fixes several issues in the previous build. Please see the release notes page for more details.

Upgrade Portworx using Helm

If you installed Portworx using Helm, enter the helm upgrade command to upgrade Portworx, specifying:

  • The name of your release (this example uses my-release)
  • The --set flag with the imageVersion option and the version you want to upgrade to (this example uses 3.0)
  • The -f flag with the path to your values.yaml file (this example uses ./helm/charts/portworx/values.yaml)
  • The path to your chart directory (this example uses ./helm/charts/portworx)
helm upgrade my-release --set imageVersion=3.0 -f ./helm/charts/portworx/values.yaml  ./helm/charts/portworx
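
After the Helm upgrade completes, you can confirm that the Portworx DaemonSet rolled out the new image. This is a quick check and assumes a default install where the DaemonSet is named portworx in the kube-system namespace:

kubectl rollout status daemonset/portworx -n kube-system
kubectl get pods -n kube-system -l name=portworx -o wide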

Upgrade Stork

  1. On a machine that has kubectl access to your cluster, enter the following commands to download the latest Stork specs:

    KBVER=$(kubectl version --short | awk -Fv '/Server Version: /{print $3}')
    curl -o stork.yaml -L "https://install.portworx.com/3.0?kbver=${KBVER}&comp=stork"

    If you are using your own private or custom registry for your container images, add &reg=<your-registry-url> to the URL. Example:

    curl -o stork.yaml -L "https://install.portworx.com/3.0?kbver=1.17.5&comp=stork&reg=artifactory.company.org:6555"
  2. Next, apply the spec with:

    kubectl apply -f stork.yaml
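
    Optionally, verify that the Stork deployments finished rolling out. This sketch assumes the stork and stork-scheduler deployments run in the kube-system namespace:

    kubectl rollout status deployment/stork -n kube-system
    kubectl rollout status deployment/stork-scheduler -n kube-system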

Customize the upgrade process

Specify a different Portworx upgrade image

You can invoke the upgrade script with the -t flag to override the default Portworx image. For example, the following command upgrades Portworx to the portworx/oci-monitor:2.5.0 image.

curl -fsSL "https://install.portworx.com/3.0/upgrade" | bash -s -- -t 2.5.0

Airgapped clusters

Step 1: Make container images available to your nodes

To make container images available to nodes that do not have access to the internet, please follow the air-gapped install instructions first.
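
If you are staging the images in a local registry yourself, the flow typically looks like the sketch below. The registry URL and image tags are placeholders; use the image names and versions listed in the air-gapped install instructions:

REGISTRY=myregistry.net:5443
# Pull the images on a machine with internet access, retag them for your
# registry, and push them so your cluster nodes can pull them locally.
docker pull portworx/oci-monitor:3.0.0
docker tag portworx/oci-monitor:3.0.0 $REGISTRY/portworx/oci-monitor:3.0.0
docker push $REGISTRY/portworx/oci-monitor:3.0.0
docker pull portworx/talisman:latest
docker tag portworx/talisman:latest $REGISTRY/portworx/talisman:latest
docker push $REGISTRY/portworx/talisman:latest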

Step 2: Run the upgrade

Once you've made the new container images available for your nodes, perform one of the following steps, depending on how you're storing your images:

Step 2a: Upgrade using local registry server

If you uploaded the container images to your local registry server, you must run the upgrade script with your registry server image names:

REGISTRY=myregistry.net:5443
curl -fsL "https://install.portworx.com/3.0/upgrade" | bash -s -- \
-I $REGISTRY/portworx/talisman -i $REGISTRY/portworx/oci-monitor -t 3.0

Step 2b: Upgrade using images directly on your nodes

Fetch and run the upgrade script with the following curl command to override the automatically defined image locations and instruct Kubernetes and Portworx to use the images located on your nodes during the upgrade:

curl -fsL "https://install.portworx.com/3.0/upgrade" | bash -s -- -t 3.0

Troubleshooting

The "field is immutable" error message

If you see the following error when you upgrade Stork, it means that the kubectl apply -f stork.yaml command is trying to update a label selector, which is immutable:

The Deployment "stork-scheduler" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"component":"scheduler", "name":"stork-scheduler", "tier":"control-plane"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

To resolve this problem:

  1. Delete the existing Stork deployment.
  2. Resume the upgrade process by applying the new spec, as shown in the example below.
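
For example, assuming the stork-scheduler deployment and your stork.yaml spec target the kube-system namespace:

kubectl delete deployment stork-scheduler -n kube-system
kubectl apply -f stork.yaml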

Failed to apply spec due to Forbidden: may not be used when type is ClusterIP

If you had an older version of the Portworx manifests installed and you try to apply the latest manifests, you might see the following error during kubectl apply:

Service "portworx-service" is invalid: [spec.ports[0].nodePort: Forbidden: may not be used when `type` is 'ClusterIP', spec.ports[1].nodePort: Forbidden: may not be used when `type` is 'ClusterIP', spec.ports[2].nodePort: Forbidden: may not be used when `type` is 'ClusterIP', spec.ports[3].nodePort: Forbidden: may not be used when `type` is 'ClusterIP']
Error from server (Invalid): error when applying patch:

To fix this:

  • Change the type of the portworx-service service to ClusterIP. If the type was NodePort, you must also remove the nodePort entries from the spec.

    kubectl edit service portworx-service -n kube-system
  • Change the type of the portworx-api service to ClusterIP. If the type was NodePort, you must also remove the nodePort entries from the spec.

    kubectl edit service portworx-api -n kube-system
  • Reapply your specs.
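
For reference, after the edit the spec section of each service should look roughly like the following before you reapply your specs. This is a trimmed sketch, and the port name and number shown are illustrative rather than the full list:

spec:
  type: ClusterIP
  ports:
    - name: px-api
      port: 9001
      protocol: TCP
      targetPort: 9001
      # no nodePort entries remain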

Find out status of Portworx pods

To get more information about the status of Portworx DaemonSet across the nodes, run:

kubectl get pods -o wide -n kube-system -l name=portworx
NAME             READY   STATUS              RESTARTS   AGE   IP              NODE
portworx-9njsl   2/2     Running             0          16d   192.168.56.73   minion4
portworx-fxjgw   2/2     Running             0          16d   192.168.56.74   minion5
portworx-fz2wf   2/2     Running             0          5m    192.168.56.72   minion3
portworx-x29h9   1/2     ContainerCreating   0          0s    192.168.56.71   minion2

As we can see in the example output above:

  • looking at STATUS and READY, we can tell that the rolling upgrade is currently creating the container on the “minion2” node
  • looking at AGE, we can tell that:
    • “minion4” and “minion5” have had Portworx up for 16 days (likely still on the old version and yet to be upgraded), while
    • “minion3” has had Portworx up for only 5 minutes (it likely just finished the upgrade and restarted Portworx)
  • if we keep monitoring (see the watch command below), we will observe that the upgrade does not move on to the next node until STATUS is “Running” and READY shows all containers ready (2/2 in this example), meaning the readinessProbe reports that the Portworx service is operational
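
A simple way to keep monitoring the rollout is to watch the pod list. This assumes Portworx runs in the kube-system namespace:

watch kubectl get pods -o wide -n kube-system -l name=portworx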

Find out version of all nodes in the Portworx cluster

Run the following commands to inspect the Portworx cluster:

PX_POD=$(kubectl get pods -n kube-system -l name=portworx -o jsonpath='{.items[0].metadata.name}')
kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl cluster list
[...]
Nodes in the cluster:
ID        DATA IP         CPU        MEM TOTAL   ...   VERSION             STATUS
minion5   192.168.56.74   1.530612   4.0 GB      ...   1.2.11.4-3598f81    Online
minion4   192.168.56.73   3.836317   4.0 GB      ...   1.2.11.4-3598f81    Online
minion3   192.168.56.72   3.324808   4.1 GB      ...   1.2.11.10-421c67f   Online
minion2   192.168.56.71   3.316327   4.1 GB      ...   1.2.11.10-421c67f   Online
From the output above, we can confirm that:

  • “minion4” and “minion5” are still on the old Portworx version (1.2.11.4), while
  • “minion3” and “minion2” have already been upgraded to the latest version (in our case, 1.2.11.10).