
Upgrade Portworx on Kubernetes using the Operator

caution

If you're upgrading OpenShift to 4.3, you must upgrade Portworx before you can do so. See the Preparing Portworx to upgrade to OpenShift 4.3 page for details.

If you're using the Portworx Operator, you can upgrade or change your Portworx version at any time by modifying the StorageCluster spec. In addition to managing a Portworx cluster, the Operator also manages the following components of the Portworx platform:

  • Stork
  • Autopilot

For simplicity, the Portworx Operator handles component upgrades without user intervention. When you upgrade Portworx, the Operator also upgrades the installed components to their recommended versions.

Smart upgrade

The Smart upgrade feature introduces a streamlined, resilient upgrade process for Portworx nodes, allowing them to be upgraded in parallel while maintaining volume quorum and without application disruption.

By default, smart upgrade is disabled and the maxUnavailable value is set to 1, which means one Portworx node is upgraded at a time.

You can enable smart upgrade by setting the spec.updateStrategy.rollingUpdate.disruption.allow parameter to false and specifying the maximum number of nodes that can be upgraded at a time with the spec.updateStrategy.rollingUpdate.maxUnavailable parameter in the StorageCluster.
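For example, a minimal update strategy that enables smart upgrade and allows up to five nodes to be upgraded at a time looks like the following sketch (the value 5 is only an illustration; choose a value that suits your cluster size):

spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 5
      disruption:
        allow: false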

The following are the key benefits of using smart upgrades:

  • Parallel upgrades: Based on volume distribution, the Portworx Operator does its best to select multiple nodes for concurrent upgrades, accelerating the upgrade process while avoiding downtime and application disruption.
  • Volume quorum maintenance: Ensures volume quorum is maintained throughout the upgrade process.
  • Managed node upgrades: You can use the spec.updateStrategy.rollingUpdate.maxUnavailable parameter in the StorageCluster CRD to manage the number of nodes upgraded in parallel.
  • Automatic reconciliation: The Portworx operator actively monitors and reconciles the storage nodes during upgrades, ensuring smooth progression while preserving quorum integrity.
important
  • Applications using volumes with a replication factor of 1 will experience downtime.
  • Smart upgrade is not supported for synchronous DR setup.

Prerequisites

  • You must already be running Portworx through the Operator; this method will not work for other Portworx deployments.
  • You must be running the latest available version of the Operator.

  • For smart upgrades, ensure the following prerequisites are met:
    • Required Portworx and Operator versions:
      • Portworx version 3.1.2 or later
      • Operator version 24.2.0 or later
    • The cluster must be ready and available for upgrade. You can use the pxctl status and kubectl get storagenodes -n portworx commands to check the cluster status (see the example commands after this list).
      • No nodes or pools should be under maintenance.
      • No decommissioned nodes should appear in the output of the kubectl get storagenodes command.
      • No nodes should have the px/service=stop or px/service=disabled label. If nodes have these labels, remove them and restart the Portworx service or decommission the node before the upgrade.
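
The following is a minimal pre-upgrade check, assuming Portworx runs in the <px-namespace> namespace and that pxctl is run from inside a Portworx pod (the /opt/pwx/bin/pxctl path and the pod and node names are placeholders for your environment):

# Confirm that all storage nodes are online and none are decommissioned
kubectl get storagenodes -n <px-namespace>
kubectl exec <portworx-pod> -n <px-namespace> -- /opt/pwx/bin/pxctl status

# Find nodes carrying the service labels, then remove the label where needed
kubectl get nodes -l 'px/service in (stop,disabled)'
kubectl label node <node-name> px/service-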

Prepare Anthos environments

If you're not running Portworx on Anthos, skip this section.

If your Portworx cluster is managing the underlying storage devices in an Anthos deployment, add the following annotation to the StorageCluster spec to ensure that, during an Anthos or Portworx upgrade, Portworx does not fail over the internal KVDB to a storageless node:

portworx.io/misc-args: "-rt_opts wait-before-retry-period-in-secs=360"
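
For reference, the annotation goes under metadata.annotations in the StorageCluster; a minimal sketch, with the cluster name and namespace as placeholders:

apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: portworx
  namespace: <px-namespace>
  annotations:
    portworx.io/misc-args: "-rt_opts wait-before-retry-period-in-secs=360"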

Upgrade Portworx

note
  • If you're using a configmap, update the version manifest for the Portworx Operator; otherwise, you might not see the expected image versions.
  • If you override the default PodDisruptionBudget in GKE environments, ensure that maxUnavailable is less than the storage-pdb-min-available value to balance upgrade speed and disruption. Portworx by Pure Storage recommends the following settings for surge upgrades (see the gcloud sketch below):
    • maxSurge=1
    • maxUnavailable=0
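
  If these values refer to GKE node-pool surge upgrade settings, a sketch of applying them with gcloud looks like the following (node pool and cluster names are placeholders, and you may also need to pass --zone or --region):

    gcloud container node-pools update <node-pool-name> \
        --cluster=<cluster-name> \
        --max-surge-upgrade=1 \
        --max-unavailable-upgrade=0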

  1. Identify Your StorageCluster:

    Retrieve the name of your Portworx StorageCluster within the appropriate namespace:

    kubectl get storagecluster -n <px-namespace>
    NAME                                              STATUS   VERSION   AGE
    px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   Online   2.10.3    43d

    If your cluster is installed in a different namespace, specify it using the -n flag.

  2. Edit the StorageCluster:

    Modify the StorageCluster resource to update Portworx to the desired version:

    kubectl edit storagecluster -n <px-namespace> <storagecluster-name>

    In the editor, make the following changes in the StorageCluster:

    • Update the spec.image field to your desired Portworx version.
    • (optional) For smart upgrade, ensure that the spec.updateStrategy.rollingUpdate.disruption.allow parameter is set to false and the maximum number of nodes that can be upgraded at a time is set using the spec.updateStrategy.rollingUpdate.maxUnavailable parameter.
    apiVersion: core.libopenstorage.org/v1
    kind: StorageCluster
    metadata:
      name: portworx
      namespace: <px-namespace>
    spec:
      image: portworx/oci-monitor:<desired-version>
      # For smart upgrades, uncomment the lines below and ensure `disruption.allow` is set to false.
      # Use the `maxUnavailable` field to control the maximum number of Portworx nodes that can be upgraded at a time.
      # updateStrategy:
      #   type: RollingUpdate
      #   rollingUpdate:
      #     maxUnavailable: 5
      #     minReadySeconds: 0
      #     disruption:
      #       allow: false
    note
    • If there are any component images configured in the StorageCluster, such as the spec.stork.image or spec.autopilot.image fields, update those fields to the latest versions.
    • To look up recent versions, refer to the release notes for each component.
  3. Verify the Upgrade:

    Confirm the upgrade by checking the Portworx version on the nodes:

    kubectl get storagenodes -n <px-namespace> -l name=portworx
    NAME       ID                                     STATUS   VERSION             AGE
    node-1-1   xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   Online   <desired-version>   10d
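
    Because the Operator also manages Stork and Autopilot, you can optionally confirm their image versions as well; the following is a sketch that assumes the default deployment names:

    kubectl get deployments -n <px-namespace> -o wide | grep -E 'stork|autopilot'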

note

For air-gapped clusters, if you do not see the expected image versions that you have configured in the configmap, edit the StorageCluster to include the autoUpdateComponents: Once parameter. This forces the Portworx Operator to reconcile all of the components and retrieve the correct images (see the sketch below).
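
A minimal sketch of where that parameter goes in the StorageCluster spec (surrounding fields abbreviated):

spec:
  image: portworx/oci-monitor:<desired-version>
  autoUpdateComponents: Once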
