
Upgrade Portworx Backup

Portworx Backup supports upgrades to version n from version n-1 or n-2. For example, if you are on Portworx Backup version 2.4.x, you can upgrade directly to 2.7.x. However, if you are on 2.3.x and want to upgrade to 2.7.x, you need to upgrade in a phased manner: first upgrade from 2.3.x to 2.5.x, and then from 2.5.x to 2.7.x.

To upgrade from Portworx Backup version 2.4.x or 2.5.x to 2.7.x, use the px-central Helm chart.
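
For reference, a minimal upgrade sketch follows. The release name px-central, the central namespace, the portworx repo alias, and the chart version are assumptions; substitute the values from your installation, and note that your installation may require additional --set values as described in the sections below:

    # Hypothetical release name, namespace, repo alias, and chart version;
    # replace them with the values from your installation.
    helm repo update
    helm upgrade px-central portworx/px-central --namespace central --version 2.7.3 --reuse-values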

Pre-upgrade job

Starting with Portworx Backup version 2.7.3, a pre-upgrade job checks that the MongoDB StatefulSet pods are in the Ready state before proceeding with the upgrade. The upgrade fails if any of the three MongoDB pods in the StatefulSet is not in the Ready state. By default, Portworx Backup assigns mongo-0 as the primary pod, and mongo-1 and mongo-2 become secondary pods.

Before you proceed with the Helm upgrade to 2.7.3 from a previous version of Portworx Backup, ensure that all MongoDB pods are in the Running state. If they are not, one of the following two cases can arise:
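
As a quick check, you can list the MongoDB pods before the upgrade. The sketch below assumes Portworx Backup is installed in the central namespace; use the namespace of your installation:

    # Assumes the "central" namespace; use the namespace where Portworx Backup is installed.
    kubectl get pods -n central | grep pxc-backup-mongodb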

Case 1

Secondary pod is not in Ready state

If a secondary pod (mongo-1 or mongo-2) is not in the Ready state:

  • Identify the failing secondary pod that is not in the Ready state.
  • Delete the PVC (Persistent Volume Claim) associated with the failing pod.
  • Delete the pod so that it is recreated (see the sketch after this list).
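
A hedged example for a failing mongo-1 pod follows; the central namespace and the PVC placeholder are assumptions, so confirm the actual PVC name bound to the failing pod first:

    # List the MongoDB PVCs to find the one bound to the failing pod
    # (namespace "central" is an assumption).
    kubectl get pvc -n central | grep mongodb
    # Delete that PVC and the failing pod so that the StatefulSet recreates them.
    kubectl delete pvc <pvc-bound-to-pxc-backup-mongodb-1> -n central
    kubectl delete pod pxc-backup-mongodb-1 -n central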

Ensure that all pods have reached the Ready state, and then continue with the Helm upgrade.

Case 2

Primary pod is not in Ready state

If mongo-0 becomes unresponsive for some time due to node, network, or underlying storage issues, one of the secondary pods (mongo-1 or mongo-2) is stepped up as the primary pod. If mongo-0 recovers but is still not in the Ready state, perform the following steps to bring the mongo-0 pod back to the Ready state:

  1. To sync the data from the current primary (mongo-1 or mongo-2) to the mongo-0 pod, execute the following steps:

    a. Run the following command to enter the MongoDB shell from the mongo-0 pod (if you are working from outside the pods, see the kubectl exec sketch after these steps):

    mongosh admin -u root -p pxcentral

    b. Run the following command to find out the primary pod:

    rs.status()

    Sample output (truncated):

    members: [
      {
        _id: 1,
        name: 'pxc-backup-mongodb-1.pxc-backup-mongodb-headless:<port-number>',
        health: 1,
        state: 1,
        stateStr: 'PRIMARY',

    c. Run the following commands to set the highest priority for the primary pod that you identified in step b (mongo-1, at index 1, in this example):

    // Keep the current primary (members[1], mongo-1 in this example) at the
    // highest priority while mongo-0 resyncs.
    cfg = rs.conf()
    cfg.members[0].priority = 1
    cfg.members[1].priority = 5
    cfg.members[2].priority = 1
    rs.reconfig(cfg)
    rs.status()

    Now, wait for the sync process to complete.
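
If you are running these commands from outside the pods, a minimal sketch for opening the MongoDB shell with kubectl exec follows; the central namespace is an assumption:

    # Assumes the "central" namespace; adjust the pod name (mongo-0, mongo-1, or mongo-2) as needed.
    kubectl exec -it pxc-backup-mongodb-0 -n central -- mongosh admin -u root -p pxcentral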

To make mongo-0 the primary pod again:

  1. Enter the MongoDB shell from mongo-1 with the following command:

    mongosh admin -u root -p pxcentral
  2. Now run the following commands to set mongo-0 as the primary pod:

    // Give mongo-0 (members[0]) the highest priority so that it is elected
    // as the primary pod again.
    cfg = rs.conf()
    cfg.members[0].priority = 5
    cfg.members[1].priority = 1
    cfg.members[2].priority = 1
    rs.reconfig(cfg)
    rs.status()

    Once all the pods are in the Ready state, proceed with the Helm upgrade.

    If the primary or any of the secondary pods cannot be recovered with the steps above, contact the Portworx Backup support team.

KDMP backup behavior for upgrades

Assume that you are on Portworx Backup version 2.6.0 or earlier, with the cloud native driver installed on your application clusters. When you upgrade to Portworx Backup version 2.7.x or later, the existing manual backups are not affected.

Backups that are scheduled to be created in the future with the cloud native driver:

  • Are migrated to CSI snapshots with the offload to backup location option if a volume snapshot class is available (see the check after this list)

  • Use the KDMP driver to create a direct KDMP backup if a volume snapshot class is not available
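
To check whether a volume snapshot class is available on an application cluster, you can run something like the following; it requires the CSI snapshot CRDs to be installed on that cluster:

    # Lists the VolumeSnapshotClass resources on the application cluster;
    # requires the CSI external-snapshotter CRDs.
    kubectl get volumesnapshotclass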

    note

    Only scheduled backups that were created using the cloud native driver before the upgrade are migrated, not the ones created after the upgrade. Scheduled backups created with other drivers remain unchanged. One of the backups might fail after migrating from the cloud native driver to CSI + Offload if the reconciler has not yet updated the schedule CR.

    note

    The Helm upgrade fails if all three MongoDB pods are not in the Ready state. To upgrade to Portworx Backup version 2.7.3 from a previous version when a MongoDB pod is not in the Ready state, delete the failing pod and the PVC associated with it, scale the replica set back up to three, and ensure that all the pods are in the Ready state before the Helm upgrade.
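
    A hedged sketch of the scale-up step, assuming the central namespace and the pxc-backup-mongodb StatefulSet name from this document:

      # Scale the MongoDB StatefulSet back to three replicas
      # (namespace "central" is an assumption).
      kubectl scale statefulset pxc-backup-mongodb --replicas=3 -n central
      kubectl get pods -n central | grep pxc-backup-mongodb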

The following sections guide you through upgrading Portworx Backup (in internet-connected and air-gapped environments) and the Stork component:
