EARLY ACCESS

This feature is available as Early Access (EA) and should not be used in production.

Version: 3.3

Move Volumes using Pool Drain

Use the Pool Drain operation to evacuate all volumes from a set of storage pools or nodes to other pools or nodes in your Portworx cluster. This feature lets you safely clear all volume replicas off a node before performing any disruptive operations on it.

The pool drain operation automates the movement of volume replicas to other eligible pools in the cluster while preventing data loss and honoring defined volume placement strategies. Creating new volumes on the pools being drained is not allowed; however, writes to volumes being moved continue uninterrupted.

When to use Pool Drain?

Use pool drain to clear out all volumes on a pool or a node before performing any disruptive operations on it. For instance, to prevent data loss when decommissioning a node, use pool drain to move all volumes away from that node. This is an improvement over the previous workflow of having to manually increase the replication factor of each volume temporarily and then remove the volume replicas from the node being decommissioned.
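
For reference, that older manual workflow looked roughly like the sketch below for a single volume. This is only illustrative: the volume name myvol and the node IDs are placeholders, and the exact pxctl volume ha-update flags can vary between Portworx versions.

    # Temporarily raise the replication factor so a new replica is built on a
    # node that is staying in the cluster (names and IDs are placeholders).
    pxctl volume ha-update --repl 3 --node <healthy-node-ID> myvol

    # Once the new replica has finished syncing, drop the replica that lives
    # on the node being decommissioned.
    pxctl volume ha-update --repl 2 --node <node-to-decommission-ID> myvol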

How to use Pool Drain?

  1. Submit a pool drain Job

    Use the following command to cordon the pool and begin the drain operation.

    pxctl service pool drain submit --source-uuids <pool-ID1>,<pool-ID-2>

    Note that only one active pool drain request can be submitted at a time. To submit a new request, first pause or cancel the ongoing pool drain operation.

    pxctl sv pool drain submit --source-uuids 0fxxxx7e-03ee-4ab4-ae6e-524838exxxx5
    Pool drain request: xxxx08682720967179 submitted successfully.
    For latest status: pxctl service pool drain status --job-id xxxx08682720967179

    Specify multiple source pools and/or define target pools as needed. To move all volumes from a node, provide the source node and the target node in a similar manner.

    pxctl service pool drain submit --source-uuids <node-ID-1>,<node-ID-2> --target-uuids <node-ID-3>,<node-ID-4>

    If no targets are specified, Portworx will automatically select an appropriate node or pool for moving the volume.

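    If you are scripting the drain, you can also capture the job ID from the submit output shown above. The following is a minimal bash sketch that assumes the output format above; the pool UUID is a placeholder.

    #!/usr/bin/env bash
    set -euo pipefail

    SOURCE_POOL="<pool-ID1>"   # replace with the UUID of the pool to drain

    # Submit the drain and keep the CLI output for parsing.
    OUTPUT="$(pxctl service pool drain submit --source-uuids "${SOURCE_POOL}")"
    echo "${OUTPUT}"

    # Pull the job ID out of a line like:
    #   Pool drain request: xxxx08682720967179 submitted successfully.
    JOB_ID="$(echo "${OUTPUT}" | awk '/Pool drain request:/ {print $4}')"
    echo "Submitted pool drain job: ${JOB_ID}"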

  2. Monitor the status of a pool drain job

    You can monitor the progress of an ongoing pool drain job as follows:

    pxctl sv pool drain status --job-id <job-id>
    pxctl sv pool drain status --job-id xxxx08682720967179
    Rebalance summary:

    Job ID : xxxx08682720967179
    Job State : DONE
    Last updated : Thu, 29 May 2025 22:22:47 UTC
    Total running time : 3 minutes and 33 seconds
    Job summary
    - Provisioned space balanced : 40 GiB done, 0 B pending
    - Volume replicas balanced : 2 done, 0 pending

    Rebalance actions:

    Replica add action:
    Volume : XXXX0581640481XXXX
    Pool : be1bXXXX-f59a-XXXX-b147-4c0c5d1933dc
    Node : 8d12XXXX-87e8-48c0-XXXX-a59759a9e533
    Replication set ID : 0
    Start : Thu, 29 May 2025 22:19:13 UTC
    End : Thu, 29 May 2025 22:19:35 UTC
    State : DONE
    Work summary
    - Provisioned space balanced : 20 GiB done, 0 B pending

    Replica remove action:
    Volume : 892905816404812476
    Pool : 0fxxxx7e-03ee-4ab4-ae6e-524838exxxx5
    Node : a4aaXXXX-dc7e-XXXX-8b8a-8c244bfb423b
    Replication set ID : 0
    Start : Thu, 29 May 2025 22:19:35 UTC
    End : Thu, 29 May 2025 22:19:36 UTC
    State : DONE
    Work summary
    - Provisioned space balanced : 20 GiB done, 0 B pending

    This will show the list of volumes that are currently being acted on, and the status of their operations. Once all volumes have been moved from the sources, the status of the pool drain job is updated to DONE.
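
    If you want a script to wait for the drain to finish, you can poll the status command until the job reaches a terminal state. Below is a minimal bash sketch, assuming the status output format shown above.

    JOB_ID="<job-id>"   # job ID returned by the submit command

    # Poll every 30 seconds until the job reports DONE. Depending on your
    # Portworx version you may also want to treat a cancelled job as terminal.
    while true; do
        STATE="$(pxctl service pool drain status --job-id "${JOB_ID}" \
            | awk '/Job State/ {print $NF}')"
        echo "Pool drain job ${JOB_ID} state: ${STATE}"
        [ "${STATE}" = "DONE" ] && break
        sleep 30
    done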

  3. Pause, resume or cancel a pool drain job

    You can pause an active pool drain job, halting any further processing of volumes, using the command:

    pxctl sv pool drain pause --job-id <job-id>

    Note: Previously submitted ha-increase or ha-reduce operations may still progress even after the job is paused.

    To resume a paused pool drain job, run:

    pxctl sv pool drain resume --job-id <job-id>

    To cancel an active pool drain job, use the following command:

    pxctl sv pool drain cancel --job-id <job-id>

    It is not possible to resume a cancelled pool drain job. Note: Cancelling an active pool drain job may result in some volumes having incorrect ha-levels.
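
    Because a cancelled job can leave some volumes with unexpected ha-levels, it is worth reviewing replication factors after a cancel. A minimal sketch, using a placeholder job ID:

    JOB_ID="<job-id>"

    # Cancel the active pool drain job.
    pxctl service pool drain cancel --job-id "${JOB_ID}"

    # Review volume HA levels afterwards and correct any volume whose
    # replication factor is not what you expect, for example with
    # pxctl volume ha-update.
    pxctl volume list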

  4. Clear Pool drain status

    Provisioning volumes on pools that are being drained is not allowed. Once Portworx successfully moves all volume replicas from a storage pool, it updates the pool status to drained.

    pxctl sv pool show
    PX drive configuration:
    Pool ID: 0
    UUID: 0fxxxx7e-03ee-4ab4-ae6e-524838exxxx5
    IO Priority: LOW
    Labels: iopriority=LOW,medium=STORAGE_MEDIUM_MAGNETIC
    Size: 1000 GiB
    Status: Drained
    Has metadata: Yes
    Balanced: Yes
    Drives:
    1: /dev/sdc, Total size 1000 GiB, Online
    Cache Drives:
    No Cache drives found in this pool

    To allow provisioning of volumes on the storage pool again, clear the pool's drain status using:

    pxctl sv pool drain clear --uuid <pool-ID>

    Note that this command only works with pool IDs. To clear the drain status for multiple pools, run this command for each pool.
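
    For example, to clear the drain status on several pools in one pass, you can loop over their UUIDs (placeholder UUIDs shown):

    # Replace these with the UUIDs of the pools whose drain status should be cleared.
    POOL_UUIDS="<pool-ID1> <pool-ID2>"

    for POOL_UUID in ${POOL_UUIDS}; do
        pxctl service pool drain clear --uuid "${POOL_UUID}"
    done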

Troubleshooting Pool Drain operations

  • If a node that has volumes with a replication factor of 1 is Offline, StorageDown, or in Maintenance Mode, a pool drain operation on that node cannot move those volumes, and the job is eventually cancelled.
  • Ensure source pools or nodes are online and healthy before starting; a quick pre-check is sketched below.
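
A pre-check along the following lines can help catch both issues before you submit a job. This is only a sketch; it assumes your Portworx version reports pool status in pxctl service pool show and an HA column in pxctl volume list.

    # Confirm the node and its pools are online and healthy.
    pxctl status
    pxctl service pool show

    # Review replication factors; per the note above, volumes with an HA of 1
    # cannot be moved by a drain if their node is Offline, StorageDown, or in
    # Maintenance Mode.
    pxctl volume list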

Pool Drain Command line reference

Action            Command Example
Submit job        pxctl service pool drain submit --source-uuids <pool-ID1>
                  With target pools: pxctl service pool drain submit --source-uuids <pool-ID1>,<pool-ID2> --target-uuids <pool-ID3>,<pool-ID4>
List jobs         pxctl service pool drain list
Check job status  pxctl service pool drain status --job-id <job-id>
Pause job         pxctl service pool drain pause --job-id <job-id>
Resume job        pxctl service pool drain resume --job-id <job-id>
Cancel job        pxctl service pool drain cancel --job-id <job-id>