Decommission a node in ARO
This guide describes a recommended workflow for decommissioning a Portworx node in your cluster.
Migrate application pods using Portworx volumes that are running on this node
If you plan to remove Portworx from a node, applications on that node that use Portworx volumes must be migrated first. If Portworx is not running, existing application containers will end up with read-only volumes, and new ones will fail to start.
Perform the following steps to migrate select pods.
- Cordon the node using the following command:

  oc adm cordon <node>
- Reschedule application pods that use Portworx volumes onto different nodes:

  oc delete pod <pod-name> -n <application-namespace>

  Because application pods are expected to be managed by a controller such as a Deployment or StatefulSet, a replacement pod will be created on another node.
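The two steps above can be sketched as a small dry-run script. The node name and the pod/namespace pairs below are illustrative placeholders, and the script only prints the oc commands instead of executing them:

```shell
# Dry-run sketch: print the commands that would migrate pods off a node.
# NODE and the pod list are illustrative placeholders, not real cluster data.
NODE="aro-worker-1"

echo "oc adm cordon $NODE"

# In a real cluster you would discover these pods (for example with
# 'oc get pods --all-namespaces --field-selector spec.nodeName=<node>')
# and keep only the ones mounting Portworx-backed PVCs.
for entry in "vdbench-pod:vdbench" "mysql-0:db"; do
  pod=${entry%%:*}   # part before the colon: pod name
  ns=${entry##*:}    # part after the colon: namespace
  echo "oc delete pod $pod -n $ns"
done
```

Printing the commands first makes it easy to review exactly which pods will be rescheduled before any disruption happens.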
Remove Portworx from the cluster
Follow either the "Removing offline nodes" section or the "Removing a functional node from a cluster" section to decommission the Portworx node from the cluster.
Removing a functional node from a cluster
A functional Portworx node may need to be removed from the cluster. This section demonstrates removing a node both by running commands on the node itself and by running them from another node.
The following output shows the state of the cluster, including the node IPs and node IDs:

pxctl status

Status: PX is operational
Node ID: xxxxxxxx-xxxx-xxxx-xxxx-0027f6bbcbd1
	IP: X.X.X.0
	Local Storage Pool: 1 pool
	POOL  IO_PRIORITY  SIZE    USED    STATUS  ZONE  REGION
	0     LOW          64 GiB  11 GiB  Online  c     us-east-1
	Local Storage Devices: 1 device
	Device  Path       Media Type          Size    Last-Scan
	0:1     /dev/xvdf  STORAGE_MEDIUM_SSD  64 GiB  25 Feb 17 21:13 UTC
	total   -          64 GiB
Cluster Summary
	Cluster ID: xxxxxxxx-xxxx-xxxx-xxxx-d4a612b74cc3
	IP             ID                                    Used    Capacity  Status
	172.31.40.38   xxxxxxxx-xxxx-xxxx-xxxx-2bc112f5f131  11 GiB  64 GiB    Online
	172.31.37.211  xxxxxxxx-xxxx-xxxx-xxxx-a85e0514ae8b  11 GiB  64 GiB    Online
	172.31.35.130  xxxxxxxx-xxxx-xxxx-xxxx-893373631483  11 GiB  64 GiB    Online
	172.31.45.106  xxxxxxxx-xxxx-xxxx-xxxx-2eeddcd64d51  11 GiB  64 GiB    Online
	172.31.45.56   xxxxxxxx-xxxx-xxxx-xxxx-ec8e1420e645  11 GiB  64 GiB    Online
	172.31.46.119  xxxxxxxx-xxxx-xxxx-xxxx-0027f6bbcbd1  11 GiB  64 GiB    Online (This node)
	172.31.39.201  xxxxxxxx-xxxx-xxxx-xxxx-936b1b58aa24  11 GiB  64 GiB    Online
	172.31.33.151  xxxxxxxx-xxxx-xxxx-xxxx-41e70a72eafd  11 GiB  64 GiB    Online
	172.31.33.252  xxxxxxxx-xxxx-xxxx-xxxx-428e727eb6b8  11 GiB  64 GiB    Online
Global Storage Pool
	Total Used     : 99 GiB
	Total Capacity : 576 GiB
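The node ID needed by the later pxctl cluster delete step can be read off the Cluster Summary table. A minimal sketch that extracts it for a given IP might look like the following; STATUS_OUTPUT here is a trimmed sample of the table above, and in practice you would pipe the real pxctl status output instead:

```shell
# Extract the node ID for a given IP from a 'pxctl status' Cluster Summary.
# STATUS_OUTPUT is a trimmed sample of the table above (an assumption for
# illustration); in practice, pipe the real 'pxctl status' output.
STATUS_OUTPUT='IP             ID                                    Used    Capacity  Status
172.31.45.106  xxxxxxxx-xxxx-xxxx-xxxx-2eeddcd64d51  11 GiB  64 GiB    Online
172.31.33.252  xxxxxxxx-xxxx-xxxx-xxxx-428e727eb6b8  11 GiB  64 GiB    Online'

TARGET_IP="172.31.45.106"

# Column 1 is the IP, column 2 the node ID; print the ID of the matching row.
NODE_ID=$(printf '%s\n' "$STATUS_OUTPUT" | awk -v ip="$TARGET_IP" '$1 == ip { print $2 }')
echo "$NODE_ID"
```

Capturing the ID in a variable avoids copy-paste mistakes when the same ID is reused in later commands.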
Suspend active cloudsnap operations
- Identify any active cloudsnap operations running on the node that you intend to decommission:

  pxctl cloudsnap status

  The STATE of active operations shows as Backup-Active:

  NAME                                  SOURCEVOLUME        STATE          NODE          TIME-ELAPSED    COMPLETED
  xxxxxxxx-xxxx-xxxx-xxxx-278535e49860  885345022234521857  Backup-Done    10.13.90.125  39.746191264s   Tue, 22 Mar 2022 22:53:37 UTC
  xxxxxxxx-xxxx-xxxx-xxxx-3c3ccf47f276  186701534582547510  Backup-Done    10.13.90.122  1.677455484s    Tue, 22 Mar 2022 23:59:49 UTC
  xxxxxxxx-xxxx-xxxx-xxxx-73176c2d03e2  885345022234521857  Backup-Done    10.13.90.125  27.550329395s   Wed, 23 Mar 2022 00:00:15 UTC
  xxxxxxxx-xxxx-xxxx-xxxx-2307865c1b93  649554470078043771  Backup-Active  10.13.90.125  5m12.61653365s

  From this output, identify the volumes with active backups. For example, if node 10.13.90.125 is being decommissioned, then the volume with an active backup is 649554470078043771.

- Identify the namespace of the volume the cloudsnap operation is occurring on. The namespace is displayed under the Labels section in the output of the following command. Replace <source_volume> with the SOURCEVOLUME value of the volume that is in a Backup-Active state in the previous output:

  pxctl volume inspect <source_volume>

  Volume        : 649554470078043771
  Name          : pvc-xxxxxxxx-xxxx-xxxx-xxxx-d4827680f6de
  Size          : 500 GiB
  Format        : ext4
  HA            : 3
  IO Priority   : LOW
  Creation time : Mar 22 20:37:52 UTC 2022
  Shared        : v4 (service)
  Status        : up
  State         : Attached: xxxxxxxx-xxxx-xxxx-xxxx-10f4e076cac8 (10.13.90.119)
  Last Attached : Mar 22 20:37:58 UTC 2022
  Device Path   : /dev/pxd/pxd649554470078043771
  Labels        : mount_options=nodiscard=true,namespace=vdbench,nodiscard=true,pvc=vdbench-pvc-sharedv4,repl=3,sharedv4=true,sharedv4_svc_type=ClusterIP
  ...

- Suspend backup operations for the volume and wait for the current backup to complete:

  storkctl suspend volumesnapshotschedule vdbench-pvc-sharedv4-schedule -n vdbench

- Verify the suspension. The SUSPEND field will show as true:

  storkctl get volumesnapshotschedule -n vdbench

  NAME                           PVC                   POLICYNAME  PRE-EXEC-RULE  POST-EXEC-RULE  RECLAIM-POLICY  SUSPEND  LAST-SUCCESS-TIME
  vdbench-pvc-sharedv4-schedule  vdbench-pvc-sharedv4  testpolicy                                 Delete          true     22 Mar 22 17:10 PDT

Repeat these steps until all active snaps complete and all backup operations are suspended on the node that you want to decommission.
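The first step above, picking out volumes with active backups on the node being decommissioned, can be sketched with a little awk over the pxctl cloudsnap status output. The sample text and node IP below are taken from the example output and are illustrative only:

```shell
# Find SOURCEVOLUME values with an active backup on a given node.
# CSTATUS is a trimmed, illustrative sample of 'pxctl cloudsnap status'
# output; in practice, pipe the real command output instead.
CSTATUS='NAME                                  SOURCEVOLUME        STATE          NODE
xxxxxxxx-xxxx-xxxx-xxxx-278535e49860  885345022234521857  Backup-Done    10.13.90.125
xxxxxxxx-xxxx-xxxx-xxxx-2307865c1b93  649554470078043771  Backup-Active  10.13.90.125'

DECOMMISSION_IP="10.13.90.125"

# Column 2 is SOURCEVOLUME, column 3 the STATE, column 4 the NODE.
ACTIVE_VOLUMES=$(printf '%s\n' "$CSTATUS" |
  awk -v ip="$DECOMMISSION_IP" '$3 == "Backup-Active" && $4 == ip { print $2 }')
echo "$ACTIVE_VOLUMES"
```

Each volume ID printed this way can then be fed to pxctl volume inspect to find the namespace of its snapshot schedule.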
Prevention of data loss
If any node hosts a volume with a replication factor of 1, Portworx disallows decommissioning that node, because doing so would cause data loss.
One possible workaround is to increase the replication factor of single-replica volumes by running volume ha-update before decommissioning the node.
- List all the volumes hosted on the node being decommissioned:

  pxctl volume list --node xxxxxxxx-xxxx-xxxx-xxxx-2eeddcd64d51

  ID                  NAME                                      SIZE   HA  SHARED  ENCRYPTED  PROXY-VOLUME  IO_PRIORITY  STATUS                          SNAP-ENABLED
  633738568577538909  pvc-xxxxxxxx-xxxx-xxxx-xxxx-d4827680f6de  2 GiB  3   no      no         no            LOW          up - attached on 172.31.45.106  no
  161898313715947409  pvc-xxxxxxxx-xxxx-xxxx-xxxx-68f0d970e10c  2 GiB  1   no      no         no            LOW          up - attached on 172.31.45.106  no

- Increase the replication factor:

  pxctl volume ha-update --repl 2 161898313715947409
Once the volume is completely replicated onto another node, continue with the node decommissioning. Because the volume now has a replica on another node, decommissioning the node will simply reduce the volume's replication factor and remove the node.
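The workaround above can be sketched as a scan of the volume list for single-replica volumes, emitting the corresponding ha-update commands. The listing below is a trimmed, illustrative sample with the HA value in column 4, and the commands are only printed, not run:

```shell
# Print 'pxctl volume ha-update' commands for every HA=1 volume on a node.
# VOLUME_LIST is a trimmed, illustrative sample of
# 'pxctl volume list --node <node-id>' output; column 4 is the HA
# (replication factor) column in this simplified sample.
VOLUME_LIST='ID                  NAME                                      SIZE  HA
633738568577538909  pvc-xxxxxxxx-xxxx-xxxx-xxxx-d4827680f6de  2GiB  3
161898313715947409  pvc-xxxxxxxx-xxxx-xxxx-xxxx-68f0d970e10c  2GiB  1'

# Skip the header row; for each HA=1 volume, emit an ha-update command.
CMDS=$(printf '%s\n' "$VOLUME_LIST" |
  awk 'NR > 1 && $4 == 1 { print "pxctl volume ha-update --repl 2 " $1 }')
echo "$CMDS"
```

Reviewing the printed commands before running them keeps control over which volumes get an extra replica.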
Placing the node in maintenance mode
After identifying the node to be removed (see "Identify the node that you want to remove from the cluster" above), place the node in maintenance mode.
Log in to the node to be decommissioned:
pxctl service maintenance --enter
This is a disruptive operation, PX will restart in maintenance mode.
Are you sure you want to proceed ? (Y/N): y
Entered maintenance mode.
Run the cluster delete command
Example 1: Running the cluster delete command from a different node
ssh to 172.31.46.119 and run the following command:
pxctl cluster delete xxxxxxxx-xxxx-xxxx-xxxx-2eeddcd64d51
Node xxxxxxxx-xxxx-xxxx-xxxx-2eeddcd64d51 successfully deleted.
Example 2: Running the cluster delete command from the same node
ssh to 172.31.33.252 and type:
pxctl cluster delete xxxxxxxx-xxxx-xxxx-xxxx-428e727eb6b8
Node xxxxxxxx-xxxx-xxxx-xxxx-428e727eb6b8 successfully deleted.
Clean up Portworx metadata on the node
Remove the Portworx installation or stop Portworx on the node to allow the cleanup of Portworx metadata. See remove Portworx installation/stop Portworx.
To learn how to remove or clean up Portworx metadata on the decommissioned node, see clean up Portworx metadata on the node.