Decommission a Node
This guide describes a recommended workflow for decommissioning a Portworx node in your Kubernetes cluster.
Step 1. Migrate application pods using Portworx volumes that are running on this node
If you plan to remove Portworx from a node, applications on that node that use Portworx volumes must be migrated first. If Portworx is not running, existing application containers will end up with read-only volumes and new ones will fail to start.
Perform the following steps to migrate select pods:
Cordon the node using:
kubectl cordon <node>
Delete the application pods using Portworx volumes using:
kubectl delete pod <pod-name>
Since application pods are expected to be managed by a controller like StatefulSet, Kubernetes will spin up a new replacement pod on another node.
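The migration steps above can be sketched as a short sequence of commands. The node name minion2 is an assumption for illustration; which pods use Portworx volumes depends on your applications, so the listing step is only a starting point for identifying them.

```shell
# Example node name; replace with your node.
NODE=minion2

# Prevent new pods from being scheduled on the node.
kubectl cordon "$NODE"

# List all pods currently running on the node so you can identify
# the ones that mount Portworx volumes.
kubectl get pods --all-namespaces -o wide \
  --field-selector spec.nodeName="$NODE"

# Delete each application pod that uses a Portworx volume; its
# controller (Deployment, StatefulSet, etc.) recreates it elsewhere.
kubectl delete pod <pod-name> -n <namespace>
```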
Step 2. Decommission Portworx
To decommission Portworx, perform the following steps:
a) Remove Portworx from the cluster
Follow this guide to decommission the Portworx node from the cluster.
b) Remove Portworx installation from the node
Apply the px/enabled=remove label to the node. This removes the existing Portworx systemd service and also applies the px/enabled=false label to stop Portworx from running on the node in the future.
For example, the command below removes the existing Portworx installation from minion2 and ensures that the Portworx pod doesn't run there in the future.
kubectl label nodes minion2 px/enabled=remove --overwrite
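You can confirm the label state with the `-L` (label-columns) option of kubectl get nodes; minion2 is an example node name.

```shell
# Show the px/enabled label as a column. It reads "remove" right
# after labeling, and "false" once Portworx has been removed from
# the node (as described above).
kubectl get nodes minion2 -L px/enabled
```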
Step 3. Ensure application pods using Portworx don’t run on this node
If you need to continue using the Kubernetes node without Portworx, you will need to ensure that your application pods using Portworx volumes don't get scheduled on it.
You can ensure this by adding the schedulerName: stork field to your application specs (Deployment, StatefulSet, etc.). Stork is a scheduler extension that schedules pods using Portworx PVCs only on nodes where Portworx is running. Refer to the Using scheduler convergence article for more information.
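A minimal sketch of where the field goes, reusing the nginx Deployment names from the example later in this guide:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # Ask Stork, instead of the default scheduler, to place this pod
      # so that it only lands on nodes where Portworx is running.
      schedulerName: stork
      containers:
      - name: nginx
        image: nginx
```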
Another way to achieve this is to use inter-pod affinity: define a pod affinity rule in your applications that ensures application pods are scheduled only on nodes where the Portworx pod is running.
Consider the nginx example below:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        # Inter-pod affinity rule restricting nginx pods to run only on
        # nodes where Portworx pods are running (Portworx pods have a
        # label name=portworx which is used in the rule)
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: name
                operator: In
                values:
                - "portworx"
            topologyKey: kubernetes.io/hostname
            namespaces:
            - "kube-system"
      hostNetwork: true
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-persistent-storage
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nginx-persistent-storage
        persistentVolumeClaim:
          claimName: px-nginx-pvc
```
Step 4. Uncordon the node
You can now uncordon the node using:
kubectl uncordon <node>
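You can verify that the node is schedulable again by inspecting its status; minion2 is an example node name.

```shell
# Before uncordoning, STATUS shows "Ready,SchedulingDisabled";
# afterwards it should show "Ready" again.
kubectl get node minion2
```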
If you want to permanently decommission the node, you can skip Step 5. (Optional) Rejoin node to the cluster.
Step 5. (Optional) Rejoin node to the cluster
If you want Portworx to start again on this node and join as a new node, follow the node rejoin steps.
Step 6. (FlashArray cloud drives only) Disconnect or delete drives from your FlashArray
If you are using Pure FlashArray as a cloud drive provider, follow the additional instructions to decommission FlashArray nodes.