Scale up
When you add a node or a worker to your Kubernetes cluster, you do not need to install Portworx on it manually. Because Portworx is deployed through the Operator, the Operator automatically configures Portworx on each new node.
Add a new node
- List the storage nodes on the cluster using the following command:

  ```shell
  kubectl get storagenodes -n <px_namespace>
  ```

  ```text
  NAME                                   ID                                     STATUS   VERSION           AGE
  ip-10-xx-xxx-161.pwx.purestorage.com   83ff9847-70d7-4237-xxxx-9f0ed0de6c92   Online   3.1.5.0-454b18c   24m
  ip-10-xx-xxx-103.pwx.purestorage.com   d963e55c-96a5-4b9b-xxxx-1d7e6ce9f4de   Online   3.1.5.0-454b18c   24m
  ip-10-xx-xxx-5.pwx.purestorage.com     221b7a93-76b4-4af9-xxxx-d1052a5e5c86   Online   3.1.5.0-454b18c   24m
  ```
- Add a new node to your Kubernetes cluster, then list the storage nodes on the cluster using the following command:

  ```shell
  kubectl get storagenodes -n <px_namespace>
  ```

  ```text
  NAME                                   ID                                     STATUS         VERSION           AGE
  ip-10-xx-xxx-161.pwx.purestorage.com   83ff9847-70d7-4237-xxxx-9f0ed0de6c92   Online         3.1.5.0-454b18c   26m
  ip-10-xx-xxx-103.pwx.purestorage.com   d963e55c-96a5-4b9b-xxxx-1d7e6ce9f4de   Online         3.1.5.0-454b18c   26m
  ip-10-xx-xxx-5.pwx.purestorage.com     221b7a93-76b4-4af9-xxxx-d1052a5e5c86   Online         3.1.5.0-454b18c   26m
  ip-10-xx-xxx-7.pwx.purestorage.com                                            Initializing                     13s
  ```

  In the above example output, the new node is initializing on the cluster.
- Use the `kubectl get pods` command to display your pods:

  ```shell
  kubectl get pods -n <px_namespace> -l "name=portworx"
  ```

  ```text
  NAME                                                     READY   STATUS    RESTARTS   AGE
  px-cluster-78fcac4f-4e81-4e43-be80-aeed17cf96xx-xxxxc   1/1     Running   0          33m
  px-cluster-78fcac4f-4e81-4e43-be80-aeed17cf96xx-xxxxz   1/1     Running   0          33m
  px-cluster-78fcac4f-4e81-4e43-be80-aeed17cf96xx-xxxxp   1/1     Running   0          33m
  px-cluster-78fcac4f-4e81-4e43-be80-aeed17cf96xx-xxxxv   1/1     Running   0          6m42s
  ```
- Your Portworx cluster automatically scales as you scale your Kubernetes cluster, and Portworx is installed on the newly added node. Display the status of your Portworx cluster by entering the `pxctl status` command on one of the Portworx pods:

  ```shell
  pxctl status
  ```

  ```text
  Status: PX is operational
  Telemetry: Healthy
  Metering: Disabled or Unhealthy
  License: Trial (expires in 31 days)
  Node ID: 27a55341-bf14-4363-xxxx-af176171e06b
          IP: 10.13.171.7
          Local Storage Pool: 2 pools
          POOL    IO_PRIORITY     RAID_LEVEL      USABLE    USED      STATUS    ZONE      REGION
          0       HIGH            raid0           64 GiB    4.0 GiB   Online    default   default
          1       HIGH            raid0           384 GiB   12 GiB    Online    default   default
          Local Storage Devices: 4 devices
          Device    Path        Media Type           Size      Last-Scan
          0:1       /dev/sdb    STORAGE_MEDIUM_SSD   64 GiB    23 Sep 24 13:29 UTC
          1:1       /dev/sdc    STORAGE_MEDIUM_SSD   128 GiB   23 Sep 24 13:29 UTC
          1:2       /dev/sdd    STORAGE_MEDIUM_SSD   128 GiB   23 Sep 24 13:29 UTC
          1:3       /dev/sde    STORAGE_MEDIUM_SSD   128 GiB   23 Sep 24 13:29 UTC
          total     -                                448 GiB
          Cache Devices:
           * No cache devices
  Cluster Summary
          Cluster ID: px-cluster-78fcac4f-4e81-4e43-xxxx-aeed17cf96a2
          Cluster UUID: e8cc86f5-d150-42a2-xxxx-d0241aff1fb9
          Scheduler: kubernetes
          Total Nodes: 4 node(s) with storage (4 online)
          IP              ID                                     SchedulerNodeName                      Auth       StorageNode   Used     Capacity   Status   StorageStatus    Version           Kernel             OS
          10.xx.xxx.103   d963e55c-96a5-4bxx-xxxx-1d7e6ce9f4de   ip-10-xx-xxx-103.pwx.purestorage.com   Disabled   Yes           16 GiB   448 GiB    Online   Up               3.1.5.0-454b18c   6.5.0-27-generic   Ubuntu 22.04.3 LTS
          10.xx.xxx.161   83ff9847-70d7-42xx-xxxx-9f0ed0de6c92   ip-10-xx-xxx-161.pwx.purestorage.com   Disabled   Yes           16 GiB   448 GiB    Online   Up               3.1.5.0-454b18c   6.5.0-27-generic   Ubuntu 22.04.3 LTS
          10.xx.xxx.7     27a55341-bf14-43xx-xxxx-af176171e06b   ip-10-xx-xxx-7.pwx.purestorage.com     Disabled   Yes           16 GiB   448 GiB    Online   Up (This node)   3.1.5.0-454b18c   6.5.0-27-generic   Ubuntu 22.04.3 LTS
          10.xx.xxx.5     221b7a93-76b4-4axx-xxxx-d1052a5e5c86   ip-10-xx-xxx-5.pwx.purestorage.com     Disabled   Yes           16 GiB   448 GiB    Online   Up               3.1.5.0-454b18c   6.5.0-27-generic   Ubuntu 22.04.3 LTS
  Warnings:
           WARNING: Internal Kvdb is not using dedicated drive on nodes [10.xx.xxx.103 10.xx.xxx.5 10.xx.xxx.161]. This configuration is not recommended for production clusters.
  Global Storage Pool
          Total Used      :  64 GiB
          Total Capacity  :  1.8 TiB
  ```
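Before moving on, you may want to script a check that every storage node has finished initializing. The following is a minimal sketch, not part of the procedure itself: the `sample_output` variable is a hard-coded stand-in for `kubectl get storagenodes -n <px_namespace>` output, and on a live cluster you would pipe the real command's output through the same `awk` filter.

```shell
# Sketch: flag any storage node whose STATUS column is not "Online".
# sample_output stands in for `kubectl get storagenodes -n <px_namespace>`.
sample_output='NAME                                   STATUS
ip-10-xx-xxx-161.pwx.purestorage.com   Online
ip-10-xx-xxx-103.pwx.purestorage.com   Online
ip-10-xx-xxx-5.pwx.purestorage.com     Online
ip-10-xx-xxx-7.pwx.purestorage.com     Initializing'

# Skip the header row, then print the name of any node not yet Online.
not_online=$(printf '%s\n' "$sample_output" | awk 'NR > 1 && $2 != "Online" {print $1}')

if [ -n "$not_online" ]; then
  echo "Still waiting on: $not_online"
else
  echo "All storage nodes are Online"
fi
```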
Scale up the Cassandra StatefulSet
- Display your StatefulSets by entering the `kubectl get statefulsets` command:

  ```shell
  kubectl get sts cassandra
  ```

  ```text
  NAME        READY   AGE
  cassandra   3/3     35m
  ```

  In the above example output, note that the number of replicas is three.
- To scale up the `cassandra` StatefulSet, you must increase the number of replicas. Enter the `kubectl scale statefulsets` command, specifying the following:

  - The name of your StatefulSet (this example uses `cassandra`)
  - The desired number of replicas (this example creates four replicas)

  ```shell
  kubectl scale statefulsets cassandra --replicas=4
  ```

  ```text
  statefulset "cassandra" scaled
  ```
- To list your pods, enter the `kubectl get pods` command:

  ```shell
  kubectl get pods -l "app=cassandra"
  ```

  ```text
  NAME          READY   STATUS              RESTARTS   AGE
  cassandra-0   1/1     Running             0          36m
  cassandra-1   1/1     Running             0          35m
  cassandra-2   1/1     Running             0          34m
  cassandra-3   0/1     ContainerCreating   0          28s
  ```

  In the above example output, a new pod is spinning up.
- Display your StatefulSets by entering the `kubectl get statefulsets` command:

  ```shell
  kubectl get sts cassandra
  ```

  ```text
  NAME        READY   AGE
  cassandra   4/4     45m
  ```

  In the above example output, note that the number of replicas is now updated to four.
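The `READY` column packs the ready and desired replica counts into a single `ready/desired` value. As a small illustration (the `ready_col` value below is hard-coded from example output, not read from a live cluster), shell parameter expansion can split it so a script can confirm the scale-up completed:

```shell
# Sketch: split a kubectl READY value such as "4/4" into its two counts.
ready_col="4/4"            # stand-in for the READY column of `kubectl get sts cassandra`
ready=${ready_col%%/*}     # text before the slash: ready replicas
desired=${ready_col##*/}   # text after the slash: desired replicas

if [ "$ready" -eq "$desired" ]; then
  echo "StatefulSet fully scaled: $ready of $desired replicas ready"
fi
```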
- To open a shell session into one of your pods, enter the following `kubectl exec` command, specifying your pod name. This example opens the `cassandra-0` pod:

  ```shell
  kubectl exec -it cassandra-0 -- bash
  ```
- Use the `nodetool status` command to retrieve information about your Cassandra cluster:

  ```shell
  nodetool status
  ```

  ```text
  Datacenter: DC1-K8Demo
  ======================
  Status=Up/Down
  |/ State=Normal/Leaving/Joining/Moving
  --  Address         Load         Tokens   Owns (effective)   Host ID                                Rack
  UN  10.2xx.xxx.4    103.84 KiB   32       46.5%              xxxxxxxx-xxxx-xxxx-xxxx-2cc0159b9859   Rack1-K8Demo
  UN  10.2xx.xx.199   83.12 KiB    32       57.7%              xxxxxxxx-xxxx-xxxx-xxxx-bfaf5a24b364   Rack1-K8Demo
  UN  10.2xx.xx.135   83.16 KiB    32       48.0%              xxxxxxxx-xxxx-xxxx-xxxx-7d22d0cc1b0b   Rack1-K8Demo
  UN  10.2xx.xxx.199  65.64 KiB    32       47.9%              xxxxxxxx-xxxx-xxxx-xxxx-b6b9e8b0a130   Rack1-K8Demo
  ```
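Each `UN` row in `nodetool status` output is a member that is Up and in Normal state, so after scaling to four replicas you expect four of them. As a rough sketch (the abbreviated sample rows below are placeholders for real `nodetool status` output), you can count them with `grep`:

```shell
# Sketch: count Up/Normal ("UN") members in nodetool status output.
# nodetool_sample stands in for the host-status rows of `nodetool status`.
nodetool_sample='UN 10.2xx.xxx.4   103.84 KiB 32 46.5%
UN 10.2xx.xx.199  83.12 KiB  32 57.7%
UN 10.2xx.xx.135  83.16 KiB  32 48.0%
UN 10.2xx.xxx.199 65.64 KiB  32 47.9%'

up_normal=$(printf '%s\n' "$nodetool_sample" | grep -c '^UN')
echo "$up_normal nodes are Up/Normal"
```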
- Terminate the shell session:

  ```shell
  exit
  ```