Service operations using pxctl
The Portworx pxctl CLI tool allows you to run the following service operations:
- Perform a node audit
- Manage the call home feature
- Generate a diagnostics package
- Get the version of the installed software
- Configure KVDB
- Place Portworx in maintenance mode
- Manage the physical storage drives
These commands are helpful when you want to debug issues related to your Portworx cluster.
You can see an overview of the available service operations by running the following command:
/opt/pwx/bin/pxctl service --help
Perform a node audit
You can audit the node with:
pxctl service audit
AuditID Error Message
kvdb-limits none KV limits audit not yet available
kvdb-response none KV response audit not yet available
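The audit output is tabular, so it's easy to check from a script. Below is a minimal sketch that flags any audit whose Error column is something other than none; it assumes the column layout shown above:

```bash
#!/usr/bin/env bash
# Flag audits whose Error column is not "none".
# Assumes the tabular layout shown above (AuditID, Error, Message).
if pxctl service audit | awk 'NR > 1 && $2 != "none" { found = 1 } END { exit !found }'; then
  echo "one or more audits reported errors"
else
  echo "no audit errors reported"
fi
```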
Manage the call home feature
With pxctl, you can enable and disable the call home feature:
pxctl service call-home enable
Call home feature successfully enabled
If you want to disable this feature, just run:
pxctl service call-home disable
Call home feature successfully disabled
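If you toggle this feature from automation, you can check for the confirmation message rather than rely on silence; a minimal sketch, assuming the success strings shown above:

```bash
# Enable call home and verify the confirmation message shown above.
if pxctl service call-home enable | grep -q "successfully enabled"; then
  echo "call home is now enabled"
fi
```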
Generate a complete diagnostics package
When there is an operational failure, you can use the pxctl service diags command to generate a complete diagnostics package. Run the pxctl service diags command with the --help flag to list the available subcommands and flags.
The diagnostics package will be available at /var/cores. It will be automatically uploaded to Pure1 if telemetry is enabled.
See Enable Pure1 integration for details on enabling Pure1 telemetry.
As an example, here's how to generate a complete diagnostics package on the local node:
pxctl service diags -a
Connected to Docker daemon. unix:///var/run/docker.sock
Running PX diagnostics on local node....
Using PX OCI container: 2.8.0.0-c60727b
Archived cores to: /var/cores/test-k8s1-node0-px-cores.20210514230349.tgz, cleaning up archived cores...
Removing /var/cores/core-px-storage-sig6-user0-group0-pid312-time1620773250...
Getting diags file....
Creating a diags tar ball...
Executing core cleanup...
Finished core cleanup.
Removing /var/cores/test-k8s1-node0-px-cores.20210514230349.tgz...
Generated diags: /var/cores/test-k8s1-node0-diags-20210514230401.tar.gz
Done generating PX diagnostics.
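The generated package is an ordinary tarball under /var/cores, with the hostname and a timestamp in its name (as in the example output above). You can locate and inspect it with standard tools before handing it to support:

```bash
# List generated diagnostics packages, newest first.
ls -lt /var/cores/*-diags-*.tar.gz

# Peek at the archive contents without extracting it
# (file name taken from the example output above).
tar -tzf /var/cores/test-k8s1-node0-diags-20210514230401.tar.gz | head
```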
Get the version of the installed software
The following command displays the version of the installed software:
pxctl service info
PX (OCI) Version: 2.0.2.1-1d83ac2
PX (OCI) Build Version: 1d83ac2baeb27451222edcd543249dd2c2f941e4
PX Kernel Module Version: 72D3C244593F45167A6B49D
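If a script needs only the version string, you can extract it from this output; a minimal sketch, assuming the layout shown above:

```bash
# Print just the PX (OCI) version, e.g. "2.0.2.1-1d83ac2".
pxctl service info | awk -F': *' '/PX \(OCI\) Version/ { print $2 }'
```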
Configure KVDB
You can configure the KVDB with the pxctl service kvdb command. Run the pxctl service kvdb command with the --help flag to list the available subcommands and flags.
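For example, on clusters that use the built-in internal KVDB, you can list the current KVDB members. (This subcommand is an assumption based on common deployments; check --help for what's available in your version.)

```bash
# List internal KVDB members (assumes the cluster runs the internal KVDB).
pxctl service kvdb members
```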
Place Portworx in maintenance mode
Use the pxctl service maintenance command to enter or exit maintenance mode. Once the node is in maintenance mode, you can add or replace drives, add memory, and so on. Run the pxctl service maintenance command with the --help flag to list the available subcommands and flags.
Enter maintenance mode with:
pxctl service maintenance --enter
This is a disruptive operation, PX will restart in maintenance mode.
Are you sure you want to proceed ? (Y/N): y
Once you're done adding or replacing drives, or adding memory, you can exit maintenance mode by running:
pxctl service maintenance --exit
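After exiting maintenance mode, it's a good idea to confirm that the node has rejoined the cluster and its storage is back online:

```bash
# Check overall node and cluster health after leaving maintenance mode.
pxctl status
```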
Manage the physical storage drives
You can manage the physical storage drives on a node using the pxctl service drive command. Run the pxctl service drive command with the --help flag to list the available subcommands and flags.
Add a physical drive to a server
Use the pxctl service drive add command to add a physical drive to a server. Run the pxctl service drive add command with the --help flag to list the available subcommands and flags.
The following example shows how to add a physical drive:
pxctl service drive add --drive /dev/mapper/volume-3bfa72dd -o start
Adding drives may make storage offline for the duration of the operation.
Are you sure you want to proceed ? (Y/N): y
Adding device /dev/mapper/volume-3bfa72dd ...
Drive add successful. Requires restart.
To add physical drives, you must place the server in maintenance mode first.
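Putting the pieces together, a typical drive-add session therefore looks like the following sketch. The enter command and the drive add command each prompt for confirmation, and the device path is the example from above:

```bash
# 1. Enter maintenance mode (disruptive: Portworx restarts in maintenance mode).
pxctl service maintenance --enter

# 2. Add the new physical drive (example device path from above).
pxctl service drive add --drive /dev/mapper/volume-3bfa72dd -o start

# 3. Exit maintenance mode once the drive add has completed.
pxctl service maintenance --exit
```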
Rebalance storage across drives
Over time, your cluster's storage may become unbalanced, with some pools and drives filled to capacity while others are underutilized. This can occur for a number of reasons, such as:
- Adding new nodes or pools
- Increasing the size of pools by increasing the size of underlying drives or adding new drives
- Volumes being removed from only a subset of nodes
If your cluster's storage is unbalanced, you can use the pxctl service pool rebalance command to redistribute the volume replicas. This command determines which pools are over-loaded and under-loaded, and moves volume replicas from the former to the latter. This ensures that all pools on the nodes are evenly loaded.
You can run this cluster-wide command from any of your Portworx nodes.
Start a rebalance operation
Use the submit subcommand to start the rebalance operation, which returns a job ID:
pxctl service pool rebalance submit
This command will start rebalance for:
- all storage pools for checking if they are over-loaded
- all storage pools for checking if they are under-loaded
which meet either of following conditions:
1. Pool's provision space is over 20% or under 20% of mean value across pools
2. Pool's used space is over 20% or under 20% of mean value across pools
*Note: --remove-repl-1-snapshots is off, space from such snapshots will not be reclaimed
Do you wish to proceed ? (Y/N): Y
Pool rebalance request: 859941020356581382 submitted successfully.
For latest status: pxctl service pool rebalance status --job-id 859941020356581382
The rebalance operation runs as a background service.
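Because the job runs in the background, you can poll its status periodically until the Job State reaches DONE; for example, with the standard watch utility:

```bash
# Re-check the rebalance status every 30 seconds (Ctrl+C to stop watching).
watch -n 30 "pxctl service pool rebalance status --job-id 859941020356581382"
```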
List running rebalance jobs
Enter the list subcommand to see all currently running jobs:
pxctl service pool rebalance list
JOB STATE CREATE TIME STATUS
859941020356581382 RUNNING 2020-08-11T11:16:12.928785518Z
Monitor a rebalance operation
Monitor the status of a rebalance operation, as well as all the steps it has taken so far, by entering the status subcommand with the --job-id flag and the ID of a running rebalance job:
pxctl service pool rebalance status --job-id 859941020356581382
Rebalance summary:
Job ID : 859941020356581382
Job State : DONE
Last updated : Sun, 23 Aug 2020 22:08:31 UTC
Total running time : 4 minutes and 25 seconds
Job summary
- Provisioned space balanced : 827 GiB done, 0 B pending
- Used space balanced : 17 GiB done, 0 B pending
- Volume replicas balanced : 42 done, 0 pending
Rebalance actions:
Replica add action:
Volume : 956462089713112944
Pool : xxxxxxxx-xxxx-xxxx-xxxx-eaca75365214
Node : xxxxxxxx-xxxx-xxxx-xxxx-489393d6636b
Replication set ID : 0
Start : Sun, 23 Aug 2020 22:04:06 UTC
End : Sun, 23 Aug 2020 22:04:27 UTC
Work summary
- Provisioned space balanced : 20 GiB done, 0 B pending
Replica remove action:
Volume : 956462089713112944
Pool : xxxxxxxx-xxxx-xxxx-xxxx-5e34a2b712e3
Node : xxxxxxxx-xxxx-xxxx-xxxx-81b36fef6177
Replication set ID : 0
Start : Sun, 23 Aug 2020 22:04:06 UTC
End : Sun, 23 Aug 2020 22:04:29 UTC
Work summary
- Provisioned space balanced : 20 GiB done, 0 B pending
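To script against this output, you can pull out just the Job State field; a minimal sketch, assuming the summary layout shown above:

```bash
# Print only the job state, e.g. "RUNNING" or "DONE".
pxctl service pool rebalance status --job-id 859941020356581382 \
  | awk -F': *' '/Job State/ { print $2 }'
```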
Pause or terminate a running rebalance operation
If you need to temporarily suspend a running rebalance operation, you can pause it. Otherwise, you can cancel it entirely:
- Use the cancel subcommand with the --job-id flag and the ID of a running rebalance job to terminate that operation:
  pxctl service pool rebalance cancel --job-id 859941020356581382
- Use the pause subcommand with the --job-id flag and the ID of a running rebalance job to suspend that operation; you can resume it later, as shown below:
  pxctl service pool rebalance pause --job-id 859941020356581382
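A paused job keeps its job ID, so you can pick it up again later with the resume subcommand (listed in the reference below):

```bash
pxctl service pool rebalance resume --job-id 859941020356581382
```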
pxctl service pool rebalance reference
Rebalance storage pools
pxctl service pool rebalance [command] [flags]
Commands
| Command | Description |
| --- | --- |
| cancel | Cancels a rebalance job specified with the --job-id flag and a valid job ID |
| list | Lists rebalance jobs in the system |
| pause | Pauses a rebalance job specified with the --job-id flag and a valid job ID |
| resume | Resumes a rebalance job specified with the --job-id flag and a valid job ID |
| status | Shows the status of a rebalance job specified with the --job-id flag and a valid job ID |
| submit | Starts a new rebalance job |
Display drive information
You can use the pxctl service drive show command to display drive information on the server:
pxctl service drive show
PX drive configuration:
Pool ID: 0
Type: Default
UUID: xxxxxxxx-xxxx-xxxx-xxxx-2b69eeebb81b
IO Priority: HIGH
Labels: medium=STORAGE_MEDIUM_MAGNETIC,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,iopriority=HIGH,kubernetes.io/arch=amd64,kubernetes.io/hostname=myhostname-k8s1-node0,kubernetes.io/os=linux
Size: 3.0 TiB
Status: Online
Has metadata: Yes
Balanced: Yes
Drives:
3: /dev/vdd, Total size 1.0 TiB, Online
1: /dev/vdb, Total size 1.0 TiB, Online
2: /dev/vdc, Total size 1.0 TiB, Online
Cache Drives:
No Cache drives found in this pool
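If you just need a quick look at each physical drive's state, you can filter the output for the device lines; a minimal sketch, assuming the layout shown above:

```bash
# Show each drive line, e.g. "3: /dev/vdd, Total size 1.0 TiB, Online".
pxctl service drive show | grep "/dev/"
```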