
Manage your storage pool capacity in GKE

As your cluster usage increases and the data on your storage pools grows, you may start to run out of capacity. Portworx periodically measures and records storage pool space usage statistics, which you can query through its metrics or with the pxctl CLI.
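
For example, you can check per-pool usage from a node running Portworx with the pxctl CLI (a minimal illustration; the exact output fields vary by Portworx version):

    pxctl service pool show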

This page provides information on how Portworx alerts you when a storage pool reaches its maximum capacity and the consequences of taking no action. It also explains how to recover your pool if it has reached its maximum capacity.

Alerts and impacts when a pool reaches maximum capacity

To manage your pools' storage capacity, Portworx triggers alerts and takes necessary actions to avoid disruption.

  • When a storage pool reaches 80% of its capacity, Portworx triggers an alert that the storage pool has high space usage.

  • When a storage pool reaches 90% of its capacity, Portworx takes the following actions:

    • Sets the pool as full and the node as StorageDown.
    • Rejects all active IOs on the pool with the ENOSPC error.
    • Sets mountpoints as read-only, so write operations are not allowed.
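
You can review these alerts from the CLI; as a sketch, assuming your Portworx version supports the alerts subcommand:

    pxctl alerts show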

To avoid disruption, you must expand your storage pool capacity by adding drives or increasing disk size. For more details, refer to the Expand your storage pool size section.
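
As a sketch of the expansion workflow, assuming the pool UID reported by pxctl service pool show, you could resize a pool as follows (the size and operation values below are placeholders):

    pxctl service pool expand --uid <pool-UID> --size <new-size-in-GiB> --operation auto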

Implications of a full pool

If you cannot expand your storage pool, the full pool can lead to resource allocation issues. The following table explains the implications of a full pool in different scenarios:

| Volume type | Node health | Implications |
| --- | --- | --- |
| repl 1 | All nodes are marked full | Read operations are allowed on the device. Existing mounts remain accessible. Portworx rejects new mount requests. Portworx block devices can be manually mounted with the correct options. |
| repl 2 or 3 | All nodes are marked full | The volume is out of quorum. No read operations are allowed on the device. Portworx rejects new mount requests. |
| repl 2 or 3 | Some nodes are marked full | Read operations are allowed as long as the volume is in a clean state. |

If all the nodes holding a volume's replicas are marked as full, the volume becomes unavailable, and no read or write operations are possible.
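
To check whether a volume is in quorum and in a clean state, you can inspect it with pxctl (illustrative; vol1 is a placeholder volume name):

    pxctl volume inspect vol1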

Revive your storage pool

You can restore a pool by deleting volumes, data, or volume replicas. Choose one of the following methods to restore your storage pool.

  • Delete volume replicas:

     pxctl volume ha-update --repl <current-HA-level -1> --node <node-ID> <volume-name>

    Example:

    pxctl volume ha-update --repl 2 --node node1 vol1

    If vol1 currently has an HA level of 3, this command reduces it to 2 by removing the volume's replica from node node1.

  • Delete data or volume:

    pxctl volume delete <vol-name>
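
    To identify candidates for deletion, you can first list volumes and their usage (a minimal illustration):

    pxctl volume list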

  • Use kubectl to cordon the affected nodes, then run the following command to temporarily increase the free space threshold and bring the pools back online:
    pxctl cluster options update --free-space-threshold-gb <threshold-value>
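
    For example, assuming a node named gke-node-1 and an illustrative threshold of 50 GB (choose a value appropriate for your pools):

    kubectl cordon gke-node-1
    pxctl cluster options update --free-space-threshold-gb 50

    After you free up or add capacity, uncordon the nodes with kubectl uncordon.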