
Internal KVDB for Portworx on Kubernetes

Portworx includes a built-in internal key-value database (KVDB) that eliminates the need for an external KVDB such as etcd. When you install Portworx through Portworx Central, the internal KVDB is enabled by default. Portworx automatically deploys, configures, and manages the internal KVDB cluster.

The internal KVDB runs on three nodes in the Kubernetes cluster and stores Portworx metadata required for cluster operations.
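
To see which nodes currently hold internal KVDB membership, you can use the pxctl CLI. The commands below are a minimal sketch that assumes you run them directly on a Portworx node where pxctl is installed:

# Show overall cluster health, including KVDB quorum state
pxctl status

# List the nodes that are members of the internal KVDB cluster
pxctl service kvdb members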

Prerequisites

The internal KVDB requires a dedicated drive for data storage. You can configure either a KVDB drive or a metadata drive, depending on your deployment requirements.

KVDB drive requirements

  • If IOPS are independent of disk size, Portworx recommends a minimum size of 32 GB and a minimum of 450 IOPS.
  • If IOPS are dependent on disk size, Portworx recommends a size of 150 GB to ensure you get a minimum of 450 IOPS.
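
If you deploy with the Portworx Operator, you can request a dedicated KVDB drive in the StorageCluster spec. The following snippet is a sketch for on-premises disks; the device paths are placeholders for your environment:

apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster
  namespace: portworx
spec:
  storage:
    devices:
    - /dev/sdb            # data drive
    kvdbDevice: /dev/sdc  # dedicated internal KVDB drive (32 GB or larger)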

Metadata drive requirements

  • If IOPS are independent of disk size, Portworx recommends a minimum size of 64 GB and a minimum of 450 IOPS.

  • If IOPS are dependent on disk size, Portworx recommends a size of 150 GB to ensure you get a minimum of 450 IOPS.

    note

    If you use cloud-based storage, size the drive according to your cloud provider’s specifications to meet the minimum IOPS requirements.
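
As a worked example of the cloud sizing rule: AWS gp2 volumes provide a baseline of 3 IOPS per GB, so a 150 GB drive yields 150 × 3 = 450 IOPS, exactly the recommended minimum. With the Portworx Operator, a cloud KVDB drive can be requested through the cloudStorage section of the StorageCluster spec; the snippet below is a sketch assuming AWS gp2 volumes:

spec:
  cloudStorage:
    deviceSpecs:
    - type=gp2,size=200
    kvdbDeviceSpec: type=gp2,size=150  # 150 GB x 3 IOPS/GB = 450 IOPS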

Control internal KVDB node placement

You can control which nodes run the internal KVDB by labeling Kubernetes nodes. Add the label px/metadata-node=true to designate nodes for the internal KVDB cluster.

kubectl label nodes <list-of-node-names> px/metadata-node=true

The following scenarios describe how Portworx interprets the px/metadata-node label and its values:

  • If the node is labeled px/metadata-node=true: The node is eligible to run the internal KVDB and it becomes part of the internal KVDB cluster.
  • If the node is labeled px/metadata-node=false: The node is excluded from the internal KVDB cluster.
  • If the label px/metadata-node is not present: All Portworx storage nodes are eligible to run the internal KVDB.
  • If the label has an invalid value (for example, px/metadata-node=blah): Portworx does not start the internal KVDB on that node.
  • If the cluster has mixed labels: When no nodes are labeled px/metadata-node=true but one or more nodes are labeled px/metadata-node=false, the false-labeled nodes are excluded and all remaining storage nodes are eligible to run the internal KVDB.
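
For example, to exclude a node from internal KVDB placement and then list the nodes that are explicitly designated for it (node names are placeholders):

kubectl label nodes <node-name> px/metadata-node=false
kubectl get nodes -l px/metadata-node=true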

Install Portworx with internal KVDB

To use the internal KVDB during installation, log in to Portworx Central and select the Built-in option for etcd when creating your Portworx cluster.
Portworx automatically deploys and manages the internal KVDB cluster as part of the installation.
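
If you edit the StorageCluster spec directly instead of generating it in Portworx Central, the internal KVDB is selected with the kvdb.internal field. A minimal sketch:

apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster
  namespace: portworx
spec:
  kvdb:
    internal: true  # use the built-in internal KVDB instead of an external etcd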

Internal KVDB cluster recovery

If a Portworx node that was previously part of the internal KVDB cluster becomes unavailable for more than three minutes, Portworx automatically attempts to restore cluster health by performing the following:

  1. Removes the unresponsive node from the internal KVDB cluster.

     note

     The node remains part of the Portworx cluster.

  2. Initializes the internal KVDB on an available Portworx storage node that is not currently part of the KVDB cluster.
  3. Adds the new node to the internal KVDB cluster.
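
You can observe this failover as it happens by polling KVDB membership. The following sketch assumes pxctl is available on a Portworx node:

# Re-list internal KVDB members every 10 seconds while the cluster recovers
watch -n 10 pxctl service kvdb members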

Internal KVDB backup mechanism

Portworx automatically backs up the internal KVDB every two minutes and stores the backups in the /var/lib/osd/kvdb_backup directory on all Portworx nodes. It retains a rolling set of 10 backup files on the internal KVDB drive.

The following is an example of a backup file:

ls /var/lib/osd/kvdb_backup
pwx_kvdb_schedule_153664_2019-02-06T22:30:39-08:00.dump

Use these backup files to recover the internal KVDB in disaster recovery scenarios.
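
Before starting a recovery, consider copying the most recent dump off the affected node. A minimal sketch, assuming SSH access to the node (hostnames and file names are placeholders):

# Identify the newest backup file on the node
ls -t /var/lib/osd/kvdb_backup | head -n 1

# Copy it to a safe location off the node
scp <node>:/var/lib/osd/kvdb_backup/<dump-file> ./kvdb-backups/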

caution

Contact Portworx Technical Support to recover your Portworx cluster in the event of a cluster failure, such as:

  • When all internal KVDB nodes and the corresponding drives where their data resides are lost and unrecoverable.
  • When quorum is lost (two of the three internal KVDB nodes are lost and unrecoverable).