
Internal KVDB for Portworx in airgapped bare metal

Summary and key concepts

Summary:

This article explains the setup and management of the internal KVDB (key-value database) used by Portworx, which is deployed automatically when Portworx is installed through Portworx Central. The internal KVDB cluster runs on three nodes and removes the need for an external KVDB. The article gives guidelines for provisioning dedicated internal KVDB drives, including drive sizing and IOPS recommendations, and covers optional node labeling to control KVDB placement, the recovery mechanism when a KVDB node fails, and the backup process that stores regular internal KVDB snapshots. It also explains what to do in case of a full cluster failure and when to contact Portworx support.

Kubernetes Concepts:

  • kubectl label: Used to label nodes to designate them for internal KVDB.
  • Nodes: The nodes where internal KVDB is installed and managed.

Portworx Concepts:

  • Portworx Central: Platform for managing Portworx installations and clusters, and for generating installation specs.

The built-in internal KVDB is enabled by default when you install Portworx through Portworx Central. Portworx automatically deploys and manages the internal key-value store cluster on a set of three nodes in your cluster, which removes the requirement for an external KVDB.

Prerequisites

Internal KVDB requires a dedicated drive to store its data. You can choose either a KVDB drive, which holds only internal KVDB data, or a metadata drive, which holds internal KVDB data alongside other Portworx metadata.

KVDB drive:

  • If IOPS are independent of disk size, the minimum recommended size is 32 GB with a minimum of 450 IOPS.
  • If IOPS are dependent on disk size, the recommended size is 150 GB to ensure you get a minimum of 450 IOPS.

Metadata drive:

  • If IOPS are independent of disk size, then the recommended size is 64 GB with a minimum of 450 IOPS.
  • If IOPS are dependent on disk size, then the recommended size is 150 GB to ensure you get a minimum of 450 IOPS.
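
If you are not sure whether a candidate drive clears the 450 IOPS floor, you can benchmark it before handing it to Portworx. The following is a minimal sketch using fio; /dev/sdc is a placeholder device, and the test writes to the raw device, so run it only against an empty, unmounted drive:

# 4k random read/write test; destroys any data on the target device
fio --name=kvdb-iops-check --filename=/dev/sdc \
    --rw=randrw --bs=4k --ioengine=libaio --iodepth=32 \
    --direct=1 --time_based --runtime=60 --group_reporting

Compare the read and write IOPS totals that fio reports against the 450 IOPS minimum.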

(Optional) Label the nodes before installing Portworx

If you want to control where KVDB is placed, label the nodes in Kubernetes with the following command to designate them for the internal KVDB cluster:

kubectl label nodes <list-of-node-names> px/metadata-node=true
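
To confirm the labels took effect, list the labeled nodes:

kubectl get nodes -l px/metadata-node=true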

Here are a few scenarios based on the labels and their values:

  • If a node is labeled px/metadata-node=true, it becomes part of the internal KVDB cluster.
  • If a node is labeled px/metadata-node=false, it will not be part of the internal KVDB cluster.
  • If no node labels are found, all storage nodes have the potential to run the internal KVDB.
  • If an incorrect label is present on a node, such as px/metadata-node=blah, Portworx will not start KVDB on that node; you can correct or remove such a label as shown after this list.
  • If no node is labeled as px/metadata-node=true, but one node is labeled as px/metadata-node=false, that specific node will never be part of the KVDB cluster, whereas the remaining nodes are potential internal KVDB nodes.
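
If a node carries a wrong or stale label, standard kubectl operations correct it: --overwrite replaces an existing value, and a trailing hyphen removes the label entirely.

# Fix an incorrect value such as px/metadata-node=blah
kubectl label nodes <node-name> px/metadata-node=true --overwrite

# Remove the label so the node is treated as unlabeled
kubectl label nodes <node-name> px/metadata-node-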

Install Portworx using the internal KVDB cluster

Navigate to Portworx Central and select Built-in for etcd when creating your Portworx cluster.
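
After the install completes, you can verify the cluster, including its KVDB state, with pxctl. The sketch below assumes Portworx runs in the kube-system namespace with the standard name=portworx pod label; adjust both if your spec differs:

PX_POD=$(kubectl get pods -n kube-system -l name=portworx -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n kube-system $PX_POD -- /opt/pwx/bin/pxctl status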

Additional concepts

Here are a few concepts related to the internal KVDB.

How internal KVDB cluster recovery works

If a Portworx node that was previously part of the internal KVDB cluster becomes unavailable for more than 3 minutes, any other available storage node that is not part of the KVDB cluster will attempt to join it. The following is the sequence of actions taken when a node attempts to join the KVDB cluster:

  1. The non-responsive member is removed from the internal KVDB cluster. (Note that the node remains part of the Portworx cluster.)
  2. Internal KVDB is initiated on a new node.
  3. The new node adds itself to the internal KVDB cluster.
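
You can watch this failover from any node that runs Portworx. A sketch, assuming pxctl is at its default host path and you have root access:

# Refresh the KVDB membership list every 30 seconds
watch -n 30 /opt/pwx/bin/pxctl service kvdb members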

Internal KVDB backup mechanism

Portworx takes an internal backup of its key-value space every 2 minutes and saves it as a dump file in the /var/lib/osd/kvdb_backup directory on every node in your cluster. It keeps a rolling count of 10 backups at a time on the internal KVDB storage drive.

A backup file looks as follows:

ls /var/lib/osd/kvdb_backup
pwx_kvdb_schedule_153664_2019-02-06T22:30:39-08:00.dump

These backup files can be used for recovering the internal KVDB cluster in the event of a disaster.
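
Because a disaster can take the node itself offline, consider periodically copying the newest dump to a machine outside the cluster. A minimal sketch; backup-host:/kvdb-archive/ is a placeholder destination:

# Copy the most recent KVDB dump off the node
LATEST=$(ls -t /var/lib/osd/kvdb_backup/pwx_kvdb_schedule_*.dump | head -n 1)
scp "$LATEST" backup-host:/kvdb-archive/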

caution

In the event of a full cluster failure, such as when:

  • All internal KVDB nodes and the corresponding drives where their data resides are lost and unrecoverable.
  • Quorum is lost (two of the three internal KVDB nodes are lost and unrecoverable).

contact Portworx Technical Support to recover your cluster from that state.