
Disk Provisioning on Google Cloud Platform (GCP)

note

If you are running on GKE, visit Portworx on GKE.

The steps below will help you enable dynamic provisioning of Portworx volumes in your GCP cluster.

Prerequisites

Key-value store

Portworx uses a key-value database (KVDB) to store the cluster's state, configuration data, and metadata associated with storage volumes and snapshots. You can configure either an external or an internal KVDB. For more information, see the KVDB for Portworx topic.

Firewall

Ensure that ports 9001-9022 are open between the nodes that will run Portworx. Your nodes must also be able to reach the port your KVDB is running on (for example, etcd typically runs on port 2379).
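
On GCP, one way to open these ports between Portworx nodes is with a firewall rule. The following is a minimal sketch; the rule name, network, and the portworx-node tag are assumptions and should be adjusted for your environment:

    # Allow Portworx node-to-node traffic on ports 9001-9022 (rule name, network,
    # and node tag are assumptions -- adjust for your environment).
    gcloud compute firewall-rules create portworx-internal \
        --network=default \
        --allow=tcp:9001-9022 \
        --source-tags=portworx-node \
        --target-tags=portworx-node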

Create a GCP cluster

To manage and auto-provision GCP disks, Portworx needs access to the GCP Compute Engine API. There are two ways to grant this access.

Using instance privileges

Provide the instances running Portworx with privileges to access the GCP API server. This is the preferred method, since it requires the least amount of setup on each instance; a sketch of attaching such a service account at instance creation time follows the role list below.

  • Owner and Compute Admin Roles

    These roles give Portworx access to the Google Cloud APIs it needs to provision persistent disks. Make sure the service account for the instances has these roles.

  • Cloud KMS predefined roles

    The following predefined roles give Portworx access to the Google Cloud KMS APIs to manage secrets.

    roles/cloudkms.cryptoKeyEncrypterDecrypter
    roles/cloudkms.publicKeyViewer
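
If you use this method, the following sketch shows one way to attach such a service account when creating an instance with the gcloud CLI. The instance name, zone, machine type, and service account email are assumptions:

    # Attach a service account that holds the above roles to the node that will
    # run Portworx (instance name, zone, machine type, and email are assumptions).
    gcloud compute instances create px-node-1 \
        --zone=us-east1-b \
        --machine-type=n1-standard-4 \
        --service-account=portworx@my-project.iam.gserviceaccount.com \
        --scopes=cloud-platform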

Using an account file

Alternatively, you can give Portworx access to the GCP API server via an account file and environment variables. First, create a service account in GCP and download its account file.

To access the GCP API server, Portworx needs a service account with the following roles:

  • Owner and Compute Admin Roles

    These roles give Portworx access to the Google Cloud APIs it needs to provision persistent disks. Make sure the service account created below has these roles.

  • Cloud KMS predefined roles

    The following predefined roles give Portworx access to the Google Cloud KMS APIs to manage secrets.

    roles/cloudkms.cryptoKeyEncrypterDecrypter
    roles/cloudkms.publicKeyViewer

Follow these steps to create a service account and download its corresponding account file:

  1. In the "Service Accounts" section, create a service account that has the above roles.
  2. Go to IAM & admin -> Service Accounts -> (Instance Service Account) -> select "Create Key" and download the .json file.

This JSON file needs to be available on every GCP instance that will run Portworx. Place the file in the /etc/pwx/ directory on each GCP instance, for example as /etc/pwx/gcp.json. The same steps can also be scripted with the gcloud CLI, as sketched below.
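
The following is a minimal sketch of these steps using the gcloud CLI. The service account name (portworx), project ID (my-project), and the single role binding shown are assumptions; repeat the binding for each role listed above:

    # Create a service account for Portworx (name and project are assumptions).
    gcloud iam service-accounts create portworx --display-name="Portworx"

    # Grant the roles listed above (repeat for each role; roles/compute.admin is
    # shown here only as an example binding).
    gcloud projects add-iam-policy-binding my-project \
        --member="serviceAccount:portworx@my-project.iam.gserviceaccount.com" \
        --role="roles/compute.admin"

    # Download the account file and place it where Portworx expects it.
    gcloud iam service-accounts keys create /etc/pwx/gcp.json \
        --iam-account=portworx@my-project.iam.gserviceaccount.com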

Install

If you used an account file above, you must configure the Portworx installation arguments so that Portworx can find this file through an environment variable. In the installation arguments, pass the location of the file via the GOOGLE_APPLICATION_CREDENTIALS environment variable. For example, use -e GOOGLE_APPLICATION_CREDENTIALS=/etc/pwx/gcp.json.

If you are installing on Kubernetes, you can use a secret to mount /etc/pwx/gcp.json into the Portworx DaemonSet and then expose GOOGLE_APPLICATION_CREDENTIALS as an environment variable in the DaemonSet.
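
As a minimal sketch, assuming Portworx runs in the kube-system namespace and that the secret name px-gcp-credentials is yours to choose, the secret could be created as follows and then mounted into the DaemonSet at /etc/pwx/gcp.json, with GOOGLE_APPLICATION_CREDENTIALS pointing at that path:

    # Create a secret from the downloaded account file (secret name and namespace
    # are assumptions; mount it into the Portworx DaemonSet at /etc/pwx/gcp.json).
    kubectl create secret generic px-gcp-credentials \
        --from-file=gcp.json=/etc/pwx/gcp.json \
        --namespace=kube-system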

Follow these instructions to install Portworx based on your container orchestration environment.

Disk template

Portworx takes a disk spec, which it uses to provision GCP persistent disks dynamically.

A GCP disk template defines the Google persistent disk properties that Portworx will use as a reference. There are two ways you can provide this template to Portworx.

1. Using a template specification

The spec uses the following format:

"type=<GCP disk type>,size=<size of disk>"
  • type: The following two disk types are supported:
    • pd-standard
    • pd-ssd
  • size: The size of the disk in GB

See GCP disk for more details on the above parameters.

Examples:

  • "type=pd-ssd,size=200"
  • "type=pd-standard,size=200", "type=pd-ssd,size=100"

2. Using existing GCP disks as templates

You can also reference an existing GCP disk as a template. On every node where Portworx is brought up as a storage node, it will create new GCP disks identical to the template.

For example, if you created a template GCP disk called px-disk-template-1, you can pass it to Portworx as a storage device parameter.

Ensure that these disks are created in the same zone as the GCP node group.
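
A template disk like the one referenced above could be created with the gcloud CLI. This is a sketch: the disk name matches the example above, while the zone, type, and size are assumptions:

    # Create a template disk for Portworx to clone on each storage node
    # (zone, type, and size are assumptions -- match them to your node group).
    gcloud compute disks create px-disk-template-1 \
        --zone=us-east1-b \
        --type=pd-ssd \
        --size=200GB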

Limiting storage nodes

Portworx allows you to create a heterogeneous cluster, where some of the nodes are storage nodes and the rest are storageless.

You can specify the number of storage nodes in your cluster by setting the max_storage_nodes_per_zone input argument. This instructs Portworx to limit the number of storage nodes in each zone to the specified value. The total number of storage nodes in your cluster will be:

Total Storage Nodes = (Num of Zones) * max_storage_nodes_per_zone

When planning capacity for your auto-scaling cluster, make sure the minimum size of the cluster is equal to the total number of Portworx storage nodes. This ensures that when you scale up the cluster, only storageless nodes are added, and that when you scale down, the cluster never drops below the minimum size, so all Portworx storage nodes remain online and available. A sketch of pinning this minimum on a managed instance group follows.
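
For example, assuming your nodes run in a GCP managed instance group named px-node-group in a single zone and you want three storage nodes in that zone, the autoscaler minimum could be set like this (the group name, zone, and replica counts are assumptions):

    # Keep the group from scaling below the number of Portworx storage nodes
    # (group name, zone, and replica counts are assumptions).
    gcloud compute instance-groups managed set-autoscaling px-node-group \
        --zone=us-east1-b \
        --min-num-replicas=3 \
        --max-num-replicas=6 \
        --target-cpu-utilization=0.7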

note

You can choose to omit the max_storage_nodes_per_zone argument. In that case, when you scale up the cluster the new nodes will also be storage nodes, but when you scale down you will lose storage nodes, which can cause Portworx to lose quorum.

Examples:

  • "-s", "type=pd-ssd,size=200", "-max_storage_nodes_per_zone", "1"

For a cluster of 6 nodes spanning 3 zones (us-east1-a, us-east1-b, us-east1-c), in the above example Portworx will have 3 storage nodes (one in each zone) and 3 storageless nodes. Portworx will create a total of 3 disks of 200 GB each and attach one disk to each storage node.

  • "-s", "type=pd-standard,size=200", "-s", "type=pd-ssd,size=100", "-max_storage_nodes_per_zone", "2"

For a cluster of 9 nodes spanning 2 zones (us-east1-a, us-east1-b), in the above example Portworx will have 4 storage nodes and 5 storageless nodes. Portworx will create a total of 8 disks (4 of 200 GB and 4 of 100 GB) and attach a set of 2 disks (one of 200 GB and one of 100 GB) to each of the 4 storage nodes.
