Version: 3.2

Install Portworx with PX-StoreV2 on AWS

PX-StoreV2 is a Portworx datastore optimized for I/O-intensive workloads on configurations that use high-performance NVMe-class devices. It efficiently manages and balances workloads across nodes by dynamically assigning tasks to the most suitable nodes based on their available resources, improving the performance and scalability of your cluster.

This document explains how to install Portworx with the PX-StoreV2 datastore on Amazon Web Services (AWS).

note
  • PX-StoreV2 datastore installation is supported only with a fresh Portworx installation on Amazon Web Services (AWS) Elastic Kubernetes Service (EKS).
  • PX-StoreV2 datastore installation is not supported on Red Hat OpenShift Service on AWS (ROSA) and Red Hat OpenShift Container Platform (OCP).
  • Upgrading from a previous Portworx version to deploy PX-StoreV2 datastore with cloud drives is not supported.
  • Once Portworx is deployed with the PX-StoreV2 datastore, you can use all of Portworx's features except for the following:
    • XFS volumes
    • Aggregated volumes
    • PX-Cache
    • PDS
    • PX-Security

Prerequisites

You must have a Kubernetes cluster deployed on infrastructure that meets the following minimum requirements for Portworx with PX-StoreV2:

  • Linux kernel version: 4.20 or newer (minimum), 5.0 or newer (recommended), with the following packages:

    • RHEL: device-mapper mdadm lvm2 device-mapper-persistent-data augeas
    • Debian: dmsetup mdadm lvm2 thin-provisioning-tools augeas-tools
    • SUSE: dmsetup mdadm lvm2 device-mapper-persistent-data augeas
    • Ubuntu: dmsetup mdadm lvm2 thin-provisioning-tools augeas-tools
    note

    During installation, Portworx automatically tries to pull the required packages from distribution-specific repositories. This is a mandatory requirement, and installation will fail if it is not met. A quick way to verify these prerequisites manually is sketched after this list.

  • A system metadata device of at least 64 GB on each node where you want to deploy Portworx. If you do not provide a metadata device, one is automatically added to the spec.

  • An SD/NVMe or GP3/IO1 drive type with a capacity of more than 8 GB per node.

  • A minimum of 8 CPU cores per node.
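
You can spot-check these prerequisites on a node before installing. The following is a minimal sketch assuming a Debian or Ubuntu node; on RHEL or SUSE, use rpm -q with the package names listed above:

uname -r        # kernel must be 4.20 or newer (5.0 or newer recommended)
dpkg -l dmsetup mdadm lvm2 thin-provisioning-tools augeas-tools    # required packages
nproc           # must report at least 8 CPU cores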

Create an IAM policy

Provide the permissions for all the instances in the autoscaling cluster by creating an IAM policy.

Perform the following steps on your AWS Console:

  1. Navigate to the IAM page on your AWS console, select Policies under the Identity and Access Management (IAM) sidebar section, then select the Create Policy button in the upper right corner:

    [Image: AWS create policy page]

  2. Choose the JSON tab, then paste the following permissions into the editor, providing your own value for Sid if applicable. You can either use the minimum permissions required or use the permissions required for disk encryption:

    note

    These are the minimum permissions needed for storage operations for a Portworx cluster. For complete permissions required for all of Portworx storage operations, see the credentials reference.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Action": [
            "ec2:AttachVolume",
            "ec2:ModifyVolume",
            "ec2:DetachVolume",
            "ec2:CreateTags",
            "ec2:CreateVolume",
            "ec2:DeleteTags",
            "ec2:DeleteVolume",
            "ec2:DescribeTags",
            "ec2:DescribeVolumeAttribute",
            "ec2:DescribeVolumesModifications",
            "ec2:DescribeVolumeStatus",
            "ec2:DescribeVolumes",
            "ec2:DescribeInstances",
            "autoscaling:DescribeAutoScalingGroups"
          ],
          "Resource": ["*"]
        }
      ]
    }
  3. Name and create the policy. If you prefer the AWS CLI, an equivalent command is shown after these steps.

    [Image: Create policy]
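
You can create the same policy from a file with the AWS CLI. This is a sketch; portworx-storage-policy and policy.json are example names:

# Save the JSON permissions above as policy.json, then create the policy
aws iam create-policy \
  --policy-name portworx-storage-policy \
  --policy-document file://policy.json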

Attach IAM policy

Attach the policy you created above to your node instance role or user account.

Follow the instructions below to attach the policy to your NodeInstanceRole:

  1. From the IAM page, click Roles in the left pane.

  2. On the Roles page, search for and select your nodegroup NodeInstanceRole using your cluster name. The following example shows eksctl-victorpeksdemo2-nodegroup-NodeInstanceRole-M9QTT58HQ9ZX as the nodegroup Instance Role:

    [Image: Search for your policy]

    note

    If there is more than one nodegroup NodeInstanceRole for your cluster, attach the policy to each of those NodeInstanceRoles as well.

  3. Attach the previously created policy by selecting Attach policies from the Add permissions dropdown on the right side of the screen:

    [Image: Attach your policy]

  4. Under Other permissions policies, search for your policy name. Select your policy, then select the Attach policies button to attach it. An AWS CLI alternative is shown after these steps.

    The policy you attached will appear under Permissions policies if successful:

    [Image: Confirm your policy is added]
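
Alternatively, you can attach the policy from the AWS CLI. This is a sketch; substitute your own NodeInstanceRole name and the policy ARN returned when you created the policy:

aws iam attach-role-policy \
  --role-name eksctl-victorpeksdemo2-nodegroup-NodeInstanceRole-M9QTT58HQ9ZX \
  --policy-arn arn:aws:iam::<account-id>:policy/portworx-storage-policy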

Install Portworx

Follow the instructions in this section to deploy Portworx with the PX-StoreV2 datastore.

Generate the specs

To install Portworx with Kubernetes, you must generate Kubernetes manifests that you will deploy in your cluster.

Navigate to Portworx Central and log in, or create an account, then follow the process to generate a spec.

Apply the specs

Apply the specs using the following commands:

  1. Deploy the Operator:

    kubectl apply -f 'https://install.portworx.com/<version-number>?comp=pxoperator'
    serviceaccount/portworx-operator created
    podsecuritypolicy.policy/px-operator created
    clusterrole.rbac.authorization.k8s.io/portworx-operator created
    clusterrolebinding.rbac.authorization.k8s.io/portworx-operator created
    deployment.apps/portworx-operator created
  2. Deploy the StorageCluster:

    kubectl apply -f 'https://install.portworx.com/<version-number>?operator=true&mc=false&kbver=&b=true&c=px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-8dfd338e915b&stork=true&csi=true&mon=true&tel=false&st=k8s&promop=true'
    storagecluster.core.libopenstorage.org/px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-fab038f0bbe6 created
    note
    • In your output, the image pulled will differ based on your chosen Portworx license type and version.
    • For Portworx Enterprise, the default license activated on the cluster is a 30-day trial that you can convert to a SaaS-based model or a generic fixed license.
    • For Portworx Essentials, your cluster must have internet connectivity so that it can send usage information every 24 hours to renew the license on the cluster. You can convert a Portworx Essentials license to either a fixed license or SaaS-based license.
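
Before moving on, you can confirm that the Operator deployment created in step 1 is available:

kubectl get deployment portworx-operator -n <px-namespace>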

Pre-flight check

After you apply the specs, the Portworx Operator performs a pre-flight check across the cluster, which must pass on each node. This check determines whether each node in your cluster is compatible with the PX-StoreV2 datastore. If each node in the cluster meets the following hardware and software requirements, PX-StoreV2 is automatically set as your default datastore during Portworx installation (a command to inspect the pre-flight results follows this list):

  • Hardware:
    • CPU: A minimum of 8 CPU cores per node.
    • Drive type: SD/NVMe or GP3/IO1 drive with a capacity of more than 8 GB per node.
    • Metadata device: A system metadata device of at least 64 GB on each node.
  • Software:
    • Linux kernel version: 4.20 or newer with the following packages:
      • RHEL: device-mapper mdadm lvm2 device-mapper-persistent-data augeas
      • Debian: dmsetup mdadm lvm2 thin-provisioning-tools augeas-tools
      • SUSE: dmsetup mdadm lvm2 device-mapper-persistent-data augeas
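
While the Operator runs the pre-flight check, you can inspect the results by describing the StorageNode objects; the exact condition text may vary by version:

kubectl -n <px-namespace> describe storagenodes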

Verify your Portworx installation

Once you've installed Portworx, you can perform the following tasks to verify that Portworx is correctly installed and using the PX-StoreV2 datastore.

Verify if all pods are running

Enter the following command to list and filter the results for Portworx pods, specifying the namespace where you deployed Portworx:

kubectl get pods -n <px-namespace> -o wide | grep -e portworx -e px
NAME                                                    READY   STATUS    RESTARTS         AGE     IP              NODE                         NOMINATED NODE   READINESS GATES
portworx-api-8scq2 1/1 Running 1 (90m ago) 5h1m xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>
portworx-api-f24b9 1/1 Running 1 (108m ago) 5h1m xx.xx.xxx.xxx username-vms-silver-sight-3 <none> <none>
portworx-api-f95z5 1/1 Running 1 (90m ago) 5h1m xx.xx.xxx.xxx username-vms-silver-sight-2 <none> <none>
portworx-kvdb-558g5 1/1 Running 0 3m46s xx.xx.xxx.xxx username-vms-silver-sight-2 <none> <none>
portworx-kvdb-9tfjd 1/1 Running 0 2m57s xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>
portworx-kvdb-cjcxg 1/1 Running 0 3m7s xx.xx.xxx.xxx username-vms-silver-sight-3 <none> <none>
portworx-operator-548b8d4ccc-qgnkc 1/1 Running 13 (4m26s ago) 5h2m xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>
portworx-pvc-controller-ff669698-62ngd 1/1 Running 1 (108m ago) 5h1m xx.xx.xxx.xxx username-vms-silver-sight-3 <none> <none>
portworx-pvc-controller-ff669698-6b4zj 1/1 Running 1 (90m ago) 5h1m xx.xx.xxx.xxx username-vms-silver-sight-2 <none> <none>
portworx-pvc-controller-ff669698-pffvl 1/1 Running 1 (90m ago) 5h1m xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>
prometheus-px-prometheus-0 2/2 Running 2 (90m ago) 5h xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>
px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-fab038f0bbe6-2qsp4 2/2 Running 13 (108m ago) 3h20m xx.xx.xxx.xxx username-vms-silver-sight-3 <none> <none>
px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-fab038f0bbe6-5vnzv 2/2 Running 16 (90m ago) 3h20m xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>
px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-fab038f0bbe6-lxzd5 2/2 Running 16 (90m ago) 3h20m xx.xx.xxx.xxx username-vms-silver-sight-2 <none> <none>
px-csi-ext-77fbdcdcc9-7hkpm 4/4 Running 4 (108m ago) 3h19m xx.xx.xxx.xxx username-vms-silver-sight-3 <none> <none>
px-csi-ext-77fbdcdcc9-9ck26 4/4 Running 4 (90m ago) 3h18m xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>
px-csi-ext-77fbdcdcc9-ddmjr 4/4 Running 14 (90m ago) 3h20m xx.xx.xxx.xxx username-vms-silver-sight-2 <none> <none>
px-prometheus-operator-7d884bc8bc-5sv9r 1/1 Running 1 (90m ago) 5h1m xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>

Note the name of one of your px-cluster pods. You'll run pxctl commands from these pods in the following steps.
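
To avoid copying the pod name by hand, you can capture it in a shell variable. This sketch assumes the Portworx pods carry the default name=portworx label:

PX_POD=$(kubectl get pods -l name=portworx -n <px-namespace> -o jsonpath='{.items[0].metadata.name}')
echo $PX_POD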

Verify Portworx cluster status

You can find the status of the Portworx cluster by running pxctl status commands from a pod. Enter the following kubectl exec command, specifying the pod name you retrieved in the previous section:

kubectl exec <px-pod> -n <px-namespace> -- /opt/pwx/bin/pxctl status
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
Status: PX is operational
Telemetry: Disabled or Unhealthy
Metering: Disabled or Unhealthy
License: Trial (expires in 31 days)
Node ID: xxxxxxxx-xxxx-xxxx-xxxx-bf578f9addc1
IP: xx.xx.xxx.xxx
Local Storage Pool: 1 pool
POOL IO_PRIORITY RAID_LEVEL USABLE USED STATUS ZONE REGION
0 HIGH raid0 25 GiB 33 MiB Online default default
Local Storage Devices: 1 device
Device Path Media Type Size Last-Scan
0:0 /dev/sda STORAGE_MEDIUM_SSD 32 GiB 10 Oct 22 23:45 UTC
total - 32 GiB
Cache Devices:
* No cache devices
Kvdb Device:
Device Path Size
/dev/sdc 1024 GiB
* Internal kvdb on this node is using this dedicated kvdb device to store its data.
Metadata Device:
1 /dev/sdd STORAGE_MEDIUM_SSD 64 GiB
Cluster Summary
Cluster ID: px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-fab038f0bbe6
Cluster UUID: xxxxxxxx-xxxx-xxxx-xxxx-5d610fa334bd
Scheduler: kubernetes
Nodes: 3 node(s) with storage (3 online)
IP ID SchedulerNodeName Auth StorageNode Used Capacity Status StorageStatus Version Kernel OS
xx.xx.xxx.xxx xxxxxxxx-xxxx-xxxx-xxxx-bf578f9addc1 username-vms-silver-sight-3 Disabled Yes(PX-StoreV2) 33 MiB 25 GiB Online Up (This node) 2.12.0-28944c8 5.4.217-1.el7.elrepo.x86_64 CentOS Linux 7 (Core)
xx.xx.xxx.xxx xxxxxxxx-xxxx-xxxx-xxxx-4a1bafeff5bc username-vms-silver-sight-0 Disabled Yes(PX-StoreV2) 33 MiB 25 GiB Online Up 2.12.0-28944c8 5.4.217-1.el7.elrepo.x86_64 CentOS Linux 7 (Core)
xx.xx.xxx.xxx xxxxxxxx-xxxx-xxxx-xxxx-502e658bc307 username-vms-silver-sight-2 Disabled Yes(PX-StoreV2) 33 MiB 25 GiB Online Up 2.12.0-28944c8 5.4.217-1.el7.elrepo.x86_64 CentOS Linux 7 (Core)
Global Storage Pool
Total Used : 99 MiB
Total Capacity : 74 GiB

The Portworx status will display PX is operational, and the StorageNode entries for each node will read Yes(PX-StoreV2).
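
To check just those two fields, you can filter the status output:

kubectl exec <px-pod> -n <px-namespace> -- /opt/pwx/bin/pxctl status | grep -e "Status:" -e "PX-StoreV2"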

Verify Portworx pool status

Run the following command to view the Portworx drive configurations for your pod:

kubectl exec <px-pod> -n <px-namespace> -- /opt/pwx/bin/pxctl service pool show
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
PX drive configuration:
Pool ID: 0
Type: PX-StoreV2
UUID: xxxxxxxx-xxxx-xxxx-xxxx-db8abe01d4f0
IO Priority: HIGH
Labels: kubernetes.io/arch=amd64,kubernetes.io/hostname=username-vms-silver-sight-3,kubernetes.io/os=linux,medium=STORAGE_MEDIUM_SSD,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,iopriority=HIGH
Size: 25 GiB
Status: Online
Has metadata: No
Balanced: Yes
Drives:
0: /dev/sda, Total size 32 GiB, Online
Cache Drives:
No Cache drives found in this pool
Metadata Device:
1: /dev/sdd, STORAGE_MEDIUM_SSD

The Type: PX-StoreV2 output shows that your pod is using the PX-StoreV2 datastore.

Verify pxctl cluster provision status

  • Find the storage cluster using the following command; the status should show the cluster is Online:

    kubectl -n <px-namespace> get storagecluster
    NAME                                              CLUSTER UUID                           STATUS   VERSION          AGE
    px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-fab038f0bbe6 xxxxxxxx-xxxx-xxxx-xxxx-5d610fa334bd Online 2.12.0-dev-rc1 5h6m
  • Find the storage nodes, whose statuses should show as Online:

    kubectl -n <px-namespace> get storagenodes
    NAME                          ID                                     STATUS   VERSION          AGE
    username-vms-silver-sight-0 xxxxxxxx-xxxx-xxxx-xxxx-4a1bafeff5bc Online 2.12.0-28944c8 3h25m
    username-vms-silver-sight-2 xxxxxxxx-xxxx-xxxx-xxxx-502e658bc307 Online 2.12.0-28944c8 3h25m
    username-vms-silver-sight-3 xxxxxxxx-xxxx-xxxx-xxxx-bf578f9addc1 Online 2.12.0-28944c8 3h25m
  • Verify the Portworx cluster provision status. Enter the following kubectl exec command, specifying the pod name you retrieved in the previous section:

    kubectl exec <px-pod> -n <px-namespace> -- /opt/pwx/bin/pxctl cluster provision-status
    NODE					                NODE STATUS	 POOL						              POOL STATUS  IO_PRIORITY	SIZE	AVAILABLE	USED   PROVISIONED ZONE REGION	RACK
    xxxxxxxx-xxxx-xxxx-xxxx-502e658bc307 Up 0 ( xxxxxxxx-xxxx-xxxx-xxxx-f9131bf7ef9d ) Online HIGH 32 GiB 32 GiB 33 MiB 0 B default default default
    xxxxxxxx-xxxx-xxxx-xxxx-4a1bafeff5bc Up 0 ( xxxxxxxx-xxxx-xxxx-xxxx-434152789beb ) Online HIGH 32 GiB 32 GiB 33 MiB 0 B default default default
    xxxxxxxx-xxxx-xxxx-xxxx-bf578f9addc1 Up 0 ( xxxxxxxx-xxxx-xxxx-xxxx-db8abe01d4f0 ) Online HIGH 32 GiB 32 GiB 33 MiB 0 B default default default
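
As a shortcut, you can list the storage cluster and storage nodes in a single command:

kubectl -n <px-namespace> get storagecluster,storagenodes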

Create your first PVC

For your apps to use persistent volumes powered by Portworx, you must use a StorageClass that references Portworx as the provisioner. Portworx includes a number of default StorageClasses, which you can reference with PersistentVolumeClaims (PVCs) you create. For a more general overview of how storage works within Kubernetes, refer to the Persistent Volumes section of the Kubernetes documentation.

Perform the following steps to create a PVC:

  1. Create a PVC referencing the px-csi-db default StorageClass and save the file:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: px-check-pvc
    spec:
      storageClassName: px-csi-db
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
  2. Run the kubectl apply command to create a PVC (a test pod that mounts it is sketched after these steps):

    kubectl apply -f <your-pvc-name>.yaml
    persistentvolumeclaim/px-check-pvc created
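
To exercise the volume, you can mount the PVC in a throwaway pod. This is a minimal sketch; the pod name px-check-pod and the busybox image are illustrative:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: px-check-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: px-check-pvc
EOF

Once the pod reaches the Running state, the PVC has been provisioned and mounted successfully; remove it with kubectl delete pod px-check-pod when you are done.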

Verify your StorageClass and PVC

  1. Enter the kubectl get storageclass command:

    kubectl get storageclass
    NAME                                 PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    px-csi-db pxd.portworx.com Delete Immediate true 43d
    px-csi-db-cloud-snapshot pxd.portworx.com Delete Immediate true 43d
    px-csi-db-cloud-snapshot-encrypted pxd.portworx.com Delete Immediate true 43d
    px-csi-db-encrypted pxd.portworx.com Delete Immediate true 43d
    px-csi-db-local-snapshot pxd.portworx.com Delete Immediate true 43d
    px-csi-db-local-snapshot-encrypted pxd.portworx.com Delete Immediate true 43d
    px-csi-replicated pxd.portworx.com Delete Immediate true 43d
    px-csi-replicated-encrypted pxd.portworx.com Delete Immediate true 43d
    px-db kubernetes.io/portworx-volume Delete Immediate true 43d
    px-db-cloud-snapshot kubernetes.io/portworx-volume Delete Immediate true 43d
    px-db-cloud-snapshot-encrypted kubernetes.io/portworx-volume Delete Immediate true 43d
    px-db-encrypted kubernetes.io/portworx-volume Delete Immediate true 43d
    px-db-local-snapshot kubernetes.io/portworx-volume Delete Immediate true 43d
    px-db-local-snapshot-encrypted kubernetes.io/portworx-volume Delete Immediate true 43d
    px-replicated kubernetes.io/portworx-volume Delete Immediate true 43d
    px-replicated-encrypted kubernetes.io/portworx-volume Delete Immediate true 43d
    stork-snapshot-sc stork-snapshot Delete Immediate true 43d

    kubectl returns details about the StorageClasses available to you. Verify that px-csi-db appears in the list.

  2. Enter the kubectl get pvc command. If this is the only PVC that you've created, you should see only one entry in the output:

    kubectl get pvc <your-pvc-name>
    NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
    px-check-pvc Bound pvc-xxxxxxxx-xxxx-xxxx-xxxx-2377767c8ce0 2Gi RWO px-csi-db 3m7s

    kubectl returns details about your PVC if it was created correctly. Verify that the configuration details appear as you intended.
