On-prem installation of Portworx with PX-StoreV2 on bare metal
This document explains how to install Portworx with the PX-StoreV2 datastore in an on-premises environment.
PX-StoreV2 is a Portworx datastore optimized for IO-intensive workloads on configurations that use high-performance NVMe-class devices. It is the platform that enables the PX-Fast feature, which targets workloads requiring high data ingestion rates with consistent latencies.
PX-Fast is a Portworx feature that enables an accelerated IO path for volumes that meet certain prerequisites. It is optimized for workloads requiring consistent low latencies. PX-Fast is built on top of the PX-StoreV2 datastore.
- PX-Fast requires a special license. Contact the Portworx support team to obtain this license.
- Upgrading from a previous Portworx version to deploy the PX-StoreV2 datastore is not supported.
- Once Portworx is deployed with the PX-StoreV2 datastore, you can use all of Portworx's features except for the following:
  - XFS volumes
  - Aggregated volumes
  - PX-Cache
  - PX-Security
Prerequisites
You must have a Kubernetes cluster deployed on infrastructure that meets the following minimum requirements for Portworx with PX-StoreV2:
- Linux kernel version 4.20 or newer (5.0 or newer recommended), with the following packages installed:
  - RHEL: device-mapper mdadm lvm2 device-mapper-persistent-data augeas
  - Debian: dmsetup mdadm lvm2 thin-provisioning-tools augeas-tools
  - SUSE: dmsetup mdadm lvm2 device-mapper-persistent-data augeas
  - Ubuntu: dmsetup mdadm lvm2 thin-provisioning-tools augeas-tools
Note: During installation, Portworx will automatically try to pull the required packages from distribution-specific repositories. This is a mandatory requirement, and installation will fail if this prerequisite is not met.
- A system metadata device of at least 64 GB on each node where you want to deploy Portworx.
- An NVMe or SSD type drive, and 8 GB of memory per node.
- A minimum of 8 CPU cores per node.
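To check these prerequisites on a node before installing, you can run commands such as the following. This is only a rough sketch: it assumes an RHEL-family node, and /dev/sdd is a hypothetical example path for your metadata device.
# Check the kernel version (must be 4.20 or newer; 5.0 or newer is recommended)
uname -r
# Check that the required packages are installed (RHEL-family example; use dpkg -s on Debian/Ubuntu)
rpm -q device-mapper mdadm lvm2 device-mapper-persistent-data augeas
# Check CPU cores and memory on the node
nproc
free -g
# Check the size of the intended system metadata device (example path; substitute your own)
lsblk /dev/sdd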
Install Portworx
Follow the instructions in this section to deploy Portworx with the PX-StoreV2 datastore.
Generate the specs
To install Portworx with Kubernetes, you must first generate Kubernetes manifests that you will deploy in your cluster:
- Navigate to Portworx Central and log in, or create an account.
- Select Portworx Enterprise from the Product Catalog page.
- On the Product Line page, choose any option depending on which license you intend to use, then click Continue to start the spec generator.
- Select version 2.12 or newer from the Portworx version dropdown and DAS/SAN as your platform, then click Customize at the bottom of the Summary section.
- On the Basic page, specify an existing namespace where you will deploy Portworx in the Namespace field and click Next.
- On the Storage page, select On Premises as your environment and select the type of OnPrem storage. Choose the PX-StoreV2 option and provide your metadata device path in the Metadata Path textbox, then click Next.
- Provide your network options and click Next.
- On the Customize page, choose None for the Are you running on either of these? option, then click Finish to generate the specs.
Once you've generated your specs, you're ready to apply them and deploy Portworx. You can also save your specs on Portworx Central for future reference.
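For orientation, the generated StorageCluster manifest carries the PX-StoreV2 selection and the metadata device path you entered in the spec generator. The fragment below is only an illustrative sketch, not the exact spec generator output; the systemMetadataDevice field, the -T px-storev2 miscellaneous argument, and the device paths are assumptions that you should confirm against your downloaded spec.
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-8dfd338e915b
  namespace: <px-namespace>
  annotations:
    # Assumed misc arg that selects the PX-StoreV2 datastore; confirm against your generated spec
    portworx.io/misc-args: "-T px-storev2"
spec:
  image: portworx/oci-monitor:<version-number>
  storage:
    devices:
    - /dev/sda                       # example backing drive; replace with your NVMe/SSD device
    systemMetadataDevice: /dev/sdd   # example 64 GB metadata device path entered in the spec generator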
Apply the specs
Apply the specs using the following commands:
- Deploy the Operator:
kubectl apply -f 'https://install.portworx.com/<version-number>?comp=pxoperator'
serviceaccount/portworx-operator created
podsecuritypolicy.policy/px-operator created
clusterrole.rbac.authorization.k8s.io/portworx-operator created
clusterrolebinding.rbac.authorization.k8s.io/portworx-operator created
deployment.apps/portworx-operator created
- Deploy the StorageCluster:
kubectl apply -f 'https://install.portworx.com/<version-number>?operator=true&mc=false&kbver=&b=true&c=px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-8dfd338e915b&stork=true&csi=true&mon=true&tel=false&st=k8s&promop=true'
storagecluster.core.libopenstorage.org/px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-fab038f0bbe6 created
- In your output, the image pulled will differ based on your chosen Portworx license type and version.
- For Portworx Enterprise, the default license activated on the cluster is a 30-day trial that you can convert to a SaaS-based model or a generic fixed license.
- For Portworx Essentials, your cluster must have internet connectivity so that it can send usage information every 24 hours to renew the license on the cluster. You can convert a Portworx Essentials license to either a fixed license or SaaS-based license.
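If you prefer to review or version-control the generated manifests before applying them, you can download the spec to a file first and apply it from disk. The filename below is arbitrary; the URL is the same one produced by the spec generator.
# Download the generated StorageCluster spec for review, then apply it
curl -o px-spec.yaml 'https://install.portworx.com/<version-number>?operator=true&mc=false&kbver=&b=true&c=px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-8dfd338e915b&stork=true&csi=true&mon=true&tel=false&st=k8s&promop=true'
kubectl apply -f px-spec.yaml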
Verify your Portworx installation
Once you've installed Portworx, you can perform the following tasks to verify that Portworx is correctly installed and using the PX-StoreV2 datastore.
Verify that all pods are running
Enter the following command to list and filter the results for Portworx pods, specifying the namespace where you deployed Portworx:
kubectl get pods -n <px-namespace> -o wide | grep -e portworx -e px
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
portworx-api-8scq2 1/1 Running 1 (90m ago) 5h1m xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>
portworx-api-f24b9 1/1 Running 1 (108m ago) 5h1m xx.xx.xxx.xxx username-vms-silver-sight-3 <none> <none>
portworx-api-f95z5 1/1 Running 1 (90m ago) 5h1m xx.xx.xxx.xxx username-vms-silver-sight-2 <none> <none>
portworx-kvdb-558g5 1/1 Running 0 3m46s xx.xx.xxx.xxx username-vms-silver-sight-2 <none> <none>
portworx-kvdb-9tfjd 1/1 Running 0 2m57s xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>
portworx-kvdb-cjcxg 1/1 Running 0 3m7s xx.xx.xxx.xxx username-vms-silver-sight-3 <none> <none>
portworx-operator-548b8d4ccc-qgnkc 1/1 Running 13 (4m26s ago) 5h2m xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>
portworx-pvc-controller-ff669698-62ngd 1/1 Running 1 (108m ago) 5h1m xx.xx.xxx.xxx username-vms-silver-sight-3 <none> <none>
portworx-pvc-controller-ff669698-6b4zj 1/1 Running 1 (90m ago) 5h1m xx.xx.xxx.xxx username-vms-silver-sight-2 <none> <none>
portworx-pvc-controller-ff669698-pffvl 1/1 Running 1 (90m ago) 5h1m xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>
prometheus-px-prometheus-0 2/2 Running 2 (90m ago) 5h xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>
px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-fab038f0bbe6-2qsp4 2/2 Running 13 (108m ago) 3h20m xx.xx.xxx.xxx username-vms-silver-sight-3 <none> <none>
px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-fab038f0bbe6-5vnzv 2/2 Running 16 (90m ago) 3h20m xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>
px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-fab038f0bbe6-lxzd5 2/2 Running 16 (90m ago) 3h20m xx.xx.xxx.xxx username-vms-silver-sight-2 <none> <none>
px-csi-ext-77fbdcdcc9-7hkpm 4/4 Running 4 (108m ago) 3h19m xx.xx.xxx.xxx username-vms-silver-sight-3 <none> <none>
px-csi-ext-77fbdcdcc9-9ck26 4/4 Running 4 (90m ago) 3h18m xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>
px-csi-ext-77fbdcdcc9-ddmjr 4/4 Running 14 (90m ago) 3h20m xx.xx.xxx.xxx username-vms-silver-sight-2 <none> <none>
px-prometheus-operator-7d884bc8bc-5sv9r 1/1 Running 1 (90m ago) 5h1m xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>
Note the name of one of your px-cluster pods. You'll run pxctl commands from these pods in the following steps.
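To avoid copying pod names by hand, you can capture one Portworx pod name in a shell variable. The name=portworx label selector is an assumption based on the default labels applied to Portworx pods; verify it in your cluster with kubectl get pods --show-labels.
# Capture the name of one Portworx pod in the deployment namespace (label selector assumed)
PX_POD=$(kubectl get pods -l name=portworx -n <px-namespace> -o jsonpath='{.items[0].metadata.name}')
echo $PX_POD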
Verify Portworx cluster status
You can find the status of the Portworx cluster by running pxctl status commands from a pod. Enter the following kubectl exec command, specifying the pod name you retrieved in the previous section:
kubectl exec <px-pod> -n <px-namespace> -- /opt/pwx/bin/pxctl status
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
Status: PX is operational
Telemetry: Disabled or Unhealthy
Metering: Disabled or Unhealthy
License: Trial (expires in 31 days)
Node ID: xxxxxxxx-xxxx-xxxx-xxxx-bf578f9addc1
IP: xx.xx.xxx.xxx
Local Storage Pool: 1 pool
POOL IO_PRIORITY RAID_LEVEL USABLE USED STATUS ZONE REGION
0 HIGH raid0 25 GiB 33 MiB Online default default
Local Storage Devices: 1 device
Device Path Media Type Size Last-Scan
0:0 /dev/sda STORAGE_MEDIUM_SSD 32 GiB 10 Oct 22 23:45 UTC
total - 32 GiB
Cache Devices:
* No cache devices
Kvdb Device:
Device Path Size
/dev/sdc 1024 GiB
* Internal kvdb on this node is using this dedicated kvdb device to store its data.
Metadata Device:
1 /dev/sdd STORAGE_MEDIUM_SSD 64 GiB
Cluster Summary
Cluster ID: px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-fab038f0bbe6
Cluster UUID: xxxxxxxx-xxxx-xxxx-xxxx-5d610fa334bd
Scheduler: kubernetes
Nodes: 3 node(s) with storage (3 online)
IP ID SchedulerNodeName Auth StorageNode Used Capacity Status StorageStatus Version Kernel OS
xx.xx.xxx.xxx xxxxxxxx-xxxx-xxxx-xxxx-bf578f9addc1 username-vms-silver-sight-3 Disabled Yes(PX-StoreV2) 33 MiB 25 GiB Online Up (This node) 2.12.0-28944c8 5.4.217-1.el7.elrepo.x86_64 CentOS Linux 7 (Core)
xx.xx.xxx.xxx xxxxxxxx-xxxx-xxxx-xxxx-4a1bafeff5bc username-vms-silver-sight-0 Disabled Yes(PX-StoreV2) 33 MiB 25 GiB Online Up 2.12.0-28944c8 5.4.217-1.el7.elrepo.x86_64 CentOS Linux 7 (Core)
xx.xx.xxx.xxx xxxxxxxx-xxxx-xxxx-xxxx-502e658bc307 username-vms-silver-sight-2 Disabled Yes(PX-StoreV2) 33 MiB 25 GiB Online Up 2.12.0-28944c8 5.4.217-1.el7.elrepo.x86_64 CentOS Linux 7 (Core)
Global Storage Pool
Total Used : 99 MiB
Total Capacity : 74 GiB
The Portworx status will display PX is operational, and the StorageNode entries for each node will read Yes(PX-StoreV2).
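As a quick check, you can filter the same status output for the datastore marker. This is only a convenience; <px-pod> is the pod name you noted earlier.
# Confirm that the storage nodes report the PX-StoreV2 datastore
kubectl exec <px-pod> -n <px-namespace> -- /opt/pwx/bin/pxctl status | grep "PX-StoreV2"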
Verify Portworx pool status
Run the following command to view the Portworx drive configuration for your pod:
kubectl exec <px-pod> -n <px-namespace> -- /opt/pwx/bin/pxctl service pool show
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
PX drive configuration:
Pool ID: 0
Type: PX-StoreV2
UUID: xxxxxxxx-xxxx-xxxx-xxxx-db8abe01d4f0
IO Priority: HIGH
Labels: kubernetes.io/arch=amd64,kubernetes.io/hostname=username-vms-silver-sight-3,kubernetes.io/os=linux,medium=STORAGE_MEDIUM_SSD,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,iopriority=HIGH
Size: 25 GiB
Status: Online
Has metadata: No
Balanced: Yes
Drives:
0: /dev/sda, Total size 32 GiB, Online
Cache Drives:
No Cache drives found in this pool
Metadata Device:
1: /dev/sdd, STORAGE_MEDIUM_SSD
The Type: PX-StoreV2 output shows that your pod is using the PX-StoreV2 datastore.
Verify pxctl cluster provision status
- Find the storage cluster using the following command; the status should show the cluster is Online:
kubectl -n <px-namespace> get storagecluster
NAME CLUSTER UUID STATUS VERSION AGE
px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-fab038f0bbe6 xxxxxxxx-xxxx-xxxx-xxxx-5d610fa334bd Online 2.12.0-dev-rc1 5h6m
- Find the storage nodes, whose statuses should show as Online:
kubectl -n <px-namespace> get storagenodes
NAME ID STATUS VERSION AGE
username-vms-silver-sight-0 xxxxxxxx-xxxx-xxxx-xxxx-4a1bafeff5bc Online 2.12.0-28944c8 3h25m
username-vms-silver-sight-2 xxxxxxxx-xxxx-xxxx-xxxx-502e658bc307 Online 2.12.0-28944c8 3h25m
username-vms-silver-sight-3 xxxxxxxx-xxxx-xxxx-xxxx-bf578f9addc1 Online 2.12.0-28944c8 3h25m
- Verify the Portworx cluster provision status. Enter the following kubectl exec command, specifying the pod name you retrieved in the previous section:
kubectl exec <px-pod> -n <px-namespace> -- /opt/pwx/bin/pxctl cluster provision-status
NODE NODE STATUS POOL POOL STATUS IO_PRIORITY SIZE AVAILABLE USED PROVISIONED ZONE REGION RACK
xxxxxxxx-xxxx-xxxx-xxxx-502e658bc307 Up 0 ( xxxxxxxx-xxxx-xxxx-xxxx-f9131bf7ef9d ) Online HIGH 32 GiB 32 GiB 33 MiB 0 B default default default
xxxxxxxx-xxxx-xxxx-xxxx-4a1bafeff5bc Up 0 ( xxxxxxxx-xxxx-xxxx-xxxx-434152789beb ) Online HIGH 32 GiB 32 GiB 33 MiB 0 B default default default
xxxxxxxx-xxxx-xxxx-xxxx-bf578f9addc1 Up 0 ( xxxxxxxx-xxxx-xxxx-xxxx-db8abe01d4f0 ) Online HIGH 32 GiB 32 GiB 33 MiB 0 B default default default
Create PVCs
Once Portworx is installed, you can proceed to Create PX-Fast PVCs.
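For orientation only, a PX-Fast volume is typically requested through a StorageClass that enables the fast IO path and a PVC that references it. The sketch below is an assumption-laden example, not the authoritative procedure: the names are hypothetical and the fastpath parameter is an assumption to confirm against the Create PX-Fast PVCs documentation.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-fast-sc                # hypothetical name
provisioner: pxd.portworx.com
parameters:
  repl: "1"
  fastpath: "true"                # assumed parameter that requests the PX-Fast IO path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: px-fast-pvc               # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: px-fast-sc
  resources:
    requests:
      storage: 10Gi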