Version: 3.2

Install Portworx on OpenShift on vSphere with PX-StoreV2

PX-StoreV2 is a Portworx datastore optimized for IO-intensive workloads on configurations that use high-performance NVMe-class devices. It efficiently manages and balances workloads across nodes by dynamically assigning tasks to the most suitable nodes based on their available resources, thereby improving the performance and scalability of your cluster.

Follow the instructions on this page to install Portworx on vSphere with PX-StoreV2.

note
  • Upgrading from a previous Portworx version to deploy PX-StoreV2 datastore with cloud drives is not supported.
  • Once Portworx is deployed with the PX-StoreV2 datastore, you can use all of Portworx's features except for the following:
    • XFS volumes
    • Aggregated volumes
    • PX-Cache

Prerequisites

  • Your cluster must be running OpenShift 4.13 or higher.

  • You must have an OpenShift cluster deployed on infrastructure that meets the minimum requirements for Portworx.

  • Ensure that any underlying nodes used for Portworx in OCP have Secure Boot disabled.

  • You must have supported disk types.

  • Linux kernel version: 4.20 or newer (minimum), 5.0 or newer (recommended), with the following RHEL packages:

    • device-mapper mdadm lvm2 device-mapper-persistent-data augeas
    note

    During installation, Portworx will automatically try to pull the required packages from distribution specific repositories. This is a mandatory requirement and installation will fail if this prerequisite is not met.

  • A system metadata device of at least 64 GB on each node where you want to deploy Portworx. If you do not provide a metadata device, one will be automatically added to the spec.

  • An SD/NVMe drive with a capacity of more than 8 GB per node.

  • A minimum of 8 CPU cores per node.

Create a monitoring ConfigMap

Newer OpenShift versions do not support the Portworx Prometheus deployment. As a result, you must enable monitoring for user-defined projects before installing the Portworx Operator. Use the instructions in this section to configure the OpenShift Prometheus deployment to monitor Portworx metrics.

To integrate OpenShift’s monitoring and alerting system with Portworx, create a cluster-monitoring-config ConfigMap in the openshift-monitoring namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true

The enableUserWorkload parameter enables monitoring for user-defined projects in the OpenShift cluster. This creates a prometheus-operated service in the openshift-user-workload-monitoring namespace.
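If you save the ConfigMap above to a file, you can apply it and confirm that the user workload monitoring stack starts. The file name below is illustrative:

oc apply -f cluster-monitoring-config.yaml
oc get pods -n openshift-user-workload-monitoring

The second command should show the user workload monitoring pods (for example, prometheus-user-workload-0) reaching the Running state.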

Install the Portworx Operator

Before you can install Portworx on your OpenShift cluster, you must first install the Portworx Operator. Perform the following steps to prepare your OpenShift cluster by installing the Operator.

  1. From your OpenShift UI, select OperatorHub in the left pane.

  2. On the OperatorHub page, search for Portworx and select the Portworx Enterprise or Portworx Essentials card.

  3. Click Install to install Portworx Operator.

  4. Portworx Operator begins to install and takes you to the Install Operator page. On this page:

    • Select the A specific namespace on the cluster option for Installation mode.
    • Choose the Create Project option from the Installed Namespace dropdown.

  5. In the Create Project window, provide the name portworx and click Create to create a namespace called portworx.

  6. To manage your Portworx cluster using the Portworx dashboard within the OpenShift UI, select Enable for the Console plugin option.

  7. Click Install to deploy Portworx Operator in the portworx namespace.

Deploy Portworx

The Portworx Enterprise Operator takes a custom Kubernetes resource called StorageCluster as input. The StorageCluster is a representation of your Portworx cluster configuration. Once the StorageCluster object is created, the Operator will deploy a Portworx cluster corresponding to the specification in the StorageCluster object. The Operator will watch for changes on the StorageCluster and update your cluster according to the latest specifications.

For more information about the StorageCluster object and how the Operator manages changes, refer to the StorageCluster article.
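For orientation only, the following is a trimmed, illustrative sketch of what a vSphere StorageCluster spec can look like; it is not a complete or authoritative spec. Always deploy the spec produced by the spec generator (see Generate the StorageCluster spec below), which adds the PX-StoreV2 settings when you select that option. The image tag, vCenter hostname, datastore prefix, and disk size are placeholders:

apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster
  namespace: portworx
spec:
  image: portworx/oci-monitor:3.2.0   # placeholder image tag
  kvdb:
    internal: true                    # use the internal KVDB
  cloudStorage:
    deviceSpecs:
      - type=thin,size=150            # placeholder cloud drive spec
  env:
    - name: VSPHERE_VCENTER
      value: "vcenter.example.com"    # placeholder vCenter endpoint
    - name: VSPHERE_DATASTORE_PREFIX
      value: "px-datastore"           # placeholder datastore prefix
    # Credentials are read from the px-vsphere-secret created later on this page.
    - name: VSPHERE_USER
      valueFrom:
        secretKeyRef:
          name: px-vsphere-secret
          key: VSPHERE_USER
    - name: VSPHERE_PASSWORD
      valueFrom:
        secretKeyRef:
          name: px-vsphere-secret
          key: VSPHERE_PASSWORD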

Configure Storage DRS settings

Portworx does not support the movement of VMDK files from the datastores on which they were created. Do not move them manually or have any settings that would result in a movement of these files. To prevent Storage DRS from moving VMDK files, configure the Storage DRS settings as follows using your vSphere console.

From the Edit Storage DRS Settings window of your selected datastore cluster, edit the following settings:

  • For Storage DRS automation, choose the No Automation (Manual Mode) option, and set the same for the other settings.

  • For Runtime Settings, clear the Enable I/O metric for SDRS recommendations option.

  • For Advanced options, clear the Keep VMDKs together by default option.

Grant the required cloud permissions

Grant the permissions Portworx requires by creating a secret that holds your vCenter user credentials:

Provide Portworx with a vCenter server user that has the following minimum vSphere privileges using your vSphere console:

  • Datastore

    • Allocate space
    • Browse datastore
    • Low level file operations
    • Remove file
  • Host

    • Local operations
    • Reconfigure virtual machine
  • Virtual machine

    • Change Configuration
    • Add existing disk
    • Add new disk
    • Add or remove device
    • Advanced configuration
    • Change Settings
    • Extend virtual disk
    • Modify device settings
    • Remove disk

    If you create a custom role as above, make sure to select Propagate to children when assigning the user to the role.

    Why select Propagate to Children?

    In vSphere, resources are organized hierarchically. By selecting "Propagate to Children," you ensure that the permissions granted to the custom role are automatically applied not just to the targeted object, but also to all objects within its sub-tree. This includes VMs, datastores, networks, and other resources nested under the selected resource.

  1. Create a secret using the following template. Retrieve the credentials from your own environment and specify them under the data section:

    apiVersion: v1
    kind: Secret
    metadata:
      name: px-vsphere-secret
      namespace: portworx
    type: Opaque
    data:
      VSPHERE_USER: <your-base64-encoded-vcenter-server-user>
      VSPHERE_PASSWORD: <your-base64-encoded-vcenter-server-password>
    • VSPHERE_USER: to find your base64-encoded vSphere user, enter the following command:

      echo -n '<vcenter-server-user>' | base64
    • VSPHERE_PASSWORD: to find your base64-encoded vSphere password, enter the following command:

      echo -n '<vcenter-server-password>' | base64

    Once you've updated the template with your user and password, apply the spec:

    oc apply -f <your-spec-name>
  2. Ensure that ports 17001-17020 on worker nodes are reachable from the control plane node and other worker nodes. A quick reachability check follows these steps.

  3. If you're running a Portworx Essentials cluster, then create the following secret with your Essential Entitlement ID:

    oc -n portworx create secret generic px-essential \
    --from-literal=px-essen-user-id=YOUR_ESSENTIAL_ENTITLEMENT_ID \
    --from-literal=px-osb-endpoint='https://pxessentials.portworx.com/osb/billing/v1/register'
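
To verify the port requirement from step 2, you can run a quick reachability check from the control plane node or another worker node, assuming nc (netcat) is available; the target IP is a placeholder:

# Check that Portworx ports 17001-17020 respond on a worker node.
for port in $(seq 17001 17020); do
  nc -z -w 2 <worker-node-ip> "$port" && echo "port $port open" || echo "port $port unreachable"
done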

Generate the StorageCluster spec

To install Portworx with OpenShift, you must generate a StorageCluster spec that you will deploy in your cluster.

  1. Navigate to Portworx Central and log in, or create an account.

  2. Select Portworx Enterprise from the Product Catalog page.

  3. On the Product Line page, choose any option depending on which license you intend to use, then click Continue to start the spec generator.

  4. Choose Portworx Version and select vSphere from the Platform dropdown.

  5. Specify your hostname or the IP address of the vSphere server in the vCenter endpoint field.

  6. Specify the datastore name(s) or datastore cluster name(s) available for Portworx in the vCenter datastore prefix field. To specify multiple datastore names or datastore cluster names, enter a generic prefix common to all the datastores or datastore clusters. For example, if you want Portworx to use three datastores named px-datastore-01, px-datastore-02, and px-datastore-03, specify px or px-datastore.

  7. Click Customize at the bottom of the Summary section.

  8. Navigate to the Storage window by clicking Next. Select the PX-StoreV2 checkbox in the Configure storage devices section.

  9. Navigate to the Customize window and click Finish to generate the specs.

Apply the spec

  1. Once the Operator is installed successfully, create a StorageCluster object from the OpenShift UI by clicking the Create StorageCluster button on the same page.

  2. The spec displayed here represents a very basic default spec. Copy the spec you created with the spec generator, paste it over the default spec in the YAML view, and click Create to deploy Portworx.

  3. Verify that Portworx has deployed successfully by navigating to the Storage Cluster tab of the Installed Operators page.

    Once Portworx has fully deployed, the status will show as Online.

  4. Refresh your browser to see the Portworx option in the left pane. Click the Cluster sub-tab to access the Portworx dashboard.

Pre-flight check

After you apply the spec, the Portworx Operator performs a pre-flight check across the cluster, which must pass on each node. This check determines whether each node in your cluster is compatible with the PX-StoreV2 datastore. If every node meets the following hardware and software requirements, PX-StoreV2 is automatically set as your default datastore during Portworx installation (a manual spot-check follows the list below):

  • Hardware:
    • CPU: A minimum of 8 CPU cores per node.
    • Drive type: An SD/NVMe drive with a capacity of more than 8 GB per node.
    • Metadata device: A system metadata device of at least 64 GB on each node.
  • Software:
    • Linux kernel version: 4.20 or newer, with the following RHEL packages:
      • device-mapper mdadm lvm2 device-mapper-persistent-data augeas
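
To spot-check the software requirements manually, assuming RHEL-family worker nodes where rpm is available, you can open a debug shell on a node:

# Open a debug shell on a node, then check the kernel version and packages.
oc debug node/<node-name>
chroot /host
uname -r        # expect 4.20 or newer
rpm -q device-mapper mdadm lvm2 device-mapper-persistent-data augeas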

Verify your Portworx installation

Once you've installed Portworx, you can perform the following tasks to verify that Portworx is correctly installed and using the PX-StoreV2 datastore.

Verify if all pods are running

Enter the following command to list and filter the results for Portworx pods, specifying the namespace where you deployed Portworx:

oc get pods -n <px-namespace> -o wide | grep -e portworx -e px
NAME                                              READY  STATUS   RESTARTS  AGE    IP             NODE                         NOMINATED NODE  READINESS GATES
portworx-api-8scq2                                1/1    Running  0         5h1m   xx.xx.xxx.xxx  username-vms-silver-sight-0  <none>          <none>
portworx-api-f24b9                                1/1    Running  0         5h1m   xx.xx.xxx.xxx  username-vms-silver-sight-3  <none>          <none>
portworx-api-f95z5                                1/1    Running  0         5h1m   xx.xx.xxx.xxx  username-vms-silver-sight-2  <none>          <none>
portworx-kvdb-558g5                               1/1    Running  0         3m46s  xx.xx.xxx.xxx  username-vms-silver-sight-2  <none>          <none>
portworx-kvdb-9tfjd                               1/1    Running  0         2m57s  xx.xx.xxx.xxx  username-vms-silver-sight-0  <none>          <none>
portworx-kvdb-cjcxg                               1/1    Running  0         3m7s   xx.xx.xxx.xxx  username-vms-silver-sight-3  <none>          <none>
portworx-operator-548b8d4ccc-qgnkc                1/1    Running  0         5h2m   xx.xx.xxx.xxx  username-vms-silver-sight-0  <none>          <none>
portworx-pvc-controller-ff669698-62ngd            1/1    Running  0         5h1m   xx.xx.xxx.xxx  username-vms-silver-sight-3  <none>          <none>
portworx-pvc-controller-ff669698-6b4zj            1/1    Running  0         5h1m   xx.xx.xxx.xxx  username-vms-silver-sight-2  <none>          <none>
portworx-pvc-controller-ff669698-pffvl            1/1    Running  0         5h1m   xx.xx.xxx.xxx  username-vms-silver-sight-0  <none>          <none>
prometheus-px-prometheus-0                        2/2    Running  0         5h     xx.xx.xxx.xxx  username-vms-silver-sight-0  <none>          <none>
px-cluster-378d7ae1-f4ca-xxxx-xxxx-xxxxxxxxxxxx-2qsp4  2/2  Running  0     3h20m  xx.xx.xxx.xxx  username-vms-silver-sight-3  <none>          <none>
px-cluster-378d7ae1-f4ca-xxxx-xxxx-xxxxxxxxxxxx-5vnzv  2/2  Running  0     3h20m  xx.xx.xxx.xxx  username-vms-silver-sight-0  <none>          <none>
px-cluster-378d7ae1-f4ca-xxxx-xxxx-xxxxxxxxxxxx-lxzd5  2/2  Running  0     3h20m  xx.xx.xxx.xxx  username-vms-silver-sight-2  <none>          <none>
px-csi-ext-77fbdcdcc9-7hkpm                       4/4    Running  0         3h19m  xx.xx.xxx.xxx  username-vms-silver-sight-3  <none>          <none>
px-csi-ext-77fbdcdcc9-9ck26                       4/4    Running  0         3h18m  xx.xx.xxx.xxx  username-vms-silver-sight-0  <none>          <none>
px-csi-ext-77fbdcdcc9-ddmjr                       4/4    Running  0         3h20m  xx.xx.xxx.xxx  username-vms-silver-sight-2  <none>          <none>
px-prometheus-operator-7d884bc8bc-5sv9r           1/1    Running  0         5h1m   xx.xx.xxx.xxx  username-vms-silver-sight-0  <none>          <none>

Note the name of one of your px-cluster pods. You'll run pxctl commands from these pods in the following steps.
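
For convenience, you can capture one pod name in a shell variable and reuse it in the commands that follow. This assumes the Portworx pods carry the standard name=portworx label:

PX_POD=$(oc get pods -l name=portworx -n <px-namespace> -o jsonpath='{.items[0].metadata.name}')
oc exec $PX_POD -n <px-namespace> -- /opt/pwx/bin/pxctl status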

Verify Portworx cluster status

You can find the status of the Portworx cluster by running pxctl status commands from a pod. Enter the following oc exec command, specifying the pod name you retrieved in the previous section:

oc exec <px-pod> -n <px-namespace> -- /opt/pwx/bin/pxctl status
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
Status: PX is operational
Telemetry: Disabled or Unhealthy
Metering: Disabled or Unhealthy
License: Trial (expires in 31 days)
Node ID: 24508311-e2fe-xxxx-xxxx-xxxxxxxxxxxx
IP: xx.xx.xxx.xxx
Local Storage Pool: 1 pool
POOL  IO_PRIORITY  RAID_LEVEL  USABLE  USED    STATUS  ZONE     REGION
0     HIGH         raid0       25 GiB  33 MiB  Online  default  default
Local Storage Devices: 1 device
Device  Path      Media Type          Size    Last-Scan
0:0     /dev/sda  STORAGE_MEDIUM_SSD  32 GiB  10 Oct 22 23:45 UTC
total - 32 GiB
Cache Devices:
* No cache devices
Kvdb Device:
Device Path Size
/dev/sdc 1024 GiB
* Internal kvdb on this node is using this dedicated kvdb device to store its data.
Metadata Device:
1 /dev/sdd STORAGE_MEDIUM_SSD 64 GiB
Cluster Summary
Cluster ID: px-cluster-378d7ae1-f4ca-xxxx-xxxx-xxxxxxxxxxxx
Cluster UUID: 482b18b1-2a8b-xxxx-xxxx-xxxxxxxxxxxx
Scheduler: kubernetes
Nodes: 3 node(s) with storage (3 online)
IP             ID                                    SchedulerNodeName            Auth      StorageNode      Used    Capacity  Status  StorageStatus   Version        Kernel                       OS
xx.xx.xxx.xxx  24508311-e2fe-xxxx-xxxx-xxxxxxxxxxxx  username-vms-silver-sight-3  Disabled  Yes(PX-StoreV2)  33 MiB  25 GiB    Online  Up (This node)  3.2.0-28944c8  5.4.217-1.el7.elrepo.x86_64  CentOS Linux 7 (Core)
xx.xx.xxx.xxx  1e89102f-0510-xxxx-xxxx-xxxxxxxxxxxx  username-vms-silver-sight-0  Disabled  Yes(PX-StoreV2)  33 MiB  25 GiB    Online  Up              3.2.0-28944c8  5.4.217-1.el7.elrepo.x86_64  CentOS Linux 7 (Core)
xx.xx.xxx.xxx  0c99e1f2-9d49-xxxx-xxxx-xxxxxxxxxxxx  username-vms-silver-sight-2  Disabled  Yes(PX-StoreV2)  33 MiB  25 GiB    Online  Up              3.2.0-28944c8  5.4.217-1.el7.elrepo.x86_64  CentOS Linux 7 (Core)
Global Storage Pool
Total Used : 99 MiB
Total Capacity : 74 GiB

The Portworx status will display PX is operational, and the StorageNode entries for each node will read Yes(PX-StoreV2).

Verify Portworx pool status

Run the following command to view the Portworx drive configuration for your pod:

oc exec <px-pod> -n <px-namespace> -- /opt/pwx/bin/pxctl service pool show
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
PX drive configuration:
Pool ID: 0
Type: PX-StoreV2
UUID: 58ab2e3f-a22e-xxxx-xxxx-xxxxxxxxxxxx
IO Priority: HIGH
Labels: kubernetes.io/arch=amd64,kubernetes.io/hostname=username-vms-silver-sight-3,kubernetes.io/os=linux,medium=STORAGE_MEDIUM_SSD,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,iopriority=HIGH
Size: 25 GiB
Status: Online
Has metadata: No
Balanced: Yes
Drives:
0: /dev/sda, Total size 32 GiB, Online
Cache Drives:
No Cache drives found in this pool
Metadata Device:
1: /dev/sdd, STORAGE_MEDIUM_SSD

The Type: PX-StoreV2 output shows that your pod is using the PX-StoreV2 datastore.

Verify pxctl cluster provision status

  • Find the storage cluster using the following command; the status should show the cluster is Online:

    oc -n <px-namespace> get storagecluster
    NAME                                              CLUSTER UUID                            STATUS   VERSION          AGE
    px-cluster-378d7ae1-f4ca-xxxx-xxxx-xxxxxxxxxxxx   482b18b1-2a8b-xxxx-xxxx-xxxxxxxxxxxx    Online   3.2.0-dev-rc1    5h6m
  • Find the storage nodes, whose statuses should show as Online:

    oc -n <px-namespace> get storagenodes
    NAME                          ID                                    STATUS   VERSION         AGE
    username-vms-silver-sight-0   1e89102f-0510-xxxx-xxxx-xxxxxxxxxxxx  Online   3.2.0-28944c8   3h25m
    username-vms-silver-sight-2   0c99e1f2-9d49-xxxx-xxxx-xxxxxxxxxxxx  Online   3.2.0-28944c8   3h25m
    username-vms-silver-sight-3   24508311-e2fe-xxxx-xxxx-xxxxxxxxxxxx  Online   3.2.0-28944c8   3h25m
  • Verify the Portworx cluster provision status. Enter the following oc exec command, specifying the pod name you retrieved in the previous section:

    oc exec <px-pod> -n <px-namespace> -- /opt/pwx/bin/pxctl cluster provision-status
    NODE                                   NODE STATUS  POOL                                         POOL STATUS  IO_PRIORITY  SIZE    AVAILABLE  USED    PROVISIONED  ZONE     REGION   RACK
    0c99e1f2-9d49-xxxx-xxxx-xxxxxxxxxxxx   Up           0 ( 8ec9e6aa-7726-xxxx-xxxx-xxxxxxxxxxxx )  Online       HIGH         32 GiB  32 GiB     33 MiB  0 B          default  default  default
    1e89102f-0510-xxxx-xxxx-xxxxxxxxxxxx   Up           0 ( 06fcc73a-7e2f-xxxx-xxxx-xxxxxxxxxxxx )  Online       HIGH         32 GiB  32 GiB     33 MiB  0 B          default  default  default
    24508311-e2fe-xxxx-xxxx-xxxxxxxxxxxx   Up           0 ( 58ab2e3f-a22e-xxxx-xxxx-xxxxxxxxxxxx )  Online       HIGH         32 GiB  32 GiB     33 MiB  0 B          default  default  default

Create your first PVC

For your apps to use persistent volumes powered by Portworx, you must use a StorageClass that references Portworx as the provisioner. Portworx includes a number of default StorageClasses, which you can reference with PersistentVolumeClaims (PVCs) you create. For a more general overview of how storage works within Kubernetes, refer to the Persistent Volumes section of the Kubernetes documentation.

Perform the following steps to create a PVC:

  1. Create a PVC referencing the px-csi-db default StorageClass and save the file:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: px-check-pvc
    spec:
      storageClassName: px-csi-db
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
  2. Run the oc apply command to create a PVC:

    oc apply -f <your-pvc-name>.yaml
    persistentvolumeclaim/px-check-pvc created

Verify your StorageClass and PVC

  1. Enter the oc get storageclass command:

    oc get storageclass
    NAME                                 PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    px-csi-db                            pxd.portworx.com                Delete          Immediate           true                   43d
    px-csi-db-cloud-snapshot             pxd.portworx.com                Delete          Immediate           true                   43d
    px-csi-db-cloud-snapshot-encrypted   pxd.portworx.com                Delete          Immediate           true                   43d
    px-csi-db-encrypted                  pxd.portworx.com                Delete          Immediate           true                   43d
    px-csi-db-local-snapshot             pxd.portworx.com                Delete          Immediate           true                   43d
    px-csi-db-local-snapshot-encrypted   pxd.portworx.com                Delete          Immediate           true                   43d
    px-csi-replicated                    pxd.portworx.com                Delete          Immediate           true                   43d
    px-csi-replicated-encrypted          pxd.portworx.com                Delete          Immediate           true                   43d
    px-db                                kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
    px-db-cloud-snapshot                 kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
    px-db-cloud-snapshot-encrypted       kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
    px-db-encrypted                      kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
    px-db-local-snapshot                 kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
    px-db-local-snapshot-encrypted       kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
    px-replicated                        kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
    px-replicated-encrypted              kubernetes.io/portworx-volume   Delete          Immediate           true                   43d
    stork-snapshot-sc                    stork-snapshot                  Delete          Immediate           true                   43d

    oc returns details about the StorageClasses available to you. Verify that px-csi-db appears in the list.

  2. Enter the oc get pvc command. If this is the only PVC you've created, you should see only one entry in the output:

    oc get pvc <your-pvc-name>
    NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    px-check-pvc   Bound    pvc-dce346e8-ff02-xxxx-xxxx-xxxxxxxxxxxx   2Gi        RWO            px-csi-db      3m7s

    oc returns details about your PVC if it was created correctly. Verify that the configuration details appear as you intended.
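
Once you've verified the PVC, you can optionally delete the test claim from the project where you created it:

oc delete pvc px-check-pvc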
