Version: 3.4

Installation on vSphere Cluster using Helm

This topic provides instructions for installing Portworx on a VMware vSphere cluster using the Portworx Helm chart.

The following collection of tasks describes how to install Portworx:

Complete all the tasks to install Portworx.

Create a monitoring ConfigMap

note

Use this procedure only to install Portworx on a vSphere OpenShift cluster.

Enable monitoring for user-defined projects before installing the Portworx Operator. Use the instructions in this section to configure the OpenShift Prometheus deployment to monitor Portworx metrics.

To integrate OpenShift’s monitoring and alerting system with Portworx, create a cluster-monitoring-config ConfigMap in the openshift-monitoring namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true

The enableUserWorkload parameter enables monitoring for user-defined projects in the OpenShift cluster. This creates a prometheus-operated service in the openshift-user-workload-monitoring namespace.
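
For example, assuming you saved the manifest above as cluster-monitoring-config.yaml, apply it and confirm that the user workload monitoring components start in the openshift-user-workload-monitoring namespace:

oc apply -f cluster-monitoring-config.yaml
oc get pods -n openshift-user-workload-monitoring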

Configure Storage DRS settings

Portworx does not support the movement of VMDK files from the datastores on which they were created.

warning

Do not move the VMDK files manually or have any settings that would result in a movement of VMDK files.

To prevent Storage DRS from moving VMDK files, configure the Storage DRS settings as follows using your vSphere console.

From the Edit Storage DRS Settings window of your selected datastore cluster, edit the following settings:

  • For Storage DRS automation, choose the No Automation (Manual Mode) option, and set the same option for the other automation settings.

  • For Runtime Settings, clear the Enable I/O metric for SDRS recommendations option.


  • For Advanced options, clear the Keep VMDKs together by default option.

Grant the required cloud permissions

Grant the permissions that Portworx requires by creating a secret with the necessary user credentials:

  1. Using your vSphere console, provide Portworx with a vCenter server user account that has the following minimum vSphere privileges at vCenter datacenter level:
  • Datastore

    • Allocate space
    • Browse datastore
    • Low level file operations
    • Remove file
  • Host

    • Local operations
    • Reconfigure virtual machine
  • Virtual machine

    • Change Configuration
    • Add existing disk
    • Add new disk
    • Add or remove device
    • Advanced configuration
    • Change Settings
    • Extend virtual disk
    • Modify device settings
    • Remove disk

    If you create a custom role as above, make sure to select Propagate to children when assigning the user to the role.

    Why select Propagate to Children?

    In vSphere, resources are organized hierarchically. By selecting "Propagate to Children," you ensure that the permissions granted to the custom role are automatically applied not just to the targeted object, but also to all objects within its sub-tree. This includes VMs, datastores, networks, and other resources nested under the selected resource.

  2. Create a secret using the following steps.

Retrieve the credentials from your own environment and specify them under the data section:

apiVersion: v1
kind: Secret
metadata:
  name: px-vsphere-secret
  namespace: portworx
type: Opaque
data:
  VSPHERE_USER: <your-vcenter-server-user>
  VSPHERE_PASSWORD: <your-vcenter-server-password>
  • VSPHERE_USER: to find your base64-encoded vSphere user, enter the following command:

    echo -n '<vcenter-server-user>' | base64
  • VSPHERE_PASSWORD: to find your base64-encoded vSphere password, enter the following command:

    echo -n '<vcenter-server-password>' | base64
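
The Secret manifest above is created in the portworx namespace. If that namespace does not already exist on your cluster, create it before applying the spec:

oc create namespace portworx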

After you update the template with your user and password, apply the spec:

oc apply -f <your-spec-name>
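You can optionally confirm that the secret exists in the expected namespace:

oc get secret px-vsphere-secret -n portworx
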
important
  • Ensure that the ports 17001-17020 on worker nodes are reachable from the control plane node and other worker nodes.

Deploy Portworx using Helm

This procedure assumes that Portworx will be installed in the portworx namespace. To install it in a different namespace, use the -n <px-namespace> flag.

  1. To install Portworx, add the portworx/helm repository to your local Helm repository.

    helm repo add portworx https://raw.githubusercontent.com/portworx/helm/master/stable/
    "portworx" has been added to your repositories
  2. Verify that the repository has been successfully added.

    helm repo list
    NAME    	URL                                                           
    portworx https://raw.githubusercontent.com/portworx/helm/master/stable/
  3. Create a px_install_values.yaml file and add the following parameters.

    openshiftInstall: true
    drives: 'type=thin,size=150'
    envs:
      - name: VSPHERE_INSECURE
        value: 'true'
      - name: VSPHERE_USER
        valueFrom:
          secretKeyRef:
            name: px-vsphere-secret
            key: VSPHERE_USER
      - name: VSPHERE_PASSWORD
        valueFrom:
          secretKeyRef:
            name: px-vsphere-secret
            key: VSPHERE_PASSWORD
      - name: VSPHERE_VCENTER
        value: <your-vCenter-Endpoint>
      - name: VSPHERE_VCENTER_PORT
        value: <your-vCenter-Port>
      - name: VSPHERE_DATASTORE_PREFIX
        value: <your-vCenter-Datastore-prefix>
      - name: VSPHERE_INSTALL_MODE
        value: shared
  4. (Optional) In many cases, you may want to customize the Portworx configuration, such as enabling monitoring or specifying particular storage devices. Add the custom configuration to the px_install_values.yaml file; see the example after this procedure.

    note
    • Refer to the Portworx Helm chart parameters for a list of configurable parameters, and to the values.yaml file for a configuration template.
    • The default clusterName is mycluster. However, it's recommended that you change it to a unique identifier to avoid conflicts in multi-cluster environments.
  5. Install Portworx.

    note

    To install a specific version of the Helm chart, you can use the --version flag. Example: helm install <px-release> portworx/portworx --version <helm-chart-version>.

    helm install <px-release> portworx/portworx -n portworx -f px_install_values.yaml --debug
  6. Check the status of your Portworx installation.

    helm status <px-release> -n portworx
    NAME: px-release
    LAST DEPLOYED: Thu Sep 26 05:53:17 2024
    NAMESPACE: portworx
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    Your Release is named "px-release"
    Portworx Pods should be running on each node in your cluster.

    Portworx would create a unified pool of the disks attached to your Kubernetes nodes.
    No further action should be required and you are ready to consume Portworx Volumes as part of your application data requirements.
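
For reference, the optional customization mentioned in step 4 adds further chart parameters to px_install_values.yaml alongside the vSphere settings from step 3. A minimal sketch, using the monitoring.grafana and clusterName parameters of this chart; <your-unique-cluster-name> is a placeholder you choose yourself:

monitoring:
  grafana: true
clusterName: <your-unique-cluster-name>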

Verify Portworx Pod Status

Run the following command to list the Portworx pods, specifying the namespace where you deployed Portworx:

oc get pods -n <px-namespace> -o wide | grep -e portworx -e px
NAME                                                    READY   STATUS    RESTARTS         AGE     IP              NODE                         NOMINATED NODE   READINESS GATES
portworx-api-8scq2 1/1 Running 0 5h1m xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>
portworx-api-f24b9 1/1 Running 0 5h1m xx.xx.xxx.xxx username-vms-silver-sight-3 <none> <none>
portworx-api-f95z5 1/1 Running 0 5h1m xx.xx.xxx.xxx username-vms-silver-sight-2 <none> <none>
portworx-kvdb-558g5 1/1 Running 0 3m46s xx.xx.xxx.xxx username-vms-silver-sight-2 <none> <none>
portworx-kvdb-9tfjd 1/1 Running 0 2m57s xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>
portworx-kvdb-cjcxg 1/1 Running 0 3m7s xx.xx.xxx.xxx username-vms-silver-sight-3 <none> <none>
portworx-operator-548b8d4ccc-qgnkc 1/1 Running 0 5h2m xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>
portworx-pvc-controller-ff669698-62ngd 1/1 Running 0 5h1m xx.xx.xxx.xxx username-vms-silver-sight-3 <none> <none>
portworx-pvc-controller-ff669698-6b4zj 1/1 Running 0 5h1m xx.xx.xxx.xxx username-vms-silver-sight-2 <none> <none>
portworx-pvc-controller-ff669698-pffvl 1/1 Running 0 5h1m xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>
prometheus-px-prometheus-0 2/2 Running 0 5h xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>
px-cluster-378d7ae1-f4ca-xxxx-xxxx-xxxxxxxxxxxx-2qsp4 2/2 Running 0 3h20m xx.xx.xxx.xxx username-vms-silver-sight-3 <none> <none>
px-cluster-378d7ae1-f4ca-xxxx-xxxx-xxxxxxxxxxxx-5vnzv 2/2 Running 0 3h20m xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>
px-cluster-378d7ae1-f4ca-xxxx-xxxx-xxxxxxxxxxxx-lxzd5 2/2 Running 0 3h20m xx.xx.xxx.xxx username-vms-silver-sight-2 <none> <none>
px-csi-ext-77fbdcdcc9-7hkpm 4/4 Running 0 3h19m xx.xx.xxx.xxx username-vms-silver-sight-3 <none> <none>
px-csi-ext-77fbdcdcc9-9ck26 4/4 Running 0 3h18m xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>
px-csi-ext-77fbdcdcc9-ddmjr 4/4 Running 0 3h20m xx.xx.xxx.xxx username-vms-silver-sight-2 <none> <none>
px-prometheus-operator-7d884bc8bc-5sv9r 1/1 Running 0 5h1m xx.xx.xxx.xxx username-vms-silver-sight-0 <none> <none>

Note the name of a px-cluster pod. You will run pxctl commands from one of these pods in Verify Portworx Cluster Status.
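
As a convenience, you can capture a Portworx pod name in a shell variable instead of copying it by hand. A minimal sketch, assuming the Portworx pods carry the name=portworx label:

PX_POD=$(oc get pods -n <px-namespace> -l name=portworx -o jsonpath='{.items[0].metadata.name}')
echo $PX_POD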

Verify Portworx Cluster Status

You can find the status of the Portworx cluster by running pxctl status commands from a pod.
Enter the following command, specifying the pod name you retrieved in Verify Portworx Pod Status:

oc exec <px-pod-name>  -n <px-namespace> -- /opt/pwx/bin/pxctl status
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
Status: PX is operational
Telemetry: Disabled or Unhealthy
Metering: Disabled or Unhealthy
License: Trial (expires in 31 days)
Node ID: 24508311-e2fe-xxxx-xxxx-xxxxxxxxxxxx
IP: xx.xx.xxx.xxx
Local Storage Pool: 1 pool
POOL IO_PRIORITY RAID_LEVEL USABLE USED STATUS ZONE REGION
0 HIGH raid0 25 GiB 33 MiB Online default default
Local Storage Devices: 1 device
Device Path Media Type Size Last-Scan
0:0 /dev/sda STORAGE_MEDIUM_SSD 32 GiB 10 Oct 22 23:45 UTC
total - 32 GiB
Cache Devices:
* No cache devices
Kvdb Device:
Device Path Size
/dev/sdc 1024 GiB
* Internal kvdb on this node is using this dedicated kvdb device to store its data.
Metadata Device:
1 /dev/sdd STORAGE_MEDIUM_SSD 64 GiB
Cluster Summary
Cluster ID: px-cluster-378d7ae1-f4ca-xxxx-xxxx-xxxxxxxxxxxx
Cluster UUID: 482b18b1-2a8b-xxxx-xxxx-xxxxxxxxxxxx
Scheduler: kubernetes
Nodes: 3 node(s) with storage (3 online)
IP ID SchedulerNodeName Auth StorageNode Used Capacity Status StorageStatus Version Kernel OS
xx.xx.xxx.xxx 24508311-e2fe-xxxx-xxxx-xxxxxxxxxxxx username-vms-silver-sight-3 Disabled Yes(PX-StoreV2) 33 MiB 25 GiB Online Up (This node) 3.2.0-28944c8 5.4.217-1.el7.elrepo.x86_64 CentOS Linux 7 (Core)
xx.xx.xxx.xxx 1e89102f-0510-xxxx-xxxx-xxxxxxxxxxxx username-vms-silver-sight-0 Disabled Yes(PX-StoreV2) 33 MiB 25 GiB Online Up 3.2.0-28944c8 5.4.217-1.el7.elrepo.x86_64 CentOS Linux 7 (Core)
xx.xx.xxx.xxx 0c99e1f2-9d49-xxxx-xxxx-xxxxxxxxxxxx username-vms-silver-sight-2 Disabled Yes(PX-StoreV2) 33 MiB 25 GiB Online Up 3.2.0-28944c8 5.4.217-1.el7.elrepo.x86_64 CentOS Linux 7 (Core)
Global Storage Pool
Total Used : 99 MiB
Total Capacity : 74 GiB

Status displays PX is operational when the cluster is running as expected. If the cluster is using the PX-StoreV2 datastore, the StorageNode entry for each node displays Yes(PX-StoreV2).

Verify Portworx Pool Status

note

This procedure applies only to clusters that use the PX-StoreV2 datastore.

Run the following command to view the Portworx drive configurations for your pod:

oc exec <px-pod>  -n <px-namespace> -- /opt/pwx/bin/pxctl service pool show
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
PX drive configuration:
Pool ID: 0
Type: PX-StoreV2
UUID: 58ab2e3f-a22e-xxxx-xxxx-xxxxxxxxxxxx
IO Priority: HIGH
Labels: kubernetes.io/arch=amd64,kubernetes.io/hostname=username-vms-silver-sight-3,kubernetes.io/os=linux,medium=STORAGE_MEDIUM_SSD,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,iopriority=HIGH
Size: 25 GiB
Status: Online
Has metadata: No
Balanced: Yes
Drives:
0: /dev/sda, Total size 32 GiB, Online
Cache Drives:
No Cache drives found in this pool
Metadata Device:
1: /dev/sdd, STORAGE_MEDIUM_SSD

The Type: PX-StoreV2 entry in the output confirms that the pod uses the PX-StoreV2 datastore.

Verify pxctl Cluster Provision Status

  1. Access the Portworx CLI.

  2. Run the following command to find the storage cluster:

    oc -n <px-namespace> get storagecluster
    NAME                                              CLUSTER UUID                           STATUS   VERSION          AGE
    px-cluster-378d7ae1-f4ca-xxxx-xxxx-xxxxxxxxxxxx 482b18b1-2a8b-xxxx-xxxx-xxxxxxxxxxxx Online 3.2.0-dev-rc1 5h6m

    The status must show that the cluster is Online.

  3. Run the following command to find the storage nodes:

    oc -n <px-namespace> get storagenodes
    NAME                          ID                                     STATUS   VERSION          AGE
    username-vms-silver-sight-0 1e89102f-0510-xxxx-xxxx-xxxxxxxxxxxx Online 3.2.0-28944c8 3h25m
    username-vms-silver-sight-2 0c99e1f2-9d49-xxxx-xxxx-xxxxxxxxxxxx Online 3.2.0-28944c8 3h25m
    username-vms-silver-sight-3 24508311-e2fe-xxxx-xxxx-xxxxxxxxxxxx Online 3.2.0-28944c8 3h25m

    The status must show that the nodes are Online.

  4. Verify the Portworx cluster provision status by running the following command.
    Specify the pod name you retrieved in Verify Portworx Pod Status.

    oc exec <px-pod> -n <px-namespace> -- /opt/pwx/bin/pxctl cluster provision-status
    NODE					                NODE STATUS	 POOL						              POOL STATUS  IO_PRIORITY	SIZE	AVAILABLE	USED   PROVISIONED ZONE REGION	RACK
    0c99e1f2-9d49-xxxx-xxxx-xxxxxxxxxxxx Up 0 ( 8ec9e6aa-7726-xxxx-xxxx-xxxxxxxxxxxx ) Online HIGH 32 GiB 32 GiB 33 MiB 0 B default default default
    1e89102f-0510-xxxx-xxxx-xxxxxxxxxxxx Up 0 ( 06fcc73a-7e2f-xxxx-xxxx-xxxxxxxxxxxx ) Online HIGH 32 GiB 32 GiB 33 MiB 0 B default default default
    24508311-e2fe-xxxx-xxxx-xxxxxxxxxxxx Up 0 ( 58ab2e3f-a22e-xxxx-xxxx-xxxxxxxxxxxx ) Online HIGH 32 GiB 32 GiB 33 MiB 0 B default default default

What to do next

Create a PVC. For more information, see Create your first PVC.
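
For a quick illustration, a minimal PVC manifest could look like the following. The name px-check-pvc is arbitrary, and px-csi-db is assumed to be one of the Portworx StorageClasses available in your cluster; substitute the StorageClass you intend to use:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: px-check-pvc
spec:
  storageClassName: px-csi-db
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi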

(Optional) Update Portworx Configuration using Helm

To update the Portworx configuration, modify the parameters in the px_install_values.yaml file that you specified during the Helm installation, and then upgrade the release.

  1. Create or edit the px_install_values.yaml file to update the desired parameters.

    vim px_install_values.yaml
    monitoring:
      telemetry: false
      grafana: true
  2. Apply the changes.

    helm upgrade <px-release> portworx/portworx -n portworx -f px_install_values.yaml
    Release "px-release" has been upgraded. Happy Helming!
    NAME: px-release
    LAST DEPLOYED: Thu Sep 26 06:42:20 2024
    NAMESPACE: portworx
    STATUS: deployed
    REVISION: 2
    TEST SUITE: None
    NOTES:
    Your Release is named "px-release"
    Portworx Pods should be running on each node in your cluster.

    Portworx would create a unified pool of the disks attached to your Kubernetes nodes.
    No further action should be required and you are ready to consume Portworx Volumes as part of your application data requirements.
  3. Verify that the new values have taken effect.

    helm get values <px-release> -n portworx

    You should see all the custom configurations passed using the px_install_values.yaml file.
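
    For example, with the values shown in step 1, the output looks similar to the following; the exact contents mirror whatever you supplied in the file:

    USER-SUPPLIED VALUES:
    monitoring:
      grafana: true
      telemetry: false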