Install Portworx on OpenShift on GCP
This page provides instructions for installing Portworx on OpenShift running on Google Cloud Platform (GCP).
Prerequisites
- A supported version of OpenShift running on Google Cloud that meets the minimum requirements for Portworx, with at least 3 worker nodes.
- The Google Cloud command-line tool (`gcloud`) to configure the permissions needed to deploy Portworx.
- Network:
  - Ports 17001-17020 opened for Portworx node-to-node communication
  - Ports 111, 2049, and 20048 opened for sharedv4 volume support (NFSv3)
  - Port 2049 (NFS server) opened only if using sharedv4 services (NFSv4)
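If these ports are not already open between your worker nodes, one way to open them is with a GCP firewall rule. The sketch below rests on assumptions: it presumes your worker nodes carry a network tag such as `portworx-node` (a placeholder) on a VPC network you name; adjust the network, tags, and protocols for your environment:

```bash
# Hypothetical firewall rule: allow Portworx node-to-node and sharedv4 (NFS)
# traffic between worker nodes tagged "portworx-node" (placeholder tag).
gcloud compute firewall-rules create portworx-node-to-node \
    --network=<your-vpc-network> \
    --direction=INGRESS \
    --allow=tcp:17001-17020,tcp:111,tcp:2049,tcp:20048,udp:111,udp:20048 \
    --source-tags=portworx-node \
    --target-tags=portworx-node
```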
Grant the necessary permissions to Portworx
Create the following default roles for Portworx to manage GCP disks:
- Compute Admin or a custom IAM role
- Service Account User
Create a custom Google IAM role
If you prefer that Portworx has only the minimal access it requires, create a custom IAM role that provides the compute permissions. This role gives Portworx the set of permissions it needs to create, attach, and manage disks on VM instances.
- Create a `portworx-role.yaml` file with the following minimum permissions:

  ```yaml
  title: "Portworx role"
  description: "Portworx role for managed disks"
  stage: "GA"
  includedPermissions:
  - compute.disks.addResourcePolicies
  - compute.disks.create
  - compute.disks.createSnapshot
  - compute.disks.delete
  - compute.disks.get
  - compute.disks.getIamPolicy
  - compute.disks.list
  - compute.disks.removeResourcePolicies
  - compute.disks.resize
  - compute.disks.setIamPolicy
  - compute.disks.setLabels
  - compute.disks.update
  - compute.disks.use
  - compute.disks.useReadOnly
  - compute.instances.attachDisk
  - compute.instances.detachDisk
  - compute.instances.get
  - compute.nodeGroups.get
  - compute.nodeGroups.getIamPolicy
  - compute.nodeGroups.list
  - compute.zoneOperations.get
  - container.clusters.get
  ```

- Create your custom role for Portworx using the `portworx-role.yaml` file:

  ```bash
  gcloud iam roles create portworx_role --project=<your-gcp-project> \
      --file=portworx-role.yaml
  ```

  Once you have created the custom IAM role, bind it to a service account that you can use to manage GCP disks.
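To confirm the role was created with the permissions you expect, you can describe it:

```bash
gcloud iam roles describe portworx_role --project=<your-gcp-project>
```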
Create a Service Account
Create a service account with the appropriate permissions. This account is required to interact with GCP services.
- Run the following command to create a service account:

  ```bash
  gcloud iam service-accounts create <your-service-account-name>
  ```

- Grant the required permissions to the service account using the custom role you created previously:

  ```bash
  gcloud projects add-iam-policy-binding <your-gcp-project> \
      --member="serviceAccount:<your-service-account-name>@<your-gcp-project>.iam.gserviceaccount.com" \
      --role=projects/<your-gcp-project>/roles/portworx_role
  ```

  This command grants the service account the permissions defined in the `portworx_role` custom role.

- Grant your service account the Service Account User role:

  ```bash
  gcloud projects add-iam-policy-binding <your-gcp-project> \
      --member="serviceAccount:<your-service-account-name>@<your-gcp-project>.iam.gserviceaccount.com" \
      --role="roles/iam.serviceAccountUser"
  ```

- Run the following command to create a JSON key file for your service account:

  ```bash
  gcloud iam service-accounts keys create gcloud.json \
      --iam-account=<your-service-account-name>@<your-gcp-project>.iam.gserviceaccount.com
  ```
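To confirm the key was created, you can list the keys on the service account:

```bash
gcloud iam service-accounts keys list \
    --iam-account=<your-service-account-name>@<your-gcp-project>.iam.gserviceaccount.com
```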
Create a secret
Create a secret called `px-gcloud` in the Portworx namespace (the namespace where you will deploy Portworx) using the previously generated JSON file. This secret is used to authenticate your service account:

```bash
oc -n portworx create secret generic px-gcloud --from-file=gcloud.json
```
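You can confirm that the secret was created:

```bash
oc -n portworx get secret px-gcloud
```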
Create a monitoring ConfigMap
Newer OpenShift versions do not support the Portworx Prometheus deployment. As a result, you must enable monitoring for user-defined projects before installing the Portworx Operator. Use the instructions in this section to configure the OpenShift Prometheus deployment to monitor Portworx metrics.
To integrate OpenShift's monitoring and alerting system with Portworx, create a `cluster-monitoring-config` ConfigMap in the `openshift-monitoring` namespace:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
```
The `enableUserWorkload` parameter enables monitoring for user-defined projects in the OpenShift cluster. This creates a `prometheus-operated` service in the `openshift-user-workload-monitoring` namespace.
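Assuming you saved the ConfigMap above to a file named `cluster-monitoring-config.yaml` (a placeholder name), apply it and then confirm that the user-workload monitoring pods start:

```bash
# Apply the monitoring ConfigMap.
oc apply -f cluster-monitoring-config.yaml

# After a short delay, the user-workload monitoring stack should be running.
oc -n openshift-user-workload-monitoring get pods
```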
Install Portworx using the OCP console
Follow the instructions in this section to deploy Portworx.
Generate the specs
- Navigate to Portworx Central and log in, or create an account.
- Select Portworx Enterprise from the Product Catalog page.
- On the Product Line page, choose an option based on which license you intend to use, then click Continue to start the spec generator.
- For Platform, choose Google Cloud. For Distribution Name, select OpenShift 4+.
- In the Summary section, check the Kubernetes version and edit it if required. Click Save spec to generate the spec file.
Install the Portworx Operator
Before you can install Portworx on your OpenShift cluster, you must first install the Portworx Operator. Perform the following steps to prepare your OpenShift cluster by installing the Operator.
- From your OpenShift UI, select OperatorHub in the left pane.
- On the OperatorHub page, search for Portworx and select either the Portworx Enterprise or Portworx Essentials Operator.
- Click Install to install the Portworx Operator.
- The Portworx Operator begins to install and takes you to the Install Operator page. On this page, select the A specific namespace on the cluster option for Installation mode, and choose Create Project from the Installed Namespace dropdown.
- In the Create Project window, enter the name `portworx` and click Create to create a namespace called `portworx`.
- Click Install to deploy the Portworx Operator in the `portworx` namespace.
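To confirm from the command line that the Operator installed successfully, you can check its ClusterServiceVersion in the `portworx` namespace; the PHASE column should read Succeeded:

```bash
oc -n portworx get csv
```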
Apply the StorageCluster spec
- Once the Operator is installed successfully, create a StorageCluster object by clicking the Create StorageCluster button on the same page.
- Copy the spec you created with the spec generator and paste it over the default spec in the YAML view. (The default spec displayed there is a very basic spec.)
- Add the following to your StorageCluster spec, and click Create. This ensures that Portworx has the proper authorization to manage GCP disks; the sketch following this procedure shows where these fields sit in the spec:

  ```yaml
  volumes:
  - name: gcloud
    mountPath: /etc/pwx/gce
    secret:
      secretName: px-gcloud
  env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: "/etc/pwx/gce/gcloud.json"
  ```

- Verify that Portworx has deployed successfully by navigating to the Storage Cluster tab of the Installed Operators page. Once Portworx has fully deployed, the status shows as Online.
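For orientation, the following is a trimmed-down sketch of where the `volumes` and `env` additions sit within a StorageCluster spec. It is a hypothetical example: the metadata name is a placeholder, and your generated spec will contain additional fields such as the Portworx image version and storage configuration:

```yaml
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster        # placeholder; use the name from your generated spec
  namespace: portworx
spec:
  # ...fields from your generated spec (image, storage, kvdb, and so on)...
  volumes:
  - name: gcloud
    mountPath: /etc/pwx/gce
    secret:
      secretName: px-gcloud
  env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: "/etc/pwx/gce/gcloud.json"
```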
Verify your Portworx installation
Once you've installed Portworx, you can perform the following tasks to verify that Portworx has installed correctly.
Verify that all pods are running
Enter the following `oc get pods` command to list and filter the results for Portworx pods:

```bash
oc get pods -n portworx -o wide | grep -e portworx -e px
```

```
portworx-api-774c2 1/1 Running 0 2m55s 192.168.121.196 username-k8s1-node0 <none> <none>
portworx-api-t4lf9 1/1 Running 0 2m55s 192.168.121.99 username-k8s1-node1 <none> <none>
portworx-kvdb-94bpk 1/1 Running 0 4s 192.168.121.196 username-k8s1-node0 <none> <none>
portworx-operator-xxxx-xxxxxxxxxxxxx 1/1 Running 0 4m1s 10.244.1.99 username-k8s1-node0 <none> <none>
prometheus-px-prometheus-0 2/2 Running 0 2m41s 10.244.1.105 username-k8s1-node0 <none> <none>
px-cluster-1c3edc42-4541-48fc-b173-xxxx-xxxxxxxxxxxxx 2/2 Running 0 2m55s 192.168.121.196 username-k8s1-node0 <none> <none>
px-cluster-1c3edc42-4541-48fc-b173-xxxx-xxxxxxxxxxxxx 1/2 Running 0 2m55s 192.168.121.99 username-k8s1-node1 <none> <none>
px-csi-ext-868fcb9fc6-xxxxx 4/4 Running 0 3m5s 10.244.1.103 username-k8s1-node0 <none> <none>
px-csi-ext-868fcb9fc6-xxxxx 4/4 Running 0 3m5s 10.244.1.102 username-k8s1-node0 <none> <none>
px-csi-ext-868fcb9fc6-xxxxx 4/4 Running 0 3m5s 10.244.3.107 username-k8s1-node1 <none> <none>
px-prometheus-operator-59b98b5897-xxxxx 1/1 Running 0 3m3s 10.244.1.104 username-k8s1-node0 <none> <none>
```
Note the name of one of your `px-cluster` pods. You'll run `pxctl` commands from these pods in the following steps.
Verify Portworx cluster status
You can find the status of the Portworx cluster by running `pxctl status` commands from a pod. Enter the following `oc exec` command, specifying the pod name you retrieved in the previous section:

```bash
oc exec px-cluster-1c3edc42-4541-48fc-b173-xxxx-xxxxxxxxxxxxx -n portworx -- /opt/pwx/bin/pxctl status
```

```
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
Status: PX is operational
Telemetry: Disabled or Unhealthy
Metering: Disabled or Unhealthy
License: Trial (expires in 31 days)
Node ID: 788bf810-57c4-4df1-xxxx-xxxxxxxxxxxxx
IP: 192.168.121.99
Local Storage Pool: 1 pool
POOL IO_PRIORITY RAID_LEVEL USABLE USED STATUS ZONE REGION
0 HIGH raid0 3.0 TiB 10 GiB Online default default
Local Storage Devices: 3 devices
Device Path Media Type Size Last-Scan
0:1 /dev/vdb STORAGE_MEDIUM_MAGNETIC 1.0 TiB 14 Jul 22 22:03 UTC
0:2 /dev/vdc STORAGE_MEDIUM_MAGNETIC 1.0 TiB 14 Jul 22 22:03 UTC
0:3 /dev/vdd STORAGE_MEDIUM_MAGNETIC 1.0 TiB 14 Jul 22 22:03 UTC
* Internal kvdb on this node is sharing this storage device /dev/vdc to store its data.
total - 3.0 TiB
Cache Devices:
* No cache devices
Cluster Summary
Cluster ID: px-cluster-1c3edc42-xxxx-xxxxxxxxxxxxx
Cluster UUID: 33a82fe9-d93b-435b-xxxx-xxxxxxxxxxxxx
Scheduler: kubernetes
Nodes: 2 node(s) with storage (2 online)
IP ID SchedulerNodeName Auth StorageNode Used Capacity Status StorageStatus Version Kernel OS
192.168.121.196 f6d87392-81f4-459a-xxxx-xxxxxxxxxxxxx username-k8s1-node0 Disabled Yes 10 GiB 3.0 TiB Online Up 2.11.0-81faacc 3.10.0-1127.el7.x86_64 CentOS Linux 7 (Core)
192.168.121.99 788bf810-57c4-4df1-xxxx-xxxxxxxxxxxxx username-k8s1-node1 Disabled Yes 10 GiB 3.0 TiB Online Up (This node) 2.11.0-81faacc 3.10.0-1127.el7.x86_64 CentOS Linux 7 (Core)
Global Storage Pool
Total Used : 20 GiB
Total Capacity : 6.0 TiB
```

The Portworx status displays `PX is operational` if your cluster is running as intended.
Verify pxctl cluster provision status
- Find the storage cluster; the status should show as `Online`:

  ```bash
  oc -n portworx get storagecluster
  ```

  ```
  NAME CLUSTER UUID STATUS VERSION AGE
  px-cluster-1c3edc42-4541-48fc-xxxx-xxxxxxxxxxxxx 33a82fe9-d93b-435b-xxxx-xxxxxxxxxxxx Online 2.11.0 10m
  ```

- Find the storage nodes; the statuses should show as `Online`:

  ```bash
  oc -n portworx get storagenodes
  ```

  ```
  NAME ID STATUS VERSION AGE
  username-k8s1-node0 f6d87392-81f4-459a-xxxx-xxxxxxxxxxxxx Online 2.11.0-81faacc 11m
  username-k8s1-node1 788bf810-57c4-4df1-xxxx-xxxxxxxxxxxxx Online 2.11.0-81faacc 11m
  ```

- Verify the Portworx cluster provision status. Enter the following `oc exec` command, specifying the pod name you retrieved in the previous section:

  ```bash
  oc exec px-cluster-1c3edc42-4541-48fc-b173-xxxx-xxxxxxxxxxxxx -n portworx -- /opt/pwx/bin/pxctl cluster provision-status
  ```

  ```
  Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
  NODE NODE STATUS POOL POOL STATUS IO_PRIORITY SIZE AVAILABLE USED PROVISIONED ZONE REGION RACK
  788bf810-57c4-4df1-xxxx-xxxxxxxxxxxx Up 0 ( 96e7ff01-fcff-4715-xxxx-xxxxxxxxxxxx ) Online HIGH 3.0 TiB 3.0 TiB 10 GiB 0 B default default default
  f6d87392-81f4-459a-xxxx-xxxxxxxxx Up 0 ( e06386e7-b769-xxxx-xxxxxxxxxxxxx ) Online HIGH 3.0 TiB 3.0 TiB 10 GiB 0 B default default default
  ```
Create your first PVC
For your apps to use persistent volumes powered by Portworx, you must use a StorageClass that references Portworx as the provisioner. Portworx includes a number of default StorageClasses, which you can reference with PersistentVolumeClaims (PVCs) you create. For a more general overview of how storage works within Kubernetes, refer to the Persistent Volumes section of the Kubernetes documentation.
Perform the following steps to create a PVC:
- Create a PVC referencing the `px-csi-db` default StorageClass and save the file:

  ```yaml
  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: px-check-pvc
  spec:
    storageClassName: px-csi-db
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 2Gi
  ```

- Run the `oc apply` command to create the PVC:

  ```bash
  oc apply -f <your-pvc-name>.yaml
  ```

  ```
  persistentvolumeclaim/px-check-pvc created
  ```
Verify your StorageClass and PVC
- Enter the following `oc get storageclass` command, specifying the name of the StorageClass you created in the steps above:

  ```bash
  oc get storageclass <your-storageclass-name>
  ```

  ```
  NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
  px-csi-db pxd.portworx.com Delete Immediate false 24m
  ```

  `oc` will return details about your StorageClass if it was created correctly. Verify that the configuration details appear as you intended.

- Enter the `oc get pvc` command. If this is the only StorageClass and PVC you've created, you should see only one entry in the output:

  ```bash
  oc get pvc <your-pvc-name>
  ```

  ```
  NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
  px-check-pvc Bound pvc-dce346e8-ff02-4dfb-xxxx-xxxxxxxxxxxxx 2Gi RWO example-storageclass 3m7s
  ```

  `oc` will return details about your PVC if it was created correctly. Verify that the configuration details appear as you intended.
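To go a step further, you can mount the PVC in a test pod to confirm that an application can consume the volume. The pod below is a hypothetical example; the pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: px-check-pod        # placeholder name
spec:
  containers:
  - name: app
    image: busybox          # placeholder image
    command: ["sh", "-c", "echo portworx-test > /data/test && sleep 3600"]
    volumeMounts:
    - name: px-vol
      mountPath: /data
  volumes:
  - name: px-vol
    persistentVolumeClaim:
      claimName: px-check-pvc
```

Once the pod reaches the Running state, the PVC shows as Bound and the write to /data/test confirms the volume is usable.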