Installation on Red Hat OpenShift Service on AWS (ROSA) Cluster
This topic provides instructions for installing Portworx on a Red Hat OpenShift Service on AWS (ROSA) cluster with the console plugin.
The following tasks describe how to install Portworx on a ROSA cluster:
- Create an AWS user
- Create AWS secure credentials
- Open ports for worker nodes
- Create a monitoring ConfigMap
- Generate the Portworx specification
- Install the Portworx Operator using the OpenShift console
- Deploy Portworx using the OpenShift console
- Verify Portworx cluster status
- Verify Portworx pod status
- Verify Portworx pool status
- Verify pxctl cluster provision status
Complete all the tasks to install Portworx.
Create an AWS user
Create an AWS user with the required permissions for Portworx to work.
1. Sign in to the AWS Management Console.
2. Go to the IAM Console Home page and select Users in the left pane.
3. On the Users page, select Create user in the upper-right corner to create a new user.
4. Specify your user details and select Next.
5. On the Set permissions page, select Attach policies directly, and then select Create policy.
6. In the policy editor, open the JSON tab, paste the following policy, and select Next:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:ModifyVolume",
        "ec2:DetachVolume",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:DeleteTags",
        "ec2:DeleteVolume",
        "ec2:DescribeTags",
        "ec2:DescribeVolumeAttribute",
        "ec2:DescribeVolumesModifications",
        "ec2:DescribeVolumeStatus",
        "ec2:DescribeVolumes",
        "ec2:DescribeInstances",
        "autoscaling:DescribeAutoScalingGroups"
      ],
      "Resource": ["*"]
    }
  ]
}
7. Provide a name for the policy and select Create policy. After you create the policy, it appears in the Permissions policies section.
8. Select your policy in the Permissions policies section and select Next.
9. Review your user details and permissions, and then select Create user.
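If you prefer to script these steps, the same user and policy can be created with the AWS CLI. The following is a minimal sketch; portworx-user, portworx-policy, and px-policy.json are example names, and px-policy.json is assumed to contain the JSON policy shown above:
# Create the user and the policy (example names)
aws iam create-user --user-name portworx-user
aws iam create-policy --policy-name portworx-policy --policy-document file://px-policy.json
# Attach the policy to the user; substitute your AWS account ID
aws iam attach-user-policy --user-name portworx-user \
  --policy-arn arn:aws:iam::<account-id>:policy/portworx-policy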
Create AWS secure credentials
After you create the policy in the previous section, create credentials for secure access to your AWS account.
1. From the Users page, find the user you created in the previous section using the search bar, and then select the user name.
2. On the user information page, select the Security credentials tab under Summary.
3. In the Access keys section, select Create access key.
4. Select Application running on an AWS compute service, and then select Next.
5. Provide a description tag value and select Create access key.
6. Select Download .csv file to save your credentials.
7. Use the values from the CSV file to create an AWS secret in OpenShift:
oc create secret generic -n portworx aws-creds \
--from-literal=aws-key=XXXXXXXXXXXXXX \
--from-literal=aws-secret=XXXXXXXXXXXX
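If the portworx namespace doesn't exist yet, create it before running the command above. You can then confirm that both keys were stored; oc describe lists the key names and sizes without revealing the values:
# Create the target namespace if it doesn't already exist
oc create namespace portworx
# Confirm the secret contains the aws-key and aws-secret entries
oc describe secret aws-creds -n portworx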
Open ports for worker nodes
Add inbound rules so that the AWS EC2 instances in your worker security group allow the required incoming traffic.
1. From the EC2 page of the AWS console, select Security Groups under Network & Security in the left pane.
2. On the Security Groups page, enter your ROSA cluster name in the search bar and press Enter. A list of security groups associated with your cluster appears. Select the link under Security group ID for the worker security group.
3. On the security group page, select Actions, and then select Edit inbound rules.
4. Select Add rule to add each of the following rules:
   - Allow inbound Custom TCP traffic with Protocol: TCP on ports 17001–17022.
   - Allow inbound Custom TCP traffic with Protocol: TCP on port 20048.
   - Allow inbound Custom TCP traffic with Protocol: TCP on port 111.
   - Allow inbound Custom UDP traffic with Protocol: UDP on port 17002.
   - Allow inbound NFS traffic with Protocol: TCP on port 2049.
   Make sure to specify the security group ID of the same worker security group mentioned in step 2.
5. Select Save.
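If you prefer the AWS CLI, the same inbound rules can be added with authorize-security-group-ingress. A sketch, assuming sg-0123456789abcdef0 stands in for your worker security group ID:
SG=sg-0123456789abcdef0   # example worker security group ID
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 17001-17022 --source-group $SG
aws ec2 authorize-security-group-ingress --group-id $SG --protocol udp --port 17002 --source-group $SG
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 20048 --source-group $SG
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 111 --source-group $SG
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 2049 --source-group $SG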
Create a monitoring ConfigMap
Recent OpenShift versions don’t support the Portworx Prometheus deployment. You must enable monitoring for user-defined projects before installing the Portworx Operator. Use this section to configure the OpenShift Prometheus deployment to monitor Portworx metrics.
To integrate OpenShift monitoring and alerting with Portworx, create a cluster-monitoring-config ConfigMap in the openshift-monitoring namespace:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
The enableUserWorkload parameter enables monitoring for user-defined projects in the OpenShift cluster. This creates a prometheus-operated service in the openshift-user-workload-monitoring namespace.
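Save the ConfigMap to a file and apply it. Once applied, OpenShift starts the user-workload monitoring stack, which you can confirm by listing its pods (cluster-monitoring-config.yaml is an example file name):
# Apply the ConfigMap and confirm user-workload monitoring starts
oc apply -f cluster-monitoring-config.yaml
# Prometheus and related pods should appear shortly
oc get pods -n openshift-user-workload-monitoring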
Generate the Portworx specification
1. Sign in to the Portworx Central console. The system displays the Welcome to Portworx Central! page.
2. From the Portworx Version drop-down menu, select the Portworx version to install.
3. From the Platform drop-down menu, select AWS.
4. From the Distribution Name drop-down menu, select Red Hat OpenShift Service on AWS (ROSA).
5. (Optional) To customize configuration options and generate a custom specification, select Customize, and then perform the following steps:
   - Basic tab:
     - To use an existing etcd cluster:
       1. Select Your etcd details.
       2. Enter the host name or IP and port number. For example, http://test.com.net:1234.
       3. Select one of the following authentication methods:
          - Disable HTTPS — Use HTTP for etcd communication.
          - Certificate Auth — Use HTTPS with an SSL certificate. For more information, see Secure your etcd communication.
          - Password Auth — Use HTTPS with user name and password authentication.
     - To use an internal Portworx-managed key-value store (kvdb):
       1. Select the Built-in option.
          Note: To restrict Portworx to run the internal KVDB only on specific nodes, label those nodes with:
          kubectl label nodes node1 node2 node3 px/metadata-node=true
       2. To enable TLS-encrypted communication among kvdb nodes and between Portworx nodes and the kvdb cluster, select Enable TLS for internal kvdb.
       3. If your cluster doesn't already have a cert-manager, select Deploy Cert-Manager for TLS certificates.
     - Select Next.
   - Storage tab. Note: Don't add volumes of different types when configuring storage devices. For example, don't combine GP2 and GP3, or GP3 and IO1, volumes; mixing types can cause performance issues and errors.
     - To enable Portworx to provision and manage drives automatically:
       1. Select Create Using a Spec.
       2. The selection between PX-StoreV2 and PX-StoreV1 is automatic. The default datastore is determined by a preflight check that runs across the cluster to assess whether it can deploy Portworx with the PX-StoreV2 datastore. If the preflight check passes for all nodes, PX-StoreV2 is selected as the default.
       3. From the Add Drive Type drop-down menu, select the EBS volume type to use for data storage. For PX-StoreV2, four drives are recommended for optimal performance. When specifying EBS volume types, provide:
          - Size (GB)
          - If applicable, the IOPS required from the EBS volume
          - If applicable, the Throughput for the EBS volume
          - From Encryption, select:
            - None — Create an unencrypted EBS volume.
            - BYOB — Provide an Encryption Key. For more information, see AWS KMS.
          - Drive Tags — Enter key-value pairs to apply to the disks created by Portworx. For more information, see How to assign custom labels to device pools.
          You can add multiple drive types by selecting Add Drive Type, remove a drive type by selecting Remove, or add the same drive type with different configurations by selecting + Add Drive.
       4. (Optional) Enter the Max storage nodes per availability zone.
       5. From the Default IO Profile drop-down menu, select Auto. Portworx automatically chooses the optimal I/O profile based on detected workload patterns.
       6. From the Journal Device drop-down menu, select:
          - None — Use the default journaling setting.
          - Auto — Automatically allocate journal devices.
          - Custom — Manually define a journal device:
            1. Select the EBS Volume Type.
            2. Specify the IOPS required from the EBS volume and the Throughput for the EBS volume.
            3. From Encryption, select:
               - None — Create an unencrypted EBS volume.
               - BYOB — Provide an Encryption Key. For more information, see AWS KMS.
            4. In Drive Tags, enter key-value pairs to apply to the disks created by Portworx. For more information, see How to assign custom labels to device pools.
       7. Select Enable Workload Identity to deploy Portworx on an EKS-D cluster, and enter the AWS Workload Identity IAM Role ARN to allow Portworx to access AWS resources. Ensure that you created the IAM role with the correct AWS Workload Identity Key.
     - To enable Portworx to use all available, unused, and unmounted drives on the node:
       1. Select Consume Unused.
       2. Select PX-StoreV2 to enable the PX-StoreV2 datastore.
       3. If you select PX-StoreV2, in Metadata Path, enter a preprovisioned path for Portworx metadata (at least 64 GiB).
       4. In Drive/Device, specify the block drives that Portworx uses for data storage.
       5. From Journal Device, select:
          - None — Use the default journaling setting.
          - Auto — Automatically allocate journal devices.
          - Custom — Manually enter a journal device path in Journal Device Path.
       6. Select Use unmounted disks even if they have a partition or filesystem on it to let Portworx consume unmounted disks even if they contain a partition or file system. Portworx never uses a drive or partition that is mounted.
     - To manually specify the drives on the node for Portworx to use:
       1. Select Use Existing Drives.
       2. Select PX-StoreV2 to enable the PX-StoreV2 datastore.
       3. If you select PX-StoreV2, in Metadata Path, enter a preprovisioned path for Portworx metadata (at least 64 GiB).
       4. In Drive/Device, specify the block drives that Portworx uses for data storage.
       5. In Pool Label, assign a custom label in key:value format to identify and categorize storage pools.
       6. From Journal Device, select:
          - None — Use the default journaling setting.
          - Auto — Automatically allocate journal devices.
          - Custom — Manually enter a journal device path in Journal Device Path.
     - Select Next.
   - Network tab:
     1. In Interface(s):
        - Enter the Data Network Interface to use for data traffic.
        - Enter the Management Network Interface to use for management traffic.
     2. In Advanced Settings, enter the Starting port for Portworx services.
     3. Select Next.
   - Customize tab:
     1. Choose the Kubernetes platform in Customize.
     2. In Environment Variables, enter name–value pairs. For a disaggregated installation, set node labels and set the ENABLE_ASG_STORAGE_PARTITIONING environment variable to true. For more information, see Deployment planning.
     3. In Registry and Image Settings:
        - Enter the Custom Container Registry Location from which to download the container images.
        - Enter the Kubernetes Docker Registry Secret that authenticates to the custom container registry.
        - From Image Pull Policy, select Default, Always, IfNotPresent, or Never. This policy influences how images are managed on the node and when updates are applied.
     4. In Security Settings, select Enable Authorization to enable role-based access control (RBAC) and secure access to storage resources in your cluster.
     5. In Advanced Settings:
        - Select Enable Stork.
        - Select Enable CSI.
        - Select Enable Monitoring to enable monitoring for user-defined projects before installing the Portworx Operator.
        - Select Enable Telemetry to enable telemetry in the StorageCluster spec. For more information, see Enable Pure1 integration for upgrades on ROSA clusters.
        - Enter the prefix for the Portworx cluster name in Cluster Name Prefix.
        - From Secrets Store Type, select the store used to manage secure information for features such as CloudSnaps and encryption.
     6. Select Finish.
     7. On the summary page, enter a name in Spec Name and tags in Spec Tags.
     8. Select Download .yaml to download the YAML file with the customized specification, or select Save Spec to save the specification.
6. Select Save & Download to generate the specification.
Install the Portworx Operator using the OpenShift console
1. Sign in to the OpenShift console by following the quick access instructions on the Accessing your cluster quickly page in the Red Hat OpenShift Service on AWS documentation.
2. From the left navigation pane, select OperatorHub. The OperatorHub page appears.
3. Search for Portworx and select Portworx Enterprise. The Portworx Enterprise page appears.
4. Select Install. The Install Operator page appears.
5. In Installation mode, select A specific namespace on the cluster.
6. From Installed Namespace, choose Create Project. The Create Project window appears.
7. Enter portworx and select Create to create a namespace named portworx.
8. In Console plugin, select Enable to manage your Portworx cluster using the Portworx dashboard within the OpenShift console.
   If the Portworx Operator is installed but the OpenShift console plugin is not enabled, or was previously disabled, you can re-enable it by running the following command:
   oc patch console.operator cluster --type=json -p='[{"op":"add","path":"/spec/plugins/-","value":"portworx"}]'
9. Select Install to deploy the Portworx Operator in the portworx namespace. After the Operator is installed, the Create StorageCluster option appears.
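To confirm the console plugin is enabled, you can list the plugins configured on the console operator; portworx should appear in the output:
# Show the enabled console plugins
oc get console.operator cluster -o jsonpath='{.spec.plugins}'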
Deploy Portworx using the OpenShift console
1. Click Create StorageCluster. The system displays the Create StorageCluster page.
2. Select YAML view. Ensure the StorageCluster spec includes the following environment variables, which reference the aws-creds secret you created earlier (add them if they are not present):
   env:
   - name: AWS_ACCESS_KEY_ID
     valueFrom:
       secretKeyRef:
         key: aws-key
         name: aws-creds
   - name: AWS_SECRET_ACCESS_KEY
     valueFrom:
       secretKeyRef:
         key: aws-secret
         name: aws-creds
3. Click Create. The system deploys Portworx and displays the Portworx instance in the Storage Cluster tab of the Installed Operators page.
Verify Portworx cluster status
- After you create the StorageCluster, a Portworx option appears in the left pane of the OpenShift UI. Select the Cluster sub-tab to view the Portworx dashboard.
- If Portworx is installed successfully, the cluster status is Running. The dashboard also shows details about telemetry, monitoring, and the versions of Portworx and its components installed in your cluster.
- In Node Summary, verify that all Portworx nodes show Online.
Alternatively, you can verify Portworx cluster status using the command line. For more information, see Verify Portworx cluster status.
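For example, you can run pxctl status from any Portworx pod (replace <px-pod> with a pod name from the portworx namespace):
# Summarized cluster health, node list, and capacity
oc exec <px-pod> -n portworx -- /opt/pwx/bin/pxctl status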
Verify Portworx pod status
- From the left navigation pane of the OpenShift console, select Workloads, and then select Pods.
- In the Project drop-down, choose portworx.
- The list of pods in the portworx namespace appears.
- If Portworx is installed successfully, all pods show Running.
Alternatively, you can verify Portworx pod status using the command line. For more information, see Verify Portworx pod status.
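For example (assuming the portworx namespace used throughout this guide):
# All Portworx and Operator pods should be Running and Ready
oc get pods -n portworx -o wide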
Verify Portworx pool status
Run the following command to view the Portworx drive configuration for your pod:
oc exec <px-pod> -n <px-namespace> -- /opt/pwx/bin/pxctl service pool show
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
PX drive configuration:
Pool ID: 0
Type: PX-StoreV2
UUID: 58ab2e3f-a22e-xxxx-xxxx-xxxxxxxxxxxx
IO Priority: HIGH
Labels: kubernetes.io/arch=amd64,kubernetes.io/hostname=username-vms-silver-sight-3,kubernetes.io/os=linux,medium=STORAGE_MEDIUM_SSD,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,iopriority=HIGH
Size: 25 GiB
Status: Online
Has metadata: No
Balanced: Yes
Drives:
0: /dev/sda, Total size 32 GiB, Online
Cache Drives:
No Cache drives found in this pool
Metadata Device:
1: /dev/sdd, STORAGE_MEDIUM_SSD
The output Type: PX-StoreV2 indicates that the node uses the PX-StoreV2 datastore.
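If you need a pod name for the command above, you can list the Portworx node pods. The name=portworx label is the selector commonly used for these pods; treat it as an assumption and verify the labels in your cluster:
# Print Portworx pod names suitable for use as <px-pod>
oc get pods -n <px-namespace> -l name=portworx -o name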
Verify pxctl cluster provision status
1. Access the Portworx CLI.
2. Run the following command to find the storage cluster. The status must be Online:
oc -n <px-namespace> get storagecluster
NAME CLUSTER UUID STATUS VERSION AGE
px-cluster-378d7ae1-f4ca-xxxx-xxxx-xxxxxxxxxxxx   482b18b1-2a8b-xxxx-xxxx-xxxxxxxxxxxx   Online   3.2.0-dev-rc1   5h6m
3. Run the following command to find the storage nodes. The nodes must be Online:
oc -n <px-namespace> get storagenodes
NAME ID STATUS VERSION AGE
username-vms-silver-sight-0 1e89102f-0510-xxxx-xxxx-xxxxxxxxxxxx Online 3.2.0-28944c8 3h25m
username-vms-silver-sight-2 0c99e1f2-9d49-xxxx-xxxx-xxxxxxxxxxxx Online 3.2.0-28944c8 3h25m
username-vms-silver-sight-3   24508311-e2fe-xxxx-xxxx-xxxxxxxxxxxx   Online   3.2.0-28944c8   3h25m
4. Verify the Portworx cluster provision status by running the following command. Specify the pod name you retrieved in Verify Portworx pod status:
oc exec <px-pod> -n <px-namespace> -- /opt/pwx/bin/pxctl cluster provision-status
NODE NODE STATUS POOL POOL STATUS IO_PRIORITY SIZE AVAILABLE USED PROVISIONED ZONE REGION RACK
0c99e1f2-9d49-xxxx-xxxx-xxxxxxxxxxxx Up 0 ( 8ec9e6aa-7726-xxxx-xxxx-xxxxxxxxxxxx ) Online HIGH 32 GiB 32 GiB 33 MiB 0 B default default default
1e89102f-0510-xxxx-xxxx-xxxxxxxxxxxx Up 0 ( 06fcc73a-7e2f-xxxx-xxxx-xxxxxxxxxxxx ) Online HIGH 32 GiB 32 GiB 33 MiB 0 B default default default
24508311-e2fe-xxxx-xxxx-xxxxxxxxxxxx Up 0 ( 58ab2e3f-a22e-xxxx-xxxx-xxxxxxxxxxxx ) Online HIGH 32 GiB 32 GiB 33 MiB 0 B default default default
What to do next
Create a PVC. For more information, see Create your first PVC.
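For a quick start, a minimal PVC sketch follows. px-csi-db is assumed here to be one of the default Portworx CSI StorageClasses; confirm the StorageClasses available in your cluster with oc get sc before using it:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: px-example-pvc            # example name
spec:
  storageClassName: px-csi-db     # assumed Portworx CSI StorageClass; verify with: oc get sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi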