Installation on an Azure Red Hat OpenShift Cluster
This topic provides instructions for installing Portworx on an Azure Red Hat OpenShift cluster using the OpenShift Container Platform web console.
The following tasks describe how to install Portworx on an Azure Red Hat OpenShift cluster:
- Prepare your Azure Red Hat OpenShift Cluster
- Find the Azure Red Hat OpenShift Service Principal
- Create the px-azure secret with Service Principal credentials
- Create a monitoring ConfigMap
- Install Portworx Operator using OpenShift Console
- Install Portworx
- Verify Portworx Cluster Status
- Verify Portworx Pod Status
Complete all the tasks to install Portworx.
Prepare your Azure Red Hat OpenShift Cluster
- Create an Azure Red Hat OpenShift cluster. For more information, see Microsoft documentation.
- Connect to the Azure Red Hat OpenShift cluster. For more information, see Microsoft documentation.
Find the Azure Red Hat OpenShift Service Principal
When you deploy Portworx on an Azure Red Hat OpenShift cluster, the virtual machines are created in a resource group with a Deny Assignment role that blocks every service principal except the one created for that resource group. In this task, you identify the service principal that has access and pass its credentials (Azure Client ID, Azure Client Secret, and Tenant ID) to Portworx through the px-azure secret, which Portworx fetches to authenticate with Azure.
Perform the following steps from your Azure Web UI:
- Select Virtual Machines from the top navigation menu.
- From the Virtual machines page, select the Resource Group associated with your cluster.
- From the left panel on the Resource Group page, select Access control (IAM).
- On the Access control (IAM) subpage of your resource group, select Deny assignments from the toolbar in the center of the page, then select the link under the Name column (this will likely be an autogenerated string of letters and numbers).
This page shows that all principals are denied access, except for your resource group.
- Select your resource group's name.
- From the application page, copy and save the following values:
- Name
- Application ID
- Object ID
Use these values to create the px-azure secret.
- From the home page, select All services and search for Microsoft Entra ID.
- From the Microsoft Entra ID page, select App registrations on the left pane.
- In the search bar in the center of the page, paste the application name you saved in the previous step and press Enter. Select the application link that appears in the results to open the next page.
- From your application's page, select Certificates & secrets under Manage in the left pane.
- From the Certificates & secrets page, select + New client secret to create a new secret.
- On the Add a client secret page, provide the description and expiry date of your secret and click Add.
- You can see the newly created secret listed on the Client secret subpage. Copy and save the following values of your newly created secret:
- Value
- Secret ID
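If you prefer the command line, you can cross-check the same identifiers with the Azure CLI. This is a sketch under the assumption that you have an active az login session; `<app-name>` is a placeholder for the application Name you saved above.

```shell
# Tenant ID (used later as AZURE_TENANT_ID); assumes an active `az login` session.
az account show --query tenantId -o tsv

# Look up the service principal by the Name saved from the deny-assignment page.
# appId corresponds to the Application ID, id to the Object ID.
az ad sp list --display-name "<app-name>" --query "[].{appId:appId, objectId:id}" -o table
```

Note that the client secret Value itself cannot be retrieved after creation; it is shown only once in the portal.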
Create the px-azure Secret with Service Principal credentials
Create a secret to give Portworx access to Azure APIs.
- Create the portworx namespace, if it does not exist:
oc create namespace portworx
- Create the secret called px-azure by updating the following fields with the values from the service principal you identified in the previous section:
oc create secret generic -n portworx px-azure \
--from-literal=AZURE_TENANT_ID=<tenant> \
--from-literal=AZURE_CLIENT_ID=<appId> \
--from-literal=AZURE_CLIENT_SECRET=<value>
secret/px-azure created
- AZURE_TENANT_ID: Run the az login command to get this value.
- AZURE_CLIENT_ID: Provide the Application ID associated with your cluster's resource group, which you saved in the previous section.
- AZURE_CLIENT_SECRET: Provide the Value of the client secret you created in the previous section.
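After creating the secret, it is worth confirming that all three keys landed in the portworx namespace before proceeding. A quick check with standard oc commands:

```shell
# List the keys stored in the px-azure secret; expect AZURE_TENANT_ID,
# AZURE_CLIENT_ID, and AZURE_CLIENT_SECRET.
oc get secret px-azure -n portworx -o jsonpath='{.data}'

# Spot-check one value by base64-decoding it; the output should match
# the tenant ID you supplied.
oc get secret px-azure -n portworx -o jsonpath='{.data.AZURE_TENANT_ID}' | base64 -d
```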
Create a monitoring ConfigMap
Enable monitoring for user-defined projects before installing the Portworx Operator. Use the instructions in this section to configure the OpenShift Prometheus deployment to monitor Portworx metrics.
To integrate OpenShift’s monitoring and alerting system with Portworx, create a cluster-monitoring-config ConfigMap in the openshift-monitoring namespace:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
The enableUserWorkload parameter enables monitoring for user-defined projects in the OpenShift cluster. This creates a prometheus-operated service in the openshift-user-workload-monitoring namespace.
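To apply the ConfigMap above and confirm that user-workload monitoring comes up, you can save it to a file and use standard oc commands (the filename is arbitrary):

```shell
# Apply the ConfigMap saved as cluster-monitoring-config.yaml.
oc apply -f cluster-monitoring-config.yaml

# Within a few minutes, monitoring pods (including prometheus-user-workload)
# should appear in this namespace.
oc get pods -n openshift-user-workload-monitoring
```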
Install Portworx Operator using OpenShift Console
- Sign in to the OpenShift Container Platform web console.
- From the left navigation pane, select OperatorHub.
  The system displays the OperatorHub page.
- Search for Portworx and select Portworx Enterprise.
  The system displays the Portworx Enterprise page.
- Click Install.
  The system initiates the Portworx Operator installation and displays the Install Operator page.
- In the Installation mode section, select A specific namespace on the cluster.
- From the Installed Namespace dropdown, choose Create Project.
  The system displays the Create Project window.
- Provide the name portworx and click Create to create a namespace called portworx.
- In the Console plugin section, select Enable to manage your Portworx cluster using the Portworx dashboard within the OpenShift console.
  Note: If the Portworx Operator is installed but the OpenShift Console plugin is not enabled, or was previously disabled, you can re-enable it by running the following command:
  oc patch console.operator cluster --type=json -p='[{"op":"add","path":"/spec/plugins/-","value":"portworx"}]'
- Click Install to deploy Portworx Operator in the portworx namespace.

After you successfully install Portworx Operator, the system displays the Create StorageCluster option.
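You can also confirm the Operator installation from the command line. The label selector below is an assumption based on common Portworx Operator deployments; adjust it if your pod labels differ.

```shell
# The Operator's ClusterServiceVersion should report phase Succeeded.
oc get csv -n portworx

# The Operator pod should be Running in the portworx namespace
# (label assumed; `oc get pods -n portworx` works without it).
oc get pods -n portworx -l name=portworx-operator
```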
Install Portworx
To install Portworx, first generate a cluster specification in Portworx Central, and then apply it through the OpenShift console:
Install Portworx using Specification
- Sign in to the Portworx Central console.
  The system displays the Welcome to Portworx Central! page.
- In the Portworx Enterprise section, select Generate Cluster Spec.
  The system displays the Generate Spec page.
- From the Portworx Version dropdown menu, select the Portworx version to install.
- From the Platform dropdown menu, select Azure.
- From the Distribution Name dropdown menu, select Azure Red Hat OpenShift (ARO).
- (Optional) To customize the configuration options and generate a custom specification, click Customize and perform the following steps:
  Note: To continue without customizing the default configuration or generating a custom specification, skip ahead and click Save & Download.
- Basic tab:
- To use an existing etcd cluster, do the following:
- Select the Your etcd details option.
- In the field provided, enter the host name or IP and port number.
For example, http://test.com.net:1234.
- Select one of the following authentication methods:
- Disable HTTPS – To use HTTP for etcd communication.
- Certificate Auth – To use HTTPS with an SSL certificate.
For more information, see Secure your etcd communication.
- Password Auth – To use HTTPS with username and password authentication.
- To use an internal Portworx-managed key-value store (kvdb), do the following:
- Select the Built-in option.
- To enable TLS encrypted communication among KVDB nodes and between Portworx nodes and the KVDB cluster, select the Enable TLS for internal kvdb checkbox.
- If your cluster does not already have a cert-manager, select the Deploy Cert-Manager for TLS certificates checkbox.
- Select Next.
- Storage tab:
- To specify the cloud drives that Portworx provisions and uses, do the following:
- Select the Create Using a Spec option.
- Select one of the following:
- PX-StoreV1 – To designate PX-Store V1 as the datastore.
- PX-StoreV2 – To designate PX-Store V2 as the datastore.
- To add one or more cloud storage drive types for Portworx to use, click + Add Drive and select one of the following types of drives:
- Standard HDD
- Standard SSD
- Premium SSD
- Premium SSD v2
- Ultra disk
Note:
- To use an Ultra disk, the Azure instance must have the Ultra SSD feature enabled.
- The system automatically selects the minimum number of drives to ensure optimal performance.
- After selecting a drive type, configure the following fields for the drive:
- Size (GB) - Specify the size of each drive in gigabytes.
- IOPS (only for Premium SSD v2 and Ultra disks) - Enter the number of input/output operations per second to provision.
- Throughput (only for Premium SSD v2 and Ultra disks) - Define the maximum data transfer rate (in MB/s).
- Encryption - Select whether to enable encryption. Options may include None or provider-managed keys.
- Encryption Key - If encryption is enabled, specify the key ID or URI to use.
- Drive Tags - Add labels in key:value format to organize and identify drives. Useful for policies and workload mapping.
- Action - Use the trash icon to remove a drive type from the configuration.
- Max storage nodes per availability zone (Optional): Enter the maximum number of storage nodes that can exist within a single availability zone (failure domain) in your cluster.
- From the Default IO Profile dropdown menu, select Auto.
This enables Portworx to automatically choose the best I/O profile based on detected workload patterns.
- From the Journal Device dropdown menu, select one of the following:
- None – To use the default journaling setting.
- Auto – To automatically allocate journal devices.
- Custom – To manually enter a journal device path.
Enter the path of the journal device in the Journal Device Path field.
- Managed Identity - Select this checkbox to use Azure’s managed identity feature for authentication, instead of static credentials.
- To enable Portworx to use all available, unused, and unmounted drives on the node, do the following:
- Select the Consume Unused option.
- From the Journal Device dropdown menu, select one of the following:
- None – To use the default journaling setting.
- Auto – To automatically allocate journal devices.
- Custom – To manually enter a journal device path.
Enter the path of the journal device in the Journal Device Path field.
- Select the PX-StoreV2 checkbox to enable the PX-StoreV2 datastore.
- If you select the PX-StoreV2 checkbox, in the Metadata Path field, enter a pre-provisioned path for storing the Portworx metadata.
The path must be at least 64 GB in size.
- Select the Use unmounted disks even if they have a partition or filesystem on it. Portworx will never use a drive or partition that is mounted checkbox to use unmounted disks, even if they contain a partition or filesystem. Portworx does not use any mounted drive or partition.
- To enable Portworx to use existing drives on a node, do the following:
- Select the Use Existing Drives option.
- In the Drive/Device field, specify the block drive(s) that Portworx uses for data storage.
- In the Pool Label field, assign a custom label in key:value format to identify and categorize storage pools.
- Select the PX-StoreV2 checkbox to enable the PX-StoreV2 datastore.
- If you select the PX-StoreV2 checkbox, in the Metadata Path field, enter a pre-provisioned path for storing the Portworx metadata.
The path must be at least 64 GB in size.
- From the Journal Device dropdown menu, select one of the following:
- None – To use the default journaling setting.
- Auto – To automatically allocate journal devices.
- Custom – To manually enter a journal device path.
Enter the path of the journal device in the Journal Device Path field.
- Select Next.
- Network tab:
- In the Interface(s) section, do the following:
- Enter the Data Network Interface to be used for data traffic.
- Enter the Management Network Interface to be used for management traffic.
- In the Advanced Settings section, do the following:
- Enter the Starting port for Portworx services.
- Select Next.
- Customize tab:
- Choose the Kubernetes platform in the Customize section.
- In the Environment Variables section, enter name-value pairs in the respective fields.
Note: For deploying Portworx on an Azure Sovereign cloud, specify the value of the AZURE_ENVIRONMENT variable in the Name and Value fields.
- In the Registry and Image Settings section:
- Enter the Custom Container Registry Location to download the Docker images.
- Enter the Kubernetes Docker Registry Secret that serves as the authentication to access the custom container registry.
- From the Image Pull Policy dropdown menu, select Default, Always, IfNotPresent, or Never.
This policy influences how images are managed on the node and when updates are applied.
- In the Security Settings section, select the Enable Authorization checkbox to enable Role-Based Access Control (RBAC) and secure access to storage resources in your cluster.
- In the Advanced Settings section:
- Select the Enable Stork checkbox to enable Stork.
- Select the Enable CSI checkbox to enable CSI.
- Select the Enable Monitoring checkbox to enable monitoring for user-defined projects before installing Portworx Operator.
- Select the Enable Telemetry checkbox to enable telemetry in the StorageCluster spec.
For more information, see Enable Pure1 integration for upgrades on an Azure Red Hat OpenShift cluster.
- Enter the prefix for the Portworx cluster name in the Cluster Name Prefix field.
- Select the Secrets Store Type from the dropdown menu to store and manage secure information for features such as CloudSnaps and Encryption.
- Click Finish.
- In the summary page, enter a name for the specification in the Spec Name field, and tags in the Spec Tags field.
- Click Download .yaml to download the YAML file with the customized specification, or click Save Spec to save the specification.
- Click Save & Download to generate the specification.
- Use the generated specification to install Portworx in your cluster.
Install Portworx using OpenShift Console
- Click Create StorageCluster.
  The system displays the Create StorageCluster page.
- Select YAML view.
- Copy and paste the specification that you generated in the Install Portworx using Specification section into the text editor.
- Click Create.
  The system deploys Portworx and displays the Portworx instance in the Storage Cluster tab of the Installed Operators page.
  Note: For clusters with PX-StoreV2 datastores, after you deploy Portworx, the Portworx Operator performs a pre-flight check across the cluster, and the check must pass on each node. This check determines whether each node in the cluster is compatible with the PX-StoreV2 datastore. If each node meets the following hardware and software requirements, PX-StoreV2 is automatically set as the default datastore during Portworx installation.
- Hardware:
- CPU: A minimum of 8 cores CPU per node.
- Drive type: SSD/NVMe drive with more than 8 GB of memory per node.
- Metadata device: A minimum of 64 GB system metadata device on each node.
- Software:
- Linux kernel version: 4.20 or later, with the following RHEL packages installed: device-mapper, mdadm, lvm2, device-mapper-persistent-data, augeas.
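The CPU, kernel, and package requirements above can be spot-checked locally on a node before installation. This is a minimal sketch, not the Operator's actual pre-flight check; it assumes rpm is available on RHEL-based nodes.

```shell
#!/bin/sh
# Minimal local sketch of the PX-StoreV2 pre-flight requirements.

# CPU: at least 8 cores per node.
cores=$(nproc)
[ "$cores" -ge 8 ] && echo "OK: $cores CPU cores" || echo "FAIL: only $cores CPU cores (need >= 8)"

# Kernel: 4.20 or later.
kernel=$(uname -r)
major=${kernel%%.*}
rest=${kernel#*.}
minor=${rest%%[!0-9]*}
if [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 20 ]; }; then
  echo "OK: kernel $kernel"
else
  echo "FAIL: kernel $kernel is older than 4.20"
fi

# Required RHEL packages (rpm assumed present; adapt on other distros).
for pkg in device-mapper mdadm lvm2 device-mapper-persistent-data augeas; do
  rpm -q "$pkg" >/dev/null 2>&1 && echo "OK: $pkg" || echo "MISSING: $pkg"
done
```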
Verify Portworx Cluster Status
After you install Portworx Operator and create the StorageCluster, you can see the Portworx option in the left pane of the OpenShift UI.
- Click the Cluster sub-tab to view the Portworx dashboard. If Portworx is installed correctly, the status will be displayed as Running. You can also see the information about the status of Telemetry, Monitoring, and the version of Portworx and its components installed in your cluster.
- Navigate to the Node Summary section. If your cluster is running as intended, the status of all Portworx nodes displays Online.
Verify Portworx Pod Status
- From the left pane of the OpenShift UI, click Pods under the Workload option.
- To check the status of all pods in the portworx namespace, select portworx from the Project drop-down. If Portworx is installed correctly, all pods are in Running status.
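The same verification can be done from the command line. The pxctl path below is the usual location inside Portworx pods, but treat it as an assumption for your Portworx version.

```shell
# All Portworx pods in the namespace should be Running.
oc get pods -n portworx -o wide

# Query cluster status from inside one Portworx node pod.
PX_POD=$(oc get pods -l name=portworx -n portworx -o jsonpath='{.items[0].metadata.name}')
oc exec "$PX_POD" -n portworx -- /opt/pwx/bin/pxctl status
```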
What to do next
Create a PVC. For more information, see Create your first PVC.