Install Portworx on Oracle Cloud Infrastructure
You can install Portworx on Oracle Cloud Infrastructure (OCI) to provide enterprise-grade cloud-native storage for your Kubernetes workloads. Portworx supports standard Kubernetes deployments running on OCI and integrates natively with Oracle Kubernetes Engine (OKE), providing persistent storage for demanding applications.
Portworx installation on an OKE cluster is managed through Kubernetes manifests generated in Portworx Central. You apply a Portworx Operator manifest and a StorageCluster manifest to your OKE cluster, and the Operator automates and orchestrates the installation across all nodes. The operator-based deployment is recommended for ease of management and upgrades.
- Portworx Backup is not supported when using Portworx on OKE.
- When using Async DR with Portworx on OKE, you must configure migrations to use service accounts. For more information, see Configure Migrations to use Service Accounts.
Prerequisites
In addition to the System Requirements, ensure that your OKE cluster meets the following requirements before installing Portworx Enterprise:
- An Oracle API signing key and fingerprint.
  Download and store the key with the name oci_api_key.pem, and record the fingerprint of the key for use during this procedure.
- Your Oracle user OCID.
  Record the OCID for use during this procedure.
- Portworx by Pure Storage recommends that you set initial storage nodes. When specified, Portworx ensures that the desired number of storage nodes is created across zones and node pools.
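If you still need to create the API signing key, the following sketch generates one with OpenSSL and computes its fingerprint; the public key must then be uploaded to your user in the OCI console. This is a minimal example, assuming OpenSSL is installed:

```shell
# Sketch: create an OCI API signing key pair with OpenSSL.
openssl genrsa -out oci_api_key.pem 2048
openssl rsa -in oci_api_key.pem -pubout -out oci_api_key_public.pem
# The fingerprint OCI displays is the colon-separated MD5 digest of the
# DER-encoded public key.
openssl rsa -in oci_api_key.pem -pubout -outform DER 2>/dev/null | openssl md5 -c
```

Upload oci_api_key_public.pem under your user's API Keys in the OCI console and confirm that the fingerprint shown there matches the value printed above.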
The following tasks describe how to install Portworx on an OKE cluster:
- Create an Oracle Cloud Credentials Kubernetes Secret
- Generate Portworx Specification
- Deploy Portworx Operator
- Deploy StorageCluster
- Verify Portworx Pod Status
- Verify Portworx Cluster Status
- Verify pxctl Cluster Provision Status
Complete all the tasks to install Portworx.
Create an Oracle Cloud credentials Kubernetes secret
Create a Kubernetes secret named ociapikey in the namespace where you will install Portworx. Use the following command:
kubectl create secret generic ociapikey \
--namespace <namespace> \
--from-file=oci_api_key.pem=oci_api_key.pem \
--from-literal=PX_ORACLE_user_ocid="<ocid>" \
--from-literal=PX_ORACLE_fingerprint="<fingerprint>"
Replace:
- <namespace> with the namespace where you will install Portworx
- <ocid> with your Oracle user OCID
- <fingerprint> with the fingerprint for your Oracle API signing key
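Before creating the secret, you can sanity-check the inputs. The sketch below uses hypothetical placeholder values for the OCID and fingerprint; substitute your own:

```shell
# Optional pre-flight checks; the OCID and fingerprint below are
# hypothetical placeholders - substitute your own values.
OCID="ocid1.user.oc1..aaaaaaaaexample"
FINGERPRINT="12:34:56:78:9a:bc:de:f0:12:34:56:78:9a:bc:de:f0"

test -f oci_api_key.pem && echo "key file present" || echo "missing oci_api_key.pem"
case "$OCID" in
  ocid1.user.*) echo "OCID format ok" ;;
  *)            echo "unexpected OCID format" ;;
esac
echo "$FINGERPRINT" | grep -Eq '^([0-9a-f]{2}:){15}[0-9a-f]{2}$' \
  && echo "fingerprint format ok" || echo "unexpected fingerprint format"
```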
Generate Portworx Specification
1. Sign in to the Portworx Central console.
   The system displays the Welcome to Portworx Central! page.
2. In the Portworx Enterprise section, select Generate Cluster Spec.
   The system displays the Generate Spec page.
3. From the Portworx Version dropdown menu, select the Portworx version to install.
4. From the Platform dropdown menu, select Oracle.
5. From the Distribution Name dropdown menu, select Oracle Kubernetes Engine (OKE).
6. (Optional) To customize the configuration options and generate a custom specification, click Customize and perform the following steps:

   note: To continue without customizing the default configuration or generating a custom specification, proceed to Step 7.

   - Basic tab:
     - To use an existing etcd cluster, do the following:
       1. Select the Your etcd details option.
       2. In the field provided, enter the host name or IP and the port number, for example, http://test.com.net:1234.
          To add another etcd cluster, click the + icon.
          note: You can add up to three etcd clusters.
       3. Select one of the following authentication methods:
          - Disable HTTPS – To use HTTP for etcd communication.
          - Certificate Auth – To use HTTPS with an SSL certificate. For more information, see Secure your etcd communication.
          - Password Auth – To use HTTPS with username and password authentication.
     - To use the internal Portworx-managed key-value store (KVDB), do the following:
       1. Select the Built-in option.
       2. To enable TLS-encrypted communication among KVDB nodes and between Portworx nodes and the KVDB cluster, select the Enable TLS for internal kvdb checkbox.
       3. If your cluster does not already have a cert-manager, select the Deploy Cert-Manager for TLS certificates checkbox.
     - Select Next.
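When pointing Portworx at an existing etcd cluster, it can help to confirm that the endpoint is reachable from your network first. A minimal sketch, assuming curl is available and using the hypothetical example endpoint from the step above:

```shell
# Minimal reachability check for an external etcd endpoint; the host and
# port are the hypothetical example values from the step above.
ETCD_ENDPOINT="http://test.com.net:1234"
if curl -s -o /dev/null --max-time 5 "$ETCD_ENDPOINT/health"; then
  echo "etcd endpoint reachable"
else
  echo "etcd endpoint unreachable"
fi
```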
   - Storage tab:
     - To enable Portworx to provision drives using a specification, do the following:
       1. Select the Create Using a Spec option.
       2. Configure the following fields for the drive:
          - Select number of VPUs – From the dropdown menu, select the number of VPUs for the Oracle block volume.
          - Size (GB) – Specify the size of the drive in gigabytes.
          - Encryption – Select whether to enable encryption. Options may include None or provider-managed keys.
          - Encryption Key – If encryption is enabled, specify the key ID or URI to use.
          - Drive Tags – Add labels in key:value format to organize and identify drives. Useful for policies and workload mapping.
          - Action – Use the trash icon to remove a drive type from the configuration.
       3. To add more cloud storage drive types for Portworx to use, click + Add Drive.
          note: The system automatically selects the minimum number of drives to ensure optimal performance.
       4. (Optional) In the Initial Storage Nodes field, enter the number of storage nodes to create across zones and node pools.
       5. From the Default IO Profile dropdown menu, select Auto.
          This enables Portworx to automatically choose the best I/O profile based on detected workload patterns.
       6. From the Journal Device dropdown menu, select one of the following:
          - None – To use the default journaling setting.
          - Auto – To automatically allocate journal devices.
          - Custom – To manually enter a journal device path. Enter the path of the journal device in the Journal Device Path field.
     - To enable Portworx to use all available, unused, and unmounted drives on the node, do the following:
       1. Select the Consume Unused option.
       2. From the Journal Device dropdown menu, select one of the following:
          - None – To use the default journaling setting.
          - Auto – To automatically allocate journal devices.
          - Custom – To manually enter a journal device path. Enter the path of the journal device in the Journal Device Path field.
       3. To use unmounted disks even if they contain a partition or filesystem, select the Use unmounted disks even if they have a partition or filesystem on it. Portworx will never use a drive or partition that is mounted checkbox.
          Portworx never uses a mounted drive or partition.
     - To enable Portworx to use existing drives on a node, do the following:
       1. Select the Use Existing Drives option.
       2. In the Drive/Device field, specify the block drive(s) that Portworx uses for data storage. To add another block drive, click the + icon.
       3. (Optional) In the Pool Label field, assign a custom label in key:value format to identify and categorize storage pools.
       4. From the Journal Device dropdown menu, select one of the following:
          - None – To use the default journaling setting.
          - Auto – To automatically allocate journal devices.
          - Custom – To manually enter a journal device path. Enter the path of the journal device in the Journal Device Path field.
     - Select Next.
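For the Consume Unused and Use Existing Drives options, it helps to know which block devices on each worker node are actually unmounted and free of filesystems. A sketch, run on a worker node:

```shell
# List block devices with filesystem and mount information; devices with
# an empty FSTYPE and MOUNTPOINT are candidates for Portworx to consume.
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
```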
   - Network tab:
     1. In the Interface(s) section, do the following:
        - Enter the Data Network Interface to be used for data traffic.
        - Enter the Management Network Interface to be used for management traffic.
     2. In the Advanced Settings section, enter the Starting port for Portworx services.
     3. Select Next.
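Before changing the starting port, you can check whether the intended port is already bound on a worker node. A hedged sketch; the value 9001 is an assumption used for illustration, not taken from this page — use whatever starting port you enter in the UI:

```shell
# Check whether the chosen starting port is already bound on this node.
# START_PORT=9001 is an assumed example value.
START_PORT=9001
if ss -ltn 2>/dev/null | awk '{print $4}' | grep -q ":${START_PORT}$"; then
  echo "port ${START_PORT} already in use"
else
  echo "port ${START_PORT} appears free"
fi
```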
   - Deployment tab:
     1. In the Kubernetes Distribution section, under Are you running on either of these?, choose your Kubernetes distribution.
     2. In the Component Settings section:
        - Select the Enable Stork checkbox to enable Stork.
        - Select the Enable Monitoring checkbox to enable Prometheus-based monitoring of Portworx components and resources.
          To configure how Prometheus is deployed and managed in your cluster, choose one of the following:
          - Portworx Managed – To have Portworx install and manage Prometheus and its Operator automatically. Ensure that no other Prometheus Operator instance is already running on the cluster.
          - User Managed – To manage your own Prometheus stack. You must enter a valid URL for the Prometheus instance in the Prometheus URL field.
        - Select the Enable Autopilot checkbox to enable Portworx Autopilot. For more information on Autopilot, see Expanding your Storage Pool with Autopilot.
        - Select the Enable Telemetry checkbox to enable telemetry in the StorageCluster spec. For more information, see Enable Pure1 integration for upgrades on an OKE cluster.
        - Enter the prefix for the Portworx cluster name in the Cluster Name Prefix field.
        - From the Secrets Store Type dropdown menu, select the secrets store that Portworx uses to manage secure information for features such as CloudSnaps and Encryption.
     3. In the Environment Variables section, enter name-value pairs in the respective fields.
     4. In the Registry and Image Settings section:
        - Enter the Custom Container Registry Location from which to download the Docker images.
        - Enter the Kubernetes Docker Registry Secret used to authenticate to the custom container registry.
        - From the Image Pull Policy dropdown menu, select Default, Always, IfNotPresent, or Never. This policy influences how images are managed on the node and when updates are applied.
     5. In the Security Settings section, select the Enable Authorization checkbox to enable Role-Based Access Control (RBAC) and secure access to storage resources in your cluster.
     6. Click Finish.
7. On the summary page, enter a name for the specification in the Spec Name field and, optionally, tags in the Spec Tags field.
8. Click Download .yaml to download the YAML file with the customized specification, or click Save Spec to save the specification.
9. Click Save & Download to generate the specification.