Install Portworx on Mirantis Kubernetes Engine
You can install Portworx on a Mirantis Kubernetes Engine (MKE) cluster to enable enterprise-grade, cloud-native storage for your Kubernetes workloads. Portworx supports standard Kubernetes deployments running on MKE and integrates natively with it, providing persistent storage for demanding applications.
Portworx installation on an MKE cluster is managed through Kubernetes manifests generated in Portworx Central. You apply a Portworx Operator manifest and a StorageCluster manifest to your MKE cluster, and the operator automates and orchestrates the installation across all nodes. The operator-based deployment is recommended for ease of management and updates.
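The following is a minimal sketch of that flow, assuming you have already generated the manifests in Portworx Central and saved them locally. The file names and the namespace placeholder are illustrative; use the exact commands that Portworx Central generates alongside your spec.

```shell
# Illustrative only; use the exact commands Portworx Central generates.
# File names and <px-namespace> below are placeholders.

# 1. Deploy the Portworx Operator.
kubectl apply -f px-operator.yaml

# 2. Deploy the StorageCluster resource that describes your configuration.
kubectl apply -f px-storagecluster.yaml

# 3. Watch the operator roll Portworx out across the cluster nodes.
kubectl -n <px-namespace> get pods -o wide -w
```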
Prerequisites
In addition to the System Requirements, ensure that your cluster meets the following requirements before installing Portworx Enterprise:
- Provision virtual machines and use them as nodes to create a Kubernetes cluster managed by MKE. For more information, see Mirantis documentation.
- Allocate a dedicated disk for the internal KVDB.
- Configure the KVDB device on only three nodes, and assign it a unique device name on each of those nodes.
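Before you install, you can confirm on each of the three designated KVDB nodes that the dedicated disk is present, unmounted, and empty. A minimal check, assuming a hypothetical device name of /dev/sdc:

```shell
# Run on each of the three nodes that will host the internal KVDB.
# /dev/sdc is a placeholder; substitute the device you provisioned.

# The disk should appear with no mountpoint.
lsblk /dev/sdc

# blkid exits non-zero when the disk carries no filesystem signature.
blkid /dev/sdc || echo "no filesystem signature found"
```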
The following tasks describe how to install Portworx on an MKE cluster:
- Prepare Your Environment
- Generate Portworx Specification
- Deploy Portworx Operator
- Deploy StorageCluster
- Monitor Portworx Nodes
- Verify Portworx Pod Status
- Verify Portworx Cluster Status
- Verify pxctl Cluster Provision Status
Complete all the tasks to install Portworx.
Prepare Your Environment
Portworx Service Accounts are non-admin accounts that must be granted access to privileged attributes on Kubernetes Pods. This enables Portworx to perform tasks that would otherwise require administrator or cluster-admin permissions.
Portworx needs access to the following privileged attributes:
priv_attributes_allowed_for_service_accounts = ["hostBindMounts", "privileged", "kernelCapabilities", "hostNetwork"]
You must also configure the following Service Accounts:
priv_attributes_service_accounts = ["<pxNamespace>:portworx-operator", "<pxNamespace>:portworx", "<pxNamespace>:autopilot", "<pxNamespace>:px-csi", "<pxNamespace>:portworx-pvc-controller"]
Replace <pxNamespace> with the namespace where Portworx is installed.
You can grant Portworx access to these privileged attributes in one of the following ways:
- When you install a new MKE cluster: Grant Portworx access to use privileged attributes during the MKE cluster installation process. For more information, see Mirantis documentation.
- Modify an existing MKE cluster: Update the configuration of your current MKE cluster to allow Portworx Service Accounts to use the required privileged attributes. For more information, see Mirantis documentation.
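For reference, the following sketch shows one way the change can look when you edit the MKE configuration file directly. The host variable, admin credentials, and the portworx namespace are placeholder assumptions, and the exact API endpoints and section placement can vary by MKE version, so follow the Mirantis documentation for the authoritative procedure.

```shell
# Illustrative sketch; see the Mirantis documentation for the exact procedure
# for your MKE version. MKE_HOST, the admin credentials, and the "portworx"
# namespace are placeholder assumptions.

# 1. Download the current MKE configuration file.
curl -sk -u "admin:$PASSWORD" "https://$MKE_HOST/api/ucp/config-toml" > mke-config.toml

# 2. Edit mke-config.toml and add the privileged-attribute settings, e.g.:
#    priv_attributes_allowed_for_service_accounts = ["hostBindMounts", "privileged", "kernelCapabilities", "hostNetwork"]
#    priv_attributes_service_accounts = ["portworx:portworx-operator", "portworx:portworx", "portworx:autopilot", "portworx:px-csi", "portworx:portworx-pvc-controller"]

# 3. Upload the modified configuration back to MKE.
curl -sk -u "admin:$PASSWORD" --upload-file mke-config.toml "https://$MKE_HOST/api/ucp/config-toml"
```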
Generate Portworx Specification
1. Sign in to the Portworx Central console.
   The system displays the Welcome to Portworx Central! page.
2. In the Portworx Enterprise section, select Generate Cluster Spec.
   The system displays the Generate Spec page.
3. From the Portworx Version dropdown menu, select the Portworx version to install.
4. From the Platform dropdown menu, select one of the following depending on your environment:
   - DAS/SAN
   - Pure FlashArray
   - vSphere

   Note:
   - For DAS/SAN, you must have pre-provisioned disks.
   - For Pure FlashArray or vSphere, you can specify the disks when you generate the Portworx spec. After you apply the Portworx StorageCluster spec, these disks are created in your environment.
5. From the Distribution Name dropdown menu, select None.
6. (Optional) To customize the configuration options and generate a custom specification, click Customize and perform the following steps:

   Note: To continue without customizing the default configuration or generating a custom specification, proceed to Step 7.
   - Basic tab:
     - To use an existing etcd cluster, do the following:
       - Select the Your etcd details option.
       - In the field provided, enter the host name or IP and port number. For example, http://test.com.net:1234. To add another etcd cluster, click the + icon.
         Note: You can add up to three etcd clusters.
       - Select one of the following authentication methods:
         - Disable HTTPS – To use HTTP for etcd communication.
         - Certificate Auth – To use HTTPS with an SSL certificate. For more information, see Secure your etcd communication.
         - Password Auth – To use HTTPS with username and password authentication.
     - To use the internal Portworx-managed key-value store (KVDB), do the following:
       - Select the Built-in option.
       - To enable TLS-encrypted communication among KVDB nodes and between Portworx nodes and the KVDB cluster, select the Enable TLS for internal kvdb checkbox.
       - If your cluster does not already have a cert-manager, select the Deploy Cert-Manager for TLS certificates checkbox.
     - Select Next.
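     If you use your own etcd cluster, you can optionally confirm that each endpoint is reachable from the cluster before continuing. A minimal check, using the example endpoint above:

     ```shell
     # Query the etcd version endpoint; a JSON response confirms reachability.
     curl http://test.com.net:1234/version
     ```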
   - Storage tab:

     Note: The following steps apply only if you selected DAS/SAN as the storage platform. If you are using Pure FlashArray or vSphere, follow the corresponding procedure for your platform. For more information, see Install Portworx with Pure Storage FlashArray or Installation on Non-Air-Gapped vSphere Kubernetes Cluster.
     - To enable Portworx to use all available, unused, and unmounted drives on the node, do the following:
       - Select the Automatically scan disks option.
       - From the Default IO Profile dropdown menu, select Auto. This enables Portworx to automatically choose the best I/O profile based on detected workload patterns.
       - To use unmounted disks even if they contain a partition or filesystem, select the Use unmounted disks even if they have a partition or filesystem on it. Portworx will never use a drive or partition that is mounted checkbox. Portworx never uses a mounted drive or partition.
     - To manually specify the drives on the node for Portworx to use, do the following:
       - Select the Manually specify disks option.
       - In the Drive/Device field, specify the block drive(s) that Portworx uses for data storage. To add another block drive, click the + icon.
       - (Optional) In the Pool Label field, assign a custom label in key:value format to identify and categorize storage pools.
     - (Optional) Select PX-Store Version: By default, the system selects PX-StoreV2 as the datastore. To designate PX-StoreV1 instead, select PX-StoreV1 if your platform is vSphere or Pure FlashArray; for the DAS/SAN platform, clear the PX-StoreV2 checkbox.
     - For PX-StoreV2, in the Metadata Path field, enter a pre-provisioned path for storing the Portworx metadata. The path must be at least 64 GB in size.
     - From the Journal Device dropdown menu, select one of the following:
       - None – To use the default journaling setting.
       - Auto – To automatically allocate journal devices.
       - Custom – To manually enter a journal device path. Enter the path of the journal device in the Journal Device Path field.
     - Skip KVDB device – This checkbox is selected by default and appears only if you chose the Built-in option in the Basic tab. Keep it selected to use the same device for KVDB and storage I/O. This configuration is suitable for test or development clusters, but it is not recommended for production clusters. For production clusters, clear the checkbox and provide a separate device to store internal KVDB data. This separates KVDB I/O from storage I/O and improves performance.
     - KVDB device – Enter the block device path to be used exclusively for KVDB data. This field appears only if you clear the Skip KVDB device checkbox. The KVDB device must be present on at least three nodes in the cluster to ensure high availability.
       Note: To restrict Portworx to run the internal KVDB only on specific nodes, label those nodes with:
       kubectl label nodes node1 node2 node3 px/metadata-node=true
     - Click Next.
   - Network tab:
     - In the Interface(s) section, do the following:
       - Enter the Data Network Interface to be used for data traffic.
       - Enter the Management Network Interface to be used for management traffic.
     - In the Advanced Settings section, do the following:
       - Enter the Starting port for Portworx services.
     - Select Next.
   - Deployment tab:
     - In the Kubernetes Distribution section, under Are you running on either of these?, choose the Kubernetes distribution.
     - In the Component Settings section:
       - Select the Enable Stork checkbox to enable Stork.
       - Select the Enable Monitoring checkbox to enable Prometheus-based monitoring of Portworx components and resources. To configure how Prometheus is deployed and managed in your cluster, choose one of the following:
         - Portworx Managed – To enable Portworx to install and manage Prometheus and the Prometheus Operator automatically. Ensure that no other Prometheus Operator instance is already running on the cluster.
         - User Managed – To manage your own Prometheus stack. You must enter a valid URL of the Prometheus instance in the Prometheus URL field.
       - Select the Enable Autopilot checkbox to enable Portworx Autopilot. For more information on Autopilot, see Expanding your Storage Pool with Autopilot.
       - Select the Enable Telemetry checkbox to enable telemetry in the StorageCluster spec. For more information, see Enable Pure1 integration for upgrades on an MKE cluster.
       - Enter the prefix for the Portworx cluster name in the Cluster Name Prefix field.
       - Select the Secrets Store Type from the dropdown menu to store and manage secure information for features such as CloudSnaps and Encryption.
     - In the Environment Variables section, enter name-value pairs in the respective fields.
     - In the Registry and Image Settings section:
       - Enter the Custom Container Registry Location from which to download the Docker images.
       - Enter the Kubernetes Docker Registry Secret that serves as the authentication to access the custom container registry (see the example after these steps).
       - From the Image Pull Policy dropdown menu, select Default, Always, IfNotPresent, or Never. This policy influences how images are managed on the node and when updates are applied.
     - In the Security Settings section, select the Enable Authorization checkbox to enable Role-Based Access Control (RBAC) and secure access to storage resources in your cluster.
     - Click Finish.
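   If you pull images from a custom container registry, the Kubernetes Docker Registry Secret referenced above must exist in the Portworx namespace. A minimal sketch, in which the secret name, registry URL, credentials, and namespace are all placeholder assumptions:

   ```shell
   # All values shown are placeholders; substitute your registry details.
   kubectl -n <px-namespace> create secret docker-registry px-registry-secret \
     --docker-server=registry.example.com \
     --docker-username=<user> \
     --docker-password=<password>
   ```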
7. On the summary page, enter a name for the specification in the Spec Name field, and tags in the Spec Tags field.
8. Click Download .yaml to download the YAML file with the customized specification, or click Save Spec to save the specification.
9. Click Save & Download to generate the specification.