Version: 3.2

Install Portworx with Pure Storage FlashArray enabled with secure multi-tenancy

This page guides you through installing Portworx with PX-StoreV2 using a FlashArray enabled with secure multi-tenancy as your storage provider.

Prerequisites

  • Have an OpenShift cluster with FlashArray that meets the minimum requirements for Portworx, along with the following additional requirements:
    • Linux kernel version: 4.20 or newer (minimum); 5.0 or newer recommended
    • Required packages for RHEL: device-mapper mdadm lvm2 device-mapper-persistent-data augeas
      note

      During installation, Portworx will automatically try to pull the required packages from distribution-specific repositories. This is a mandatory requirement, and installation will fail if it is not met.

  • An SD/NVMe drive with more than 8 GB of capacity per node.
  • A minimum 64 GB system metadata device on each node.
  • Have a Pure Storage FlashArray with Purity version 6.6.11 or newer.
  • Use the FC, iSCSI, or NVMe/RoCE protocol.
  • Create a Pure secret px-pure-secret under the same namespace as Storage Cluster before installing Portworx.
  • Enable CSI for Portworx.
  • Install the latest Linux multipath software package for your operating system that includes these fixes. This package must also include kpartx.
  • Have the latest Filesystem utilities/drivers.
  • For Red Hat only, ensure that the second action, CAPACITY_DATA_HAS_CHANGED, is uncommented in the 90-scsi-ua.rules file and that you have restarted the udev service.
  • Have the latest FC initiator software for your operating system (Optional; required for FC connectivity).
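The kernel and package prerequisites above can be spot-checked on each node before installation. The following is a minimal sketch, assuming a RHEL-like node; the `version_ge` helper and the tool list are illustrative, not part of Portworx:

```shell
#!/bin/sh
# Spot-check node prerequisites (sketch; tool names follow the RHEL package list above).

version_ge() {
    # True if $1 >= $2 when compared as dotted version numbers.
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

kernel="$(uname -r | cut -d- -f1)"
if version_ge "$kernel" "4.20"; then
    echo "kernel $kernel: OK"
else
    echo "kernel $kernel: too old (need 4.20+)"
fi

# Userspace tools that the required packages provide.
for tool in multipath kpartx mdadm lvm; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```

Run this on each storage node; any `missing:` or `too old` line indicates a prerequisite that still needs attention.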

Configure your physical environment

Before you install Portworx, ensure that your physical network is configured appropriately and that you meet the prerequisites. You must provide Portworx with your FlashArray configuration details during installation.

  • Each FlashArray management IP address can be accessed by each node.
  • Your cluster contains an up-and-running FlashArray with an existing dataplane connectivity layout (iSCSI, Fibre Channel).
  • If you're using iSCSI, the storage node iSCSI initiators are on the same VLAN as the FlashArray iSCSI target ports.
  • If you are using multiple network interface cards (NICs) to connect to an iSCSI host, then all of them must be accessible from the FlashArray management IP address.
  • If you're using Fibre Channel, the storage node Fibre Channel WWNs have been correctly zoned to the FlashArray Fibre Channel WWN ports.
  • You have an API token for a user on your FlashArray with at least storage_admin permissions. Check the documentation on your device for information on generating an API token.

(Optional) Set iSCSI interfaces on FlashArray

If you are using the iSCSI protocol, you can set its interfaces on FlashArray using the following steps:

  1. Run the following command to get the available iSCSI interface within your environment:
    iscsiadm -m iface
    You can use the output in the next step.
  2. Run the following command to specify which network interfaces on the FlashArray system are allowed to handle iSCSI traffic. Replace <interface-value> with the value you retrieved in the previous step:
    pxctl cluster options update --flasharray-iscsi-allowed-ifaces <interface-value>
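The comma-separated value for the `pxctl` option can be assembled from the `iscsiadm` output with a small helper. This is a sketch; it assumes the interface name is the first whitespace-separated field of each `iscsiadm -m iface` line, and the function name is illustrative:

```shell
# Join iSCSI interface names into the comma-separated list expected by
# the --flasharray-iscsi-allowed-ifaces option.
ifaces_to_option() {
    awk '{print $1}' | paste -sd, -
}

# Usage:
#   iscsiadm -m iface | ifaces_to_option
```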

Configure your software environment

Configuring your software environment involves preparing both the operating system and the underlying network and storage configurations.

Follow the instructions below to set up CSI snapshot feature, disable secure boot mode, and configure the multipath.conf file appropriately. These configurations ensure that the system's software environment is properly set up to allow Portworx to interact correctly with the hardware components, like storage devices (using protocols such as iSCSI or Fibre Channel), and to function correctly within the network infrastructure.

Set up your environment to use CSI snapshot feature

To use the CSI snapshot feature, install the following:

  • Snapshot V1 CRDs

  • Snapshot controller

    • You can also install the snapshot controller by adding the following lines to your StorageCluster:

        csi:
          enabled: true
          installSnapshotController: true

Create a monitoring ConfigMap

Newer OpenShift versions do not support the Portworx Prometheus deployment. As a result, you must enable monitoring for user-defined projects before installing the Portworx Operator. Use the instructions in this section to configure the OpenShift Prometheus deployment to monitor Portworx metrics.

To integrate OpenShift’s monitoring and alerting system with Portworx, create a cluster-monitoring-config ConfigMap in the openshift-monitoring namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true

The enableUserWorkload parameter enables monitoring for user-defined projects in the OpenShift cluster. This creates a prometheus-operated service in the openshift-user-workload-monitoring namespace.

Disable secure boot mode

Portworx requires the secure boot mode to be disabled to ensure it can operate without restrictions. Here's how to disable secure boot mode across different platforms:

For RHEL/CentOS, perform the following steps to check and disable secure boot mode:

  1. Check the status of secure boot mode:

    /usr/bin/mokutil --sb-state
  2. If secure boot is enabled, disable it:

    /usr/bin/mokutil --disable-validation
  3. Apply changes by rebooting your system:

    reboot 

Verify the status of the secure boot mode

Run the following command to ensure that the secure boot mode is off:

/usr/bin/mokutil --sb-state
SecureBoot disabled

Configure the multipath.conf file

  • For defaults:
    • FlashArray and Portworx do not support user-friendly names. Set user_friendly_names to no before installing Portworx on your cluster. This ensures Portworx and FlashArray use consistent device naming conventions.
    • Add polling_interval 10 as per the RHEL recommended settings. This defines how often the system checks for path status updates.
  • To prevent any interference from multipathd service on Portworx volume operations, set the pxd device denylist rule.

Your multipath.conf file should resemble the following structure:

defaults {
    user_friendly_names no
    enable_foreign "^$"
    polling_interval 10
}

devices {
    device {
        vendor "NVME"
        product "Pure Storage FlashArray"
        path_selector "queue-length 0"
        path_grouping_policy group_by_prio
        prio ana
        failback immediate
        fast_io_fail_tmo 10
        user_friendly_names no
        no_path_retry 0
        features 0
        dev_loss_tmo 60
        find_multipaths yes
    }
    device {
        vendor "PURE"
        product "FlashArray"
        path_selector "service-time 0"
        hardware_handler "1 alua"
        path_grouping_policy group_by_prio
        prio alua
        failback immediate
        path_checker tur
        fast_io_fail_tmo 10
        user_friendly_names no
        no_path_retry 0
        features 0
        dev_loss_tmo 600
        find_multipaths yes
    }
}

blacklist_exceptions {
    property "(SCSI_IDENT_|ID_WWN)"
}

blacklist {
    devnode "^pxd[0-9]*"
    devnode "^pxd*"
    device {
        vendor "VMware"
        product "Virtual disk"
    }
}
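As a quick sanity check that the key settings above made it into a finished file, a grep-based sketch such as the following can help. It is not a full multipath.conf parser, and the function name is illustrative:

```shell
# Verify that a multipath.conf contains the settings recommended above.
check_multipath_conf() {
    conf="$1"
    grep -qF 'user_friendly_names no' "$conf" &&
    grep -qF 'polling_interval 10' "$conf" &&
    grep -qF 'devnode "^pxd[0-9]*"' "$conf"
}

# Usage:
#   check_multipath_conf /etc/multipath.conf && echo "looks good"
```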

Apply Multipath and Udev configs

Use a MachineConfig in OpenShift to apply multipath and udev configuration files consistently across all nodes.

  1. Convert the configuration files to base64 format and add them to the MachineConfig, as shown in the following example:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      creationTimestamp:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: <your-machine-config-name>
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
          - contents:
              source: data:,<base64-encoded-multipath-conf>
            filesystem: root
            mode: 0644
            overwrite: true
            path: /etc/multipath.conf
          - contents:
              source: data:,<base64-encoded-udev_conf>
            filesystem: root
            mode: 0644
            overwrite: true
            path: /etc/udev/rules.d/99-pure-storage.rules
        systemd:
          units:
          - enabled: true
            name: iscsid.service
          - enabled: true
            name: multipathd.service
  2. Apply the MachineConfig to your cluster:

    oc apply -f <your-machine-config-name>.yaml
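The base64 conversion in step 1 can be done with standard tools. The following is a sketch; the helper name is illustrative, and note that Ignition typically expects the `data:;base64,` prefix for base64 payloads:

```shell
# Encode a config file for the MachineConfig "source" field.
# -w0 disables line wrapping so the payload stays on one line.
encode_for_ignition() {
    printf 'data:;base64,%s' "$(base64 -w0 "$1")"
}

# Usage (paths are placeholders):
#   encode_for_ignition /etc/multipath.conf
#   encode_for_ignition 99-pure-storage.rules
```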

Set up user access in FlashArray

Follow these sections to set up user access for your FlashArray.

Create realms in FlashArray

When multiple clusters are attached to a FlashArray, it's essential to define realms for organizing and separating storage. An admin can specify a realm, and the FlashArray volumes from a Portworx installation will be placed inside it. This way, different users with access to the array and cluster can see only their own storage volumes. This method is particularly useful in multi-tenant environments where different customers share the same FlashArray.

To set up realms for different customers, follow these steps as an admin:

  1. Create a realm for each customer. All volumes from the Portworx installation will be placed within this realm, ensuring customer-specific data isolation:
    purerealm create <customer1-realm>
    Name                Quota Limit
    <customer1-realm> -
  2. A pod in FlashArray defines a boundary where specific volumes are placed. Create a pod inside the realm you just defined:
    purepod create <customer1-realm>::<fa-pod-name>
note

Stretched FlashArray pods (pods spanning multiple FlashArrays) are not supported.

By assigning realms and pods in a FlashArray, you can ensure that different users only interact with the specific storage resources allocated to them.

Create a realm policy

After defining realms, you need to bind users to those realms by creating policies. Policies specify the level of access a user has within a realm. These policies ensure that users only have the necessary permissions to perform their tasks.

  1. Create a policy for a realm. Ensure that you have administrative privileges on FlashArray before proceeding. This policy grants users access to their respective realms with defined capabilities:
    purepolicy management-access create --realm <customer1-realm> --role storage --aggregation-strategy all-permissions <realm-policy>
    For basic privileges, use the following command:
    purepolicy management-access create --realm <customer1-realm> --role storage --aggregation-strategy least-common-permissions <realm-policy>

  2. Verify the created policy. This step ensures that the policy has been set up correctly with the right permissions:
    purepolicy management-access list 
    Name             Type         Enabled  Capability  Aggregation Strategy      Resource Name    Resource Type  
    <realm-policy> admin-access True all all-permissions <customer1-realm> realms

This policy ensures that users linked to the specified realm can perform storage operations within their allocated realm.

Create users

Once a policy is created, you need to create users who can access the FlashArray through that policy. These users will be bound to the realms and policies you previously configured, controlling their access and operations on the FlashArray.

Create a user linked to a policy. This command creates a user with the access rights defined by the policy. You must create a password that the user can use to log in to FlashArray, as shown in the output:

pureadmin create --access-policy <realm-policy> <flasharray-user>
Enter password: 
Retype password:
Name Type Access Policy
<flasharray-user> local <realm-policy>

This step ensures that users are securely connected to their designated realms with appropriate access.

Generate an API Token

An API token is essential for enabling secure communication between Portworx and FlashArray. The token serves as a key, authorizing Portworx to interact with the FlashArray on behalf of the user. It’s necessary to generate this token so that Portworx can authenticate and perform tasks like provisioning and managing storage.

Generate an API token for the user. This token is required for the user to authenticate with the FlashArray:

  • Sign in as the newly created user in the FlashArray CLI.
  • Run pureadmin create --api-token and copy the created token.

By completing these steps, you ensure that Portworx can securely manage storage resources within the FlashArray environment.

Create a JSON configuration file

For Portworx to integrate with FlashArray, it requires a JSON configuration file containing essential information about the FlashArray environment. This file, typically named pure.json, includes the management endpoints and the newly generated API token.

  • Management endpoints: The management endpoints are URLs or IP addresses that Portworx uses to send API calls to FlashArray. Find these by going to Settings and selecting Network within your FlashArray dashboard. Note the IP addresses or hostnames of your management interfaces, usually identified by a vir prefix, indicating virtual interfaces. For one array, you can add two comma-separated management endpoints.
  • API token: Generated in the previous section.
  • Realm: Realms are the objects that define the boundaries of a tenant. When multiple FlashArrays are attached to a cluster, the admin can specify a realm, and the FlashArray volumes from the Portworx installation will be placed inside it. This way, different users with access to the array can see only their own storage volumes. Only one realm is supported per cluster per array, which means you can't have the same Portworx deployment use two realms on the same array.

Use the above information to create the JSON file. Below is a template for the configuration content, which you should populate with your specific information:

{
  "FlashArrays": [
    {
      "MgmtEndPoint": "<first-fa-management-endpoint1>",
      "APIToken": "<first-fa-api-token>",
      "Realm": "<first-fa-realm>"
    },
    {
      "MgmtEndPoint": "<second-fa-management-endpoint2>",
      "APIToken": "<second-fa-api-token>",
      "Realm": "<second-fa-realm>"
    },
    ...
  ]
}

Create a Kubernetes Secret

The specific name px-pure-secret is required so that Portworx can correctly identify and access the Kubernetes secret upon startup. This secret securely stores the FlashArray configuration details and allows Portworx to access this information within the Kubernetes environment.

Enter the following oc create command to create a Kubernetes secret called px-pure-secret:

oc create secret generic px-pure-secret --namespace <stc-namespace> --from-file=pure.json
secret/px-pure-secret created

Verify the iSCSI Connection with FlashArray

The instructions in this section use the iSCSI network.

  1. Run the following command to discover your iSCSI targets. Replace <flash-array-interface-endpoint> with your FlashArray's interface endpoint:
iscsiadm -m discovery -t st -p <flash-array-interface-endpoint>
10.13.xx.xx0:3260,207 iqn.2010-06.com.purestorage:flasharray.xxxxxxx
10.13.xx.xx1:3260,207 iqn.2010-06.com.purestorage:flasharray.xxxxxxx
  2. Verify that each node has a unique initiator. Run the following command on each node:
cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:xxxxx
  3. If the initiator names are not unique, assign a new unique initiator name by executing the following command:
echo "InitiatorName=`/sbin/iscsi-iname`" > /etc/iscsi/initiatorname.iscsi

Replace the initiator names on any nodes that have duplicates with the newly generated unique names.
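Initiator names collected from all nodes can be checked for duplicates with a simple pipeline. This is a sketch; `initiators.txt` is a hypothetical file holding one InitiatorName= line per node, and the function name is illustrative:

```shell
# Print any initiator names that appear on more than one node.
find_duplicate_initiators() {
    sed -n 's/^InitiatorName=//p' | sort | uniq -d
}

# Usage:
#   cat initiators.txt | find_duplicate_initiators
```

An empty output means every node already has a unique initiator.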

Once you've configured your environment and ensured that you meet the prerequisites, you're ready to deploy Portworx.

Generate specs

To install Portworx with Kubernetes, you must first generate Kubernetes manifests that you will deploy in your cluster:

  1. Navigate to Portworx Central and log in, or create an account.

  2. In the Portworx section, select Get Started.

  3. On the Product Line page, choose any option depending on which license you intend to use, then click Continue to start the spec generator.

  4. In the Generate Spec page:

    1. Select 3.2 or newer from the Portworx Version dropdown.
    2. For Platform, select Pure FlashArray.
    3. Select None for Distribution Name, then click Customize at the bottom of Summary section.
    4. Navigate to the Storage window, select the checkboxes for Enable multitenancy and PX-StoreV2 in the Configure Storage Devices section, and enter a FlashArray pod name in the Pure FA Pod Name field.
    5. By default, iSCSI is set as your protocol for data transfer. To change it, select a different option from the Select type of storage area network dropdown.
    6. Click Next to complete the spec gen flow and click Finish to generate the specs.

  5. (Optional) If you are using multiple NICs for the iSCSI host, add the following environment variable to your StorageCluster spec. Replace <nic-interface-names> with comma-separated NIC names such as "eth1,eth2":

    env:
    - name: flasharray-iscsi-allowed-ifaces
      value: "<nic-interface-names>"
note

If you have multiple NICs on your virtual machine, FlashArray does not distinguish between NICs with iSCSI and those without. You must provide this list; otherwise, Portworx may use only one of the available interfaces.

Modify the spec

Modify the cloudStorage section of the spec to include FlashArray pod information. This ensures that when Portworx is deployed, it creates volumes in the pods within the realm for a particular user:

cloudStorage:
  deviceSpecs:
  - size=150,pod=<fa-pod-name>

Replace <fa-pod-name> with the FlashArray pod name defined in this section.

Apply specs

Apply the Operator and StorageCluster specs you generated in the section above using the oc apply command:

  1. Deploy the Operator:

    oc apply -f 'https://install.portworx.com/<version-number>?comp=pxoperator&kbver=1.25.0&ns=portworx'
    serviceaccount/portworx-operator created
    podsecuritypolicy.policy/px-operator created
    clusterrole.rbac.authorization.k8s.io/portworx-operator created
    clusterrolebinding.rbac.authorization.k8s.io/portworx-operator created
    deployment.apps/portworx-operator created
  2. Deploy the StorageCluster:

    oc apply -f 'https://install.portworx.com/<version-number>?operator=true&mc=false&kbver=1.25.0&ns=portworx&b=true&iop=6&c=px-cluster-17efb9e2-xxx-xxx&stork=true&csi=true&mon=true&tel=true&st=k8s&promop=true'
    storagecluster.core.libopenstorage.org/px-cluster-17efb9e2-xxx-xxx created

Once deployed, Portworx detects that the FlashArray secret is present when it starts up and can use the specified FlashArray volumes.

Pre-flight check

After you apply the specs, the Portworx Operator performs a pre-flight check across the cluster, which must pass on each node. This check determines whether each node in your cluster is compatible with the PX-StoreV2 datastore. If each node in the cluster meets the following hardware and software requirements, PX-StoreV2 will be automatically set as your default datastore during Portworx installation:

  • Hardware:
    • CPU: A minimum of 8 cores CPU per node.
    • Drive: An SD/NVMe drive with more than 8 GB of capacity per node.
    • Metadata device: A minimum of 64 GB system metadata device on each node.
  • Software:
    • Linux kernel version: 4.20 or newer with the following packages:
      • RHEL: device-mapper mdadm lvm2 device-mapper-persistent-data augeas

Once the check is successful, Portworx will be deployed on your cluster.