
Prepare Your Environment for Installation of Portworx with Pure Cloud Block Store

This page includes detailed system requirements that are specific to Pure Cloud Block Store (CBS) to ensure a seamless deployment and optimal performance of Portworx Enterprise in your Kubernetes environment.

Before you begin preparing your environment, ensure that you have an Azure Kubernetes Service (AKS) Cluster that meets the system requirements for installing Portworx.

Complete all of the following tasks to prepare your environment for Portworx installation.

Software requirements

Install the following system packages on all nodes, including the control plane node, to support storage provisioning and data path operations when using CBS.

  • Packages: Ensure that the latest versions of the required system packages are installed on the nodes where you plan to run Portworx Enterprise. Note: For the NVMe-oF/TCP protocol, you need multipath version 0.8.7 or later.
  • Red Hat systems: Ensure that the second action, CAPACITY_DATA_HAS_CHANGED, is uncommented in the 90-scsi-ua.rules file, and restart the udev service.
  • CSI snapshot feature: To use the CSI snapshot feature, install the Snapshot controller and deploy the snapshot CRDs in your Kubernetes cluster.
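
You can verify these requirements from a shell. The following is a hedged sketch for RHEL-family systems; package and file paths may differ on other distributions:

# Check the installed multipath version (RHEL-family package name).
rpm -q device-mapper-multipath

# Confirm that the CAPACITY_DATA_HAS_CHANGED action is uncommented (Red Hat systems).
grep CAPACITY_DATA_HAS_CHANGED /usr/lib/udev/rules.d/90-scsi-ua.rules

# Restart udev so that rule changes take effect.
systemctl restart systemd-udevd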

Physical network requirements

This section outlines the physical network prerequisites for Portworx to communicate with CBS.

Ensure proper connectivity and protocol configuration for optimal performance and compatibility:

  • Ensure that the Pure Cloud Block Store management IP address is reachable from all nodes (a quick connectivity check follows this list).
  • Verify that your cluster has an operational CBS with a configured dataplane connectivity layout.
  • Use one of the following storage networking protocols supported by Portworx Enterprise:
    • iSCSI: For block storage over IP networks.
    • NVMe-oF RoCE or NVMe-oF TCP: For high-performance and low-latency storage access.
    • Fibre Channel (FC): For dedicated storage area networks.
  • If using iSCSI:
    • Ensure that the storage node iSCSI initiators are on the same VLAN as the CBS iSCSI target ports.
    • If using multiple NICs to connect to an iSCSI host, ensure all NICs are accessible from the CBS management IP address.
  • If using Fibre Channel:
    • Verify that the storage node Fibre Channel WWNs are correctly zoned to the CBS Fibre Channel WWN ports.
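
To confirm basic reachability from a node before installing, you can run a quick check. This is an optional sketch, assuming the management endpoint serves HTTPS on port 443 and that nc is installed; substitute your own addresses:

# Expect an HTTP status code back from the CBS management endpoint.
curl -k -s -o /dev/null -w "%{http_code}\n" https://<cbs-management-endpoint>/

# For iSCSI, confirm that the data interface accepts connections on port 3260.
nc -vz <cbs-iscsi-data-interface> 3260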

Disable secure boot mode

Portworx Enterprise requires secure boot mode to be disabled so that it can operate without restrictions.

For RHEL/CentOS, perform the following steps to check and disable secure boot mode:

  1. Check the status of secure boot mode:

    /usr/bin/mokutil --sb-state
  2. If secure boot is enabled, disable it:

    /usr/bin/mokutil --disable-validation
  3. Apply changes by rebooting your system:

    reboot 

Verify the status of the secure boot mode

Run the following command to ensure that the secure boot mode is off:

/usr/bin/mokutil --sb-state
SecureBoot disabled

Multipath configuration

  • CBS and Portworx Enterprise do not support user-friendly names. Set user_friendly_names to no before installing Portworx Enterprise on your cluster. This ensures consistent device naming conventions between Portworx and CBS.
  • Add polling_interval 10 as recommended by RHEL Linux settings. This defines how often the system checks for path status updates.
  • To avoid interference from the multipathd service during Portworx volume operations, set the pxd device denylist rule.

Your /etc/multipath.conf file should follow this structure:

important

Set find_multipaths to no in the defaults section because each controller has only one iSCSI path.

blacklist {
    devnode "^pxd[0-9]*"
    devnode "^pxd*"
    device {
        vendor "VMware"
        product "Virtual disk"
    }
}

defaults {
    polling_interval 10
    find_multipaths no
}

devices {
    device {
        vendor "NVME"
        product "Pure Storage FlashArray"
        path_selector "queue-length 0"
        path_grouping_policy group_by_prio
        prio ana
        failback immediate
        fast_io_fail_tmo 10
        user_friendly_names no
        no_path_retry 0
        features 0
        dev_loss_tmo 60
    }
    device {
        vendor "PURE"
        product "FlashArray"
        path_selector "service-time 0"
        hardware_handler "1 alua"
        path_grouping_policy group_by_prio
        prio alua
        failback immediate
        path_checker tur
        fast_io_fail_tmo 10
        user_friendly_names no
        no_path_retry 0
        features 0
        dev_loss_tmo 600
    }
}
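
After editing /etc/multipath.conf, you can check that the daemon parses the file cleanly before proceeding. An optional sanity check on hosts where you can run commands directly:

# multipath -t dumps the effective configuration; parse errors are reported here.
multipath -t > /dev/null && echo "multipath.conf parsed OK"

# Restart the daemon so the new settings take effect.
systemctl restart multipathd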

Configure Udev rules

Configure queue settings with Udev rules on all nodes. For recommended settings for Pure Cloud Block Store, refer to Applying Queue Settings with Udev.
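
For illustration only, a queue-settings rules file such as /etc/udev/rules.d/99-pure-storage.rules typically selects the none I/O scheduler for Pure devices. The authoritative contents are in the linked article; treat this one-line excerpt as a hedged sketch:

# Hypothetical excerpt: use the "none" scheduler for Pure Storage SCSI block devices.
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="none"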

Apply Multipath and Udev configurations

Apply the Multipath and Udev configurations created in the previous sections for the changes to take effect.

Use a MachineConfig in OpenShift to apply multipath and Udev configuration files consistently across all nodes.

  1. Encode the configuration files in base64 format and add them to the MachineConfig, as shown in the following example (a sample encoding command follows these steps):

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      creationTimestamp:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: <your-machine-config-name>
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
            - contents:
                source: data:text/plain;charset=utf-8;base64,<base64-encoded-multipath-conf>
              filesystem: root
              mode: 0644
              overwrite: true
              path: /etc/multipath.conf
            - contents:
                source: data:text/plain;charset=utf-8;base64,<base64-encoded-udev_conf>
              filesystem: root
              mode: 0644
              overwrite: true
              path: /etc/udev/rules.d/99-pure-storage.rules
        systemd:
          units:
            - enabled: true
              name: iscsid.service
            - enabled: true
              name: multipathd.service
  2. Apply the MachineConfig to your cluster:

    oc apply -f <your-machine-config-name>.yaml
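
To produce the base64 strings referenced in step 1, encode each configuration file on a Linux host. A minimal sketch, assuming GNU coreutils base64 and the file paths used above:

# -w0 disables line wrapping so each data: URI stays on a single line.
base64 -w0 /etc/multipath.conf
base64 -w0 /etc/udev/rules.d/99-pure-storage.rules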

Set up user access in Pure Cloud Block Store

To establish secure communication between Portworx and CBS, create a user account and generate an API token. The token acts as an authentication key that allows Portworx to interact with CBS and perform storage operations on behalf of the authorized user. This section provides the steps to generate the token.

Secure multi-tenancy

If multiple users share a single CBS, you can enable secure multi-tenancy using CBS realms and pods. A realm isolates tenant-specific storage, and a pod groups volumes within that realm.

To enable this feature:

  1. Create a realm and pod on the CBS.
  2. Add the realm to the px-pure-secret file.
  3. Reference the pod name in the StorageCluster specification.
note

A CBS pod is a logical grouping on the storage array and is not related to Kubernetes pods.

This configuration ensures that each tenant can access only their assigned storage volumes.

  1. Create a user:

    1. In your CBS dashboard, select Settings in the left pane.
    2. On the Settings page, select Access.
    3. In the Users section, click the vertical ellipsis in the top-right corner and select Create User.
    4. In the Create User window, enter your details and set the role to Storage Admin.
    5. Select Create to add the new user.
  2. Generate an API token:

    1. To create a token for the user you created, select the user from the Users list, click the vertical ellipsis in the right-hand corner of the username, and select Create API Token.
    2. In the API Token window, leave the Expires in field blank if you want to create a token that never expires, and click Create.
    3. Save this information to avoid the need to recreate the token.

Create pure.json file

To integrate Portworx Enterprise with CBS, create a JSON configuration file (named pure.json) containing essential information about the CBS environment. This file should include the management endpoints and the API token you generated.

  • Management endpoints: These are URLs or IP addresses that Portworx uses to communicate with CBS through API calls. To locate these, go to Settings > Network in your CBS dashboard. Note the IP addresses or hostnames of your management interfaces, prefixed with vir, indicating virtual interfaces.
    important
    • For an IPv6 address, ensure that the IP address is enclosed in square brackets. For example: "MgmtEndPoint": "[XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX]".
  • API token: Generated in the previous section.
  • Realm (secure multi-tenancy only): Realms define tenant boundaries within a secure multi-tenancy setup. When multiple CBS instances are attached to a cluster, the admin can specify a realm to ensure that storage volumes are isolated for each tenant. CBS volumes created through Portworx are placed within the specified realm.
    note

    Each cluster can only support one realm per array, meaning a single Portworx deployment cannot use multiple realms on the same CBS.

Use the information above to create a JSON file. Below is a template for the configuration content, which you should populate with your specific information:

{
  "FlashArrays": [
    {
      "MgmtEndPoint": "<fa-management-endpoint>",
      "APIToken": "<fa-api-token>"
    }
  ]
}
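
For secure multi-tenancy, the realm is referenced in this same file. The following is a hedged sketch, assuming your Portworx version accepts a Realm key in the FlashArrays entry and that the realm already exists on the CBS:

{
  "FlashArrays": [
    {
      "MgmtEndPoint": "<fa-management-endpoint>",
      "APIToken": "<fa-api-token>",
      "Realm": "<realm-name>"
    }
  ]
}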

Add CBS configuration to a Kubernetes secret

To enable Portworx Enterprise to access the CBS configuration, add the pure.json file to a Kubernetes secret by running the following command to create a secret named px-pure-secret:

oc create secret generic px-pure-secret --namespace <stc-namespace> --from-file=pure.json=<file path>
secret/px-pure-secret created
important
  • The specific name px-pure-secret is required so that Portworx Enterprise can correctly identify and access the Kubernetes secret upon startup. This secret securely stores the CBS configuration details and allows Portworx Enterprise to access this information within the Kubernetes environment.
  • Ensure that the px-pure-secret is in the same namespace where you plan to install Portworx Enterprise.
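
To confirm that the secret exists in the intended namespace, you can run an optional check:

oc get secret px-pure-secret --namespace <stc-namespace>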

Verify the iSCSI Connection with Cloud Block Store

Follow the instructions below to verify the iSCSI setup:

  1. Run the following command from the node to discover your iSCSI targets:

    iscsiadm -m discovery -t st -p <flash-array-interface-endpoint>
    10.13.xx.xx0:3260,207 iqn.2010-06.com.purestorage:flasharray.xxxxxxx
    10.13.xx.xx1:3260,207 iqn.2010-06.com.purestorage:flasharray.xxxxxxx
  2. Run the following command on each node to verify that each node has a unique initiator name:

    cat /etc/iscsi/initiatorname.iscsi
    InitiatorName=iqn.1994-05.com.redhat:xxxxx
  3. If the initiator names are not unique, assign a new unique initiator name using the following command:

    echo "InitiatorName=`/sbin/iscsi-iname`" > /etc/iscsi/initiatorname.iscsi
    important

    Replace the initiator names on any nodes that have duplicates with the newly generated unique names.

  4. After making changes to the initiator names, restart the iSCSI service to apply the changes:

    systemctl restart iscsid
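
After the restart, you can confirm that the new initiator name took effect and that the service is healthy; an optional check:

# Re-read the initiator name and confirm the service started cleanly.
cat /etc/iscsi/initiatorname.iscsi
systemctl status iscsid --no-pager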