Version: 25.6

Use FlashArray as backend storage for Kubernetes applications

This section provides instructions for configuring your environment to use FlashArray as backend storage for Kubernetes applications, including both single-tenant and multi-tenant setups.

Before you begin preparing your environment, ensure that all system requirements are met.

Configure multipath.conf file

  • FlashArray and Portworx do not support user-friendly names. Set user_friendly_names to no before installing Portworx CSI on your cluster. This ensures consistent device naming conventions between Portworx CSI and FlashArray.
  • Add polling_interval 10, as recommended for RHEL-based systems. This setting defines how often the system checks for path status updates.
  • To avoid interference from the multipathd service during Portworx CSI volume operations, set the pxd device denylist rule.

Your /etc/multipath.conf file should follow this structure:

important

If you are configuring your environment to use Pure Cloud Block Store (CBS) for Azure, make sure that find_multipaths is set to no in the defaults section because each controller has only one iSCSI path.

blacklist {
    devnode "^pxd[0-9]*"
    devnode "^pxd*"
    device {
        vendor "VMware"
        product "Virtual disk"
    }
}

defaults {
    polling_interval 10
    find_multipaths yes
}


devices {
    device {
        vendor "NVME"
        product "Pure Storage FlashArray"
        path_selector "queue-length 0"
        path_grouping_policy group_by_prio
        prio ana
        failback immediate
        fast_io_fail_tmo 10
        user_friendly_names no
        no_path_retry 0
        features 0
        dev_loss_tmo 60
    }
    device {
        vendor "PURE"
        product "FlashArray"
        path_selector "service-time 0"
        hardware_handler "1 alua"
        path_grouping_policy group_by_prio
        prio alua
        failback immediate
        path_checker tur
        fast_io_fail_tmo 10
        user_friendly_names no
        no_path_retry 0
        features 0
        dev_loss_tmo 600
    }
}

Configure Udev rules

Configure queue settings with Udev rules on all nodes. For recommended settings for Pure Storage FlashArray, refer to Applying Queue Settings with Udev.
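As an illustrative sketch only (the linked Applying Queue Settings with Udev article is authoritative for the exact rules and values), a rules file such as /etc/udev/rules.d/99-pure-storage.rules typically selects a low-overhead I/O scheduler for FlashArray devices:

```text
# Representative udev rules for Pure Storage FlashArray (example values;
# confirm against Pure's "Applying Queue Settings with Udev" article).
# Use the "none" scheduler for FlashArray SCSI devices.
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="none"
# Apply the same scheduler to the corresponding device-mapper multipath devices.
ACTION=="add|change", KERNEL=="dm-[0-9]*", SUBSYSTEM=="block", ENV{DM_NAME}=="3624a937*", ATTR{queue/scheduler}="none"
```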

Apply Multipath and Udev configurations

Apply the Multipath and Udev configurations created in the previous sections for the changes to take effect.

  1. Update the multipath.conf file as described in the Configure multipath.conf file section and restart the multipathd service on all nodes:
    systemctl restart multipathd.service
  2. Create the Udev rules as described in the Configure Udev rules section and apply them on all nodes:
    udevadm control --reload-rules && udevadm trigger
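Before restarting the services, it can be useful to confirm that the key settings from the Configure multipath.conf file section are actually present. The helper below is an illustrative sketch, not part of the official tooling; it greps a multipath.conf for the settings described above.

```shell
#!/bin/sh
# Illustrative helper: check that a multipath.conf contains the key settings
# described above (user_friendly_names, polling_interval, pxd denylist).
check_multipath_conf() {
  conf="$1"
  for pattern in 'user_friendly_names[[:space:]]+no' \
                 'polling_interval[[:space:]]+10' \
                 'devnode[[:space:]]+"\^pxd\[0-9\]\*"'; do
    if grep -Eq "$pattern" "$conf"; then
      echo "OK: $pattern"
    else
      echo "MISSING: $pattern"
    fi
  done
}

# On a node, run: check_multipath_conf /etc/multipath.conf
```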

Configure FlashArray File Services

PX-CSI supports FlashArray File Services. If you plan to configure FlashArray File Services as backend storage for Kubernetes applications, ensure that the FlashArray is configured to meet the following prerequisites:

  • Verify that FlashArray File Services are activated on the Pure FlashArray.

  • Configure a virtual interface (VIF) for use with file services.

  • Create a file system to serve as the top-level directory for FA file volumes.

  • Create an NFS policy. This policy is required to create exports for FA file volumes.

    By default, when you create FlashArray File Services, the nfs-simple policy is available and has user mapping enabled. If you plan to use this policy, edit it to disable user mapping.

    When user mapping is enabled and the policy uses AUTH_SYS security, anonymous users are mapped to UID and GID 65534 unless overridden. This can cause permission issues for workloads using fixed user identities. AUTH_SYS is the default NFS authentication mode and uses numeric UID and GID values passed from the client.

    If you plan to use the same policy with root_squash for KubeVirt:

    • Add a user (for example, Kubevirt) to FlashArray File Services.
    • KubeVirt VMs use QEMU, which runs with UID 107 and GID 107.
    • Set both the UID and GID to 107.

    Alternatively, you can create a new policy with user mapping disabled and no_root_squash access.

note

FlashArray does not support secure multi-tenancy for FA file services.

Set up user access in FlashArray

To establish secure communication between Portworx CSI and FlashArray, create a user account and generate an API token. The token acts as an authentication key that allows Portworx CSI to interact with FlashArray and perform storage operations on behalf of the authorized user. This section describes how to create the user and generate the token.

  1. Create a user:

    1. In your FlashArray dashboard, select Settings in the left pane.
    2. On the Settings page, select Access.
    3. In the Users section, click the vertical ellipsis in the top-right corner and select Create User.
    4. In the Create User window, enter your details and set the role to Storage Admin.
    5. Select Create to add the new user.
  2. Generate an API token:

    1. To create a token for the user you created, select the user from the Users list, click the vertical ellipsis next to the username, and select Create API Token.
    2. In the API Token window, leave the Expires in field blank if you want to create a token that never expires, and click Create.
    3. Save this information to avoid the need to recreate the token.

Create pure.json file

To integrate Portworx CSI with FlashArray, create a JSON configuration file (named pure.json) containing essential information about the FlashArray environment. This file should include the management endpoints and the API token you generated.

  • Management endpoints: These are URLs or IP addresses that Portworx CSI uses to communicate with FlashArray through API calls. To locate these, go to Settings > Network in your FlashArray dashboard. Note the IP addresses or hostnames of your management interfaces, prefixed with vir, indicating virtual interfaces.
    important
    • For an IPv6 address, ensure that the IP address is enclosed in square brackets. For example: "MgmtEndPoint": "[XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX]".
    • If you're using FlashArray file services, the fileVIF can be used as a management endpoint. It accepts HTTPS traffic and is created using subinterfaces on both controllers, making it a floating management virtual IP (VIP). This configuration enables high availability. You can configure fileVIF as the mgmtEndpoint in PX-CSI if you want to consolidate management and data communication onto the same interface.
  • API token: Generated in the previous section.
  • Realm (secure multi-tenancy only): Realms define tenant boundaries within a secure multi-tenancy setup. When multiple FlashArrays are attached to a cluster, the admin can specify a realm to ensure that storage volumes are isolated for each tenant. FlashArray volumes created through Portworx CSI will be placed within the specified realm.
    note

    Each cluster can only support one realm per array, meaning a single Portworx CSI deployment cannot use multiple realms on the same FlashArray.

  • NFSEndPoint (FlashArray file services only): Specify the NFSEndPoint of the FlashArray file services. Note that secure multi-tenancy is not supported with file services.
    important

    If you are using an IPv6 address for NFSEndPoint, ensure that the IP address is enclosed in square brackets, for example: "NFSEndPoint": "[XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX]".

  • VLAN (only for VLAN binding): Specify the VLAN ID to which the host should be bound.
    note

    VLAN binding is supported on Purity version 6.4.1 or later.

Use the information above to create a JSON file. Below is a template for the configuration content, which you should populate with your specific information:

note

If you are configuring both FlashArray and FlashBlade, you can add FlashBlade configuration information in the same file. Refer to the JSON file for more information.

{
  "FlashArrays": [
    {
      "MgmtEndPoint": "<fa-management-endpoint>",
      "APIToken": "<fa-api-token>",
      "NFSEndPoint": "<nfs-endpoints-of-fa>", ## This field is required only for FA file services.
      "VLAN": "<vlan-id>" ## This field is required only for VLAN binding.
    }
  ]
}
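Because the "##" comments make the template invalid JSON, strip them before using the file. The following self-contained sketch (the endpoint, token, and file names are placeholders, not values from your environment) shows the round trip and validates the result:

```shell
#!/bin/sh
# Self-contained sketch: strip "##" comments from a pure.json template so the
# result is valid JSON. All values below are placeholders.
cat > pure.json.template <<'EOF'
{
  "FlashArrays": [
    {
      "MgmtEndPoint": "10.0.0.10",
      "APIToken": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
      "NFSEndPoint": "10.0.0.11", ## Required only for FA file services.
      "VLAN": "100" ## Required only for VLAN binding.
    }
  ]
}
EOF

# Remove everything from "##" to end of line, then confirm the file parses.
sed 's/[[:space:]]*##.*$//' pure.json.template > pure.json
python3 -m json.tool pure.json > /dev/null && echo "pure.json is valid JSON"
```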

(Optional) CSI topology feature

Portworx CSI supports topology-aware storage provisioning for Kubernetes applications. By specifying topology information, such as node, zone, or region, you can control where volumes are provisioned. This ensures that storage aligns with your application's requirements for availability, performance, and fault tolerance. Portworx CSI optimizes storage placement, improving efficiency and resilience in multi-zone or multi-region Kubernetes environments. For more information, see CSI topology.

To prepare your environment for using the topology-aware provisioning feature, follow these steps:

  1. Edit the pure.json file created in the previous section to define the topology for each FlashArray. For more information, refer to the pure.json with CSI topology.

  2. Label your Kubernetes nodes with values that correspond to the labels defined in the pure.json file. For example:

    kubectl label node <nodeName> topology.portworx.io/zone=zone-0
    kubectl label node <nodeName> topology.portworx.io/region=region-0
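As an illustrative sketch, topology is expressed in pure.json by giving each FlashArray entry a Labels map whose keys match the node labels applied in step 2 (the linked pure.json with CSI topology page is authoritative for the exact schema):

```json
{
  "FlashArrays": [
    {
      "MgmtEndPoint": "<fa-management-endpoint>",
      "APIToken": "<fa-api-token>",
      "Labels": {
        "topology.portworx.io/zone": "zone-0",
        "topology.portworx.io/region": "region-0"
      }
    }
  ]
}
```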

Add FlashArray configuration to a Kubernetes secret

To enable Portworx CSI to access the FlashArray configuration, add the pure.json file to a Kubernetes secret by running the following command to create a secret named px-pure-secret:

kubectl create secret generic px-pure-secret --namespace <stc-namespace> --from-file=pure.json=<file path>
secret/px-pure-secret created
important
  • The specific name px-pure-secret is required so that Portworx CSI can correctly identify and access the Kubernetes secret upon startup. This secret securely stores the FlashArray configuration details and allows Portworx CSI to access this information within the Kubernetes environment.
  • Ensure that the px-pure-secret is in the same namespace where you plan to install Portworx CSI.

Configure FlashArray connectivity

If you are using the iSCSI protocol, follow the instructions below to verify the iSCSI setup:

  1. Run the following command from the node to discover your iSCSI targets:

    iscsiadm -m discovery -t st -p <flash-array-interface-endpoint>
    10.13.xx.xx0:3260,207 iqn.2010-06.com.purestorage:flasharray.xxxxxxx
    10.13.xx.xx1:3260,207 iqn.2010-06.com.purestorage:flasharray.xxxxxxx
  2. Run the following command on each node to verify if each node has a unique initiator:

    cat /etc/iscsi/initiatorname.iscsi
    InitiatorName=iqn.1994-05.com.redhat:xxxxx
  3. If the initiator names are not unique, assign a new unique initiator name using the following command:

    echo "InitiatorName=`/sbin/iscsi-iname`" > /etc/iscsi/initiatorname.iscsi
    important

    Replace the initiator names on any nodes that have duplicates with the newly generated unique names.

  4. After making changes to the initiator names, restart the iSCSI service to apply the changes:

    systemctl restart iscsid
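To check initiator uniqueness across the cluster in one place, you can collect the contents of /etc/iscsi/initiatorname.iscsi from every node into a single file and look for duplicates. The helper below is an illustrative sketch (the file name initiators.txt is an assumption); empty output means every node's initiator is unique.

```shell
#!/bin/sh
# Illustrative helper: given one "InitiatorName=iqn..." line per node,
# print any initiator name that appears more than once.
find_duplicate_initiators() {
  # Strip the "InitiatorName=" prefix, then report duplicated values.
  cut -d= -f2 "$1" | sort | uniq -d
}

# Example: find_duplicate_initiators initiators.txt
```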

KVDB for PX-CSI

When you install PX-CSI with FlashArray, the built-in internal KVDB is enabled by default, removing the need for an external KVDB. Portworx automatically deploys and manages the KVDB cluster on three nodes, and a 32 GB KVDB drive is automatically created on the FlashArray for each of the three KVDB nodes. For more information, see Internal KVDB for Portworx CSI.
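For reference, a minimal StorageCluster fragment that relies on the internal KVDB might look like the sketch below (the cluster name and namespace are placeholders; consult the Internal KVDB for Portworx CSI page for the authoritative spec):

```yaml
# Illustrative fragment only: with spec.kvdb.internal set to true and no
# external endpoints listed, PX-CSI deploys the built-in internal KVDB.
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster        # placeholder name
  namespace: portworx     # placeholder namespace
spec:
  kvdb:
    internal: true
```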