Use FlashArray as backend storage for Kubernetes applications
This section provides instructions for configuring your environment to use FlashArray as backend storage for Kubernetes applications, including both single-tenant and multi-tenant setups.
Before you begin preparing your environment, ensure that all system requirements are met.
Disable secure boot
You must disable secure boot mode to ensure that Portworx CSI operates without any restrictions. You can do this with tools such as mokutil or through your BIOS/UEFI settings. For VMware virtualization, you can disable secure boot directly from the VMware UI.
- RHEL/Ubuntu
- VMware
To disable secure boot mode on RHEL/Ubuntu:
- Check the status of secure boot mode:
/usr/bin/mokutil --sb-state
- If secure boot is enabled, disable it:
/usr/bin/mokutil --disable-validation
- Reboot your system to apply the changes:
reboot
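The check-and-disable sequence above reduces to a single decision step. A minimal sketch, where the hard-coded sample string stands in for the real output of `/usr/bin/mokutil --sb-state`:

```shell
# Decide whether Secure Boot needs to be disabled based on mokutil's report.
# The sample value below stands in for: state=$(/usr/bin/mokutil --sb-state)
state="SecureBoot enabled"

if printf '%s\n' "$state" | grep -q 'enabled'; then
  echo "Secure Boot is enabled: run mokutil --disable-validation, then reboot."
else
  echo "Secure Boot is already disabled."
fi
```

On a real node, replace the sample assignment with the actual mokutil call; a reboot is still required after disabling validation.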
For VMware, navigate to the Edit Settings window of the virtual machine where you plan to deploy Portworx CSI. Ensure that the Secure Boot option under VM Options is unchecked, as shown in the following screenshot:
Configure multipath.conf file
- FlashArray and Portworx do not support user-friendly names. Set user_friendly_names to no before installing Portworx CSI on your cluster. This ensures consistent device naming between Portworx CSI and FlashArray.
- Add polling_interval 10, as recommended for RHEL. This setting defines how often the system checks for path status updates.
- To avoid interference from the multipathd service during Portworx CSI volume operations, set the pxd device denylist rule.
Your /etc/multipath.conf file should follow this structure:
- RHEL
- Ubuntu
defaults {
    user_friendly_names no
    enable_foreign "^$"
    polling_interval 10
}

devices {
    device {
        vendor "NVME"
        product "Pure Storage FlashArray"
        path_selector "queue-length 0"
        path_grouping_policy group_by_prio
        prio ana
        failback immediate
        fast_io_fail_tmo 10
        user_friendly_names no
        no_path_retry 0
        features 0
        dev_loss_tmo 60
        find_multipaths yes
    }
    device {
        vendor "PURE"
        product "FlashArray"
        path_selector "service-time 0"
        hardware_handler "1 alua"
        path_grouping_policy group_by_prio
        prio alua
        failback immediate
        path_checker tur
        fast_io_fail_tmo 10
        user_friendly_names no
        no_path_retry 0
        features 0
        dev_loss_tmo 600
        find_multipaths yes
    }
}

blacklist_exceptions {
    property "(SCSI_IDENT_|ID_WWN)"
}

blacklist {
    devnode "^pxd[0-9]*"
    devnode "^pxd*"
    device {
        vendor "VMware"
        product "Virtual disk"
    }
}
defaults {
    user_friendly_names no
    find_multipaths yes
}

devices {
    device {
        vendor "NVME"
        product "Pure Storage FlashArray"
        path_selector "queue-length 0"
        path_grouping_policy group_by_prio
        prio ana
        failback immediate
        fast_io_fail_tmo 10
        user_friendly_names no
        no_path_retry 0
        features 0
        dev_loss_tmo 60
        find_multipaths yes
    }
    device {
        vendor "PURE"
        product "FlashArray"
        path_selector "service-time 0"
        hardware_handler "1 alua"
        path_grouping_policy group_by_prio
        prio alua
        failback immediate
        path_checker tur
        fast_io_fail_tmo 10
        user_friendly_names no
        no_path_retry 0
        features 0
        dev_loss_tmo 600
        find_multipaths yes
    }
}

blacklist {
    devnode "^pxd[0-9]*"
    devnode "^pxd*"
    device {
        vendor "VMware"
        product "Virtual disk"
    }
}
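Before restarting multipathd, it can help to confirm that the required options actually made it into the file. A minimal sketch, where the here-doc stands in for your real /etc/multipath.conf:

```shell
# Write a sample config to a temp file; in practice, point f at /etc/multipath.conf.
f=$(mktemp)
cat > "$f" <<'EOF'
defaults {
    user_friendly_names no
    polling_interval 10
}
EOF

# Each grep exits non-zero if the expected setting is missing.
grep -q 'user_friendly_names no' "$f" && echo "user_friendly_names: ok"
grep -q 'polling_interval 10' "$f" && echo "polling_interval: ok"
rm -f "$f"
```

On a live node you can also run `multipathd show config` to inspect the effective merged configuration after the service restarts.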
Configure Udev rules
Configure queue settings with Udev rules on all nodes. For recommended settings for Pure Storage FlashArray, refer to Applying Queue Settings with Udev.
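The linked article contains Pure's authoritative recommendations. As an illustration only (the match keys and the "none" scheduler value here are assumptions; copy the exact rules from that article), a /etc/udev/rules.d/99-pure-storage.rules file takes a form like:

```
# Illustrative sketch only -- use the rules from "Applying Queue Settings with Udev".
# Matches Pure SCSI whole-disk devices and sets the "none" I/O scheduler.
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="none"
```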
Apply Multipath and Udev configurations
Apply the Multipath and Udev configurations created in the previous sections for the changes to take effect.
- Kubernetes
- OpenShift
- Update the multipath.conf file as described in the Configure multipath.conf file section, then restart the multipathd service on all nodes:
systemctl restart multipathd.service
- Create the Udev rules as described in the Configure Udev rules section and apply them on all nodes:
udevadm control --reload-rules && udevadm trigger
Use a MachineConfig in OpenShift to apply multipath and Udev configuration files consistently across all nodes.
-
Encode the configuration files in base64 format and add them to the MachineConfig, as shown in the following example:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: <your-machine-config-name>
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:;base64,<base64-encoded-multipath-conf>
          filesystem: root
          mode: 0644
          overwrite: true
          path: /etc/multipath.conf
        - contents:
            source: data:;base64,<base64-encoded-udev_conf>
          filesystem: root
          mode: 0644
          overwrite: true
          path: /etc/udev/rules.d/99-pure-storage.rules
    systemd:
      units:
        - enabled: true
          name: iscsid.service
        - enabled: true
          name: multipathd.service
-
Apply the MachineConfig to your cluster:
oc apply -f <your-machine-config-name>.yaml
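The base64 placeholders in the MachineConfig are produced by encoding each file; Ignition's `contents.source` field expects a standard data URL, so the `data:;base64,` prefix marks the payload as base64. A sketch of the encoding step (the generated temp file stands in for your real multipath.conf; `-w0` is the GNU coreutils flag that disables line wrapping):

```shell
# Create a stand-in config file; in practice, use your real /etc/multipath.conf.
conf=$(mktemp)
printf 'defaults {\n    user_friendly_names no\n}\n' > "$conf"

# Encode without line wrapping so the data URL stays on a single line.
b64=$(base64 -w0 "$conf")
echo "source: data:;base64,${b64}"

# Round-trip check: decoding must reproduce the original file byte-for-byte.
echo "$b64" | base64 -d | cmp -s - "$conf" && echo "round-trip ok"
rm -f "$conf"
```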
Set up user access in FlashArray
To establish secure communication between Portworx CSI and FlashArray, you should create a user account and generate an API token. This token acts as an authentication key, allowing Portworx CSI to interact with FlashArray and perform storage operations on behalf of the authorized user. This section provides the steps to generate an API token, which serves as your authorization within the FlashArray environment.
- FlashArray without secure multi-tenancy
- FlashArray with secure multi-tenancy
-
Create a user:
- In your FlashArray dashboard, select Settings in the left pane.
- On the Settings page, select Access.
- In the Users section, click the vertical ellipsis in the top-right corner and select Create User:
- In the Create User window, enter your details and set the role to Storage Admin.
- Select Create to add the new user.
-
Generate an API token:
- To create a token for the user you created, select the user from the Users list, click the vertical ellipsis in the right-hand corner of the username, and select Create API Token:
- In the API Token window, leave the Expires in field blank if you want to create a token that never expires, and click Create.
- Save this information to avoid the need to recreate the token.
The following steps must be performed on the FlashArray CLI.
-
Create a realm for each customer: All volumes from the Portworx CSI installation will be placed within this realm, ensuring customer-specific data isolation.
purerealm create <customer1-realm>
Name Quota Limit
<customer1-realm> - -
-
Create a pod inside the realm: A pod in FlashArray defines a boundary where specific volumes are placed.
purepod create <customer1-realm>::<fa-pod-name>
note
Stretched FlashArray pods (pods spanning multiple FlashArrays) are not supported.
By assigning realms and pods in FlashArray, you ensure that different users interact only with the specific storage resources allocated to them.
-
Create a policy for a realm: Ensure that you have administrative privileges on FlashArray before proceeding. This policy grants users access to their respective realms with defined capabilities.
purepolicy management-access create --realm <customer1-realm> --role storage --aggregation-strategy all-permissions <realm-policy>
For basic privileges, use the following command:
purepolicy management-access create --realm <customer1-realm> --role storage --aggregation-strategy least-common-permissions <realm-policy>
-
Verify the created policy: This step ensures that the policy has been set up correctly with the right permissions.
purepolicy management-access list
Name Type Enabled Capability Aggregation Strategy Resource Name Resource Type
<realm-policy> admin-access True all all-permissions <customer1-realm> realms
This policy ensures that users linked to the specified realm can perform storage operations within their allocated realm.
-
Create a user linked to a policy: This command creates a user with the access rights defined by the policy. You must create a password that the user can use to log in to FlashArray, as shown in the output:
pureadmin create --access-policy <realm-policy> <flasharray-user>
Enter password:
Retype password:
Name Type Access Policy
<flasharray-user> local <realm-policy>
This step ensures that users are securely connected to their designated realms with appropriate access.
-
Sign in as the newly created user in the FlashArray CLI.
-
Run pureadmin create --api-token and copy the created token.
Create pure.json file
To integrate Portworx CSI with FlashArray, create a JSON configuration file (named pure.json) containing essential information about the FlashArray environment. This file should include the management endpoints and the API token you generated.
- Management endpoints: These are URLs or IP addresses that Portworx CSI uses to communicate with FlashArray through API calls. To locate these, go to Settings > Network in your FlashArray dashboard. Note the IP addresses or hostnames of your management interfaces, prefixed with vir, indicating virtual interfaces.
important
For IPv6 addresses, ensure that the IP address is enclosed in square brackets, for example: "MgmtEndPoint": "[XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX]".
- API token: The API token generated in the previous section.
- Realm (secure multi-tenancy only): Realms define tenant boundaries within a secure multi-tenancy setup. When multiple FlashArrays are attached to a cluster, the admin can specify a realm to ensure that storage volumes are isolated for each tenant. FlashArray volumes created through Portworx CSI will be placed within the specified realm.
note
Each cluster can only support one realm per array, meaning a single Portworx CSI deployment cannot use multiple realms on the same FlashArray.
Use the information above to create a JSON file. Below is a template for the configuration content, which you should populate with your specific information:
If you are configuring both FlashArray and FlashBlade, you can add FlashBlade configuration information in the same file. Refer to the JSON file for more information.
- FlashArray without secure multi-tenancy
- FlashArray with secure multi-tenancy
{
"FlashArrays": [
{
"MgmtEndPoint": "<fa-management-endpoint>",
"APIToken": "<fa-api-token>"
}
]
}
{
"FlashArrays": [
{
"MgmtEndPoint": "<first-fa-management-endpoint1>",
"APIToken": "<first-fa-api-token>",
"Realm": "<first-fa-realm>"
}
]
}
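A malformed pure.json only surfaces later, when Portworx CSI reads the secret, so it is worth validating the JSON up front. A minimal sketch using Python's built-in json.tool module (the here-doc stands in for your real file):

```shell
# Write a sample pure.json to a temp file; in practice, point f at your real file.
f=$(mktemp)
cat > "$f" <<'EOF'
{
  "FlashArrays": [
    {
      "MgmtEndPoint": "<fa-management-endpoint>",
      "APIToken": "<fa-api-token>"
    }
  ]
}
EOF

# json.tool exits non-zero on a parse error, catching stray commas and the like.
python3 -m json.tool "$f" > /dev/null && echo "pure.json is valid JSON"
rm -f "$f"
```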
(Optional) CSI topology feature
Portworx CSI supports topology-aware storage provisioning for Kubernetes applications. By specifying topology information, such as node, zone, or region, you can control where volumes are provisioned. This ensures that storage aligns with your application's requirements for availability, performance, and fault tolerance. Portworx CSI optimizes storage placement, improving efficiency and resilience in multi-zone or multi-region Kubernetes environments. For more information, see CSI topology.
To prepare your environment for using the topology-aware provisioning feature, follow these steps:
-
Edit the pure.json file created in the previous section to define the topology for each FlashArray. For more information, refer to the pure.json with CSI topology section.
-
Label your Kubernetes nodes with values that correspond to the labels defined in the pure.json file. For example:
kubectl label node <nodeName> topology.portworx.io/zone=zone-0
kubectl label node <nodeName> topology.portworx.io/region=region-0
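Putting the two steps together: the node labels are matched against per-array labels in pure.json. As an illustrative sketch only (the Labels field name and values here are assumptions based on the example labels above; confirm the exact schema in the pure.json with CSI topology reference):

```
{
  "FlashArrays": [
    {
      "MgmtEndPoint": "<fa-management-endpoint>",
      "APIToken": "<fa-api-token>",
      "Labels": {
        "topology.portworx.io/zone": "zone-0",
        "topology.portworx.io/region": "region-0"
      }
    }
  ]
}
```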
Add FlashArray configuration to a Kubernetes secret
To enable Portworx CSI to access the FlashArray configuration, add the pure.json file to a Kubernetes secret by running the following command to create a secret named px-pure-secret:
- Kubernetes
- OpenShift
kubectl create secret generic px-pure-secret --namespace <stc-namespace> --from-file=pure.json=<file path>
secret/px-pure-secret created
oc create secret generic px-pure-secret --namespace <stc-namespace> --from-file=pure.json=<file path>
secret/px-pure-secret created
- The specific name px-pure-secret is required so that Portworx CSI can correctly identify and access the Kubernetes secret upon startup. This secret securely stores the FlashArray configuration details and allows Portworx CSI to access this information within the Kubernetes environment.
- Ensure that the px-pure-secret is in the same namespace where you plan to install Portworx CSI.
(Optional) Verify the iSCSI Connection with FlashArray
If you are using the iSCSI protocol, follow the instructions below to verify the iSCSI setup:
-
Run the following command from the node to discover your iSCSI targets:
iscsiadm -m discovery -t st -p <flash-array-interface-endpoint>
10.13.xx.xx0:3260,207 iqn.2010-06.com.purestorage:flasharray.xxxxxxx
10.13.xx.xx1:3260,207 iqn.2010-06.com.purestorage:flasharray.xxxxxxx
-
Run the following command on each node to verify if each node has a unique initiator:
cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:xxxxx
-
If the initiator names are not unique, assign a new unique initiator name using the following command:
echo "InitiatorName=`/sbin/iscsi-iname`" > /etc/iscsi/initiatorname.iscsi
important
Replace the initiator names on any nodes that have duplicates with the newly generated unique names.
-
After making changes to the initiator names, restart the iSCSI service to apply the changes:
systemctl restart iscsid
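The uniqueness requirement in step 2 is easy to check once you have collected the initiator names from all nodes into one list (one name per line, for example via ssh). Since `uniq -d` prints each duplicated line once, empty output means every initiator is unique. The sample names below are made up for illustration:

```shell
# Collected initiator names, one per node; the values are made-up samples.
names=$(mktemp)
cat > "$names" <<'EOF'
iqn.1994-05.com.redhat:aaa111
iqn.1994-05.com.redhat:bbb222
iqn.1994-05.com.redhat:aaa111
EOF

# uniq -d needs sorted input; it prints one copy of every duplicated line.
dups=$(sort "$names" | uniq -d)
if [ -n "$dups" ]; then
  echo "duplicate initiators found:"
  echo "$dups"
else
  echo "all initiator names are unique"
fi
rm -f "$names"
```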