Install Portworx with Pure Storage FlashArray with PX-StoreV2
This page guides you through installing Portworx with PX-StoreV2 using a standard FlashArray as your storage provider.
Prerequisites
- Have an OpenShift cluster with FlashArray that meets the minimum requirements for Portworx, along with the following additional requirements:
  - Linux kernel version: 4.20 or newer (5.0 recommended)
  - Required packages for RHEL: device-mapper, mdadm, lvm2, device-mapper-persistent-data, augeas
note
During installation, Portworx automatically tries to pull the required packages from distribution-specific repositories. This is a mandatory requirement, and installation fails if it is not met.
- An SSD/NVMe drive with a capacity of more than 8 GB per node.
- A minimum of 64 GB system metadata device on each node where you want to deploy Portworx. If you do not provide a metadata device, one will be automatically added to the spec.
- Have a Pure Storage FlashArray with Purity version 5.3.0 or newer.
- Use the iSCSI or RoCE protocol.
- Create a Pure secret named px-pure-secret in the StorageCluster (STC) namespace before installing Portworx.
- Enable CSI for Portworx.
- Have the latest filesystem utilities/drivers.
- For Red Hat only, ensure that the second action CAPACITY_DATA_HAS_CHANGED is uncommented in the 90-scsi-ua.rules file and that you have restarted the udev service.
- Have the latest FC initiator software for your operating system (optional; required for FC connectivity).
Configure your physical environment
Before you install Portworx, ensure that your physical network is configured appropriately and that you meet the prerequisites. You must provide Portworx with your FlashArray configuration details during installation.
- Each FlashArray management IP address can be accessed by each node.
- Your cluster contains an up-and-running FlashArray with an existing dataplane connectivity layout (iSCSI, Fibre Channel).
- If you're using iSCSI, the storage node iSCSI initiators are on the same VLAN as the FlashArray iSCSI target ports.
- If you are using multiple network interface cards (NICs) to connect to an iSCSI host, then all of them must be accessible from the FlashArray management IP address.
- If you're using Fibre Channel, the storage node Fibre Channel WWNs have been correctly zoned to the FlashArray Fibre Channel WWN ports.
- You have an API token for a user on your FlashArray with at least
storage_admin
permissions. Check the documentation on your device for information on generating an API token.
(Optional) Set iSCSI interfaces on FlashArray
If you are using the iSCSI protocol, you can set its interfaces on FlashArray using the following steps:
- Run the following command to list the available iSCSI interfaces in your environment. You will use the output in the next step:
  iscsiadm -m iface
- Run the following command to specify which network interfaces on the FlashArray system are allowed to handle iSCSI traffic. Replace <interface-value> with the value you retrieved in the previous step:
  pxctl cluster options update --flasharray-iscsi-allowed-ifaces <interface-value>
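The two steps above can be sketched in shell. The sample iscsiadm -m iface output below is hypothetical (the field layout after the interface name follows the common open-iscsi form), so adapt it to your environment:

```shell
# Hypothetical sample of `iscsiadm -m iface` output; fields after the
# interface name are transport,hwaddress,ipaddress,net_ifacename,initiatorname
ifaces_output='iface0 tcp,00:c0:dd:08:63:e8,10.0.0.7,eth1,<empty>
iface1 tcp,00:c0:dd:08:63:ea,10.0.0.9,eth2,<empty>'

# Extract the interface names and join them with commas for the pxctl option
iface_list=$(echo "$ifaces_output" | awk '{print $1}' | paste -sd, -)
echo "$iface_list"

# On a Portworx node you would then run:
# pxctl cluster options update --flasharray-iscsi-allowed-ifaces "$iface_list"
```

The pxctl call is left commented because it only works on a node where Portworx is running.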
Configure your software environment
Configuring your software environment involves preparing both the operating system and the underlying network and storage configurations.
Follow the instructions below to set up the CSI snapshot feature, disable secure boot mode, and configure the multipath.conf file appropriately. These configurations ensure that Portworx can interact correctly with hardware components, such as storage devices (using protocols such as iSCSI or Fibre Channel), and function correctly within the network infrastructure.
Set up your environment to use CSI snapshot feature
To use the CSI snapshot feature, install the volume snapshot CRDs and a snapshot controller in your cluster. You can install the snapshot controller by adding the following lines to your StorageCluster:
csi:
  enabled: true
  installSnapshotController: true
Create a monitoring ConfigMap
Newer OpenShift versions do not support the Portworx Prometheus deployment. As a result, you must enable monitoring for user-defined projects before installing the Portworx Operator. Use the instructions in this section to configure the OpenShift Prometheus deployment to monitor Portworx metrics.
To integrate OpenShift's monitoring and alerting system with Portworx, create a cluster-monitoring-config ConfigMap in the openshift-monitoring namespace:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
The enableUserWorkload parameter enables monitoring for user-defined projects in the OpenShift cluster. This creates a prometheus-operated service in the openshift-user-workload-monitoring namespace.
Disable secure boot mode
Portworx requires the secure boot mode to be disabled to ensure it can operate without restrictions. Here's how to disable secure boot mode across different platforms:
For RHEL/CentOS, perform the following steps to check and disable secure boot mode:
- Check the status of secure boot mode:
  /usr/bin/mokutil --sb-state
- If secure boot is enabled, disable it:
  /usr/bin/mokutil --disable-validation
- Apply the change by rebooting your system:
  reboot
For VMware, navigate to the Edit Settings window of the virtual machine on which you plan to deploy Portworx. Ensure that the checkbox for the Secure Boot option under VM Options is not selected.
Verify the status of the secure boot mode
Run the following command to ensure that the secure boot mode is off:
/usr/bin/mokutil --sb-state
SecureBoot disabled
Configure the multipath.conf file
- For defaults:
  - FlashArray and Portworx do not support user-friendly names. Disable this option by setting user_friendly_names to no before installing Portworx on your cluster. This ensures Portworx and FlashArray use consistent device naming conventions.
  - Add polling_interval 10, as per the RHEL recommended settings. This defines how often the system checks for path status updates.
- To prevent the multipathd service from interfering with Portworx volume operations, set the pxd device denylist rule.
Your multipath.conf file should resemble the following structure:
For RHEL/CentOS:
defaults {
  user_friendly_names no
  enable_foreign "^$"
  polling_interval 10
}
devices {
  device {
    vendor "NVME"
    product "Pure Storage FlashArray"
    path_selector "queue-length 0"
    path_grouping_policy group_by_prio
    prio ana
    failback immediate
    fast_io_fail_tmo 10
    user_friendly_names no
    no_path_retry 0
    features 0
    dev_loss_tmo 60
    find_multipaths yes
  }
  device {
    vendor "PURE"
    product "FlashArray"
    path_selector "service-time 0"
    hardware_handler "1 alua"
    path_grouping_policy group_by_prio
    prio alua
    failback immediate
    path_checker tur
    fast_io_fail_tmo 10
    user_friendly_names no
    no_path_retry 0
    features 0
    dev_loss_tmo 600
    find_multipaths yes
  }
}
blacklist_exceptions {
  property "(SCSI_IDENT_|ID_WWN)"
}
blacklist {
  devnode "^pxd[0-9]*"
  devnode "^pxd*"
  device {
    vendor "VMware"
    product "Virtual disk"
  }
}
For Ubuntu:
defaults {
  user_friendly_names no
  find_multipaths yes
}
devices {
  device {
    vendor "NVME"
    product "Pure Storage FlashArray"
    path_selector "queue-length 0"
    path_grouping_policy group_by_prio
    prio ana
    failback immediate
    fast_io_fail_tmo 10
    user_friendly_names no
    no_path_retry 0
    features 0
    dev_loss_tmo 60
    find_multipaths yes
  }
  device {
    vendor "PURE"
    product "FlashArray"
    path_selector "service-time 0"
    hardware_handler "1 alua"
    path_grouping_policy group_by_prio
    prio alua
    failback immediate
    path_checker tur
    fast_io_fail_tmo 10
    user_friendly_names no
    no_path_retry 0
    features 0
    dev_loss_tmo 600
    find_multipaths yes
  }
}
blacklist {
  devnode "^pxd[0-9]*"
  devnode "^pxd*"
  device {
    vendor "VMware"
    product "Virtual disk"
  }
}
Apply Multipath
Use a MachineConfig in OpenShift to apply multipath configuration files consistently across all nodes.
- Convert the configuration files to base64 format and add them to the MachineConfig, as shown in the following example:
  apiVersion: machineconfiguration.openshift.io/v1
  kind: MachineConfig
  metadata:
    creationTimestamp:
    labels:
      machineconfiguration.openshift.io/role: worker
    name: <your-machine-config-name>
  spec:
    config:
      ignition:
        version: 3.2.0
      storage:
        files:
          - contents:
              source: data:,<base64-encoded-multipath-conf>
            filesystem: root
            mode: 0644
            overwrite: true
            path: /etc/multipath.conf
      systemd:
        units:
          - enabled: true
            name: iscsid.service
          - enabled: true
            name: multipathd.service
- Apply the MachineConfig to your cluster:
  oc apply -f <your-machine-config-name>.yaml
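The source field of the MachineConfig expects the file contents encoded as a data URL. A minimal sketch of the encoding step, using a throwaway defaults stanza as a stand-in for your real /etc/multipath.conf (the data:text/plain;charset=utf-8;base64, prefix shown here is the common Ignition form; verify it against your Ignition spec version):

```shell
# Stand-in for your real /etc/multipath.conf, used here only to show the encoding
cat > /tmp/multipath.conf <<'EOF'
defaults {
  user_friendly_names no
  find_multipaths yes
}
EOF

# Encode without line wraps so the data URL stays on one line
encoded=$(base64 -w0 /tmp/multipath.conf)
echo "data:text/plain;charset=utf-8;base64,${encoded}"

# Sanity check: decoding must reproduce the original file byte for byte
echo "$encoded" | base64 -d | diff -q - /tmp/multipath.conf && echo "round-trip OK"
```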
Set up user access in FlashArray
Follow the steps in this section to set up user access in FlashArray.
Generate an API token
To establish secure communication between Portworx and FlashArray, an API token is required. The token serves as a key for Portworx to authenticate with FlashArray and perform storage operations on behalf of authorized users. This section provides the steps to generate such a token, which encapsulates your authorization within the FlashArray environment.
Create a new user
- From your FlashArray dashboard, click Settings in the left pane. On the Settings page, click Access, then click the vertical ellipsis at the right corner of the Users section and select the Create User option.
- In the Create User window, provide your information, set your role as Storage Admin, and click Create to add yourself as a user.
Generate an API token
- To create a token for the user you created, select the user from the Users list, click the vertical ellipsis next to the username, and select Create API Token.
- In the API Token window, leave the Expires in field blank if you want to create a token that never expires, and click Create.
- Save this information to avoid having to recreate the token.
Create a JSON configuration file
For Portworx to integrate with FlashArray, it requires a JSON configuration file containing essential information about the FlashArray environment. This file, typically named pure.json, includes the management endpoints and the newly generated API token.
- Management endpoints: The management endpoints are URLs or IP addresses that Portworx uses to send API calls to FlashArray. Find these by going to Settings and selecting Network within your FlashArray dashboard. Note the IP addresses or hostnames of your management interfaces, usually identified by a vir prefix, indicating virtual interfaces.
- API token: Generated in the previous section.
Use the above information to create the JSON file. Below is a template for the configuration content, which you should populate with your specific information:
{
  "FlashArrays": [
    {
      "MgmtEndPoint": "<fa-management-endpoint>",
      "APIToken": "<fa-api-token>"
    }
  ]
}
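Before creating the secret from this file, it is worth validating the JSON syntax locally. A small sketch; the placeholder values are from the template above and must be replaced with your real endpoint and token:

```shell
# Write pure.json with the template's placeholder values (replace before use)
cat > pure.json <<'EOF'
{
  "FlashArrays": [
    {
      "MgmtEndPoint": "<fa-management-endpoint>",
      "APIToken": "<fa-api-token>"
    }
  ]
}
EOF

# A malformed file would make Portworx reject the configuration later,
# so catch syntax errors before running `oc create secret`
python3 -m json.tool pure.json > /dev/null && echo "pure.json is valid JSON"
```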
You can add FlashBlade configuration information to this file if you're configuring both FlashArray and FlashBlade together. Refer to the JSON file reference for more information.
Create a Kubernetes Secret
The specific name px-pure-secret
is required so that Portworx can correctly identify and access the Kubernetes secret upon startup. This secret securely stores the FlashArray configuration details and allows Portworx to access this information within the Kubernetes environment.
Enter the following oc create
command to create a Kubernetes secret called px-pure-secret
:
oc create secret generic px-pure-secret --namespace <stc-namespace> --from-file=pure.json
secret/px-pure-secret created
Verify the iSCSI Connection with FlashArray
The instructions in this section use the iSCSI network.
- Run the following command to discover your iSCSI targets. Replace <flash-array-interface-endpoint> with your FlashArray's interface endpoint, as shown in the following example:
  iscsiadm -m discovery -t st -p <flash-array-interface-endpoint>
  10.13.xx.xx0:3260,207 iqn.2010-06.com.purestorage:flasharray.xxxxxxx
  10.13.xx.xx1:3260,207 iqn.2010-06.com.purestorage:flasharray.xxxxxxx
- Verify that each node has a unique initiator. Run the following command on each node:
cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:xxxxx
- If the initiator names are not unique, assign a new unique initiator name by executing the following command:
echo "InitiatorName=`/sbin/iscsi-iname`" > /etc/iscsi/initiatorname.iscsi
Replace the initiator names on any nodes that have duplicates with the newly generated unique names.
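A quick way to spot duplicates is to collect the InitiatorName line from every node and look for repeats. The three sample names below are hypothetical:

```shell
# Hypothetical initiator names gathered from three nodes, one per line
# (on each node: cat /etc/iscsi/initiatorname.iscsi)
initiators='InitiatorName=iqn.1994-05.com.redhat:aaaa1111
InitiatorName=iqn.1994-05.com.redhat:bbbb2222
InitiatorName=iqn.1994-05.com.redhat:aaaa1111'

# uniq -d prints only repeated lines; any output means a node needs
# a fresh name generated with /sbin/iscsi-iname
dupes=$(echo "$initiators" | sort | uniq -d)
if [ -n "$dupes" ]; then
  echo "duplicate initiator(s): $dupes"
fi
```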
Deploy Portworx
Follow the instructions in this section to deploy Portworx.
Generate specs
To install Portworx, you must first generate manifests that you will deploy in your cluster:
- Navigate to Portworx Central and log in, or create an account.
- In the Portworx section, select Get Started.
- On the Product Line page, choose any option depending on which license you intend to use, then click Continue to start the spec generator.
- In the Generate Spec page:
- For Platform, select Pure FlashArray.
- Select OpenShift 4+ for Distribution Name.
- Click Customize at the bottom of the Summary section.
- Navigate to the Storage window by clicking Next. Select the PX-StoreV2 checkbox in the Configure storage devices section.
- Navigate to the Customize window and click Finish to generate the specs.
By default, iSCSI is set as your protocol for data transfer. To change this option, click Customize and navigate to the Storage window. Select a different option from the Select type of storage area network dropdown.
- (Optional) If you are using multiple NICs for the iSCSI host, add the following environment variable to your StorageCluster spec. Replace <nic-interface-names> with a comma-separated list of NIC names, such as "eth1,eth2":
  env:
  - name: flasharray-iscsi-allowed-ifaces
    value: "<nic-interface-names>"
  If you have multiple NICs on your virtual machine, FlashArray cannot distinguish the NICs that carry iSCSI traffic from those that do not. You must provide this list; otherwise, Portworx may use only one of the provided interfaces.
Apply specs
Apply the Operator and StorageCluster specs you generated in the section above using the oc apply command:
- Deploy the Operator:
  oc apply -f 'https://install.portworx.com/<version-number>?comp=pxoperator&kbver=1.25.0&ns=portworx'
  serviceaccount/portworx-operator created
  podsecuritypolicy.policy/px-operator created
  clusterrole.rbac.authorization.k8s.io/portworx-operator created
  clusterrolebinding.rbac.authorization.k8s.io/portworx-operator created
  deployment.apps/portworx-operator created
- Deploy the StorageCluster:
  oc apply -f 'https://install.portworx.com/<version-number>?operator=true&mc=false&kbver=1.25.0&ns=portworx&b=true&iop=6&c=px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-5db83030471e&stork=true&csi=true&mon=true&tel=true&st=k8s&promop=true'
  storagecluster.core.libopenstorage.org/px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-5db83030471e created
Once deployed, Portworx detects the FlashArray secret at startup and uses the specified FlashArray as its storage provider.
Pre-flight check
After you apply the specs, the Portworx Operator performs a pre-flight check across the cluster, which must pass on each node. This check determines whether each node in your cluster is compatible with the PX-StoreV2 datastore. If each node in the cluster meets the following hardware and software requirements, PX-StoreV2 is automatically set as your default datastore during Portworx installation:
- Hardware:
  - CPU: A minimum of 8 CPU cores per node.
  - Drive: An SSD/NVMe drive with a capacity of more than 8 GB per node.
  - Metadata device: A minimum of 64 GB system metadata device on each node.
- Software:
  - Linux kernel version: 4.20 or newer, with the following packages:
    - RHEL: device-mapper, mdadm, lvm2, device-mapper-persistent-data, augeas
Once the check is successful, Portworx will be deployed on your cluster.
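The kernel-version part of these requirements can be checked on a node ahead of installation. A minimal sketch; the package check is left commented because it is RHEL-specific:

```shell
# Compare the running kernel against the 4.20 minimum required by PX-StoreV2
kernel=$(uname -r)
major=$(echo "$kernel" | cut -d. -f1)
minor=$(echo "$kernel" | cut -d. -f2)
if [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 20 ]; }; then
  echo "kernel $kernel meets the 4.20 minimum"
else
  echo "kernel $kernel is too old for PX-StoreV2"
fi

# On RHEL, also confirm the required packages are installed:
# rpm -q device-mapper mdadm lvm2 device-mapper-persistent-data augeas
```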