Install Portworx with Pure Storage FlashArray
Prerequisites
- Have an on-premises Kubernetes cluster with FlashArray that meets the minimum requirements for Portworx.
- Have a Pure Storage FlashArray with Purity version 5.3.0 or newer.
- Use the FC, iSCSI, or NVMe/RoCE protocol.
- Create a Pure secret named `px-pure-secret` in the StorageCluster (STC) namespace before installing Portworx.
- Enable CSI for Portworx.
- Install the latest Linux multipath software package for your operating system that includes these fixes. This package must also include `kpartx`.
- Have the latest filesystem utilities/drivers.
- For Red Hat only, ensure that the second action `CAPACITY_DATA_HAS_CHANGED` is uncommented in the `90-scsi-ua.rules` file and that you have restarted the `udev` service.
- Have the latest FC initiator software for your operating system (optional; required for FC connectivity).
Configure your physical environment
Before you install Portworx, ensure that your physical network is configured appropriately and that you meet the prerequisites. You must provide Portworx with your FlashArray configuration details during installation.
- Each FlashArray management IP address can be accessed by each node.
- Your cluster contains an up-and-running FlashArray with an existing dataplane connectivity layout (iSCSI, Fibre Channel).
- If you're using iSCSI, the storage node iSCSI initiators are on the same VLAN as the FlashArray iSCSI target ports.
- If you are using multiple network interface cards (NICs) to connect to an iSCSI host, then all of them must be accessible from the FlashArray management IP address.
- If you're using Fibre Channel, the storage node Fibre Channel WWNs have been correctly zoned to the FlashArray Fibre Channel WWN ports.
- You have an API token for a user on your FlashArray with at least `storage_admin` permissions. Check the documentation on your device for information on generating an API token.
(Optional) Set iSCSI interfaces on FlashArray
If you are using the iSCSI protocol, you can set its interfaces on FlashArray using the following steps:
- Run the following command to list the available iSCSI interfaces in your environment; you will use this output in the next step:

  iscsiadm -m iface
- Run the following command to specify which network interfaces on the FlashArray system are allowed to handle iSCSI traffic. Replace `<interface-value>` with the value you retrieved in the previous step:

  pxctl cluster options update --flasharray-iscsi-allowed-ifaces <interface-value>
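As an illustration of how the `<interface-value>` argument can be assembled, the following sketch parses `iscsiadm -m iface` output into the comma-separated list the `pxctl` option expects. It assumes records of the form `<iface_name> <transport>,<hwaddr>,<ip>,<netdev>,<iqn>` with one interface per line; the sample interface names below are made up.

```python
# Hypothetical `iscsiadm -m iface` output; your interface names will differ.
sample_output = """\
iface0 tcp,00:c0:dd:08:63:e8,10.13.0.7,eth1,iqn.2005-03.org.open-iscsi:host1
iface1 tcp,00:c0:dd:08:63:ea,10.13.0.8,eth2,iqn.2005-03.org.open-iscsi:host1
"""

def allowed_ifaces(iscsiadm_output: str) -> str:
    # The interface name is the first whitespace-separated token on each line.
    names = [line.split()[0] for line in iscsiadm_output.splitlines() if line.strip()]
    return ",".join(names)

print(allowed_ifaces(sample_output))  # iface0,iface1
```

The resulting string can be passed directly as `<interface-value>` in the command above.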
Configure your software environment
Configuring your software environment involves preparing both the operating system and the underlying network and storage configurations.
Follow the instructions below to set up the CSI snapshot feature, disable secure boot mode, and configure the `multipath.conf` file appropriately. These configurations ensure that the system's software environment is properly set up so that Portworx can interact correctly with hardware components, such as storage devices (using protocols like iSCSI or Fibre Channel), and function correctly within the network infrastructure.
Set up your environment to use CSI snapshot feature
To use the CSI snapshot feature, install the VolumeSnapshot CRDs and the external snapshot controller.

You can also install the snapshot controller by adding the following lines to your StorageCluster:

csi:
  enabled: true
  installSnapshotController: true
Create a monitoring ConfigMap
Newer OpenShift versions do not support the Portworx Prometheus deployment. As a result, you must enable monitoring for user-defined projects before installing the Portworx Operator. Use the instructions in this section to configure the OpenShift Prometheus deployment to monitor Portworx metrics.
To integrate OpenShift’s monitoring and alerting system with Portworx, create a `cluster-monitoring-config` ConfigMap in the `openshift-monitoring` namespace:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
The `enableUserWorkload` parameter enables monitoring for user-defined projects in the OpenShift cluster. This creates a `prometheus-operated` service in the `openshift-user-workload-monitoring` namespace.
Disable secure boot mode
Portworx requires the secure boot mode to be disabled to ensure it can operate without restrictions. Here's how to disable secure boot mode across different platforms:
- RHEL/CentOS
- VMware
For RHEL/CentOS, perform the following steps to check and disable secure boot mode:
- Check the status of secure boot mode:

  /usr/bin/mokutil --sb-state

- If secure boot is enabled, disable it:

  /usr/bin/mokutil --disable-validation

- Apply the change by rebooting your system:

  reboot
For VMware, navigate to the Edit Setting window of the virtual machine on which you are planning to deploy Portworx. Ensure that the checkbox against the Secure Boot option under VM Options is not selected, as shown in the following screenshot:
Verify the status of the secure boot mode
Run the following command to ensure that the secure boot mode is off:
/usr/bin/mokutil --sb-state
SecureBoot disabled
Configure the multipath.conf file
- For `defaults`:
  - FlashArray and Portworx do not support user-friendly names. Disable this option by setting `user_friendly_names` to `no` before installing Portworx on your cluster. This ensures Portworx and FlashArray use consistent device naming conventions.
  - Add `polling_interval 10` as per the RHEL recommended settings. This defines how often the system checks for path status updates.
- To prevent any interference from the `multipathd` service on Portworx volume operations, set the pxd device denylist rule.
Your `multipath.conf` file should resemble the following structure:
- RHEL/CentOS
- Ubuntu
defaults {
    user_friendly_names no
    enable_foreign "^$"
    polling_interval 10
}

devices {
    device {
        vendor "NVME"
        product "Pure Storage FlashArray"
        path_selector "queue-length 0"
        path_grouping_policy group_by_prio
        prio ana
        failback immediate
        fast_io_fail_tmo 10
        user_friendly_names no
        no_path_retry 0
        features 0
        dev_loss_tmo 60
        find_multipaths yes
    }
    device {
        vendor "PURE"
        product "FlashArray"
        path_selector "service-time 0"
        hardware_handler "1 alua"
        path_grouping_policy group_by_prio
        prio alua
        failback immediate
        path_checker tur
        fast_io_fail_tmo 10
        user_friendly_names no
        no_path_retry 0
        features 0
        dev_loss_tmo 600
        find_multipaths yes
    }
}

blacklist_exceptions {
    property "(SCSI_IDENT_|ID_WWN)"
}

blacklist {
    devnode "^pxd[0-9]*"
    devnode "^pxd*"
    device {
        vendor "VMware"
        product "Virtual disk"
    }
}
defaults {
    user_friendly_names no
    find_multipaths yes
}

devices {
    device {
        vendor "NVME"
        product "Pure Storage FlashArray"
        path_selector "queue-length 0"
        path_grouping_policy group_by_prio
        prio ana
        failback immediate
        fast_io_fail_tmo 10
        user_friendly_names no
        no_path_retry 0
        features 0
        dev_loss_tmo 60
        find_multipaths yes
    }
    device {
        vendor "PURE"
        product "FlashArray"
        path_selector "service-time 0"
        hardware_handler "1 alua"
        path_grouping_policy group_by_prio
        prio alua
        failback immediate
        path_checker tur
        fast_io_fail_tmo 10
        user_friendly_names no
        no_path_retry 0
        features 0
        dev_loss_tmo 600
        find_multipaths yes
    }
}

blacklist {
    devnode "^pxd[0-9]*"
    devnode "^pxd*"
    device {
        vendor "VMware"
        product "Virtual disk"
    }
}
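To illustrate what the denylist rules above accomplish, the following sketch applies the two `devnode` regular expressions to a few example device names. Python is used purely for illustration here; `multipathd` evaluates these patterns itself, treating a device as blacklisted when any pattern matches.

```python
import re

# The devnode patterns from the blacklist section above.
patterns = [re.compile(p) for p in (r"^pxd[0-9]*", r"^pxd*")]

def is_denylisted(devnode: str) -> bool:
    # multipathd skips a device if any blacklist pattern matches its name.
    return any(p.search(devnode) for p in patterns)

for dev in ("pxd0", "pxd123", "sda", "dm-3"):
    print(dev, is_denylisted(dev))
```

Portworx `pxd` virtual devices match and are left alone by `multipathd`, while regular SCSI and device-mapper nodes do not.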
Apply Multipath and Udev configs
Use a MachineConfig in OpenShift to apply multipath and udev configuration files consistently across all nodes.
- Convert the configuration files to base64 format and add them to the MachineConfig, as shown in the following example:

  apiVersion: machineconfiguration.openshift.io/v1
  kind: MachineConfig
  metadata:
    creationTimestamp:
    labels:
      machineconfiguration.openshift.io/role: worker
    name: <your-machine-config-name>
  spec:
    config:
      ignition:
        version: 3.2.0
      storage:
        files:
          - contents:
              source: data:,<base64-encoded-multipath-conf>
            filesystem: root
            mode: 0644
            overwrite: true
            path: /etc/multipath.conf
          - contents:
              source: data:,<base64-encoded-udev_conf>
            filesystem: root
            mode: 0644
            overwrite: true
            path: /etc/udev/rules.d/99-pure-storage.rules
      systemd:
        units:
          - enabled: true
            name: iscsid.service
          - enabled: true
            name: multipathd.service

- Apply the MachineConfig to your cluster:

  oc apply -f <your-machine-config-name>.yaml
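The `<base64-encoded-multipath-conf>` and `<base64-encoded-udev_conf>` placeholders are produced by base64-encoding the file contents. A minimal sketch, where the sample content is a stand-in for your real `/etc/multipath.conf`:

```python
import base64

# Stand-in for your real /etc/multipath.conf contents.
multipath_conf = "defaults {\n    user_friendly_names no\n}\n"

# Base64-encode the contents for embedding in the MachineConfig source field.
encoded = base64.b64encode(multipath_conf.encode("utf-8")).decode("ascii")
print(encoded)
```

Equivalently, `base64 -w0 /etc/multipath.conf` on the command line produces the same string.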
Set up user access in FlashArray
Follow the steps in this section to set up user access in FlashArray.
Generate an API token
To establish secure communication between Portworx and FlashArray, an API token is required. The token serves as a key for Portworx to authenticate with FlashArray and perform storage operations on behalf of authorized users. This section provides the steps to generate such a token, which encapsulates your authorization within the FlashArray environment.
Create a new user
- From your FlashArray dashboard, click Settings in the left pane. On the Settings page, click Access. Click the vertical ellipsis at the right corner of the Users section to select the Create User option, as shown in the following screenshot:
- In the Create User window, provide your information, set your role as Storage Admin, and click Create to add yourself as a user.
Generate an API token
- To create a token for the user you created, select the user from the Users list, click the vertical ellipsis in the right-hand corner of the username, and select Create API Token:
- In the API Token window, leave the Expires in field blank if you want to create a token that never expires, and click Create.
- Save this information to avoid the need to recreate the token.
Create a JSON configuration file
For Portworx to integrate with FlashArray, it requires a JSON configuration file containing essential information about the FlashArray environment. This file, typically named `pure.json`, includes the management endpoints and the newly generated API token.
- Management endpoints: The management endpoints are URLs or IP addresses that Portworx uses to send API calls to FlashArray. Find them by going to Settings and selecting Network within your FlashArray dashboard. Note the IP addresses or hostnames of your management interfaces, usually identified by a `vir` prefix indicating virtual interfaces:
- API token: Generated in the previous section.
Use the above information to create the JSON file. Below is a template for the configuration content, which you should populate with your specific information:
{
  "FlashArrays": [
    {
      "MgmtEndPoint": "<fa-management-endpoint>",
      "APIToken": "<fa-api-token>"
    }
  ]
}
You can add FlashBlade configuration information to this file if you're configuring both FlashArray and FlashBlade together. Refer to the JSON file reference for more information.
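As a sketch, the `pure.json` file can also be generated programmatically. The placeholder strings below must be replaced with your actual management endpoint and the API token generated in the previous section:

```python
import json

# Placeholder values: substitute your FlashArray management endpoint and
# the API token you generated earlier.
config = {
    "FlashArrays": [
        {
            "MgmtEndPoint": "<fa-management-endpoint>",
            "APIToken": "<fa-api-token>",
        }
    ]
}

# Write the file that the Kubernetes secret will be created from.
with open("pure.json", "w") as f:
    json.dump(config, f, indent=2)
```

The resulting file is the one passed to `--from-file=pure.json` when creating the secret in the next section.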
Create a Kubernetes Secret
The specific name `px-pure-secret` is required so that Portworx can correctly identify and access the Kubernetes secret upon startup. This secret securely stores the FlashArray configuration details and allows Portworx to access this information within the Kubernetes environment.
Enter the following `oc create` command to create a Kubernetes secret called `px-pure-secret`:
oc create secret generic px-pure-secret --namespace <stc-namespace> --from-file=pure.json
secret/px-pure-secret created
Verify the iSCSI Connection with FlashArray
The instructions in this section use the iSCSI network.
- Run the following command to discover your iSCSI targets. Replace `<flash-array-interface-endpoint>` with your FlashArray's interface, as shown in the following screenshot:

  iscsiadm -m discovery -t st -p <flash-array-interface-endpoint>
10.13.xx.xx0:3260,207 iqn.2010-06.com.purestorage:flasharray.xxxxxxx
10.13.xx.xx1:3260,207 iqn.2010-06.com.purestorage:flasharray.xxxxxxx
- Verify that each node has a unique initiator. Run the following command on each node:
cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:xxxxx
- If the initiator names are not unique, assign a new unique initiator name by executing the following command:
echo "InitiatorName=`/sbin/iscsi-iname`" > /etc/iscsi/initiatorname.iscsi
Replace the initiator names on any nodes that have duplicates with the newly generated unique names.
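The uniqueness check in the steps above can be sketched as follows; the node names and IQN values are made up for illustration, standing in for the `InitiatorName` values you collect from each node:

```python
from collections import Counter

# Hypothetical InitiatorName values gathered from each node.
initiators = {
    "node1": "iqn.1994-05.com.redhat:aaaa",
    "node2": "iqn.1994-05.com.redhat:bbbb",
    "node3": "iqn.1994-05.com.redhat:aaaa",  # duplicate of node1
}

# Count occurrences of each IQN; any IQN seen more than once is a conflict.
counts = Counter(initiators.values())
dupes = sorted(n for n, iqn in initiators.items() if counts[iqn] > 1)
print(dupes)  # ['node1', 'node3']
```

Any node listed in the output needs a fresh initiator name generated with `iscsi-iname` as described above.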
Deploy Portworx
Depending on how you want to install Portworx, select the appropriate tab:
- Spec Gen
- Helm
Generate specs
To install Portworx with Kubernetes, you must first generate Kubernetes manifests that you will deploy in your cluster:
- Navigate to Portworx Central and log in, or create an account.
- In the Portworx section, select Get Started.
- On the Product Line page, choose any option depending on which license you intend to use, then click Continue to start the spec generator.
- On the Generate Spec page:
  - For Platform, select Pure FlashArray.
  - Select None for Distribution Name, then click Save and Download to generate the specs.
Note: By default, iSCSI is set as your protocol for data transfer. To change this option, click Customize and navigate to the Storage window. Select a different option from the Select type of storage area network dropdown.
- (Optional) If you are using multiple NICs for the iSCSI host, add the following environment variable to your StorageCluster spec. Replace `<nic-interface-names>` with comma-separated NIC names, such as `"eth1,eth2"`:

  env:
  - name: flasharray-iscsi-allowed-ifaces
    value: "<nic-interface-names>"

  If your virtual machine has multiple NICs, FlashArray cannot distinguish the NICs that carry iSCSI traffic from those that do not. You must provide this list; otherwise, Portworx may use only one of the provided interfaces.
Apply specs
Apply the Operator and StorageCluster specs you generated in the section above using the `oc apply` command:
- Deploy the Operator:

  oc apply -f 'https://install.portworx.com/<version-number>?comp=pxoperator&kbver=1.25.0&ns=portworx'

  serviceaccount/portworx-operator created
  podsecuritypolicy.policy/px-operator created
  clusterrole.rbac.authorization.k8s.io/portworx-operator created
  clusterrolebinding.rbac.authorization.k8s.io/portworx-operator created
  deployment.apps/portworx-operator created

- Deploy the StorageCluster:

  oc apply -f 'https://install.portworx.com/<version-number>?operator=true&mc=false&kbver=1.25.0&ns=portworx&b=true&iop=6&c=px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-5db83030471e&stork=true&csi=true&mon=true&tel=true&st=k8s&promop=true'

  storagecluster.core.libopenstorage.org/px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-5db83030471e created
Once deployed, Portworx detects that the FlashArray secret is present when it starts up and can use the specified FlashArray as a cloud storage provider.
Note that the following section is only applicable if you are using Direct Access volumes, and not cloud drives.
Install Portworx
For this example, we will deploy Portworx in the `portworx` namespace. If you want to install it in a different namespace, use the `-n <px-namespace>` flag.
- To install Portworx, add the `portworx/helm` repository to your local Helm repository:

  helm repo add portworx https://raw.githubusercontent.com/portworx/helm/master/stable/

  "portworx" has been added to your repositories
- Verify that the repository has been successfully added:

  helm repo list

  NAME        URL
  portworx    https://raw.githubusercontent.com/portworx/helm/master/stable/
- Create a `px_install_values.yaml` file and add the following parameters:

  openshiftInstall: true
  drives: size=150
  envs:
    - name: PURE_FLASHARRAY_SAN_TYPE
      value: ISCSI
- In many cases, you may want to customize Portworx configurations, such as enabling monitoring or specifying particular storage devices. You can pass the custom configuration through the `px_install_values.yaml` file.

  Note:
  - You can refer to the Portworx Helm chart parameters for a list of configurable parameters, and to the values.yaml file for a configuration file template.
  - The default clusterName is `mycluster`. However, it's recommended to change it to a unique identifier to avoid conflicts in multi-cluster environments.
- Install Portworx using the following command:

  helm install <px-release> portworx/portworx -n <px-namespace> -f px_install_values.yaml --debug

  Note: To install a specific version of the Helm chart, use the `--version` flag, for example: `helm install <px-release> portworx/portworx --version <helm-chart-version>`.
- Check the status of your Portworx installation:

  helm status <px-release> -n portworx
NAME: px-release
LAST DEPLOYED: Thu Sep 26 05:53:17 2024
NAMESPACE: portworx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Your Release is named "px-release"
Portworx Pods should be running on each node in your cluster.
Portworx would create a unified pool of the disks attached to your Kubernetes nodes.
No further action should be required and you are ready to consume Portworx Volumes as part of your application data requirements.
Update Portworx configuration
If you need to update the Portworx configuration, modify the parameters in the `px_install_values.yaml` file specified during the Helm installation. This allows you to change the values of configuration parameters.
- Create or edit the `px_install_values.yaml` file to update the desired parameters:

  vim px_install_values.yaml

  monitoring:
    telemetry: false
    grafana: true
- Apply the changes using the following command:

  helm upgrade <px-release> portworx/portworx -n portworx -f px_install_values.yaml
Release "px-release" has been upgraded. Happy Helming!
NAME: px-release
LAST DEPLOYED: Thu Sep 26 06:42:20 2024
NAMESPACE: portworx
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
Your Release is named "px-release"
Portworx Pods should be running on each node in your cluster.
Portworx would create a unified pool of the disks attached to your Kubernetes nodes.
No further action should be required and you are ready to consume Portworx Volumes as part of your application data requirements.

- Verify that the new values have taken effect:

  helm get values <px-release> -n portworx
You should see all the custom configurations passed using the `px_install_values.yaml` file.
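To confirm that only the intended keys changed, you can compare the values returned by `helm get values` before and after the upgrade. A small sketch of such a comparison; the two dictionaries mirror the example monitoring settings edited above:

```python
# Values as they might appear before and after the helm upgrade.
before = {"monitoring": {"telemetry": True, "grafana": False}}
after = {"monitoring": {"telemetry": False, "grafana": True}}

def diff(a: dict, b: dict, path: str = "") -> list:
    """Return (dotted-key, old, new) tuples for every changed leaf value."""
    changes = []
    for key in sorted(set(a) | set(b)):
        p = f"{path}.{key}" if path else key
        va, vb = a.get(key), b.get(key)
        if isinstance(va, dict) and isinstance(vb, dict):
            changes += diff(va, vb, p)  # recurse into nested sections
        elif va != vb:
            changes.append((p, va, vb))
    return changes

print(diff(before, after))
```

Here the output lists only the two monitoring flags that were changed, matching what the upgrade was supposed to do.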