
Configure Pure Storage FlashBlade as a Direct Access filesystem

On-premises users who want to use Pure Storage FlashBlade with Portworx on Kubernetes can attach FlashBlade as a Direct Access filesystem. Used in this way, Portworx directly provisions FlashBlade NFS filesystems, maps them to a user PVC, and mounts them to pods. Once mounted, Portworx writes data directly onto FlashBlade. As a result, this mounting method doesn't use storage pools.

FlashBlade Direct Access filesystems support the following:

  • Basic filesystem operations: create, mount, expand, unmount, delete
  • NFS export rules: Control which nodes can access an NFS filesystem
  • Mount options: Configure connection and protocol information
  • NFS v3 and v4.1
Note:
  • FlashBlade Direct Access filesystems do not support subpaths.
  • Autopilot does not support FlashBlade volumes.

Mount options

You specify mount options through the CSI mountOptions flag in the StorageClass spec. If you do not specify any options, Portworx uses the client-side mount defaults rather than applying its own.

Mount options depend on the underlying host operating system and Purity//FB version. Refer to the FlashBlade documentation for more information on the specific mount options available to you.
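For example, a StorageClass can pass the NFS protocol version and transport through mountOptions. The following is an illustrative sketch (the StorageClass name is a placeholder, and the right options depend on your host OS and Purity//FB version):

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: portworx-fb-mount-options
    provisioner: pxd.portworx.com
    parameters:
      backend: "pure_file"
    mountOptions:
      - nfsvers=4.1
      - tcp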

NFS export rules

NFS export rules define access rights and privileges for a filesystem exported from FlashBlade to an NFS client.
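For example, an export rule string can grant read-write access to one subnet and read-only access to every other client. The values below are illustrative; refer to the FlashBlade documentation for the full rule syntax:

    pure_export_rules: "10.0.0.0/24(rw) *(ro)"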

Differences between FlashBlade Direct Access filesystems and proxy volumes

Direct Access dynamically creates filesystems on FlashBlade that are managed by Portworx on demand, while proxy volumes are created by users and then used by Portworx as required.

The following existing Portworx parameters don't apply to Pure Direct Access filesystems:

  • shared
  • sharedv4
  • secure
  • repl
  • scale (should be 0)
  • aggregation_level (should be less than 2)

Direct Access architecture

Portworx runs on each node. When a user creates a PVC, Portworx provisions an NFS filesystem on FlashBlade and maps it directly to that PVC based on configuration information provided in the storageClass spec.

[Figure: Portworx on FlashBlade]

Install Portworx and configure FlashBlade

Before you install Portworx, ensure that your physical network is configured appropriately and that you meet the prerequisites. You must provide Portworx with your FlashBlade configuration details during installation.

Prerequisites

  • FlashBlade must be running Purity//FB version 2.2.0 or greater. Refer to the Supported models and versions topic for more information.
  • Your cluster must have local or cloud drives accessible on each node. Portworx needs local or cloud drives on the node (block devices) for the journal and for at least one storage pool.
  • The latest NFS software package (nfs-utils or nfs-common) must be installed on your operating system.
  • FlashBlade must be accessible as a shared resource from all cluster nodes. Specifically, both the NFSEndPoint and MgmtEndPoint IP addresses must be reachable from every node.
  • You've set up the secret, management endpoint, and API token on your FlashBlade.
  • If you want to use Stork as the main scheduler, you must use Stork version 2.12.0 or greater.
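As a quick preflight check, you can confirm the NFS client package and endpoint reachability from each node. This is a hedged sketch: the package query differs by distribution, and <fb-mgmt-endpoint> and <fb-nfs-endpoint> are placeholders for your FlashBlade endpoints.

    # On RHEL-family nodes; use `dpkg -s nfs-common` on Debian/Ubuntu
    rpm -q nfs-utils

    # Verify that both FlashBlade endpoints are reachable from this node
    ping -c 3 <fb-mgmt-endpoint>
    ping -c 3 <fb-nfs-endpoint>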

Create FlashBlade configuration file and Kubernetes secret

Create a JSON file named pure.json that contains details about your FlashBlade instances. This configuration includes management endpoints, API tokens, NFS endpoints, and zone and region labels that distinguish between FlashBlades in different geographical locations. With zone and region labels, the same FlashBlade array can be accessed across different zones using different NFS endpoints in the same cluster.

  1. Create a JSON file named pure.json that contains your FlashBlade information:

     {
       "FlashBlades": [
         {
           "MgmtEndPoint": "<fb-management-endpoint1>",
           "APIToken": "<api-token-for-fb-management-endpoint1>",
           "NFSEndPoint": "<fb-nfs-endpoint1>",
           "Labels": {
             "topology.portworx.io/zone": "<zone-1>",
             "topology.portworx.io/region": "<region-1>"
           }
         },
         {
           "MgmtEndPoint": "<fb-management-endpoint2>",
           "APIToken": "<api-token-for-fb-management-endpoint2>",
           "NFSEndPoint": "<fb-nfs-endpoint2>",
           "Labels": {
             "topology.portworx.io/zone": "<zone-1>",
             "topology.portworx.io/region": "<region-2>"
           }
         }
       ]
     }
     Note:

    You can add FlashArray configuration information to this file if you're configuring both FlashBlade and FlashArray together. Refer to the JSON file reference for more information.

  2. Enter the following oc create command to create a Kubernetes secret called px-pure-secret in the namespace where you will install Portworx. This secret stores the FlashBlade configuration file so that Kubernetes can provide this sensitive information to Portworx securely.

    oc create secret generic px-pure-secret --namespace <px-namespace> --from-file=pure.json=<file path>
    Note:

    The secret must be named exactly px-pure-secret, because Portworx expects this name when integrating with FlashBlade.
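    Before installing Portworx, you can confirm the secret exists in the target namespace:

    oc get secret px-pure-secret --namespace <px-namespace>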

Deploy Portworx

Deploy Portworx on your on-premises Kubernetes cluster. Ensure CSI is enabled.

Once deployed, Portworx detects that the FlashBlade secret is present when it starts up, and can use the specified FlashBlade as a Direct Access filesystem.
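To verify the deployment, you can check that the Portworx pods are running. This is a minimal check that assumes the default name=portworx pod label; adjust it for your installation:

    oc get pods --namespace <px-namespace> -l name=portworx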

Use FlashBlade as a Direct Access filesystem

Once you've configured Portworx to work with your FlashBlade, you can create a StorageClass and reference it in any PVCs you create.

Create a StorageClass

The StorageClass describes how volumes should be created and managed in Kubernetes. It specifies the NFS endpoint, the backend type (pure_file for FlashBlade), and the NFS export rules, which control the access permissions for the mounted filesystem.

  1. Create a StorageClass spec, specifying your own values for the following:

    • parameters.pure_nfs_endpoint with the NFS endpoint of the FlashBlade you want to use
    • parameters.backend with pure_file
    • parameters.pure_export_rules with any NFS export rules you desire
    • allowedTopologies with the zone and region where you want volumes from this FlashBlade to be provisioned
    • mountOptions with any CSI mount options you desire
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: portworx-multiple-nfs
    provisioner: pxd.portworx.com
    parameters:
      pure_nfs_endpoint: "<nfs-endpoint-1>"
      backend: "pure_file"
      pure_export_rules: "*(rw)"
    allowedTopologies:
      - matchLabelExpressions:
          - key: topology.portworx.io/zone
            values:
              - <zone-1>
          - key: topology.portworx.io/region
            values:
              - <region-1>
    mountOptions:
      - nfsvers=3
      - tcp
    allowVolumeExpansion: true
  2. Apply this spec to create a StorageClass:
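    oc apply -f <fb-storageclass.yml>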

    Note:

    To ensure successful PVC creation, verify that the labels in the allowedTopologies section uniquely identify a single FlashBlade endpoint from the pure.json file. For example, if you specify topology.portworx.io/zone: <zone-1> in the StorageClass, and both FlashBlades listed in the pure.json file qualify, Portworx will fail the creation of PVCs for FlashBlade Direct Access volumes and display the following error message:


    Events:
      Type     Reason                Age               From                                         Message
      ----     ------                ----              ----                                         -------
      Normal   Provisioning          1s (x4 over 10s)  pxd.portworx.com_px-csi-ext-6f77f7c664-xxxx  External provisioner is provisioning volume for claim "default/pure-multiple-nfs"
      Warning  ProvisioningFailed    1s (x4 over 9s)   pxd.portworx.com_px-csi-ext-6f77f7c664-xxx   failed to provision volume with StorageClass "portworx-multiple-nfs": rpc error: code = Internal desc = Failed to create volume: multiple storage backends match volume provisioner, unable to determine which backend the provided NFSEndpoint matches to
      Normal   ExternalProvisioning  0s (x3 over 10s)  persistentvolume-controller                  Waiting for a volume to be created either by the external provisioner 'pxd.portworx.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

Create a PVC

  1. Reference the StorageClass you created by entering the StorageClass name in the spec.storageClassName field:
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pure-multiple-nfs
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
      storageClassName: portworx-multiple-nfs
  2. Apply the spec to create a PVC:

    oc apply -f <fb-pvc.yml>
    persistentvolumeclaim/pure-multiple-nfs created
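You can confirm that the claim bound successfully; the PVC should report a Bound status once Portworx provisions the filesystem on FlashBlade:

    oc get pvc pure-multiple-nfs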

Add or modify NFS endpoints for existing volumes

If you need to assign or change the NFS endpoint for an existing FlashBlade Direct Access volume, either because its StorageClass did not specify pure_nfs_endpoint or because you want to modify pure_nfs_endpoint after the volume was created, run the following command. Replace <fb-nfs-endpoint> with the endpoint you want to assign to the volume:

pxctl volume update --pure_nfs_endpoint "<fb-nfs-endpoint>" <existing-fb-pvc>
Update Volume: Volume update successful for volume pvc-80406c8d-xxx-xxxx
Note:

The mount over the newly assigned NFS endpoint will occur during the next mount cycle.
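To confirm the change, you can inspect the volume with pxctl; the assigned NFS endpoint appears in the volume details (output format varies by Portworx version):

    pxctl volume inspect <existing-fb-pvc>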
