Version: 25.8

Dynamic Provisioning of FlashArray File Services

Use PX-CSI to dynamically provision file-based volumes using FlashArray file services. This page walks you through creating a StorageClass, provisioning a PersistentVolumeClaim (PVC), and mounting it to a pod.

Create a StorageClass

To enable dynamic provisioning on FlashArray file services, define a StorageClass with the appropriate backend and NFS configuration.

For FlashArray file services, set the backend type to "pure_fa_file". You can also configure parameters like quota policy, mount options, and topology settings.

note
  • Ensure that you have configured FlashArray to use file services. For more information, see Configure FlashArray file services.

  • If you configure an NFS policy with root_squash and your pod specifies an fsGroup, you might see permission errors (for example, permission denied or lchown failed) during volume mount because the root user is mapped to nfsnobody. To avoid this:

    • Ensure that the NFS policy uses no_root_squash access.
    • Ensure that the User Mapping Enabled field is set to Disabled when creating the NFS policy.

    For more information, see Configure FlashArray File Services.
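    For reference, the permission errors described in this note are triggered when a pod spec sets an fsGroup while the NFS policy squashes root. A minimal fragment showing the relevant field (the fsGroup value is a hypothetical example):

    ```yaml
    # Pod-level securityContext fragment; with root_squash in effect, the
    # chown/lchown that Kubernetes performs for fsGroup fails because the
    # root user is mapped to nfsnobody on the NFS server.
    spec:
      securityContext:
        fsGroup: 2000
    ```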

  1. Define a StorageClass with the appropriate storage type and performance settings. For FlashArray file services, the backend type is pure_fa_file.

    Required parameters:

    • backend: "pure_fa_file" - Specifies that the volume is an FA file volume.
    • pure_nfs_policy - PX-CSI expects the NFS policy to be pre-created on the FlashArray. If the policy does not exist, the request fails.
    • pure_fa_file_system - Specifies the file system where the volume is placed. If the file system does not exist on the FlashArray, the volume creation request fails.

    Optional parameters:

    • pure_quota_policy - If provided, associates the volume with a quota policy to enforce a size limit.
    • pure_nfs_endpoint - Used when there are multiple endpoints per array. Overrides the default NFSEndPoint specified in pure.json.
    • allowedTopologies - Uses topology labels to select arrays with matching labels for volume placement.
    • volumeBindingMode - If you have enabled CSI topology, specify volumeBindingMode: WaitForFirstConsumer along with allowedTopologies. This setting delays volume binding until the Kubernetes scheduler selects a node that matches the allowedTopologies labels.
    • mountOptions - Overrides the default mount options. Only TCP is supported; UDP is not. You can also specify security options using the sec mount option. By default, NFS uses sec=auth_sys, but Kerberos-based authentication options are also supported: sec=krb5 (authentication only), sec=krb5i (authentication and integrity), and sec=krb5p (authentication, integrity, and encryption).
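    As a sketch, a StorageClass that opts into Kerberos-secured mounts would set the sec option in mountOptions (assuming Kerberos is already configured on the array and clients):

    ```yaml
    # Hedged sketch: Kerberos-secured mount options in a StorageClass
    mountOptions:
      - proto=tcp
      - sec=krb5i   # authentication and integrity; sec=krb5p adds encryption
    ```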

    Example StorageClass YAML:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: fa-file-sc
    provisioner: pxd.portworx.com
    parameters:
      backend: "pure_fa_file"
      pure_nfs_policy: "test-policy"
      pure_fa_file_system: "name01"
      pure_quota_policy: "100g_policy"
      pure_nfs_endpoint: <nfs-endpoints-of-fa>
    mountOptions:
      - nfsvers=3
      - proto=tcp
    # (Optional) The lines below are required only if you are using CSI topology
    volumeBindingMode: WaitForFirstConsumer
    allowedTopologies:
      - matchLabelExpressions:
          - key: topology.portworx.io/zone
            values:
              - <zone-1>
          - key: topology.portworx.io/region
            values:
              - <region-1>
  2. Save this YAML in a file sc.yaml and apply it to your cluster:

    kubectl apply -f sc.yaml
    storageclass.storage.k8s.io/fa-file-sc created

Create a PVC

Define a PersistentVolumeClaim (PVC) that references the fa-file-sc StorageClass.

  1. To create a PVC, define the specifications and reference the StorageClass you previously created by specifying its name in the spec.storageClassName field.

    Example PVC specification:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pure-claim-fa
      labels:
        app: nginx
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 20Gi
      storageClassName: fa-file-sc

    Save this YAML in a file pvc.yaml.

  2. Apply this YAML to your cluster:

    kubectl apply -f pvc.yaml 
    persistentvolumeclaim/pure-claim-fa created
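    Before mounting the PVC, you can confirm that it reached the Bound state. A quick check against a live cluster (assumes the PVC above was applied):

    ```shell
    # Confirm the PVC is bound to a dynamically provisioned volume
    kubectl get pvc pure-claim-fa
    # Inspect provisioning events if the claim stays Pending
    kubectl describe pvc pure-claim-fa
    ```
    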

Mount a PVC to a pod

Attach the PVC to a pod by referencing it in the volumes section and mounting it inside the container:

  1. Create a Pod and specify the PVC name in the persistentVolumeClaim.claimName field. Here is an example pod specification:

    kind: Pod
    apiVersion: v1
    metadata:
      name: nginx-pod
      labels:
        app: nginx
    spec:
      volumes:
        - name: pure-vol
          persistentVolumeClaim:
            claimName: pure-claim-fa
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: pure-vol
              mountPath: /data
          ports:
            - containerPort: 80
  2. (Optional) To control pod scheduling based on node labels, add the nodeAffinity field to the Pod specification. For example:

    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: topology.portworx.io/zone
                    operator: In
                    values:
                      - zone-0
                  - key: topology.portworx.io/region
                    operator: In
                    values:
                      - region-0
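    The zone and region values in the affinity rules must match labels actually present on your nodes. One way to apply and verify such labels, assuming a hypothetical node name px-node-1:

    ```shell
    # Label a node with topology information (replace px-node-1 with a real node name)
    kubectl label node px-node-1 topology.portworx.io/zone=zone-0
    kubectl label node px-node-1 topology.portworx.io/region=region-0
    # Verify the labels
    kubectl get nodes --show-labels | grep topology.portworx.io
    ```
    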

Verify pod status

Check pod readiness and confirm the volume is mounted:

watch kubectl get pods

When the pod status shows Running, it is actively using the provisioned FlashArray file volume.
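To go one step further, you can confirm the NFS mount from inside the container. A sketch against a live cluster, assuming the nginx-pod example above:

```shell
# Show the filesystem backing /data; an FA file volume appears as an NFS mount
kubectl exec nginx-pod -- df -h /data
# Write and read back a test file to confirm the mount is usable
kubectl exec nginx-pod -- sh -c 'echo ok > /data/hello && cat /data/hello'
```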