Create and use FlashBlade PVCs
Follow these steps to configure Portworx CSI to work with your FlashBlade and make storage available for your applications.
Create a StorageClass
The StorageClass defines how volumes are created and managed in Kubernetes. For FlashBlade, you need to specify the backend type (`pure_file`), the NFS endpoint, and other configuration options such as mount options and topology rules.
Example StorageClass specifications:
- Multiple NFS Endpoints
- Basic NFS Endpoint
To provision volumes with multiple NFS endpoints, create the StorageClass specification as follows:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: portworx-csi-fb
provisioner: pxd.portworx.com
parameters:
  pure_nfs_endpoint: "<nfs-endpoint-1>"
  backend: "pure_file"
mountOptions:
  - nfsvers=3
  - tcp
allowVolumeExpansion: true
allowedTopologies:
- matchLabelExpressions:
  - key: topology.portworx.io/zone
    values:
    - <zone-1>
  - key: topology.portworx.io/region
    values:
    - <region-1>
```

If CSI topology is not enabled, you can omit the `allowedTopologies` section.
To configure a single NFS endpoint, use the following StorageClass specification:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: portworx-csi-fb
provisioner: pxd.portworx.com
parameters:
  backend: "pure_file"
  pure_export_rules: "*(rw)"
mountOptions:
  - nfsvers=3
  - tcp
allowVolumeExpansion: true
allowedTopologies:
- matchLabelExpressions:
  - key: topology.portworx.io/zone
    values:
    - <zone-1>
  - key: topology.portworx.io/region
    values:
    - <region-1>
```

If CSI topology is not enabled, you can omit the `allowedTopologies` section.
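To decide whether you need the `allowedTopologies` section, you can check which Portworx topology labels are present on your nodes. This is a generic `kubectl` query, not a Portworx-specific command; the label keys match the ones used in the StorageClass examples above:

```shell
# List nodes with their Portworx topology labels.
# Empty columns mean the label is not set on that node, which suggests
# CSI topology is not enabled and allowedTopologies can be omitted.
kubectl get nodes -L topology.portworx.io/zone -L topology.portworx.io/region
```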
Ensure unique topology labels for FlashBlade
To ensure successful PVC creation, verify that the labels in the `allowedTopologies` section uniquely identify a single FlashBlade endpoint from the `pure.json` file.

For example, if you specify `topology.portworx.io/zone: <zone-1>` in the StorageClass and multiple FlashBlades listed in the `pure.json` file carry that label, Portworx CSI fails to create PVCs for FlashBlade Direct Access volumes and displays the following error message:
```
Events:
  Type     Reason                Age               From                                         Message
  ----     ------                ----              ----                                         -------
  Normal   Provisioning          1s (x4 over 10s)  pxd.portworx.com_px-csi-ext-6f77f7c664-xxxx  External provisioner is provisioning volume for claim "default/pure-multiple-nfs"
  Warning  ProvisioningFailed    1s (x4 over 9s)   pxd.portworx.com_px-csi-ext-6f77f7c664-xxx   failed to provision volume with StorageClass "portworx-multiple-nfs": rpc error: code = Internal desc = Failed to create volume: multiple storage backends match volume provisioner, unable to determine which backend the provided NFSEndpoint matches to
  Normal   ExternalProvisioning  0s (x3 over 10s)  persistentvolume-controller                  Waiting for a volume to be created either by the external provisioner 'pxd.portworx.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
```
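The uniqueness requirement above can be sketched in a few lines of Python. This is an illustration of the matching rule, not Portworx's actual implementation: given FlashBlade entries from a `pure.json`-style list (hypothetical data below), a set of topology labels must select exactly one backend for provisioning to succeed.

```python
def match_backends(flashblades, selector):
    """Return the FlashBlade entries whose labels satisfy every
    key/value pair in the StorageClass topology selector."""
    return [
        fb for fb in flashblades
        if all(fb.get("Labels", {}).get(k) == v for k, v in selector.items())
    ]

# Two FlashBlades sharing the same zone label (hypothetical data).
flashblades = [
    {"NFSEndPoint": "10.0.1.1",
     "Labels": {"topology.portworx.io/zone": "zone-0"}},
    {"NFSEndPoint": "10.0.1.2",
     "Labels": {"topology.portworx.io/zone": "zone-0"}},
]

# A zone-only selector matches both backends, which mirrors the
# "multiple storage backends match" provisioning error above.
ambiguous = match_backends(flashblades, {"topology.portworx.io/zone": "zone-0"})
print(len(ambiguous))  # 2 -> provisioning would fail

# Adding a label that only one backend carries resolves the ambiguity.
flashblades[0]["Labels"]["topology.portworx.io/region"] = "region-0"
unique = match_backends(flashblades, {
    "topology.portworx.io/zone": "zone-0",
    "topology.portworx.io/region": "region-0",
})
print(len(unique))  # 1 -> provisioning succeeds
```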
Apply the StorageClass
Apply the StorageClass specification using the command:
```shell
kubectl apply -f <storageclass.yml>
```

```
storageclass.storage.k8s.io/portworx-csi-fb created
```
Create a PVC
Once the StorageClass is created, you can create PVCs to request storage for your application.
Note: The Pure export rules for accessing FlashBlade are defined by the `accessModes` specified in the PVC specification:

- `*(rw)`: This rule is set for the `ReadWriteOnce`, `ReadWriteMany`, and `ReadWriteOncePod` PVC access modes. It allows clients to perform both read and write operations on the storage.
- `*(ro)`: This rule is applied for the `ReadOnlyMany` PVC access mode. It ensures that the storage can only be accessed in read-only mode, preventing modifications to the data.

Example PVC specification:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pure-multiple-nfs
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: portworx-csi-fb
```
Apply this YAML to your cluster:
```shell
kubectl apply -f <pvc.yml>
```
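The access-mode-to-export-rule mapping described in the note above can be sketched as follows. This is an illustrative helper, not a Portworx API; the mode names are the standard Kubernetes PVC access modes:

```python
def export_rule_for(access_mode):
    """Map a Kubernetes PVC access mode to the Pure export rule that
    is applied on the FlashBlade, per the note above."""
    read_write = {"ReadWriteOnce", "ReadWriteMany", "ReadWriteOncePod"}
    if access_mode in read_write:
        return "*(rw)"
    if access_mode == "ReadOnlyMany":
        return "*(ro)"
    raise ValueError(f"unknown access mode: {access_mode}")

print(export_rule_for("ReadWriteMany"))  # *(rw)
print(export_rule_for("ReadOnlyMany"))  # *(ro)
```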
Modify NFS endpoints for existing volumes
To assign or update an NFS endpoint for existing FlashBlade Direct Access volumes, use the following command:
```shell
pxctl volume update --pure_nfs_endpoint "<fb-nfs-endpoint>" <existing-fb-pvc>
```

```
Update Volume: Volume update successful for volume pvc-80406c8d-xxx-xxxx
```
The updated NFS endpoint will take effect during the next mount cycle.
Mount the PVC to a Pod
After creating PVCs, mount them to an application pod to make the storage available.
Example pod specification:
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  volumes:
  - name: pure-nfs
    persistentVolumeClaim:
      claimName: pure-multiple-nfs
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: pure-nfs
      mountPath: /data
    ports:
    - containerPort: 80
```
To control pod scheduling based on node labels, add the `nodeAffinity` field to the Pod specification. For example:

```yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.portworx.io/zone
            operator: In
            values:
            - zone-0
          - key: topology.portworx.io/region
            operator: In
            values:
            - region-0
```
Verify pod status
Monitor the pod’s status to ensure it is running and connected to the volume:
```shell
watch kubectl get pods
```

Wait for the pod's `STATUS` to show `Running`. Once the pod is running, you can verify that it is connected as a host for the volume.
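One way to check the host attachment is to look up the PV bound to the PVC and inspect it with `pxctl`. The PVC name below matches the earlier example; run this on a node where the Portworx CLI is available:

```shell
# Resolve the Portworx volume backing the example PVC, then inspect it.
# "pure-multiple-nfs" is the PVC name from the example above.
VOL=$(kubectl get pvc pure-multiple-nfs -o jsonpath='{.spec.volumeName}')
pxctl volume inspect "$VOL"
```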