Dynamic Provisioning of PVCs

This document describes how to dynamically provision a volume using Kubernetes and Portworx.

Using Dynamic Provisioning

With Dynamic Provisioning and StorageClasses, you don't need to create Portworx volumes out of band: they are created automatically on demand. Using StorageClass objects, an admin can define the different classes of Portworx volumes offered in a cluster. The following parameters can be used to define a Portworx StorageClass:

• fs: Filesystem to be laid out (xfs|ext4). Example: fs: "ext4"
• repl: Replication factor for the volume (1|2|3). Example: repl: "3"
• sharedv4: Flag to create a globally shared namespace volume that can be used by multiple pods over NFS with POSIX-compliant semantics. Example: sharedv4: "true"
• priority_io: IO priority (low|medium|high). Example: priority_io: "high"
• io_profile: Overrides the I/O algorithm Portworx uses for a volume. For more information about IO profiles, see the IO profiles section of the documentation. Example: io_profile: "db"
• group: The group a volume should belong to. Portworx restricts replication sets of volumes of the same group to different nodes. If the force group option fg is set to true, the volume group rule is strictly enforced; by default, it is not. Example: group: "volgroup1"
• fg: Enforces the volume group policy. If a volume belonging to a group cannot find nodes for its replication sets that don't hold other volumes of the same group, volume creation fails. Example: fg: "true"
• label: Comma-separated list of name=value pairs to apply to the Portworx volume. Example: label: "name=mypxvol"
• nodes: Comma-separated Portworx node IDs to use for the volume's replication sets. Example: nodes: "minion1,minion2"
• aggregation_level: Number of replication sets the volume can be aggregated from. Example: aggregation_level: "2"
• sticky: Flag to create sticky volumes that cannot be deleted until the flag is disabled. Example: sticky: "true"
• journal: Flag to indicate whether to use a journal device for the volume's data. This uses the journal device configured when installing Portworx and is useful for absorbing frequent syncs from short, bursty workloads. Default: false. Example: journal: "true"
• placement_strategy: Name of the VolumePlacementStrategy to apply to the volume. For example:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: postgres-storage-class
    provisioner: kubernetes.io/portworx-volume
    parameters:
      placement_strategy: "postgres-volume-affinity"

• secure: Flag to create an encrypted volume. For details about how you can create encrypted volumes, see the Create encrypted PVCs page. Example: secure: "true"
• snapshotschedule.stork.libopenstorage.org/<schedule-name>: Flag to create scheduled snapshots with Stork. For example:

    snapshotschedule.stork.libopenstorage.org/default-schedule: |
      schedulePolicyName: daily
      annotations:
        portworx/snapshot-type: local
    snapshotschedule.stork.libopenstorage.org/weekly-schedule: |
      schedulePolicyName: weekly
      annotations:
        portworx/snapshot-type: cloud

  Note: This example references two schedules:
    • The default-schedule backs up volumes to the local Portworx cluster daily.
    • The weekly-schedule backs up volumes to cloud storage every week.
  For details about how you can create scheduled snapshots with Stork, see the Scheduled snapshots page.
• sharedv4_svc_type: The mechanism Kubernetes uses for locating your sharedv4 volume. If you use this flag and a failover of the nodes running your sharedv4 volume occurs, you no longer need to restart your pods. Possible values: ClusterIP or LoadBalancer. Example: sharedv4_svc_type: "ClusterIP"
NOTE: For the list of Kubernetes-specific parameters that you can use with a Portworx Storage class, see the Storage Classes page of the Kubernetes documentation.
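Several of these parameters can be combined in a single StorageClass. The following sketch defines a replicated, high-priority ext4 volume class; the class name px-db-sc and the specific parameter values are illustrative choices, not prescribed defaults:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: px-db-sc            # illustrative name
    provisioner: kubernetes.io/portworx-volume
    parameters:
      fs: "ext4"                # filesystem laid out on the volume
      repl: "3"                 # three replicas for high availability
      priority_io: "high"       # serve IO from high-priority storage
      io_profile: "db"          # database-optimized IO algorithm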

Provision volumes

Step 1: Create a StorageClass.

Create the StorageClass:

kubectl create -f examples/volumes/portworx/portworx-sc.yaml


kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: portworx-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "1"


Verify that the StorageClass is created:

kubectl describe storageclass portworx-sc
     Name: 	        	portworx-sc
     IsDefaultClass:	        No
     Annotations:		<none>
     Provisioner:		kubernetes.io/portworx-volume
     Parameters:		repl=1
     No events.

Step 2: Create Persistent Volume Claim.

Create the persistent volume claim:

kubectl create -f examples/volumes/portworx/portworx-volume-pvcsc.yaml


kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvcsc001
  annotations:
    volume.beta.kubernetes.io/storage-class: portworx-sc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi


Verify that the persistent volume claim is created:

kubectl describe pvc pvcsc001
Name:	      	pvcsc001
Namespace:      default
StorageClass:   portworx-sc
Status:	      	Bound
Volume:         pvc-e5578707-c626-11e6-baf6-08002729a32b
Labels:	      	<none>
Capacity:	    2Gi
Access Modes:   RWO
No Events.

A persistent volume is automatically created and bound to this PVC.

Verify that the persistent volume is created:

kubectl describe pv pvc-e5578707-c626-11e6-baf6-08002729a32b
Name: 	      	pvc-e5578707-c626-11e6-baf6-08002729a32b
Labels:        	<none>
StorageClass:  	portworx-sc
Status:	      	Bound
Claim:	      	default/pvcsc001
Reclaim Policy: 	Delete
Access Modes:   	RWO
Capacity:	        2Gi
Type:	      	PortworxVolume (a Portworx Persistent Volume resource)
VolumeID:   	374093969022973811
No events.

Step 3: Create a pod that uses the persistent volume claim with the StorageClass.

Create the pod:

kubectl create -f examples/volumes/portworx/portworx-volume-pvcscpod.yaml


apiVersion: v1
kind: Pod
metadata:
  name: pvpod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/test-webserver
    volumeMounts:
    - name: test-volume
      mountPath: /test-portworx-volume
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: pvcsc001


Verify that the pod is running:

kubectl get pod pvpod
NAME        READY   STATUS    RESTARTS   AGE
pvpod       1/1     Running   0          48m
To access PVs/PVCs with a non-root user, refer to the relevant section of the documentation.

Delete volumes

For volumes dynamically provisioned through a StorageClass and a PVC (PersistentVolumeClaim), deleting the PVC also deletes the corresponding Portworx volume. This is because Kubernetes creates such volumes with a reclaim policy of Delete, so the volume is removed when its PVC is deleted.

To delete the PVC and the volume, run kubectl delete -f <pvc_spec_file.yaml>.
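If you want the backing Portworx volume to survive PVC deletion, you can set the StorageClass reclaimPolicy field to Retain. The sketch below assumes this setup; the class name portworx-sc-retain is illustrative. With Retain, the PV and its Portworx volume remain after the PVC is removed and must be cleaned up manually:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: portworx-sc-retain   # illustrative name
    provisioner: kubernetes.io/portworx-volume
    reclaimPolicy: Retain        # keep the PV and Portworx volume when the PVC is deleted
    parameters:
      repl: "1"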

Last edited: Monday, May 16, 2022