Dynamic provisioning of PVCs in GCP Anthos
This document describes how to dynamically provision a volume using Kubernetes and Portworx.
Using Dynamic Provisioning
With Dynamic Provisioning and Storage Classes, you don't need to create Portworx volumes out of band; they are created automatically. Using StorageClass objects, an admin can define the different classes of Portworx volumes offered in a cluster. The following parameters can be used to define a Portworx StorageClass:
Name | Description | Example |
---|---|---|
fs | Specifies the filesystem to be laid out: xfs or ext4. Note: ext4 is the preferred and default filesystem across all backends. The default backend also supports xfs; however, the PX-StoreV2 backend supports only ext4. | fs: "ext4" |
repl | Specifies the replication factor for the volume: 1, 2, or 3 | repl: "3" |
sharedv4 | Creates a globally shared namespace volume which can be used by multiple pods over NFS with POSIX compliant semantics | sharedv4: "true" |
sharedv4_svc_type | Indicates the mechanism Kubernetes will use for locating your sharedv4 volume. If you use this flag and there's a failover of the nodes running your sharedv4 volume, you no longer need to restart your pods. Possible values are: ClusterIP or LoadBalancer. | sharedv4_svc_type: "ClusterIP" |
sharedv4_failover_strategy | Specifies how aggressively to fail over to a new server for a Sharedv4 or Sharedv4 Service volume (Valid Values: aggressive, normal). The default failover strategy for sharedv4 service volumes is aggressive, because these volumes are able to fail over without restarting all the application pods. For more information, see Sharedv4 failover and failover strategy. | sharedv4_failover_strategy: "normal" |
priority_io | Specifies IO Priority: low, medium, or high. The default is low. | priority_io: "high" |
io_profile | Overrides I/O algorithm that Portworx uses for a volume. For more information about IO profiles, see the IO profiles section of the documentation. | io_profile: "db" |
group | Specifies the group a volume should belong to. Portworx restricts replication sets of volumes of the same group on different nodes. If the force group option 'fg' is set to true, the volume group rule is strictly enforced. By default, it's not strictly enforced. | group: "volgroup1" |
fg | Enforces volume group policy. If a volume belonging to a group cannot find nodes for its replication sets which don't have other volumes of the same group, the volume creation will fail. | fg: "true" |
label | Arbitrary key: value labels that can be applied on a volume | label: "name: mypxvol" |
nodes | Specifies comma-separated Portworx Node IDs to use for replication sets of the volume | nodes: "minion1,minion2" |
ephemeral | Creates an ephemeral volume | ephemeral: false |
size | Specifies a volume size in GB (default 1) | size: "1073741824" |
block_size | Specifies a block size in Bytes (default 4096) | block_size: "4096" |
queue_depth | Specifies a block device queue depth. (Valid Range: [1 256]) (default 128) | queue_depth: 128 |
snap_interval | Specifies an interval in minutes at which periodic snapshots will be triggered. Set to 0 to disable snapshots | |
snap_schedule | Specifies the name of the snapshot schedule policy created using the pxctl sched-policy command | Refer to this page for examples |
zones | Specify comma-separated zone names in which the volume replicas should be distributed | |
racks | Specify comma-separated rack names in which the volume replicas should be distributed | |
async_io | Enables asynchronous IO processing on the backend drives; this can be useful when the workload profile is bursty. Contact Portworx Support before enabling. | async_io: false |
csi_mount_options | Specifies the mounting options for a volume through CSI | |
sharedv4_mount_options | Specifies a comma-separated list of Sharedv4 NFS client mount options provided as key=value pairs | |
proxy_endpoint | Specifies the endpoint address of the external NFS share that Portworx is proxying | proxy_endpoint: "nfs://<nfs-share-endpoint>" |
proxy_nfs_subpath | Specifies the sub-path of the NFS share to which this proxy volume has access | |
proxy_nfs_exportpath | Specifies the export path for the NFS proxy volume | proxy_nfs_exportpath: "/<mount-path>" |
export_options | Defines the export options. Currently, only NFS export options are supported for Sharedv4 volumes | |
mount_options | Specifies the mounting options for a volume when it is attached and mounted | |
best_effort_location_provisioning | Requested nodes, zones, racks are optional | |
direct_io | Enables Direct IO on a volume | direct_io: "true" |
scan_policy_trigger | Specifies the trigger point on which filesystem check is triggered. Valid Values: none, on_mount, on_next_mount | |
scan_policy_action | Specifies a filesystem scan action to be taken when triggered. Valid Values: none, scan_only, scan_repair | |
force_unsupported_fs_type | Forces a filesystem type that is not supported. The driver may still refuse to use the type | force_unsupported_fs_type: false |
match_src_vol_provision | Provisions the restore volume on the same pools as the source volume (src volume must exist) | |
nodiscard | Mounts the volume with the nodiscard option. This is useful when the volume undergoes a large number of block discards and the application later rewrites to these discarded blocks, making the discard work done by Portworx redundant. This option must be used along with auto_fstrim. Refer to this page for limitations on xfs-formatted volumes. | nodiscard: false |
auto_fstrim | Enables auto_fstrim on a volume and requires the nodiscard option to be set. Refer to this page for more details. | auto_fstrim: true |
storagepolicy | Creates a volume on the Portworx cluster that follows the specified set of specs/rules. Refer to this page for more details. | |
backend | Specifies which storage backend Portworx is going to provide direct access to. (Valid Values: pure_block, pure_file) | backend: "pure_block" |
pure_export_rules | Specifies the export rules for exporting a Pure Flashblade volume | pure_export_rules: "*(rw)" |
io_throttle_rd_iops | Specifies maximum Read IOPs a volume will be throttled to. Refer to this page for more details. | io_throttle_rd_iops: "1024" |
io_throttle_wr_iops | Specifies maximum Write IOPs a volume will be throttled to. Refer to this page for more details. | io_throttle_wr_iops: "1024" |
io_throttle_rd_bw | Specifies maximum Read bandwidth a volume will be throttled to. Refer to this page for more details. | io_throttle_rd_bw: "10" |
io_throttle_wr_bw | Specifies maximum Write bandwidth a volume will be throttled to. Refer to this page for more details. | io_throttle_wr_bw: "10" |
aggregation_level | Specifies the number of replication sets the volume can be aggregated from | aggregation_level: "2" |
sticky | Creates sticky volumes that cannot be deleted until the flag is disabled | sticky: "true" |
journal | Indicates if you want to use journal device for the volume's data. This will use the journal device that you used when installing Portworx. This is useful to absorb frequent syncs from short bursty workloads. Default: false | journal: "true" |
secure | Creates an encrypted volume. For details about how you can create encrypted volumes, see the Create encrypted PVCs page. | secure: "true" |
placement_strategy | Specifies the name of the VolumePlacementStrategy to apply. For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: postgres-storage-class provisioner: pxd.portworx.com parameters: placement_strategy: "postgres-volume-affinity" For details about how to create and use VolumePlacementStrategy, see [this page](/docs/portworx-enterprise/operations/operate-kubernetes/storage-operations/create-pvcs/volume-placement-strategies/create-use-volplacestrat.md). | placement_strategy: "postgres-volume-affinity" |
snapshotschedule.stork.libopenstorage.org | Creates scheduled snapshots with Stork. For example: snapshotschedule.stork.libopenstorage.org/default-schedule: schedulePolicyName: daily annotations: portworx/snapshot-type: local snapshotschedule.stork.libopenstorage.org/weekly-schedule: schedulePolicyName: weekly annotations: portworx/snapshot-type: cloud portworx/cloud-cred-id: <credential-uuid> NOTE: This example references two schedules. For details about how you can create scheduled snapshots with Stork, see the Scheduled snapshots page. | |
For the list of Kubernetes-specific parameters that you can use with a Portworx Storage class, see the Storage Classes page of the Kubernetes documentation.
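To see how several of these parameters combine in practice, the following is a minimal sketch of a StorageClass; the class name and parameter values here are illustrative placeholders, not taken from this document.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: portworx-db-sc          # hypothetical class name
provisioner: pxd.portworx.com
parameters:
  fs: "ext4"                    # default filesystem
  repl: "3"                     # three replicas
  priority_io: "high"
  io_profile: "db"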
Provision volumes
Step 1: Create Storage Class.
Create the storageclass:
kubectl create -f examples/volumes/portworx/portworx-sc.yaml
Example:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: portworx-sc
provisioner: pxd.portworx.com
parameters:
  repl: "1"
Verifying storage class is created:
kubectl describe storageclass portworx-sc
Name: portworx-sc
IsDefaultClass: No
Annotations: <none>
Provisioner: pxd.portworx.com
Parameters: repl=1
No events.
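The IsDefaultClass field above shows No. If you want PVCs that do not name a storage class to fall back to this class, you can mark it as the cluster default with the standard Kubernetes annotation; a minimal sketch of the metadata section is shown below (only do this if portworx-sc should really be the cluster-wide default).
metadata:
  name: portworx-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # marks this class as the default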
Step 2: Create Persistent Volume Claim.
Creating the persistent volume claim:
kubectl create -f examples/volumes/portworx/portworx-volume-pvcsc.yaml
Example:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvcsc001
  annotations:
    volume.beta.kubernetes.io/storage-class: portworx-sc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
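The volume.beta.kubernetes.io/storage-class annotation used above is the legacy way of selecting a class; on current Kubernetes versions the spec.storageClassName field is the standard mechanism. A minimal equivalent claim would look like the following sketch.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvcsc001
spec:
  storageClassName: portworx-sc   # replaces the beta annotation
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi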
Verifying persistent volume claim is created:
kubectl describe pvc pvcsc001
Name: pvcsc001
Namespace: default
StorageClass: portworx-sc
Status: Bound
Volume: pvc-xxxxxxxx-xxxx-xxxx-xxxx-08002729a32b
Labels: <none>
Capacity: 2Gi
Access Modes: RWO
No Events.
A Persistent Volume is automatically created and bound to this PVC.
Verifying persistent volume is created:
kubectl describe pv pvc-xxxxxxxx-xxxx-xxxx-xxxx-08002729a32b
Name: pvc-xxxxxxxx-xxxx-xxxx-xxxx-08002729a32b
Labels: <none>
StorageClass: portworx-sc
Status: Bound
Claim: default/pvcsc001
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 2Gi
Message:
Source:
Type: PortworxVolume (a Portworx Persistent Volume resource)
VolumeID: 374093969022973811
No events.
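If you have shell access to a node running Portworx, you can also inspect the volume from the Portworx side with pxctl; the path below assumes the default install location, and the volume ID is the one reported in the output above.
/opt/pwx/bin/pxctl volume inspect 374093969022973811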
Step 3: Create a Pod which uses the Persistent Volume Claim with the storage class.
Create the pod:
kubectl create -f examples/volumes/portworx/portworx-volume-pvcscpod.yaml
Example:
apiVersion: v1
kind: Pod
metadata:
  name: pvpod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/test-webserver
    volumeMounts:
    - name: test-volume
      mountPath: /test-portworx-volume
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: pvcsc001
Verifying pod is created:
kubectl get pod pvpod
NAME READY STATUS RESTARTS AGE
pvpod 1/1 Running 0 48m
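To confirm that the Portworx volume is mounted inside the pod, you can check the mount point from within the container (assuming the container image provides df):
kubectl exec pvpod -- df -h /test-portworx-volume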
To access PVs/PVCs with a non-root user, refer here.
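One common approach, sketched below, is to set a pod-level securityContext so the volume is writable by a non-root user; the UID and GID values are placeholders chosen for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: pvpod-nonroot            # hypothetical pod name
spec:
  securityContext:
    runAsUser: 1000              # placeholder non-root UID
    fsGroup: 2000                # files on the volume become group-owned by this GID
  containers:
  - name: test-container
    image: gcr.io/google_containers/test-webserver
    volumeMounts:
    - name: test-volume
      mountPath: /test-portworx-volume
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: pvcsc001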
Delete volumes
For volumes dynamically provisioned through a StorageClass and a PVC (PersistentVolumeClaim), deleting the PVC also deletes the corresponding Portworx volume. This is because Kubernetes creates dynamically provisioned volumes with a reclaim policy of Delete, so the volume is removed when its PVC is deleted.
To delete the PVC and the volume, you can run kubectl delete -f <pvc_spec_file.yaml>
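If you instead want the underlying Portworx volume to survive PVC deletion, you can create the StorageClass with a Retain reclaim policy, as in the sketch below; retained volumes must then be cleaned up manually. The class name here is a placeholder.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: portworx-sc-retain       # hypothetical class name
provisioner: pxd.portworx.com
reclaimPolicy: Retain            # keep the Portworx volume when the PVC is deleted
parameters:
  repl: "1"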