Volume lifecycle basics in OKE
Summary and Key concepts
Summary:
The article provides instructions for creating and using persistent volumes with the Portworx CSI Driver. It includes examples for defining PersistentVolumeClaim (PVC) specs, such as one based on the px-csi-db storage class, and shows how to reference these PVCs in deployments. It also describes how to create sharedv4 CSI-enabled volumes for multi-pod access and how to clone volumes using CSI. Lastly, it clarifies that migrating PVCs from Kubernetes' native driver to CSI is not necessary, as both can co-exist in the same cluster.
Kubernetes Concepts:
- PersistentVolumeClaim (PVC): A request for storage resources by a user.
- StorageClass: Defines the class of storage available for dynamically provisioning volumes.
- Deployment: Manages replicated applications, including rolling updates.
- ReadWriteOnce: An access mode where only a single node can mount the volume in read/write mode.
- ReadWriteMany: An access mode where multiple nodes can mount the volume in read/write mode.
Portworx Concepts:
- Portworx CSI Driver: Implements the CSI standard to expose Portworx storage features in Kubernetes.
- Sharedv4 Volumes: Allows multiple pods to share a single volume.
- CSI Volume Cloning: Allows copying a volume, including its data, to create a new volume.
Create and use persistent volumes
Create and use volumes with CSI by configuring specs for your storage class, PVC, and volumes.
In this example, we are using the px-csi-db storage class out of the box. Refer to CSI Enabled Storage Classes for a list of available CSI-enabled storage classes offered by Portworx.
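For reference, a CSI-enabled storage class is an ordinary StorageClass object that names the Portworx CSI provisioner. The following is a minimal sketch of what such a class might look like; the pxd.portworx.com provisioner name and the repl parameter are assumptions based on common Portworx usage, not the exact definition of px-csi-db, so check the classes installed in your cluster with kubectl get storageclass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-px-csi-sc        # hypothetical name, for illustration only
provisioner: pxd.portworx.com    # assumed Portworx CSI provisioner name
parameters:
  repl: "2"                      # assumed Portworx replication factor parameter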
- Create a PersistentVolumeClaim based on the px-csi-db StorageClass:

  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: px-mysql-pvc
  spec:
    storageClassName: px-csi-db
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 2Gi
- Create a volume by referencing the PVC you created. This example creates a MySQL deployment referencing the px-mysql-pvc PVC you created in the previous step:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: mysql
  spec:
    selector:
      matchLabels:
        app: mysql
    strategy:
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 1
      type: RollingUpdate
    replicas: 1
    template:
      metadata:
        labels:
          app: mysql
          version: "1"
      spec:
        containers:
        - image: mysql:5.6
          name: mysql
          env:
          - name: MYSQL_ROOT_PASSWORD
            value: password
          ports:
          - containerPort: 3306
          volumeMounts:
          - name: mysql-persistent-storage
            mountPath: /var/lib/mysql
        volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: px-mysql-pvc
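After applying both specs, you can confirm that the claim is bound and that the MySQL pod is running with the volume mounted. The commands below are a generic verification sketch using standard kubectl; exact pod names and output will vary by cluster:

kubectl get pvc px-mysql-pvc      # STATUS should eventually show Bound
kubectl get pods -l app=mysql     # the MySQL pod should reach Running
kubectl describe pod -l app=mysql # Mounts should show /var/lib/mysql backed by mysql-persistent-storage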
Create sharedv4 CSI-enabled volumes
Create sharedv4 CSI-enabled volumes by performing the following steps.
- Create a sharedv4 PVC by creating the following shared-pvc.yaml file:

  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: px-mysql-pvc
  spec:
    storageClassName: px-csi-db
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 2Gi
- Apply the shared-pvc.yaml file:

  kubectl apply -f shared-pvc.yaml
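Because the claim requests ReadWriteMany, several pods can mount it at the same time. The deployment below is a minimal sketch of that multi-pod access, assuming the px-mysql-pvc claim defined above; the shared-reader name, image, and mount path are illustrative only:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: shared-reader            # hypothetical name, for illustration
spec:
  replicas: 3                    # multiple pods mount the same sharedv4 volume
  selector:
    matchLabels:
      app: shared-reader
  template:
    metadata:
      labels:
        app: shared-reader
    spec:
      containers:
      - name: reader
        image: busybox:1.36
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: shared-data
          mountPath: /data       # all replicas see the same files here
      volumes:
      - name: shared-data
        persistentVolumeClaim:
          claimName: px-mysql-pvc   # the ReadWriteMany PVC created above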
Clone volumes with CSI
You can clone CSI-enabled volumes, duplicating both the volume and content within it.
- Create a PVC that references the PVC you wish to clone, specifying the dataSource with the kind and name of the target PVC you wish to clone. The following spec creates a clone of the px-mysql-pvc PVC in a YAML file named clonePVC.yaml:

  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: clone-of-px-mysql-pvc
  spec:
    storageClassName: px-csi-db
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 2Gi
    dataSource:
      kind: PersistentVolumeClaim
      name: px-mysql-pvc
- Apply the clonePVC.yaml spec to create the clone:

  kubectl apply -f clonePVC.yaml
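Once the clone is requested, provisioning proceeds like any other PVC. A quick verification sketch, using standard kubectl (output formats will vary by cluster):

kubectl get pvc clone-of-px-mysql-pvc       # STATUS should show Bound once provisioning completes
kubectl describe pvc clone-of-px-mysql-pvc  # DataSource should reference PersistentVolumeClaim px-mysql-pvc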
Migrate to CSI PVCs
Currently, you cannot migrate or convert PVCs created using the native Kubernetes driver to the CSI driver. However, this is not required, and both types of PVCs can co-exist on the same cluster.
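In practice, co-existence simply means that claims bound to a native (in-tree) Portworx storage class and claims bound to a CSI storage class live side by side in the same cluster. The pair of PVCs below sketches that arrangement; the portworx-sc class name and the legacy/new PVC names are assumptions about an existing setup, not values taken from this article:

# Claim against an assumed in-tree (native) Portworx storage class
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: legacy-native-pvc        # hypothetical name
spec:
  storageClassName: portworx-sc  # assumed native (non-CSI) Portworx class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
# Claim against the CSI storage class used throughout this article
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: new-csi-pvc              # hypothetical name
spec:
  storageClassName: px-csi-db
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi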
Contribute
Portworx by Pure Storage welcomes contributions to its CSI implementation, which is open source with a repository located at OpenStorage. We also encourage contributions to the Kubernetes-CSI open-source implementation.