Local Snapshots
A local snapshot is a user-triggered, point-in-time copy of a volume or group of volumes, stored within the same Kubernetes cluster. Portworx stores the local snapshot on the same physical node or storage backend as the source data. You can use local snapshots to capture the current state of your data without moving it outside the cluster. They are fast to create, lightweight, and ideal for short-term protection and quick recovery from accidental changes or data corruption.
Creating a Local Snapshot
Create local snapshots of Portworx volumes and clone the snapshots to use in pods. With local snapshots, you can either snapshot individual PVCs one by one or snapshot a group of PVCs by using a label selector.
Use Stork to manage snapshots on Kubernetes. To create Portworx snapshots using PVC annotations, follow the instructions at Create a PVC from a snapshot.
Prerequisites
Ensure that you have Stork installed and running on your cluster.
If you fetched the Portworx specs from the Portworx spec generator in Portworx Central and used the default options, Stork is already installed.
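A quick way to confirm that Stork is up is to list its pods. This sketch assumes Stork was deployed into the kube-system namespace with the name=stork label, which is what the default spec generator output uses; adjust the namespace if your install differs:

kubectl get pods -n kube-system -l name=stork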
Local Snapshot of a Single PVC
This method is not supported for FlashArray Direct Access volumes. Use the CSI-based snapshots method for snapshotting FlashArray Direct Access PVCs.
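For reference, a CSI-based snapshot uses the standard snapshot.storage.k8s.io API instead of the Stork VolumeSnapshot shown below. A minimal sketch, assuming a VolumeSnapshotClass named px-csi-snapclass exists in your cluster and illustrative PVC/namespace names (see the CSI-based snapshots page for the exact class to use):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: fada-pvc-snapshot            # illustrative name
  namespace: fa-app                  # illustrative namespace
spec:
  volumeSnapshotClassName: px-csi-snapclass   # assumed VolumeSnapshotClass name
  source:
    persistentVolumeClaimName: fada-pvc       # the FlashArray Direct Access PVC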
Creating a snapshot within a single namespace
- If you have a PVC called jenkins-home-jenkins-master-0 in the jenkins namespace, you can create a snapshot for that PVC by using the following spec:

  apiVersion: volumesnapshot.external-storage.k8s.io/v1
  kind: VolumeSnapshot
  metadata:
    name: jenkins-home-jenkins-master-0
    namespace: jenkins
  spec:
    persistentVolumeClaimName: jenkins-home-jenkins-master-0
- Once you apply the above object, you can check the status of the snapshots.

  On Kubernetes:

  kubectl get volumesnapshot.volumesnapshot.external-storage.k8s.io -n jenkins
  NAME                                                      AGE
  jenkins-jobs-jenkins-master-0-snapshot-2019-03-20-snap1   6m

  kubectl get volumesnapshotdatas.volumesnapshot.external-storage.k8s.io -n jenkins
  NAME                                                       AGE
  k8s-volume-snapshot-xxxxxxxx-xxxx-xxxx-xxxx-0242ac110002   8m

  On OpenShift:

  oc get -n jenkins volumesnapshot
  NAME                                                      AGE
  jenkins-jobs-jenkins-master-0-snapshot-2019-03-20-snap1   6m

  oc get -n jenkins volumesnapshotdatas
  NAME                                                       AGE
  k8s-volume-snapshot-xxxxxxxx-xxxx-xxxx-xxxx-0242ac110002   8m

  The creation of the volumesnapshotdatas object indicates that the snapshot has been created.
- Describe the volumesnapshotdatas object to see the Portworx volume snapshot ID and the PVC for which the snapshot was created.

  On Kubernetes:

  kubectl describe volumesnapshotdatas

  On OpenShift:

  oc describe volumesnapshotdatas

  Example output:

  Name:         k8s-volume-snapshot-xxxxxxxx-xxxx-xxxx-xxxx-0242ac110002
  Namespace:
  Labels:       <none>
  Annotations:  <none>
  API Version:  volumesnapshot.external-storage.k8s.io/v1
  Kind:         VolumeSnapshotData
  Metadata:
    Creation Timestamp:  2019-03-20T22:22:37Z
    Generation:          1
    Resource Version:    56596513
    Self Link:           /apis/volumesnapshot.external-storage.k8s.io/v1/volumesnapshotdatas/k8s-volume-snapshot-xxxxxxxx-xxxx-xxxx-xxxx-0242ac110002
    UID:                 xxxxxxxx-xxxx-xxxx-xxxx-0cc47ab5f9a2
  Spec:
    Persistent Volume Ref:
      Kind:  PersistentVolume
      Name:  pvc-xxxxxxxx-xxxx-xxxx-xxxx-0cc47ab5f9a2
    Portworx Volume:
      Snapshot Id:    411710013297550893
      Snapshot Type:  local
    Volume Snapshot Ref:
      Kind:  VolumeSnapshot
      Name:  jenkins/jenkins-jobs-jenkins-master-0-snapshot-2019-03-20-snap1-xxxxxxxx-xxxx-xxxx-xxxx-0cc47ab5f9a2
  Status:
    Conditions:
      Last Transition Time:  2019-03-20T22:22:37Z
      Message:               Snapshot created successfully and it is ready
      Reason:
      Status:                True
      Type:                  Ready
    Creation Timestamp:      <nil>
  Events:                    <none>
- You can use the storkctl command to verify that the snapshot was created successfully:

  storkctl -n jenkins get snap
  NAME                                                      PVC                             STATUS   CREATED               COMPLETED             TYPE
  jenkins-jobs-jenkins-master-0-snapshot-2019-03-20-snap1   jenkins-jobs-jenkins-master-0   Ready    20 Mar 19 15:22 PDT   20 Mar 19 15:22 PDT   local
For details about how you can restore a snapshot to a new PVC or the original PVC, see the Restore snapshots section.
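As a quick illustration of that flow, a PVC like the following provisions a clone from the snapshot in the same namespace by referencing it through the stork-snapshot-sc StorageClass. The PVC name and requested size here are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-home-from-snap        # placeholder name for the restored PVC
  namespace: jenkins
  annotations:
    # Name of the VolumeSnapshot created above
    snapshot.alpha.kubernetes.io/snapshot: jenkins-home-jenkins-master-0
spec:
  accessModes:
    - ReadWriteOnce
  # Special StorageClass that tells Stork to provision the PVC from the snapshot
  storageClassName: stork-snapshot-sc
  resources:
    requests:
      storage: 2Gi                    # should match the size of the source PVC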
Creating snapshots across namespaces
When creating snapshots, you can provide a comma-separated list of regexes with the stork.libopenstorage.org/snapshot-restore-namespaces annotation to specify which namespaces the snapshot can be restored to.
When creating a PVC from a snapshot, if the snapshot exists in another namespace, specify the snapshot's namespace with the stork.libopenstorage.org/snapshot-source-namespace annotation.
Let us take an example with two namespaces, dev and prod. We create a PVC and a snapshot in the dev namespace, and then create a PVC in the prod namespace from that snapshot.
- Create the namespaces:

  apiVersion: v1
  kind: Namespace
  metadata:
    name: dev
    labels:
      name: dev
  ---
  apiVersion: v1
  kind: Namespace
  metadata:
    name: prod
    labels:
      name: prod
- Create the PVC and its StorageClass:

  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: mysql-data
    namespace: dev
    annotations:
      volume.beta.kubernetes.io/storage-class: px-mysql-sc
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 2Gi
  ---
  kind: StorageClass
  apiVersion: storage.k8s.io/v1
  metadata:
    name: px-mysql-sc
  provisioner: pxd.portworx.com
  parameters:
    repl: "2"
- Create the snapshot:

  apiVersion: volumesnapshot.external-storage.k8s.io/v1
  kind: VolumeSnapshot
  metadata:
    name: mysql-snapshot
    namespace: dev
    annotations:
      stork.libopenstorage.org/snapshot-restore-namespaces: "prod"
  spec:
    persistentVolumeClaimName: mysql-data
- Create a PVC in a different namespace from the snapshot:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: mysql-clone
    namespace: prod
    annotations:
      snapshot.alpha.kubernetes.io/snapshot: mysql-snapshot
      stork.libopenstorage.org/snapshot-source-namespace: dev
  spec:
    accessModes:
      - ReadWriteOnce
    storageClassName: stork-snapshot-sc
    resources:
      requests:
        storage: 2Gi
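Once Stork finishes provisioning the clone, the new PVC should report a Bound status. A quick check, using the PVC name from the example above:

kubectl get pvc mysql-clone -n prod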
Local Snapshot of a Group of PVCs
Creating group snapshots
To take group snapshots, you need to use the GroupVolumeSnapshot CRD object.
For example:
apiVersion: stork.libopenstorage.org/v1alpha1
kind: GroupVolumeSnapshot
metadata:
  name: cassandra-groupsnapshot
spec:
  pvcSelector:
    matchLabels:
      app: cassandra
The above spec takes a group snapshot of all PVCs that match the label app=cassandra.
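For the selector to match, the PVCs need to carry that label in their metadata. A minimal sketch, with an illustrative PVC name, StorageClass, and size:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cassandra-data-cassandra-0     # illustrative name
  labels:
    app: cassandra                     # must match spec.pvcSelector.matchLabels
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: px-cassandra-sc    # illustrative StorageClass
  resources:
    requests:
      storage: 8Gi                     # illustrative size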
The Examples section has a more detailed end-to-end example.
The above spec keeps all the snapshots local to the Portworx cluster. If you intend to back up the group snapshots to the cloud (an S3 endpoint), refer to Create group cloud snapshots.
The GroupVolumeSnapshot object also supports specifying pre and post rules that run on the application pods using the volumes being snapshotted. This allows you to quiesce the applications before the snapshot is taken and resume I/O after it is taken. For more information, see 3D Snapshots.
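As a rough sketch of how the pieces fit together, assuming the Rule CRD and the preExecRule field described in 3D Snapshots; the rule name and the nodetool command here are illustrative:

apiVersion: stork.libopenstorage.org/v1alpha1
kind: Rule
metadata:
  name: cassandra-presnap-rule         # illustrative name
rules:
  - podSelector:
      app: cassandra                   # run on pods carrying this label
    actions:
      - type: command
        value: nodetool flush          # flush memtables before the snapshot
---
apiVersion: stork.libopenstorage.org/v1alpha1
kind: GroupVolumeSnapshot
metadata:
  name: cassandra-groupsnapshot
spec:
  preExecRule: cassandra-presnap-rule  # rule to run before taking the snapshot
  pvcSelector:
    matchLabels:
      app: cassandra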
Checking status of group snapshots
A new VolumeSnapshot object gets created for each PVC that matches the given pvcSelector.
For example, if the label selector app: cassandra matches three PVCs, you will have three VolumeSnapshot objects.
You can track the status of the group volume snapshots using:
On Kubernetes:

kubectl describe groupvolumesnapshot <group-snapshot-name>

On OpenShift:

oc describe groupvolumesnapshot <group-snapshot-name>
This shows the latest status and, once the group snapshot is complete, also lists the resulting VolumeSnapshot objects.
Below is an example of the status section of the cassandra group snapshot.
Status:
  Stage:   Final
  Status:  Successful
  Volume Snapshots:
    Conditions:
      Last Transition Time:  2019-01-14T18:02:47Z
      Message:               Snapshot created successfully and it is ready
      Reason:
      Status:                True
      Type:                  Ready
    Data Source:
      Portworx Volume:
        Snapshot Id:       1015874155818710382
    Parent Volume ID:      763613271174793816
    Task ID:
    Volume Snapshot Name:  cassandra-group-snapshot-cassandra-data-cassandra-2-xxxxxxxx-xxxx-xxxx-xxxx-080027ee1df7
    Conditions:
      Last Transition Time:  2019-01-14T18:02:47Z
      Message:               Snapshot created successfully and it is ready
      Reason:
      Status:                True
      Type:                  Ready
    Data Source:
      Portworx Volume:
        Snapshot Id:       1130064992705573378
    Parent Volume ID:      1081147806034223862
    Task ID:
    Volume Snapshot Name:  cassandra-group-snapshot-cassandra-data-cassandra-0-xxxxxxxx-xxxx-xxxx-xxxx-080027ee1df7
    Conditions:
      Last Transition Time:  2019-01-14T18:02:47Z
      Message:               Snapshot created successfully and it is ready
      Reason:
      Status:                True
      Type:                  Ready
    Data Source:
      Portworx Volume:
        Snapshot Id:       175241555565145805
    Parent Volume ID:      237262101530372284
    Task ID:
    Volume Snapshot Name:  cassandra-group-snapshot-cassandra-data-cassandra-1-xxxxxxxx-xxxx-xxxx-xxxx-080027ee1df7
You can see the three VolumeSnapshots that are part of the group snapshot. The name of each VolumeSnapshot is in the Volume Snapshot Name field. To get more details on a VolumeSnapshot:
On Kubernetes:

kubectl get volumesnapshot.volumesnapshot.external-storage.k8s.io/<volume-snapshot-name> -o yaml

On OpenShift:

oc get volumesnapshot.volumesnapshot.external-storage.k8s.io/<volume-snapshot-name> -o yaml
Snapshots across namespaces
When creating a group snapshot, you can specify a list of namespaces to which the group snapshot can be restored. Below is an example of a group snapshot that can be restored into the prod-01 and prod-02 namespaces.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: GroupVolumeSnapshot
metadata:
  name: cassandra-groupsnapshot
spec:
  pvcSelector:
    matchLabels:
      app: cassandra
  restoreNamespaces:
    - prod-01
    - prod-02
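To restore one of the resulting snapshots into one of those namespaces, you can follow the same pattern as the single-PVC example earlier: reference a child VolumeSnapshot by name and point the stork.libopenstorage.org/snapshot-source-namespace annotation at the namespace where the group snapshot was taken. A sketch, assuming the group snapshot was taken in a namespace called cassandra and reusing a child snapshot name from the status output above; the PVC name and size are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cassandra-data-restore          # placeholder name for the restored PVC
  namespace: prod-01
  annotations:
    # Child VolumeSnapshot created by the group snapshot (see the status output above)
    snapshot.alpha.kubernetes.io/snapshot: cassandra-group-snapshot-cassandra-data-cassandra-0-xxxxxxxx-xxxx-xxxx-xxxx-080027ee1df7
    # Namespace where the group snapshot (and its VolumeSnapshots) were created
    stork.libopenstorage.org/snapshot-source-namespace: cassandra
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: stork-snapshot-sc
  resources:
    requests:
      storage: 8Gi                      # should match the size of the source PVC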