
Create and use cloud snapshots

Summary and key concepts

Summary:

This article provides a comprehensive guide on how to create and restore cloud snapshots of Portworx volumes, both within the same Portworx cluster and across different Portworx clusters. It covers key prerequisites, such as having Stork installed and configuring cloud snapshot credentials. The document walks through the process of backing up volumes to cloud storage and restoring them to new or existing Persistent Volume Claims (PVCs), either via Stork or the Portworx pxctl command-line tool. Additionally, it explains how to handle in-place restores using Stork for applications managed by the Stork scheduler, and outlines version compatibility requirements for snapshots.

Kubernetes Concepts:

  • PersistentVolumeClaim (PVC): Request for storage by a Kubernetes user, which can be backed up or restored using snapshots.
  • StorageClass: Defines how storage is provisioned for PVCs, including the stork-snapshot-sc StorageClass used for restoring from snapshots.
  • Annotations: Used to specify snapshot details when creating PVCs from snapshots.
  • Retain Policy: Ensures volumes are not deleted when associated Kubernetes objects are removed.

Portworx Concepts:

  • Stork: A Portworx storage orchestrator responsible for managing snapshots and backups in Kubernetes environments.
  • pxctl: The Portworx CLI tool for managing volumes, including creating cloud snapshots and restoring backups.

This document shows how you can create cloud snapshots of Portworx volumes and how you can clone those snapshots to use them in pods.

note

You cannot use an older version of Portworx to restore a cloud snapshot created with a newer one. For example, if you're running Portworx 2.1, you can't restore a cloud snapshot created with Portworx 2.2.
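Before you restore across clusters or after an upgrade, you can confirm the Portworx version on a node in each cluster. A minimal check, assuming pxctl is in your path (the exact output format varies by release):

    # Print the Portworx version on this node; run on both source and destination
    pxctl --version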

Back up a volume and restore it to the same Portworx cluster

This section shows how you can back up a volume and restore it to the same Portworx cluster.

Prerequisites

  • This requires that you already have Stork installed and running on your Kubernetes cluster. If you fetched the Portworx specs from the Portworx spec generator in Portworx Central and used the default options, Stork is already installed.
  • Cloud snapshots using the method below are supported in Portworx version 1.4 and above.
  • Cloud snapshots for aggregated volumes using the method below are supported in Portworx version 2.0 and above.
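To confirm the Stork prerequisite above, one quick check is to look for running Stork pods. This sketch assumes the default deployment in the kube-system namespace with the name=stork label; adjust both if your installation differs:

    # All Stork pods should report a Running status
    kubectl get pods -n kube-system -l name=stork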

Configuring cloud secrets

To create cloud snapshots, you need to set up secrets with Portworx, which are used to connect to and authenticate with the configured cloud provider. Follow the instructions in the create and configure credentials section to set up secrets.
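As a brief sketch, assuming an S3-compatible object store, creating and verifying a credential looks like the following; every angle-bracketed value is a placeholder for your own configuration:

    # Register S3 credentials that Portworx uses to reach the object store
    pxctl credentials create --provider s3 \
      --s3-access-key <YOUR_ACCESS_KEY> \
      --s3-secret-key <YOUR_SECRET_KEY> \
      --s3-region us-east-1 \
      --s3-endpoint <YOUR_ENDPOINT> \
      <YOUR_CRED_NAME>

    # Confirm the credential is registered
    pxctl credentials list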

Create cloud snapshots

With cloud snapshots, you can either snapshot individual PVCs one by one or snapshot a group of PVCs.
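For reference, a minimal per-PVC cloud snapshot created through Stork looks like the following sketch; mysql-cloud-snapshot and mysql-data are example names, not objects from this walkthrough:

    apiVersion: volumesnapshot.external-storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: mysql-cloud-snapshot
      namespace: default
      annotations:
        # Without this annotation, Stork takes a local snapshot instead
        portworx/snapshot-type: cloud
    spec:
      persistentVolumeClaimName: mysql-data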

Restore cloud snapshots

Once you've created a cloud snapshot, you can restore it to a new PVC or the original PVC.

Restore a cloud snapshot to a new PVC

When you install Stork, it also creates a storage class called stork-snapshot-sc. This storage class can be used to create PVCs from snapshots.

To create a PVC from a snapshot, add the snapshot.alpha.kubernetes.io/snapshot annotation to refer to the snapshot name. If the snapshot exists in another namespace, you should specify the snapshot namespace with the stork.libopenstorage.org/snapshot-source-namespace annotation in the PVC.

The Retain policy is important if you need to keep the volume in place, even after removing the Kubernetes objects from a cluster.

note
  • As shown in the following example, the storageClassName should be the Stork StorageClass stork-snapshot-sc.
  • When you use this storage class, the PVC is created with Delete as its reclaim policy. If the source PVC uses the Retain reclaim policy, the restored PVC does not inherit it. After the restore, manually verify the reclaim policy and change it if needed, as shown in the example after this note.
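If the restored volume must survive deletion of its Kubernetes objects, one way to adjust the policy after the restore is to patch the PersistentVolume bound to the restored PVC; <RESTORED_PV_NAME> is a placeholder for that PV's name:

    # Change the reclaim policy of the restored PV from Delete to Retain
    kubectl patch pv <RESTORED_PV_NAME> \
      -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'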
  1. Create a new PVC using the following spec:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: vdbench-restore
      namespace: vdbench-sv4-svc-autojournal
      annotations:
        snapshot.alpha.kubernetes.io/snapshot: vdbench-pvc-output-sv4-svc-schedule-interval-2024-01-10-225924
        stork.libopenstorage.org/snapshot-source-namespace: vdbench-sv4-svc-autojournal
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: stork-snapshot-sc
      resources:
        requests:
          storage: 10Gi
  2. Once the above PVC specification is applied, verify that Stork has created a PVC that is backed by a clone of the specified Portworx volume snapshot (vdbench-pvc-output-sv4-svc-schedule-interval-2024-01-10-225924):

    storkctl -n vdbench-sv4-svc-autojournal get volumesnapshot
    NAME                                                              PVC                          STATUS   CREATED               COMPLETED             TYPE
    vdbench-pvc-output-sv4-svc-schedule-interval-2024-01-10-225924   vdbench-pvc-output-sv4-svc   Ready    10 Jan 24 14:59 PST   10 Jan 24 14:59 PST   cloud
  3. Verify that a cloud snapshot is restored to the PVC created above:

    kubectl get pvc -n vdbench-sv4-svc-autojournal

    NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS              AGE
    vdbench-pvc-enc-sv4-svc      Bound    pvc-xxxxxxxx-xxxx-xxxx-xxxx-a2959220f31b   50Gi       RWX            vdbench-sc-sv4-svc-auto   29m
    vdbench-pvc-output-sv4-svc   Bound    pvc-xxxxxxxx-xxxx-xxxx-xxxx-ceda98f2ae06   5Gi        RWX            vdbench-sc-sv4-svc-auto   29m
    vdbench-restore              Bound    pvc-xxxxxxxx-xxxx-xxxx-xxxx-9916a6098418   10Gi       RWO            stork-snapshot-sc         4m46s

    In the above example output, the vdbench-restore PVC is in a Bound status and is associated with the correct volume that represents the restored snapshot.
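At this point the restored PVC can be mounted like any other. The following is an illustrative sketch, not part of the original walkthrough; the pod name and busybox image are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: vdbench-restore-check
      namespace: vdbench-sv4-svc-autojournal
    spec:
      containers:
        - name: inspect
          image: busybox
          command: ["sleep", "3600"]
          volumeMounts:
            - name: restored-data
              mountPath: /data
      volumes:
        - name: restored-data
          persistentVolumeClaim:
            claimName: vdbench-restore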

Restore a cloud snapshot to the original PVC

When you perform an in-place restore to a PVC, Stork takes the pods using that PVC offline, restores the volume from the snapshot, then brings the pods back online.

note

In-place restore using VolumeSnapshotRestore works only for applications deployed using the Stork scheduler. If you're not using the Stork scheduler, Portworx displays the following error when you describe the VolumeSnapshotRestore resource:

Events:
  Type     Reason  Age               From   Message
  ----     ------  ----              ----   -------
  Warning  Failed  5s (x2 over 15s)  stork  application not scheduled by stork scheduler
  1. Create a VolumeSnapshotRestore YAML file specifying the following:

    • apiVersion as stork.libopenstorage.org/v1alpha1
    • kind as VolumeSnapshotRestore
    • metadata.name with the name of the object that performs the restore
    • metadata.namespace with the name of the target namespace
    • spec.sourceName with the name of the snapshot you want to restore
    • spec.sourceNamespace with the namespace in which the snapshot resides

    The following example restores data from a snapshot called mysql-snapshot which was created in the mysql-snap-restore-splocal namespace to a PVC called mysql-snap-inrestore in the default namespace:

    apiVersion: stork.libopenstorage.org/v1alpha1
    kind: VolumeSnapshotRestore
    metadata:
      name: mysql-snap-inrestore
      namespace: default
    spec:
      sourceName: mysql-snapshot
      sourceNamespace: mysql-snap-restore-splocal
  2. Place the spec into a file called mysql-cloud-snapshot-restore.yaml and apply it:

    kubectl apply -f mysql-cloud-snapshot-restore.yaml

  3. You can enter the following command to see the status of the restore process:

    storkctl get volumesnapshotrestore
    NAME                   SOURCE-SNAPSHOT   SOURCE-SNAPSHOT-NAMESPACE   STATUS          VOLUMES   CREATED
    mysql-snap-inrestore   mysql-snapshot    default                     Successful      1         23 Sep 19 21:55 EDT

    You can also use the kubectl describe command to retrieve more detailed information about the status of the restore process.

    Example:

    kubectl describe volumesnapshotrestore mysql-snap-inrestore

    Name:         mysql-snap-inrestore
    Namespace:    default
    Labels:       <none>
    Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                    {"apiVersion":"stork.libopenstorage.org/v1alpha1","kind":"VolumeSnapshotRestore","metadata":{"annotations":{},"name":"mysql-snap-inrestore...
    API Version:  stork.libopenstorage.org/v1alpha1
    Kind:         VolumeSnapshotRestore
    Metadata:
      Creation Timestamp:  2019-09-23T17:24:30Z
      Generation:          5
      Resource Version:    904014
      Self Link:           /apis/stork.libopenstorage.org/v1alpha1/namespaces/default/volumesnapshotrestores/mysql-snap-inrestore
      UID:                 xxxxxxxx-xxxx-xxxx-xxxx-000c295d6364
    Spec:
      Group Snapshot:    false
      Source Name:       mysql-snapshot
      Source Namespace:  default
    Status:
      Status:  Successful
      Volumes:
        Namespace:  default
        Pvc:        mysql-data
        Reason:     Restore is successful
        Snapshot:   k8s-volume-snapshot-xxxxxxxx-xxxx-xxxx-xxxx-320ff611f4ca
        Status:     Successful
        Volume:     pvc-xxxxxxxx-xxxx-xxxx-xxxx-000c295d6364
    Events:
      Type    Reason      Age   From   Message
      ----    ------      ----  ----   -------
      Normal  Successful  0s    stork  Snapshot in-Place Restore completed


Back up a volume and restore it to a different Portworx cluster

This section shows how you can back up a volume and restore it to a different Portworx cluster using the pxctl command-line utility.

Prerequisites

Before you can back up and restore a volume to a different Portworx cluster, you must meet the following prerequisites:

  • Two running Portworx clusters. Refer to the Installation page for details about how to install Portworx.
  • An object store. Cloud snapshots work with Amazon S3, Azure Blob, Google Cloud Storage, or any S3-compatible object store. If you don't have an object store, Portworx by Pure Storage recommends using MinIO. See the MinIO Quickstart Guide page for details about installing MinIO.
  • A secret store provider. Refer to the Secret store management page for details about configuring a secret store provider.

Create your cloud snapshot credentials on the source cluster

The options you use to create your cloud snapshot credentials differ based on which secret store provider you use. The steps in this document describe AWS KMS, but you can find instructions for creating other credentials in the CLI reference.

  1. Enter the pxctl credentials create command, specifying the following:

    • The --provider flag with the name of the cloud provider (s3)
    • The --s3-access-key flag with your access key ID
    • The --s3-secret-key flag with your secret access key
    • The --s3-region flag with the name of the S3 region (us-east-1)
    • The --s3-endpoint flag with the name of the endpoint (s3.amazonaws.com)
    • The optional --s3-storage-class flag with either the STANDARD or STANDARD-IA value, depending on which storage class you prefer
    • The name of your cloud credentials

    Example:

    pxctl credentials create --provider s3 --s3-access-key <YOUR_ACCESS_KEY> --s3-secret-key <YOUR_SECRET_KEY> --s3-region us-east-1 --s3-endpoint <YOUR_ENDPOINT> --s3-storage-class <YOUR_STORAGE_CLASS> <YOUR_SOURCE_S3_CRED>
    Credentials created successfully, UUID: 0d9847d6-786f-4ed8-b263-5cde5a5a12f5
  2. You can validate your cloud snapshot credentials by entering the pxctl credentials validate command followed by the name of your cloud credentials:

    pxctl cred validate <YOUR_SOURCE_S3_CRED>
    Credential validated successfully

Back up a volume

  1. Enter the following pxctl volume list command to list all volumes on the source cluster:

    pxctl volume list
    ID                   NAME      SIZE    HA   SHARED   ENCRYPTED   IO_PRIORITY   STATUS                       SNAP-ENABLED
    869510655149846346   testvol   1 GiB   1    no       no          HIGH          up - attached on X.X.X.123   no
    186765995885697345   vol2      1 GiB   1    no       no          HIGH          up - attached on X.X.X.123   no
  2. To back up a volume, enter the following pxctl cloudsnap backup command, specifying the name of your volume. The following example backs up a volume called testvol:

    pxctl cloudsnap backup testvol
    Cloudsnap backup started successfully with id: xxxxxxxx-xxxx-xxxx-xxxx-a46868cc6b5c
  3. Enter the pxctl cloudsnap status command to display the status of your backup or restore operations:

    pxctl cloudsnap status
    NAME                                   SOURCEVOLUME         STATE           NODE        TIME-ELAPSED   COMPLETED
    xxxxxxxx-xxxx-xxxx-xxxx-9bff6ea440eb   869510655149846346   Backup-Failed   X.X.X.153   80.915632ms    Wed, 22 Jan 2020 23:51:17 UTC
    xxxxxxxx-xxxx-xxxx-xxxx-10458b7a8445   869510655149846346   Backup-Done     X.X.X.153   55.098204ms    Wed, 22 Jan 2020 23:52:15 UTC
    xxxxxxxx-xxxx-xxxx-xxxx-f15839b26e76   186765995885697345   Backup-Failed   X.X.X.153   39.703754ms    Wed, 29 Jan 2020 18:17:30 UTC
    xxxxxxxx-xxxx-xxxx-xxxx-2f4a504c01f9   186765995885697345   Backup-Done     X.X.X.153   60.439873ms    Wed, 29 Jan 2020 18:34:17 UTC
    xxxxxxxx-xxxx-xxxx-xxxx-a46868cc6b5c   869510655149846346   Backup-Done     X.X.X.153   45.874676ms    Wed, 29 Jan 2020 22:32:30 UTC
  4. To see more details about your backup operation, enter the pxctl cloudsnap status command specifying the following:

    • The --json flag

    • The --name flag with the task name of your backup.

      Example:

      pxctl --json cloudsnap status --name xxxxxxxx-xxxx-xxxx-xxxx-a46868cc6b5c
      {
        "xxxxxxxx-xxxx-xxxx-xxxx-a46868cc6b5c": {
          "ID": "xxxxxxxx-xxxx-xxxx-xxxx-92cf569c9358/869510655149846346-1140911084048715440",
          "OpType": "Backup",
          "Status": "Done",
          "BytesDone": 368640,
          "BytesTotal": 0,
          "EtaSeconds": 0,
          "StartTime": "2020-01-29T22:32:30.258745865Z",
          "CompletedTime": "2020-01-29T22:32:30.304620541Z",
          "NodeID": "xxxxxxxx-xxxx-xxxx-xxxx-3c38a8c04736",
          "SrcVolumeID": "869510655149846346",
          "Info": [
            ""
          ],
          "CredentialUUID": "xxxxxxxx-xxxx-xxxx-xxxx-5cde5a5a12f5",
          "GroupCloudBackupID": ""
        }
      }
  5. Run the pxctl cloudsnap list command, and look through the output to find the identifier of the cloud snapshot associated with your volume. You will use this to restore your cloud snapshot.

    pxctl cloudsnap list
    SOURCEVOLUME   SOURCEVOLUMEID       CLOUD-SNAP-ID                                                                  CREATED-TIME                    TYPE     STATUS
    testvol        869510655149846346   xxxxxxxx-xxxx-xxxx-xxxx-92cf569c9358/869510655149846346-457116323485794032    Wed, 22 Jan 2020 23:52:15 UTC   Manual   Done
    vol2           186765995885697345   xxxxxxxx-xxxx-xxxx-xxxx-92cf569c9358/186765995885697345-237744851553132030    Wed, 29 Jan 2020 18:34:17 UTC   Manual   Done
    testvol        869510655149846346   xxxxxxxx-xxxx-xxxx-xxxx-92cf569c9358/869510655149846346-1140911084048715440   Wed, 29 Jan 2020 22:32:30 UTC   Manual   Done

    The CLOUD-SNAP-ID column is in the form of <YOUR_SOURCE_CLUSTER_ID>/<YOUR_CLOUD_SNAP_ID>. In this example, the identifier of the source cluster is xxxxxxxx-xxxx-xxxx-xxxx-92cf569c9358, and the identifier of the cloud snapshot is 869510655149846346-457116323485794032.
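    If you script the handoff between clusters, one hedged way to capture the latest CLOUD-SNAP-ID for a given volume is to filter the list output; this assumes the column layout shown above, with the ID in the third column:

    # Print the CLOUD-SNAP-ID of the most recent testvol backup
    pxctl cloudsnap list | awk '$1 == "testvol" { id = $3 } END { print id }'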

Create your cloud snapshot credentials on the destination cluster

  1. Enter the pxctl credentials create command, specifying the following:

    • The --provider flag with the name of the cloud provider (s3)
    • The --s3-access-key flag with your access key ID
    • The --s3-secret-key flag with your secret access key
    • The --s3-region flag with the name of the S3 region (us-east-1)
    • The --s3-endpoint flag with the name of the endpoint (s3.amazonaws.com)
    • The optional --s3-storage-class flag with either the STANDARD or STANDARD-IA value, depending on which storage class you prefer
    • The name of your cloud credentials

    Example:

    pxctl credentials create --provider s3 --s3-access-key <YOUR_ACCESS_KEY> --s3-secret-key <YOUR_SECRET_KEY> --s3-region us-east-1 --s3-endpoint <YOUR_ENDPOINT> --s3-storage-class <YOUR_STORAGE_CLASS> <YOUR_DEST_S3_CRED>
    Credentials created successfully, UUID: bb281a27-c2bb-4b3d-b5b9-efa0316a9561

Restore your volume on the target cluster

  1. On the target cluster, verify that your cloud snapshot is visible. Enter the pxctl cloudsnap list command, specifying the --cluster flag with the identifier of the source cluster.

    Example:

    pxctl cloudsnap list --cluster xxxxxxxx-xxxx-xxxx-xxxx-92cf569c9358
    xxxxxxxx-xxxx-xxxx-xxxx-92cf569c9358
    SOURCEVOLUME   SOURCEVOLUMEID       CLOUD-SNAP-ID                                                                  CREATED-TIME                    TYPE     STATUS
    testvol        869510655149846346   xxxxxxxx-xxxx-xxxx-xxxx-92cf569c9358/869510655149846346-457116323485794032    Wed, 22 Jan 2020 23:52:15 UTC   Manual   Done
    vol2           186765995885697345   xxxxxxxx-xxxx-xxxx-xxxx-92cf569c9358/186765995885697345-237744851553132030    Wed, 29 Jan 2020 18:34:17 UTC   Manual   Done
    testvol        869510655149846346   xxxxxxxx-xxxx-xxxx-xxxx-92cf569c9358/869510655149846346-1140911084048715440   Wed, 29 Jan 2020 22:32:30 UTC   Manual   Done
  2. To restore your volume, run the pxctl cloudsnap restore command specifying the --snap flag with the cloud snapshot identifier associated with your backup. Example:

    pxctl cloudsnap restore --snap xxxxxxxx-xxxx-xxxx-xxxx-92cf569c9358/869510655149846346-1140911084048715440
    Cloudsnap restore started successfully on volume: 1127186980413628688 with task name: xxxxxxxx-xxxx-xxxx-xxxx-a6b731f73983
  3. To see the status of your restore operation, enter the following command:

    pxctl cloudsnap status
    NAME                                   SOURCEVOLUME                                                                   STATE          NODE       TIME-ELAPSED   COMPLETED
    xxxxxxxx-xxxx-xxxx-xxxx-dd77c14c00bc   79001397979145130                                                              Backup-Done    X.X.X.94   44.634974ms    Wed, 29 Jan 2020 20:13:58 UTC
    xxxxxxxx-xxxx-xxxx-xxxx-2aba15c5b300   xxxxxxxx-xxxx-xxxx-xxxx-92cf569c9358/869510655149846346-1140911084048715440   Restore-Done   X.X.X.94   53.527074ms    Wed, 29 Jan 2020 22:52:47 UTC
  4. Run the pxctl volume list command to list all volumes on the destination cluster:

    pxctl volume list
    ID                    NAME                                   SIZE    HA   SHARED   ENCRYPTED   IO_PRIORITY   STATUS          SNAP-ENABLED
    1021141073379827532   Restore-869510655149846346-556794585   1 GiB   1    no       no          HIGH          up - detached   no
    79001397979145130     samvol                                 1 GiB   1    no       no          HIGH          up - detached   no
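Before attaching the restored volume to an application, you can inspect it to confirm its size, replication factor, and other attributes; the volume name below comes from the listing above:

    pxctl volume inspect Restore-869510655149846346-556794585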

The naming scheme for cloud backups

Cloud backups adhere to the following naming scheme: <bucket-id>/<vol-id>-<snap-id>.

Example:

  • xxxxxxxx-xxxx-xxxx-xxxx-14223ac55170/56706279008755778-725134927222077463

For incremental backups, Portworx adds the -incr suffix as follows: <bucket-id>/<vol-id>-<snap-id>-incr.

Example:

  • xxxxxxxx-xxxx-xxxx-xxxx-14223ac55170/590114184663672482-951325819047337066-incr