Version: 3.1

Scheduled snapshots in OpenShift vSphere

Summary and Key concepts

Summary:

This article provides detailed steps for automating snapshots of Portworx volumes using Autopilot and Stork in a Kubernetes environment. It covers setting up cloud credentials, using the storkctl tool, and creating schedule policies to manage automatic snapshots. The article shows how to associate schedule policies with both individual volumes and StorageClasses, allowing automatic snapshots to be taken at regular intervals, either locally or in the cloud. Additionally, it explains how to monitor and verify the snapshot schedules and snapshots using storkctl commands.

Kubernetes Concepts:

  • PersistentVolumeClaim (PVC): A request for storage by a Kubernetes user. The article demonstrates how to attach snapshot schedules to PVCs.
  • StorageClass: Defines how storage is provisioned in Kubernetes. The article shows how to apply snapshot schedules to all PVCs using a specific StorageClass.
  • Annotations: Used in both PVC and StorageClass definitions to specify attributes like snapshot types and cloud credentials.
  • OpenShift Commands (oc): Several examples use oc commands to interact with the OpenShift cluster.

Portworx Concepts:

  • Autopilot: A feature that automates storage operations, such as resizing volumes and managing snapshots based on predefined rules.

  • Stork: The Stork component enables management of storage operations, including scheduling volume snapshots. The storkctl tool is used to interact with Stork.

  • Cloud Snapshots: Snapshots stored in cloud environments. The article shows how to configure cloud credentials and associate them with snapshot schedules.

Prerequisites

Configuring cloud secrets

To create cloud snapshots, you need to set up secrets with Portworx, which are used to connect and authenticate with the configured cloud provider.

Follow the instructions in the create and configure credentials section to set up secrets.
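
For example, for an S3-compatible objectstore you can create a credential with pxctl from one of the Portworx pods. The following is only a minimal sketch; the placeholder values are illustrative, and the full set of providers and flags is covered on the credentials page referenced above:

PX_POD=$(oc get pods -l name=portworx -n <px-namespace> -o jsonpath='{.items[0].metadata.name}')
oc exec $PX_POD -n <px-namespace> -- /opt/pwx/bin/pxctl credentials create \
  --provider s3 \
  --s3-access-key <access-key> \
  --s3-secret-key <secret-key> \
  --s3-region <region> \
  --s3-endpoint <endpoint> \
  <credential-name>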

Storkctl

important

Always use the latest storkctl binary by downloading it from the currently running Stork container.

Perform the following steps to download storkctl from the Stork pod:

  • Linux:

    STORK_POD=$(oc get pods -n <px-namespace> -l name=stork -o jsonpath='{.items[0].metadata.name}') &&
    oc cp -n <px-namespace> $STORK_POD:/storkctl/linux/storkctl ./storkctl &&
    sudo mv storkctl /usr/local/bin &&
    sudo chmod +x /usr/local/bin/storkctl
  • OS X:

    STORK_POD=$(oc get pods -n <px-namespace> -l name=stork -o jsonpath='{.items[0].metadata.name}') &&
    oc cp -n <px-namespace> $STORK_POD:/storkctl/darwin/storkctl ./storkctl &&
    sudo mv storkctl /usr/local/bin &&
    sudo chmod +x /usr/local/bin/storkctl
  • Windows:

    1. Copy storkctl.exe from the stork pod:

      STORK_POD=$(oc get pods -n <px-namespace> -l name=stork -o jsonpath='{.items[0].metadata.name}') &&
      oc cp -n <px-namespace> $STORK_POD:/storkctl/windows/storkctl.exe ./storkctl.exe
    2. Move storkctl.exe to a directory in your PATH.
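
After downloading storkctl for your platform, you can confirm that the binary is installed correctly by printing its help text or, on recent Stork builds, its version:

storkctl --help
storkctl version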

Create a schedule policy

You can use a schedule policy to specify when Portworx should trigger a specific action.

  1. Create a file named daily-policy.yaml, specifying the following fields and values:

    • apiVersion: with the version of the Stork scheduler (this example uses stork.libopenstorage.org/v1alpha1)

    • kind: with the SchedulePolicy value

    • metadata.name: with the name of the SchedulePolicy object (this example uses daily)

    • policy.daily.time: with the backup time (this example uses "10:14PM")

    • policy.daily.retain: with the number of backups Portworx must retain (this example retains 3 backups)

      apiVersion: stork.libopenstorage.org/v1alpha1
      kind: SchedulePolicy
      metadata:
        name: daily
      policy:
        daily:
          time: "10:14PM"
          retain: 3

    For more details about how you can configure a schedule policy, see the Schedule Policy reference page.

  2. Apply the spec:

    oc apply -f daily-policy.yaml
    schedulepolicy.stork.libopenstorage.org/daily created
  3. You can check the status of your schedule policy by entering the storkctl get schedulepolicy command:

    storkctl get schedulepolicy
    NAME      INTERVAL-MINUTES   DAILY     WEEKLY   MONTHLY
    daily     N/A                10:14PM   N/A      N/A
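
The StorageClass example later in this article also references a policy named weekly. If you want to follow that example end to end, create a matching SchedulePolicy first. The following is a minimal sketch; the day, time, and retain values are illustrative, and the accepted fields are documented on the Schedule Policy reference page:

apiVersion: stork.libopenstorage.org/v1alpha1
kind: SchedulePolicy
metadata:
  name: weekly
policy:
  weekly:
    day: "Sunday"
    time: "10:14PM"
    retain: 2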

Associate a schedule policy with a StorageClass or a Volume

The following sections show how you can associate a schedule policy either with a Volume or a StorageClass.

Create a VolumeSnapshotSchedule

Use a VolumeSnapshotSchedule to associate your schedule policy at the CRD level, and back up specific volumes according to a schedule you define.

  1. Create a file called volume-snapshot-schedule.yaml specifying the following fields and values:
  • metadata:
    • name: with the name of this VolumeSnapshotSchedule policy
    • namespace: the namespace in which this policy will exist
    • annotations:
      • portworx/snapshot-type: with the cloud or local value, depending on which environment you want to store your snapshots in
      • portworx/cloud-cred-id: with the ID of the cloud credential to use for cloud snapshots
      • stork.libopenstorage.org/snapshot-restore-namespaces: with the other namespaces that snapshots taken with this schedule can be restored to
      • The following annotations are required when PX-Security is enabled:
        • openstorage.io/auth-secret-namespace: namespace where the kubernetes secret holding the auth token resides
        • openstorage.io/auth-secret-name: name of the kubernetes secret which holds the auth token
  • spec:
    • schedulePolicyName: with the name of the schedule policy you defined in the steps above

    • suspend: with a boolean value specifying if the schedule should be in a suspended state

    • preExecRule: with the name of a rule to run before taking the snapshot

    • postExecRule: with the name of a rule to run after taking the snapshot

    • reclaimPolicy: with Retain or Delete, indicating what Portworx should do with the snapshots created using the schedule. Specifying the Delete value deletes the snapshots created by this schedule when the schedule itself is deleted.

    • template.spec.persistentVolumeClaimName: with the PVC you want this policy to apply to

      apiVersion: stork.libopenstorage.org/v1alpha1
      kind: VolumeSnapshotSchedule
      metadata:
        name: mysql-snapshot-schedule
        namespace: mysql
        annotations:
          portworx/snapshot-type: cloud
          portworx/cloud-cred-id: <cred_id>
          stork.libopenstorage.org/snapshot-restore-namespaces: otherNamespace
          # Add the below annotations when PX-Security is enabled.
          #openstorage.io/auth-secret-namespace: <secret-namespace>
          #openstorage.io/auth-secret-name: <secret-name>
      spec:
        schedulePolicyName: testpolicy
        suspend: false
        reclaimPolicy: Delete
        preExecRule: testRule
        postExecRule: otherTestRule
        template:
          spec:
            persistentVolumeClaimName: mysql-data
  2. Apply the spec:

    oc apply -f volume-snapshot-schedule.yaml
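
After applying the spec, you can confirm that the schedule was created in the target namespace (mysql in this example):

storkctl get volumesnapshotschedules -n mysql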

Create a storage class

Use a StorageClass to apply your schedule policy to all PVCs using that StorageClass.

  1. Create a file called sc-with-snap-schedule.yaml with the following content:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: px-sc-with-snap-schedules
      annotations:
        # Add the below annotations when PX-Security is enabled.
        #openstorage.io/auth-secret-namespace: <secret-namespace>
        #openstorage.io/auth-secret-name: <secret-name>
    provisioner: pxd.portworx.com
    parameters:
      repl: "2"
      snapshotschedule.stork.libopenstorage.org/default-schedule: |
        schedulePolicyName: daily
        annotations:
          portworx/snapshot-type: local
      snapshotschedule.stork.libopenstorage.org/weekly-schedule: |
        schedulePolicyName: weekly
        annotations:
          portworx/snapshot-type: cloud
          portworx/cloud-cred-id: <credential-uuid>
note

This example references two schedules:

  • The default-schedule backs up volumes to the local Portworx cluster daily.
  • The weekly-schedule backs up volumes to cloud storage every week.
  2. Identify the cloud credential to use and apply the spec, as described in the following section.

Specifying the cloud credential to use

note

Specifying the portworx/cloud-cred-id annotation is required only if you have more than one cloud credential configured. If you have a single credential, it is used by default.

Let's list the available cloud credentials:

PX_POD=$(oc get pods -l name=portworx -n <px-namespace> -o jsonpath='{.items[0].metadata.name}')
oc exec $PX_POD -n <px-namespace> -- /opt/pwx/bin/pxctl credentials list

The above command outputs the credentials that can be used to authenticate with and access the objectstore. Pick the one you want to use for this snapshot schedule and specify it in the portworx/cloud-cred-id annotation in the StorageClass.

Next, let's apply our newly created storage class:

oc apply -f sc-with-snap-schedule.yaml
storageclass.storage.k8s.io/px-sc-with-snap-schedules created
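
If you want to double-check that the snapshot schedules were recorded on the StorageClass, describe it; the Parameters section should list the default-schedule and weekly-schedule entries:

oc describe storageclass px-sc-with-snap-schedules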

Create a PVC

After we've created the new StorageClass, we can refer to it by name in our PVCs like this:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-snap-schedules-demo
  annotations:
    volume.beta.kubernetes.io/storage-class: px-sc-with-snap-schedules
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

Paste the listing from above into a file named pvc-snap-schedules-demo.yaml and run:

oc create -f pvc-snap-schedules-demo.yaml
persistentvolumeclaim/pvc-snap-schedules-demo created

Let's see our PVC:

oc get pvc
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
pvc-snap-schedules-demo   Bound    pvc-xxxxxxxx-xxxx-xxxx-xxxx-080027ee1df7   2Gi        RWO            px-sc-with-snap-schedules   14s

The above output shows that a volume named pvc-xxxxxxxx-xxxx-xxxx-xxxx-080027ee1df7 was automatically created and is now bound to our PVC.
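
The volume.beta.kubernetes.io/storage-class annotation used above is the older way of selecting a StorageClass; on current Kubernetes versions you can set spec.storageClassName instead. A minimal equivalent sketch:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-snap-schedules-demo
spec:
  storageClassName: px-sc-with-snap-schedules
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi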

We're all set!

Checking snapshots

Verifying snapshot schedules

First let's verify that the snapshot schedules are created correctly.

storkctl get volumesnapshotschedules
NAME                                       PVC                       POLICYNAME   PRE-EXEC-RULE   POST-EXEC-RULE   RECLAIM-POLICY   SUSPEND   LAST-SUCCESS-TIME
pvc-snap-schedules-demo-default-schedule   pvc-snap-schedules-demo   daily                                         Retain           false
pvc-snap-schedules-demo-weekly-schedule    pvc-snap-schedules-demo   weekly                                        Retain           false

Here we can see 2 snapshot schedules, one daily and one weekly.

Verifying snapshots

Now that we've put everything in place, let's verify that our snapshots are being created.

Using storkctl

You can use storkctl to make sure that the snapshots are created by running:

storkctl get volumesnapshots
NAME                                                                   PVC                       STATUS   CREATED               COMPLETED             TYPE
pvc-snap-schedules-demo-default-schedule-interval-2019-03-27-015546   pvc-snap-schedules-demo   Ready    26 Mar 19 21:55 EDT   26 Mar 19 21:55 EDT   local
pvc-snap-schedules-demo-weekly-schedule-interval-2019-03-27-015546    pvc-snap-schedules-demo   Ready    26 Mar 19 21:55 EDT   26 Mar 19 21:55 EDT   cloud
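
For the cloud-based schedule, you can also check the status of the cloud snapshot operations directly from a Portworx pod, using the same pxctl pattern as the credentials listing above. The pxctl cloudsnap status command reports recent cloudsnap operations and their progress:

PX_POD=$(oc get pods -l name=portworx -n <px-namespace> -o jsonpath='{.items[0].metadata.name}')
oc exec $PX_POD -n <px-namespace> -- /opt/pwx/bin/pxctl cloudsnap status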