
ApplicationRegistration

With Stork 2.4.3 and newer, you can back up and restore custom resources based on customer-specific CRDs. Using this method, you create a Stork custom resource called an ApplicationRegistration, which registers your CRD with Stork and allows you to migrate, back up, and restore the resources your CRD specifies.

Prerequisites

  • Your Portworx installation must contain the libopenstorage/stork:2.4.3 image or newer.
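
To confirm which Stork image your cluster is running, you can inspect the image on the Stork deployment. The following is a minimal check, assuming Stork is deployed as a deployment named stork in the kube-system namespace (adjust both names to match your installation):

kubectl -n kube-system get deployment stork -o jsonpath='{.spec.template.spec.containers[0].image}'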

Fetch default ApplicationRegistrations

By default, Stork supports a number of customer-specific CRDs. List the existing defaults by entering the storkctl get applicationregistrations command (appreg is an accepted short form):

storkctl get applicationregistrations
NAME        KIND                       CRD-NAME                 VERSION    SUSPEND-OPTIONS                       KEEP-STATUS
cassandra   CassandraDatacenter        cassandra.datastax.com   v1beta1    spec.stopped,bool                     false
couchbase   CouchbaseBucket            couchbase.com            v2                                               false
couchbase   CouchbaseCluster           couchbase.com            v2         spec.paused,bool                      false
couchbase   CouchbaseEphemeralBucket   couchbase.com            v2                                               false
couchbase   CouchbaseMemcachedBucket   couchbase.com            v2                                               false
couchbase   CouchbaseReplication       couchbase.com            v2                                               false
couchbase   CouchbaseUser              couchbase.com            v2                                               false
couchbase   CouchbaseGroup             couchbase.com            v2                                               false
couchbase   CouchbaseRoleBinding       couchbase.com            v2                                               false
couchbase   CouchbaseBackup            couchbase.com            v2                                               false
couchbase   CouchbaseBackupRestore     couchbase.com            v2                                               false
ibm         IBPCA                      ibp.com                  v1alpha1   spec.replicas,int                     false
ibm         IBPConsole                 ibp.com                  v1alpha1   spec.replicas,int                     false
ibm         IBPOrderer                 ibp.com                  v1alpha1   spec.replicas,int                     false
ibm         IBPPeer                    ibp.com                  v1alpha1   spec.replicas,int                     false
redis       RedisEnterpriseCluster     app.redislabs.com        v1                                               false
redis       RedisEnterpriseDatabase    app.redislabs.com        v1                                               false
weblogic    Domain                     weblogic.oracle          v8         spec.serverStartPolicy,string,NEVER   false
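
To inspect the full spec of any default registration, you can also fetch the ApplicationRegistration resource directly with kubectl, assuming your kubeconfig points at the cluster where Stork is running. For example, for the cassandra entry above:

kubectl get applicationregistrations.stork.libopenstorage.org cassandra -o yaml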

Register a new CRD with Stork

To register a new CRD with Stork, perform the following steps:

  1. Create an ApplicationRegistration spec, specifying the following:
  • metadata.name: The name of the ApplicationRegistration spec.

  • resources.PodsPath: (Optional) The path in the CR that stores the pods created by the CR. These pods are deleted when the application is scaled down during migration.

  • resources.group: The group of the CRD being registered.

  • resources.version: The version of the CRD being registered.

  • resources.kind: The kind of the CRD being registered.

  • resources.keepStatus: (Optional) If you don't want to save the resource's status after migration, set this value to false.

  • resources.suspendOptions.path: (Optional) The path in the CRD spec that contains the option to suspend the application.

  • resources.suspendOptions.type: (Optional) The type of the field that is used to suspend the application. For example, int, if the field contains the replica count for the application.

    apiVersion: stork.libopenstorage.org/v1alpha1
    kind: ApplicationRegistration
    metadata:
      name: myappname
    resources:
    - PodsPath: <POD_PATH>
      group: <CRD_GROUP_NAME>
      version: <CRD_VERSION>
      kind: <CR_KIND>
      # set to true to keep the status of the CR after migration
      keepStatus: false
      # to disable the CR on migration, specify the path in the CR spec
      # that suspends the application, and the type of that field
      suspendOptions:
        path: <spec_path>
        type: <type_of_value_to_set> # can be "int", "bool", or "string"

    The following example ApplicationRegistration allows Stork to back up, restore, or migrate applications managed by the datastax/cassandra operator:

    apiVersion: stork.libopenstorage.org/v1alpha1
    kind: ApplicationRegistration
    metadata:
      name: cassandra
    resources:
    - PodsPath: ""
      group: cassandra.datastax.com
      version: v1beta1
      kind: CassandraDatacenter
      keepStatus: false # the CassandraDatacenter status will not be migrated
      suspendOptions:
        path: spec.stopped # path used to disable the Cassandra datacenter
        type: bool # type of the value to be set for spec.stopped
  2. Apply the spec:

    kubectl apply -f <application-registration-spec>.yaml

Once you've applied the spec, you can verify it by entering the following storkctl get command, specifying your own application name:

storkctl get appreg <app-name>
NAME        KIND                  CRD-NAME                 VERSION   SUSPEND-OPTIONS     KEEP-STATUS
cassandra   CassandraDatacenter   cassandra.datastax.com   v1beta1   spec.stopped,bool   false
note

If you register your CRD with Stork using an ApplicationRegistration resource, you do not need to modify your migration, backup, or restore specs.
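
For example, a standard ApplicationBackup spec works unchanged once the CRD is registered. The following is a minimal sketch, where cassandra-backup, cassandra-ns, and mybackuplocation are placeholder names, and a BackupLocation object is assumed to already exist in the application's namespace:

apiVersion: stork.libopenstorage.org/v1alpha1
kind: ApplicationBackup
metadata:
  name: cassandra-backup   # placeholder name
  namespace: cassandra-ns  # namespace containing the application
spec:
  backupLocation: mybackuplocation # assumed pre-existing BackupLocation
  namespaces:
  - cassandra-ns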

Managing application migrations with Stork

Stork can migrate applications controlled by operators using either asynchronous or synchronous DR methods. When the startApplications parameter of a migration is set to false, the application pods are not expected to be running in the destination cluster once the migration completes. For DR scenarios, the startApplications flag is set to false by default, since the applications need to remain scaled down on the destination cluster.
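
The following minimal Migration spec illustrates this setting; the metadata, clusterPair, and namespace names are placeholders for your own environment:

apiVersion: stork.libopenstorage.org/v1alpha1
kind: Migration
metadata:
  name: appmigration           # placeholder name
  namespace: migrationnamespace
spec:
  clusterPair: remotecluster   # placeholder ClusterPair name
  includeResources: true
  startApplications: false     # keep applications scaled down on the destination
  namespaces:
  - appnamespace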

Different applications may require specific scale-down procedures that involve modifying certain parameters in their Custom Resource (CR) specifications. Stork supports modifying the CR spec to scale down these applications through the options provided by the ApplicationRegistration's suspendOptions; see the default ApplicationRegistrations listed above for examples of these options.
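
For instance, an operator whose CR exposes a replica count can be suspended by setting that count to zero, which is the pattern the ibm defaults above use. A fragment of such a registration looks like this:

resources:
- group: ibp.com
  version: v1alpha1
  kind: IBPPeer
  keepStatus: false
  suspendOptions:
    path: spec.replicas # Stork sets this field to 0 to scale the application down
    type: int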

Safeguarding application pods during migration with Stork's stash strategy

For certain applications controlled by clusterwide operators that do not support scaling down via CR spec modifications, the pods related to these applications may become active in the destination namespace after migration. This can be problematic if the intention is to avoid starting the application in the destination cluster before performing the actual failover.

To prevent the application pods from becoming active prematurely, Stork offers a feature known as the "Stash Strategy". With this feature, the CR content is stashed in a ConfigMap on the destination cluster during migration. The actual CR is created on the destination cluster only when the applications fail over to it using storkctl.
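
One way to trigger this from the destination cluster is with storkctl's activate subcommand, which starts the migrated applications in the given namespace (the exact failover workflow may vary with your Stork version and DR configuration):

storkctl activate migrations -n <application-namespace>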

note

The stashStrategy feature is only available starting from Stork version 23.8.0. Moreover, do not define any suspend options for application registrations with the stashStrategy enabled.

Here is an example of an ApplicationRegistration for Elasticsearch with the stash strategy enabled. You must apply it on both the source and destination clusters before initiating the migration, as shown after the spec below:

apiVersion: stork.libopenstorage.org/v1alpha1
kind: ApplicationRegistration
metadata:
  name: elasticsearch
resources:
- group: elasticsearch.k8s.elastic.co
  keepStatus: false
  kind: Elasticsearch
  stashStrategy:
    stashCR: true
  version: v1
- group: elasticsearch.k8s.elastic.co
  keepStatus: false
  kind: Elasticsearch
  stashStrategy:
    stashCR: true
  version: v1beta1
- group: elasticsearch.k8s.elastic.co
  keepStatus: false
  kind: Elasticsearch
  stashStrategy:
    stashCR: true
  version: v1alpha1
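
A minimal way to apply this registration to both sides, assuming kubectl contexts named source-cluster and destination-cluster (placeholders for your own context names) and the spec saved as elasticsearch-appreg.yaml:

kubectl --context source-cluster apply -f elasticsearch-appreg.yaml
kubectl --context destination-cluster apply -f elasticsearch-appreg.yaml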
note

When configuring Disaster Recovery (DR) setups in OpenShift, it is crucial to keep the operator's namespace scope consistent. Mixing namespaced and clusterwide operators on different sides of the replication process can lead to operational issues and unexpected behavior. To ensure a smooth and reliable DR deployment, choose either namespaced or clusterwide operators, and avoid combining them within the same DR setup. A consistent operator scope simplifies management, troubleshooting, and maintenance of your disaster recovery solution.
