Version: 2.8

Install Stork

note

If you plan to install or upgrade Stork to version 25.2.0 in an air-gapped environment, ensure that you pull the following kopiaexecutor and nfsexecutor images from the Docker paths below and push them to your custom image registry:
docker.io/openstorage/kopiaexecutor:master-latest
docker.io/openstorage/nfsexecutor:master-latest

You can install Stork with or without Portworx Enterprise using the following methods:

Deployment method without Portworx Enterprise

To install Stork version 25.2.1 on your Kubernetes cluster without installing Portworx Enterprise, run the following commands:

  1. Download the Stork deployment spec:

    curl -fsL -o stork-spec.yaml "https://install.portworx.com/pxbackup?comp=stork&storkNonPx=true"
  2. In the stork-spec.yaml, update the Stork image version to 25.2.1 if it differs.

note

If your application cluster is running in the IBM Cloud environment, ensure that the image repository path is set to icr.io/ext/portworx/stork:<supported-pxb-stork-version> before applying the stork-spec.yaml during Stork installation (without PXE).

  3. Apply the stork-spec.yaml to install the latest Stork version:

    kubectl apply -f stork-spec.yaml
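The version bump in step 2 can also be scripted instead of edited by hand. A sketch follows, assuming the image reference in stork-spec.yaml has the form `openstorage/stork:<tag>`; the demo operates on a hypothetical sample line rather than the real file (against the file you would run `sed -i` with the same expression):

```shell
# Hypothetical image line from stork-spec.yaml; the real file may differ.
spec_line='        image: openstorage/stork:25.2.0'

# Rewrite whatever tag is present to 25.2.1.
updated=$(printf '%s\n' "$spec_line" | sed -E 's|(openstorage/stork):[0-9.]+|\1:25.2.1|')
echo "$updated"
```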

Deployment method with Portworx Enterprise

If you want to install Stork along with Portworx Enterprise, you can opt for either the DaemonSet installation or the Portworx Operator installation:

Portworx DaemonSet installation

To install Stork using the DaemonSet installation method:

  1. Fetch the Kubernetes version and then download stork-spec.yaml:

    KBVER=$(kubectl version --short | awk -Fv '/Server Version: /{print $3}')
    curl -fsL -o stork-spec.yaml "https://install.portworx.com/pxbackup?kbver=${KBVER}&comp=stork"
  2. Apply the stork-spec.yaml with the following command:

    kubectl apply -f stork-spec.yaml
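The KBVER line in step 1 relies on awk splitting fields on the letter `v` to strip the leading `v` from the server version. A quick illustration against simulated `kubectl version --short` output (the version numbers are hypothetical):

```shell
# Simulated `kubectl version --short` output; real values will differ.
sample='Client Version: v1.28.4
Server Version: v1.28.2'

# -Fv splits on the letter "v", so $3 is the bare server version number.
KBVER=$(printf '%s\n' "$sample" | awk -Fv '/Server Version: /{print $3}')
echo "$KBVER"
```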

Portworx Operator installation

Fresh Stork installation for Portworx Backup through the web console

If Stork is not installed as part of Portworx deployment, perform the following steps:

  1. From the home page, click Add cluster.

  2. Choose your Kubernetes platform.

  3. Provide cluster name and Kubeconfig details.

  4. Click Px-cluster to copy the Stork installation command.

  5. Run the Stork installation command.

    note

    If Stork is installed through the PX Cluster option from the web console in a namespace other than the one where Portworx Enterprise is deployed, perform Step 6; otherwise, go to Step 7.

  6. Update the following key-value pairs in the environment variable section of the Stork deployment, using the kubectl edit command:

    kubectl edit deployment stork -n <stork-namespace>

    env:
    - name: PX_NAMESPACE
      value: <portworx-deployed-namespace>
    - name: PX_SERVICE_NAME
      value: portworx-api
    - name: STORK-NAMESPACE
      value: portworx
  7. Click Add Cluster.

Updating Stork deployment for Portworx Backup through web console

Perform the following steps to update the Stork installation using the Portworx Operator option:

  1. Edit the stc (StorageCluster) resource:

    kubectl edit stc -n <portworx-deployed-namespace>
  2. Append the Stork image and version details in the stork section:

    stork:
      args:
        webhook-controller: "true"
      enabled: true
      image: openstorage/stork:25.2.1

  3. Save and exit.
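For orientation, the stork block from step 2 sits under spec in the StorageCluster resource. A minimal, hypothetical sketch follows; the metadata values are placeholders, and you should check the apiVersion against your Operator release:

```yaml
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster                          # placeholder name
  namespace: <portworx-deployed-namespace>
spec:
  stork:
    enabled: true
    image: openstorage/stork:25.2.1
    args:
      webhook-controller: "true"
```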

Install Stork in air-gapped environments

  1. If your application cluster is air-gapped, you must pull the following images before installing Stork:

     | Image            | Image path                | Version |
     |------------------|---------------------------|---------|
     | Stork            | openstorage/stork         | 25.2.1  |
     | Command Executor | openstorage/cmdexecutor   | 25.2.1  |
     | NFS Executor     | openstorage/nfsexecutor   | 1.2.18  |
     | Kopia Executor   | openstorage/kopiaexecutor | 1.2.18  |

  2. Push the above images to your internal registry server, accessible by the air-gapped nodes.

  3. After pushing the images, follow the instructions in the deployment methods above to install your Stork version.
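The pull-and-push workflow above can be sketched as a small script. The registry host below is a placeholder assumption, and the docker commands are echoed as a plan rather than executed, so the mapping can be reviewed before running it for real:

```shell
# Placeholder: replace with your internal registry host.
REGISTRY="registry.internal.example.com"

# Build the mirror plan for the four Stork images listed in the table.
plan=$(
  for img in \
      openstorage/stork:25.2.1 \
      openstorage/cmdexecutor:25.2.1 \
      openstorage/nfsexecutor:1.2.18 \
      openstorage/kopiaexecutor:1.2.18
  do
    echo "docker pull docker.io/${img}"
    echo "docker tag docker.io/${img} ${REGISTRY}/${img}"
    echo "docker push ${REGISTRY}/${img}"
  done
)
printf '%s\n' "$plan"
```

Piping the plan into `sh` (or removing the echoes) would perform the actual mirroring, provided the host has Docker access to both registries.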

Sample Stork Spec for reference:


apiVersion: v1
kind: ServiceAccount
metadata:
  name: stork-account
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  labels:
    tier: control-plane
  name: stork
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: stork
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  replicas: 3
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      labels:
        name: stork
        tier: control-plane
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "name"
                operator: In
                values:
                - stork
            topologyKey: "kubernetes.io/hostname"
      hostPID: false
      containers:
      - name: stork
        command:
        - /stork
        - --verbose
        - --leader-elect=true
        - --health-monitor-interval=120
        - --webhook-controller=true
        image: icr.io/ext/portworx/stork:25.2.1
        imagePullPolicy: Always
        resources:
          requests:
            cpu: '0.1'
      serviceAccountName: stork-account
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: stork-scheduler-role
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "create", "update"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["", "events.k8s.io"]
  resources: ["events"]
  verbs: ["create", "patch", "update"]
- apiGroups: [""]
  resourceNames: ["kube-scheduler"]
  resources: ["endpoints"]
  verbs: ["delete", "get", "patch", "update"]
- apiGroups: [""]
  resources: ["nodes", "namespaces"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["delete", "get", "list", "watch"]
- apiGroups: [""]
  resources: ["bindings", "pods/binding"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["pods/status"]
  verbs: ["patch", "update"]
- apiGroups: [""]
  resources: ["replicationcontrollers", "services"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps", "extensions"]
  resources: ["replicasets"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["statefulsets"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["policy"]
  resources: ["poddisruptionbudgets"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims", "persistentvolumes"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses", "csinodes"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  verbs: ["create", "update", "get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: stork-scheduler-role-binding
subjects:
- kind: ServiceAccount
  name: stork-scheduler-account
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: stork-scheduler-role
  apiGroup: rbac.authorization.k8s.io
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: stork-snapshot-sc
provisioner: stork-snapshot
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: stork-role
rules:
- apiGroups: [""]
  resources: [""]
  verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: stork-role-binding
subjects:
- kind: ServiceAccount
  name: stork-account
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: stork-role
  apiGroup: rbac.authorization.k8s.io