Using scheduler convergence

When a pod runs on the same host as its volume, the setup is known as convergence or hyperconvergence. Because this configuration reduces network overhead for the application, performance is typically better.

By modifying your pod spec files, you can influence Kubernetes to schedule pods on the nodes where their volumes are located.

The recommended method for running your pods hyperconverged is to use Stork. This document describes how you can accomplish hyperconvergence using labels instead.
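For reference, the Stork-based approach (assuming Stork is already deployed in your cluster) only requires pointing the pod at the Stork scheduler; no label handling is needed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  # Hand scheduling of this pod to Stork, which places it
  # on a node holding the volume's data when possible.
  schedulerName: stork
  containers:
  - name: nginx
    image: nginx
```

The rest of this document covers the label-based alternative.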

Using pre-provisioned volumes

If you have already created Portworx volumes out of band, without using Kubernetes, you can still influence the scheduler to place a pod on a specific set of nodes.

Let's say you created two volumes, vol1 and vol2.
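For example, such volumes could have been created directly with pxctl (the sizes here are illustrative):

```shell
# Create two Portworx volumes out of band, without involving Kubernetes.
# Sizes (in GB) are illustrative.
pxctl volume create vol1 --size 10
pxctl volume create vol2 --size 10
```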

At this point, when you create a volume, PX will communicate with Kubernetes to place host labels on the nodes that contain a volume’s data blocks. For example:

[root@localhost porx]# kubectl --kubeconfig="/root/kube-config.json" get nodes --show-labels

NAME         STATUS    AGE       LABELS
node-1       Ready     13d       ...,vol2=true,vol3=true
node-2       Ready     12d       ...,vol1=true,vol2=true

The label vol1=true implies that the node hosts volume vol1’s data.
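You can list only the nodes that carry a given volume's data by filtering on that label. For example:

```shell
# Show only the nodes that hold vol1's data blocks
kubectl get nodes -l vol1=true
```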

Using PersistentVolumeClaims

If you used Kubernetes's dynamic volume provisioning with PersistentVolumeClaims, then the claim names, rather than the volume names, are used as the node labels. Here is a sample PVC:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-high-01
  annotations:
    volume.beta.kubernetes.io/storage-class: portworx-io-high
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 512Gi
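The annotation above refers to a StorageClass named portworx-io-high, which might look like the following sketch (the repl and io_priority parameter values are assumptions for illustration):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: portworx-io-high
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "1"            # number of replicas; illustrative
  io_priority: "high"  # back the volume with high-priority storage; illustrative
```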

Once the PVC is bound by Kubernetes, you will see the following labels on the nodes:

[root@localhost porx]# kubectl --kubeconfig="/root/kube-config.json" get nodes --show-labels

NAME         STATUS    AGE       LABELS
node-1       Ready     13d       ...,pvc-high-01=true
node-2       Ready     12d       ...

Scheduling Pods and enabling hyperconvergence

You can now use these labels in the nodeAffinity section of your Kubernetes pod spec (see the Kubernetes documentation on node affinity for details).

For example, your pod may look like:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: "pvc-high-01"
            operator: In
            values:
            - "true"
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: portworx-volume
      mountPath: /data
  volumes:
  - name: portworx-volume
    persistentVolumeClaim:
      claimName: pvc-high-01

In the nodeAffinity section, the required constraint means the specified rules must be met before a pod can be scheduled onto a node. The key in the spec above is set to the claim name because the volume mounted at /data is a PersistentVolumeClaim. If you are using a pre-provisioned volume rather than a PVC, replace the key with the volume name, such as vol1.
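After the pod is scheduled, you can confirm it landed on a node hosting the volume's data by comparing the pod's node with the labeled nodes:

```shell
# Node the pod was scheduled onto
kubectl get pod nginx -o wide

# Nodes carrying the PVC's data; the pod's node should appear here
kubectl get nodes -l pvc-high-01=true
```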