
Create sharedv4 PVCs in OCP IBM Cloud

This document describes how to use Portworx sharedv4 (ReadWriteMany) volumes in your cluster.

Portworx provides two types of sharedv4 features:

  • Sharedv4 volumes
  • Sharedv4 service volumes

Prerequisites

  • Sharedv4 volumes must be enabled on your cluster. Portworx sharedv4 volumes are enabled by default.

Provision a sharedv4 Volume

Sharedv4 volumes are useful when you want multiple pods to access the same PVC (volume) at the same time. The pods can use the same volume even if they are running on different hosts. Sharedv4 volumes provide a global namespace, and their semantics are POSIX-compliant.

To increase fault tolerance, you can enable sharedv4 service volumes by setting a value for sharedv4_svc_type. With this feature enabled, every sharedv4 volume has a Kubernetes service associated with it, and the volume is exposed through that service's IP. If the sharedv4 (NFS) server goes offline for a sharedv4 service volume and the volume requires a failover, only the application pods that were running on the two nodes involved in the failover need to be restarted.

note

Notes about sharedv4 and sharedv4 service volumes:

  • A sharedv4 volume is created if and only if the PVC access mode is ReadWriteMany or ReadOnlyMany. The "sharedv4" setting in the StorageClass does not matter. In other words, if an app expects a sharedv4 volume while using a ReadWriteOnce PVC, some of the pods may fail to start. The PVC must be modified to use the ReadWriteMany or ReadOnlyMany access mode.
  • Sharedv4 service volumes are intended for use only within the cluster where the volume resides.
  • Sharedv4 service volumes default to using NFS version 4.0.
  • Sharedv4 (non-service) volumes default to using NFS version 3.
  • Sharedv4 service volumes are not supported on Portworx clusters using Metro DR.
  • On failover, applications may receive an error for non-idempotent requests. For example, if an mkdir call is issued prior to failover, the client can resend it to the new server, which returns an EEXIST error if the directory was already created by the first call.
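
Because the access mode alone determines whether Portworx creates a sharedv4 volume, it can help to confirm a claim's access mode before troubleshooting pods that fail to start. A minimal check, using the PVC created later in this document as an illustrative name:

    # Print the access modes requested by the claim; a sharedv4 volume is
    # created only when this includes ReadWriteMany or ReadOnlyMany.
    oc get pvc px-sharedv4-pvc -o jsonpath='{.spec.accessModes}'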

Step 1: Create a StorageClass

  1. Create the following portworx-sharedv4-sc.yaml StorageClass, specifying your own values for the following fields:

    • metadata.name: Specify a name for your StorageClass.

    • parameters.repl: Specify the replication factor you'd like to set.

    • sharedv4: Set the value to true.

    • (Optional) sharedv4_svc_type: Set the value to "ClusterIP" if you want to enable the sharedv4 service feature.

    • (Optional) stork.libopenstorage.org/preferRemoteNodeOnly: Set the value to "true" if you want to strictly enforce pod anti-hyperconvergence with respect to the volume replicas.

    • (Optional) stork.libopenstorage.org/preferRemoteNode: Set the value to "false" if you want hyperconvergence with respect to the volume replicas. See Sharedv4 service volume hyperconvergence for more information.

    • (Optional) sharedv4_failover_strategy: Set the value to normal or aggressive (aggressive uses a shorter failover grace period).

      note

      The default value for sharedv4_failover_strategy in sharedv4 volumes is normal, and the default value for sharedv4_failover_strategy in sharedv4 service volumes is aggressive.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: portworx-rwx-rep2
    provisioner: pxd.portworx.com
    parameters:
      repl: "2"
      sharedv4: "true"
      sharedv4_svc_type: "ClusterIP"
    reclaimPolicy: Retain
    allowVolumeExpansion: true
  2. Apply the StorageClass you created by running the following command:

    oc apply -f portworx-sharedv4-sc.yaml
  3. Verify that the StorageClass is created:

    oc describe storageclass portworx-rwx-rep2
    Name:            portworx-rwx-rep2
    IsDefaultClass: No
    Annotations: kubectl.kubernetes.io/last-applied-configuration={"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"portworx-rwx-rep2"},"parameters":{"disable_io_profile_protection":"1","io_profile":"auto","priority_io":"high","repl":"2","sharedv4":"true","sharedv4_svc_type":"ClusterIP"},"provisioner":"pxd.portworx.com","reclaimPolicy":"Retain"}
    Provisioner: pxd.portworx.com
    Parameters: disable_io_profile_protection=1,io_profile=auto,priority_io=high,repl=2,sharedv4=true,sharedv4_svc_type=ClusterIP
    AllowVolumeExpansion: True
    MountOptions: <none>
    ReclaimPolicy: Retain
    VolumeBindingMode: Immediate
    Events: <none>
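
The optional fields described in this step are set in the same StorageClass. The following sketch assumes that the Stork scheduling hints and the failover strategy are all added under parameters, as the field list above implies; the StorageClass name and values shown are illustrative:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: portworx-rwx-rep2-remote    # illustrative name
    provisioner: pxd.portworx.com
    parameters:
      repl: "2"
      sharedv4: "true"
      sharedv4_svc_type: "ClusterIP"
      sharedv4_failover_strategy: "aggressive"
      # Strictly enforce pod anti-hyperconvergence with respect to the volume replicas.
      stork.libopenstorage.org/preferRemoteNodeOnly: "true"
    reclaimPolicy: Retain
    allowVolumeExpansion: true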

Step 2: Create a persistent volume claim

  1. Create a ReadWriteMany persistent volume claim. Save the following content into a file named px-sharedv4-pvc.yaml:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: px-sharedv4-pvc
      annotations:
        volume.beta.kubernetes.io/storage-class: portworx-rwx-rep2
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
  2. Apply the spec:

    oc create -f px-sharedv4-pvc.yaml

    Note that accessModes for this PVC is set to ReadWriteMany (RWX), so Kubernetes on OCP allows this PVC to be mounted on multiple pods at the same time.

  3. Verify that the persistent volume claim is created:

    oc get pvc
    NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
    px-sharedv4-pvc   Bound    pvc-xxxxxxxx-xxxx-xxxx-xxxx-78dbc2ef7aeb   10Gi       RWX            portworx-rwx-rep2   46s
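
Optionally, confirm that the underlying Portworx volume was provisioned as a sharedv4 service volume. A minimal sketch, assuming pxctl is run on one of the Portworx nodes and that the bound volume name is read from the PVC:

    # Look up the name of the volume bound to the claim.
    VOL=$(oc get pvc px-sharedv4-pvc -o jsonpath='{.spec.volumeName}')

    # Inspect the Portworx volume; the output includes its sharedv4 settings.
    pxctl volume inspect $VOL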

Step 3: Create pods which use the persistent volume claim

We will start two pods which use the same shared volume.

  1. Save the next two blocks into files named pod1.yaml and pod2.yaml, then apply them with oc apply -f pod1.yaml and oc apply -f pod2.yaml:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod1
    spec:
      containers:
      - name: test-container
        image: gcr.io/google_containers/test-webserver
        volumeMounts:
        - name: test-volume
          mountPath: /test-portworx-volume
      volumes:
      - name: test-volume
        persistentVolumeClaim:
          claimName: px-sharedv4-pvc

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod2
    spec:
      containers:
      - name: test-container
        image: gcr.io/google_containers/test-webserver
        volumeMounts:
        - name: test-volume
          mountPath: /test-portworx-volume
      volumes:
      - name: test-volume
        persistentVolumeClaim:
          claimName: px-sharedv4-pvc
  2. Verify that the pods are running:

    oc get pods
    NAME      READY     STATUS    RESTARTS   AGE
    pod1      1/1       Running   0          2m
    pod2      1/1       Running   0          1m
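
To confirm that both pods see the same data through the shared volume, you can write a file from one pod and read it back from the other. A minimal sketch, assuming the container image provides a shell:

    # Write a file into the shared volume from pod1 ...
    oc exec pod1 -- sh -c 'echo hello > /test-portworx-volume/hello.txt'

    # ... and read the same file back from pod2.
    oc exec pod2 -- cat /test-portworx-volume/hello.txt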

Convert a sharedv4 volume to a sharedv4 service volume

Perform the following steps to convert a sharedv4 volume to use the new sharedv4 service feature:

  1. Detach the volume by scaling the application pods down to 0.
  2. Run the following pxctl command:

    pxctl volume update --sharedv4_service_type=ClusterIP <volume>
  3. Scale the deployment back up to start the application.
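
For example, if the application is managed by a Deployment, the full sequence might look like the following sketch; the deployment name, replica count, and volume name are placeholders:

    # Detach the volume by scaling the application down to 0 replicas.
    oc scale deployment <app> --replicas=0

    # Enable the sharedv4 service feature on the volume.
    pxctl volume update --sharedv4_service_type=ClusterIP <volume>

    # Scale the application back up to start it again.
    oc scale deployment <app> --replicas=<n>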

Convert a sharedv4 service volume to a sharedv4 volume

Perform the following steps to convert a sharedv4 service volume to a sharedv4 volume:

  1. Detach the volume by scaling the application pods down to 0.
  2. Run the following pxctl command:

    pxctl volume update --sharedv4_service_type="" <volume>
  3. Scale the deployment back up to start the application.

Convert an existing sharedv4 service volume to prefer remote nodes only

  1. Scale down the application pods.

  2. Run the following command to convert the volume to use preferRemoteNodeOnly:

    pxctl volume update --label stork.libopenstorage.org/preferRemoteNodeOnly="true" <volume>
  3. Scale the pods back up to start the application.

Convert an existing sharedv4 service volume to prefer local nodes

  1. Scale down the application pods.

  2. Run the following pxctl command to convert the volume to use preferRemoteNode:

    pxctl volume update --label stork.libopenstorage.org/preferRemoteNode="false" <volume>
  3. Scale the pods back up to start the application.

Update a legacy shared volume to a sharedv4 volume

You can convert an existing shared volume (deprecated) to a sharedv4 volume. Run the following command to update the volume setting:

pxctl volume update --sharedv4=on <volume>
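
To confirm the conversion, you can inspect the volume again. A minimal sketch; the exact output fields can vary by Portworx version, so the grep filter is illustrative:

    pxctl volume inspect <volume> | grep -i shared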

note

To access PVs/PVCs with a non-root user, refer to the documentation on accessing volumes with a non-root user.
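
As a general Kubernetes pattern (not specific to Portworx), non-root access to a mounted volume is typically granted through the pod securityContext. A minimal sketch; the pod name and the UID/GID values are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-nonroot
    spec:
      securityContext:
        runAsUser: 1000   # run container processes as a non-root user
        fsGroup: 2000     # make volume files accessible to this group ID
      containers:
      - name: test-container
        image: gcr.io/google_containers/test-webserver
        volumeMounts:
        - name: test-volume
          mountPath: /test-portworx-volume
      volumes:
      - name: test-volume
        persistentVolumeClaim:
          claimName: px-sharedv4-pvc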
