Run KubeVirt VMs with raw block RWX volumes for Live migration support

important

This feature is under Directed Availability. If you want to enable it in your environment under the current guidelines, engage with your Portworx CSI representative.

Portworx CSI enables the seamless integration of KubeVirt virtual machines (VMs) within Kubernetes clusters, leveraging the high performance of ReadWriteMany (RWX) volumes backed by FlashArray. This approach supports raw block devices, which provide direct block storage access instead of a mounted filesystem. This is particularly beneficial for applications that demand low-latency and high-performance storage.

Traditional RWX storage using shared NFS filesystem volumes may face performance limitations, especially for workloads requiring high-speed I/O operations. By contrast, raw block devices:

  • Eliminate filesystem overhead.
  • Provide direct access to the underlying storage for improved performance.
  • Enable efficient live migration of VMs using Kubernetes-native tools.

This page explains how to deploy KubeVirt VMs using raw block storage and describes the live migration workflow.

Prerequisites

  • Both OpenShift and OpenShift Virtualization (OSV) versions must be 4.15 or higher.

Workflow for Live migration of KubeVirt VMs

The live migration workflow for KubeVirt VMs using Portworx CSI involves these steps:

  1. Use a StorageClass and PVC configured for RWX raw block storage.
  2. The source VM is attached to an RWX volume hosted on shared raw block storage. Access is controlled to ensure both the source and destination nodes can interact with the volume.
  3. The source node writes VM data to the RWX volume. The destination node pre-copies VM state and disk contents by reading from the same volume.
  4. The destination node gains write access to the RWX volume, and the VM is initialized on the destination node, ensuring seamless migration within the Kubernetes environment.

Create a StorageClass

  1. To provision storage for KubeVirt VMs, create a StorageClass as shown below. This configuration uses the pure_block backend to create volumes on FlashArray:

    important

    If you need the mount path to have 777 permissions, set the parameters.allow_others field to true in your StorageClass. This setting ensures that all users have read, write, and execute access to the mount point. Use this setting carefully to avoid granting unintended access.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: fada-rwx-sc
    parameters:
      backend: "pure_block"
      #allow_others: "true" # uncomment this line if you need the mount path to have 777 permissions.
    provisioner: pxd.portworx.com
    volumeBindingMode: WaitForFirstConsumer
    allowVolumeExpansion: true
  2. Apply this StorageClass to your cluster:

    kubectl apply -f storageclass.yaml
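
To confirm that the StorageClass was created with the expected provisioner, you can optionally run:

    kubectl get storageclass fada-rwx-sc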

Create a PVC

  1. Using the above StorageClass, define a PVC with the following configuration:

    • accessModes: ReadWriteMany for shared volume access.
    • volumeMode: Block to create a raw block device volume.

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: fada-rwx-pvc
    spec:
      storageClassName: fada-rwx-sc
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 2Gi
      volumeMode: Block
  2. Apply this PVC to your cluster:

    kubectl apply -f pvc.yaml
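
Because the StorageClass uses volumeBindingMode: WaitForFirstConsumer, the PVC stays in a Pending state until its first consumer (the VM's virt-launcher pod) is scheduled; this is expected. You can check the binding status with:

    kubectl get pvc fada-rwx-pvc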

When deploying KubeVirt VMs, reference the PVC created in the previous step to attach the RWX raw block volume to the VM. Ensure the VM configuration specifies the correct StorageClass and volume.
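
The exact VM definition depends on how the disk image is provisioned (for example, through a CDI DataVolume), so the manifest below is only a minimal sketch: it attaches the fada-rwx-pvc volume created above as a raw block disk, and the VM name, disk name, and resource requests are illustrative.

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: fada-rwx-vm               # illustrative name
    spec:
      running: true
      template:
        spec:
          domain:
            devices:
              disks:
                - name: rootdisk      # illustrative disk name
                  disk:
                    bus: virtio
            resources:
              requests:
                memory: 2Gi           # illustrative sizing
          volumes:
            - name: rootdisk
              persistentVolumeClaim:
                claimName: fada-rwx-pvc   # RWX raw block PVC created above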

Once the VM is running with the specified PVC, you can perform live migration using OpenShift's native functionality. The shared RWX volume ensures data consistency during the migration process by allowing simultaneous read/write operations for the source and destination nodes.
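
For example, one Kubernetes-native way to start a live migration is to create a VirtualMachineInstanceMigration object that targets the running VMI; the object name below is illustrative, and fada-rwx-vm is the VM from the sketch above.

    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstanceMigration
    metadata:
      name: fada-rwx-vm-migration     # illustrative name
    spec:
      vmiName: fada-rwx-vm            # running VMI to migrate

If the virtctl CLI is installed, virtctl migrate fada-rwx-vm triggers the same migration.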

Known Limitation: KubeVirt VMs might not automatically fail over to another node

On OpenShift Container Platform, when a node hosting KubeVirt virtual machines (VMs) becomes unavailable, the VMs might not automatically fail over to another node. This behavior is expected. To address this, you can either deploy a node remediation operator for automated handling or manually trigger a failover:

Deploy a node remediation operator

To automate failover and ensure that VMs are rescheduled on healthy nodes, deploy the Node Health Check (NHC) Operator in OCP.

  1. Install the NHC Operator using the OpenShift Web Console or OpenShift CLI.

    note

    The NHC Operator includes Self-Node Remediation (SNR) functionality by default.

  2. Configure a Node Health Check to monitor worker nodes and specify a remediation duration, as sketched after this list. For detailed configuration steps, see Red Hat documentation.

    note

    Ensure that you select the worker nodes when creating the Node Health Check.
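
As a rough sketch, a NodeHealthCheck that selects worker nodes and uses the default Self Node Remediation template could look like the following. The API group, template name, and namespace vary between operator releases, so treat every field here as an assumption to verify against the Red Hat documentation for your installed version.

    apiVersion: remediation.medik8s.io/v1alpha1
    kind: NodeHealthCheck
    metadata:
      name: nhc-worker                          # illustrative name
    spec:
      minHealthy: 51%                           # remediate only while a majority of selected nodes stays healthy
      selector:
        matchExpressions:
          - key: node-role.kubernetes.io/worker
            operator: Exists                    # select worker nodes only
      remediationTemplate:                      # template installed by the SNR functionality
        apiVersion: self-node-remediation.medik8s.io/v1alpha1
        kind: SelfNodeRemediationTemplate
        name: self-node-remediation-resource-deletion-template
        namespace: openshift-workload-availability   # assumption; use your operator's install namespace
      unhealthyConditions:
        - type: Ready
          status: "False"
          duration: 300s                        # how long the node must be unhealthy before remediation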

Once configured, the NHC Operator detects node failures, drains unhealthy nodes after the specified duration, and triggers the rescheduling of pods, including VMs, to healthy nodes.

Manually trigger VM failover

If automated remediation is not feasible, you can manually trigger VM failover by draining and removing the unavailable node.

  1. Drain the unavailable node to safely evict workloads from it (additional flags may be needed; see the example after this list):

    oc drain <down-node>
  2. Delete the node from the OCP cluster:

    oc delete node <down-node>
  3. Wait for the pods and VMs to terminate and restart on a healthy node.
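
If the drain in step 1 is blocked by DaemonSet-managed pods or pods that use emptyDir storage, you may need additional flags; the following is an example invocation, and --force should be used carefully because it also evicts pods that are not managed by a controller.

    oc drain <down-node> --ignore-daemonsets --delete-emptydir-data --force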

After rescheduling, the KubeVirt VMs will return to a Ready state.