EARLY ACCESS

This feature is available as Early Access (EA) and should not be used in production.

Version: 3.5

Rapid Migration of VMs from VMware to OpenShift Virtualization using XCOPY

This topic explains how to rapidly migrate VMware virtual machines to OpenShift Virtualization by using the storage copy offload feature of Migration Toolkit for Virtualization (MTV, formerly Forklift) with Portworx as the storage platform.

Traditional migrations transfer VM data over the network through the migration controller, which can be slow and resource-intensive for large disks. The storage copy offload feature instead delegates the data copy to the underlying storage array using the SCSI XCOPY command, allowing FlashArray to clone LUNs directly without transferring data through the host or network. This approach reduces migration time and minimizes network usage.

Overview

The migration workflow consists of the following phases:

  1. You create a migration plan by selecting the target VMs, and then execute the migration.
  2. MTV offloads the data copy from the VM disk to a temporary FADA volume on the FlashArray. The FlashArray performs the XCOPY operation directly between the LUNs.
  3. A post-migration hook copies the FADA data into Portworx-backed volumes on the same FlashArray.
  4. The migrated VM starts on OpenShift Virtualization by using the Portworx-backed target PVCs.
important

Use this workflow only when the source VMware datastore and the target Portworx storage reside on the same FlashArray and within the same FlashArray realm.

The migration workflow uses an Ansible playbook that runs as a post-migration hook in MTV. This playbook performs the following tasks:

  • Identifies migrated VM disks
  • Creates corresponding Portworx volumes
  • Initiates data conversion
  • Rebinds VM PVCs to Portworx volumes
  • Restarts VMs after conversion

Prerequisites

Before you begin, ensure that your cluster meets the following prerequisites:

  • Meet all prerequisites for migrating VMs using the storage copy offload feature.
    For more information, see the Prerequisites section in OpenShift documentation.
  • OpenShift Virtualization is installed.
    For information on how to install OpenShift Virtualization, see OpenShift documentation.
  • Migration Toolkit for Virtualization (MTV) version 2.10 or later is installed.
    For information on how to install MTV, see OpenShift documentation.
  • Portworx Enterprise version 3.5.1 or later is installed.
  • FlashArray runs Purity 6.3 or later.
  • Network connectivity exists between VMware, FlashArray, and the OpenShift cluster.
  • A target namespace for migrated VMs (for example, migrated-vms) is created.
  • A Portworx StorageClass with replication factor 1 (repl=1) is configured.
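A minimal sketch of such a StorageClass follows. The name px-repl1-sc is a placeholder; pxd.portworx.com is the standard Portworx CSI provisioner, and repl: "1" sets the replication factor required by this workflow:

```yaml
# Illustrative StorageClass with replication factor 1; the name is a placeholder.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-repl1-sc
provisioner: pxd.portworx.com   # Portworx CSI provisioner
parameters:
  repl: "1"                     # replication factor 1, required for migration
allowVolumeExpansion: true
```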

Limitations

Consider the following limitations before you begin the migration process:

  • Volumes with replication factors greater than 1 are not supported during migration. If you use the VM for production workloads, increase the replication factor after migration or use dynamic pools (supported with Portworx Enterprise 3.6.0 or later).
  • Only cold (offline) migrations are supported, and the VM state is preserved.
  • Virtual Volumes (vVols) datastores are not supported.
  • A single migration plan is validated for:
    • Up to 10 VMs per plan
    • Up to 4 disks per VM
    • Disk sizes up to 1.5 TB

Procedure

Step 1: Enable the feature_copy_offload setting

In MTV Operator, set the value of feature_copy_offload to true in forklift-controller:

oc patch forkliftcontrollers.forklift.konveyor.io forklift-controller --type merge -p '{"spec": {"feature_copy_offload": "true"}}' -n openshift-mtv

Step 2: Create the storage secret

  1. Create a Kubernetes secret in the openshift-mtv namespace with the Pure FlashArray management endpoint, user credentials, and the PURE_CLUSTER_PREFIX value:

    oc create secret generic <your-storage-map-secret> \
    -n openshift-mtv \
    --from-literal=STORAGE_HOSTNAME="https://<flasharray-mgmt>" \
    --from-literal=STORAGE_TOKEN="<flasharray-api-token>" \
    --from-literal=STORAGE_SKIP_SSL_VERIFICATION="false" \
    --from-literal=PURE_CLUSTER_PREFIX="px_<8chars>"

    Replace <your-storage-map-secret> with the name of your secret.

    note

    Portworx supports STORAGE_TOKEN-based authentication for Pure FlashArray, which replaces the need for username and password credentials. Ensure that your secret includes the STORAGE_TOKEN field with a valid FlashArray API token.

    The following table describes the parameters in the Pure FlashArray storage secret and helps you determine which values are required when creating the secret.

    | Key | Description | Required | Default |
    |---|---|---|---|
    | STORAGE_HOSTNAME | Specifies the IP address or URL of the host | Yes | - |
    | STORAGE_TOKEN | Specifies the FlashArray API token used for authentication | Yes | - |
    | STORAGE_PASSWORD | Specifies the password | Yes | - |
    | STORAGE_SKIP_SSL_VERIFICATION | Specifies whether to skip SSL verification. Set to true to disable SSL verification. | No | false |
    | PURE_CLUSTER_PREFIX | Specifies the cluster prefix. The value is set in the StorageCluster resource. To retrieve it, run the following command: printf "px_%.8s" $(oc get storagecluster -A -o=jsonpath='{.items[?(@.spec.cloudStorage.provider=="pure")].status.clusterUid}') | Yes | - |
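The printf format in the table keeps only the first eight characters of the cluster UID and prepends the literal px_. For example, with a made-up UID (not taken from a real cluster):

```shell
# Derive the PURE_CLUSTER_PREFIX value from a cluster UID.
# The UID below is a fabricated example; take the real one from the
# StorageCluster status as shown in the table above.
CLUSTER_UID="f1e2d3c4-5678-90ab-cdef-1234567890ab"
printf "px_%.8s\n" "$CLUSTER_UID"
# Prints: px_f1e2d3c4
```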
  2. Verify that the secret is created successfully:

    oc get secret <your-storage-map-secret> -n openshift-mtv
    NAME                        TYPE     DATA   AGE
    <your-storage-map-secret>   Opaque   5      10s
  3. Update this secret to include the target Portworx StorageClass:

    oc -n openshift-mtv patch secret <your-storage-map-secret> \
    -p '{"data":{"PXD_STORAGE_CLASS":"'"$(echo -n '<your-px-storage-class>' | base64)"'"}}' \
    --type merge
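The patch writes the StorageClass name into the secret's data map, where values must be base64-encoded. The round trip can be checked locally; px-repl1-sc here is a hypothetical StorageClass name:

```shell
# Values in a Kubernetes Secret's data map are base64-encoded strings.
# 'px-repl1-sc' is a hypothetical StorageClass name used for illustration.
echo -n 'px-repl1-sc' | base64
# Prints: cHgtcmVwbDEtc2M=

# Decoding recovers the original StorageClass name.
echo -n 'cHgtcmVwbDEtc2M=' | base64 -d
# Prints: px-repl1-sc
```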

Step 3: Create an ownerless storage map

Create an ownerless storage map by using the OpenShift console.
For more information, see OpenShift documentation.

important

In the Offload options (optional) section, ensure that you perform the following steps:

  • Select vSphere XCOPY from the Offload plugin dropdown menu.
  • Select <your-storage-map-secret> from the Storage secret dropdown menu.
    This secret is created in Step 2.
  • Select Pure Storage FlashArray from the Storage product dropdown menu.

Step 4: Create a migration plan with a post-migration hook

Create a migration plan by using the OpenShift console.
For more information, see OpenShift documentation.

To convert FADA volumes into Portworx-backed volumes, enable the post migration hook when you create the migration plan. To do this, perform the following steps on the Hooks (optional) page in the OpenShift console:

  1. Select the Enable post migration hook checkbox.
  2. In the Hook runner image field, enter quay.io/konveyor/hook-runner:latest.
  3. In the Ansible playbook field, paste this YAML.
  4. In the Service account field, enter fa-pxd-converter.
    note
    • For MTV 2.11 and later, set the Service account directly in the OpenShift console while enabling the hook.

    • For MTV versions earlier than 2.11, set the Service account after creating the migration plan but before starting the migration:

      oc -n openshift-mtv patch hook <plan-name>-post-hook \
      -p '{"spec":{"serviceAccount":"fa-pxd-converter"}}' \
      --type merge

      Replace <plan-name> with the name of your migration plan.

important
  • The conversion process uses a container image (for example, portworx/fa-pxd-converter:v0.0.1) to perform high-performance data transfer from FlashArray-backed volumes to Portworx volumes. This image runs in conversion pods during the post-migration process. Ensure that the conversion image is accessible from your OpenShift cluster and that the image pull secrets are configured, if required.

Step 5: Configure RBAC in your OpenShift Cluster

A cluster administrator must create the RBAC resources required to allow the migration process to:

  • Manage pods for data conversion
  • Create and update PersistentVolumeClaims (PVCs)
  • Interact with VirtualMachines and VirtualMachineInstances
  • Access secrets for FlashArray integration

The migration workflow requires the following resources:

  • A ServiceAccount named fa-pxd-converter in the openshift-mtv namespace
  • A ServiceAccount named fa-pxd-converter in the target migration namespace
  • A ClusterRole that can manage PVCs, PVs, pods, jobs, StorageClass objects, secrets, StorageMap objects, and KubeVirt VM resources
  • A ClusterRoleBinding for the fa-pxd-converter ServiceAccount
  • An OpenShift SecurityContextConstraints object that allows privileged execution for the fa-pxd-converter ServiceAccounts
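As an abridged sketch, the resources above can look like the following. The resource names come from the list; the ClusterRole rules shown are illustrative and not the full rule set from the template file:

```yaml
# Abridged sketch only; use the template file for the complete RBAC definitions.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fa-pxd-converter
  namespace: openshift-mtv
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fa-pxd-converter
rules:
  - apiGroups: [""]
    resources: ["pods", "persistentvolumeclaims", "persistentvolumes", "secrets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["kubevirt.io"]
    resources: ["virtualmachines", "virtualmachineinstances"]
    verbs: ["get", "list", "watch", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fa-pxd-converter
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fa-pxd-converter
subjects:
  - kind: ServiceAccount
    name: fa-pxd-converter
    namespace: openshift-mtv
  - kind: ServiceAccount
    name: fa-pxd-converter
    namespace: migrated-vms   # replace with your target migration namespace
```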

To create the RBAC resources, do the following:

  1. Create a YAML file, for example, migration-rbac.yaml, based on the information provided in the template file.

    important

    This template uses migrated-vms as the target migration namespace. Replace it with your target namespace where applicable.

  2. Apply the YAML file.

    oc apply -f migration-rbac.yaml

Step 6: Run and monitor the migration

Start the migration from MTV and monitor its progress until the hook workflow completes.

When the migration completes, the Migration status column on the Migration plans page displays Complete.

The following tasks are performed automatically after the migration completes:

  • VMs are migrated from VMware to OpenShift Virtualization
  • FADA volumes are converted to Portworx volumes
  • Original storage bindings are replaced
  • If you set the VM target power state to Retain source VM power state, Portworx Enterprise preserves the source VM power state.

Step 7: Verify the migration

After the migration completes, verify that the resources are correctly created and configured:

  • Verify that the VM is running.
    To do this, navigate to Virtualization > VirtualMachines > <vm-name>. The VM is running if the status is Running on the Overview tab of the VirtualMachine details page.
    Alternatively, you can run the following command:

    oc get vm,vmi -n <namespace>
    NAME        AGE   STATUS    IP            NODE
    <vm-name>   10m   Running   10.130.0.10   <node-name>
  • Verify that PVCs are bound to Portworx:

    oc get pvc -n <namespace>
    NAME         STATUS   VOLUME                                      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    <pvc-name>   Bound    pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   10Gi       RWO            portworx-sc    10m
  • Verify that PVCs use the Portworx StorageClass:

    oc describe pvc <pvc-name> -n <namespace>
    Name:          <pvc-name>
    Namespace:     <namespace>
    StorageClass:  portworx-sc
    Status:        Bound
    Volume:        pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

Troubleshooting

Use the following guidance to identify and resolve common issues that may occur during or after migration.

| Issue | Symptom | Resolution |
|---|---|---|
| VM does not restart | The VM does not start after migration. | Run the kubectl patch vm <vm-name> -n <namespace> --type merge -p '{"spec":{"running":true}}' command. Replace <vm-name> with the name of the VM that did not restart and <namespace> with the namespace where the VM is deployed. |
| Permission errors during migration | Errors occur when creating or updating resources. | Verify the RBAC configuration in Configure RBAC in your OpenShift Cluster. Ensure that the correct ServiceAccount is configured in the migration hook. |
| Slow migration | Migration takes longer than expected. | Validate FlashArray integration and confirm the storage mapping configuration. |
| Storage not using Portworx volumes | VM disks are not backed by Portworx volumes. | Verify the StorageClass configuration and confirm that the migration playbook completed successfully. |