Linux Virtualization Cloning a KVM Guest Virtual Machine
Overview
The goal of a cloning operation is to create an entirely new guest based on the configuration of an existing guest. Entirely new storage must be provisioned during the cloning operation, and the O/S image must be transferred onto the new storage and updated as described below. Also, care should be taken to avoid conflicts while accessing shared resources: memory, CPUs and network devices.
Procedure
Shut Down the Source Virtual Machine
Shut down the guest to be cloned:
virsh shutdown source_vm
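To confirm that the guest actually stopped before exporting its configuration:
virsh domstate source_vm
The command should report "shut off".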
Export the Configuration
Export the configuration of the guest to be cloned:
virsh dumpxml source_vm > .../vm-definition-repository/source_vm.xml
Adjust the Configuration
Copy the XML definition under a new name, conventionally the name of the guest being built, as sketched below. Then edit the XML as needed; the following subsections describe some of the things you may want to change.
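A minimal sketch, assuming the definition repository path used above and "target_vm" as the new name:
cp .../vm-definition-repository/source_vm.xml .../vm-definition-repository/target_vm.xml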
Name of the Guest
Update the name of the guest:
<domain type='kvm'>
  <name>the_new_name</name>
  ...
For more details on constraints associated with the guest name, see the libvirt documentation for the <name> element.
UUID
Remove the <uuid> line; a new UUID will be automatically generated when the new guest is defined.
Memory
Adjust the amount of memory, specified as <memory>. Remove the <currentMemory> line.
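For example, to allocate 4 GiB (libvirt defaults the unit attribute to KiB):
<memory unit='KiB'>4194304</memory>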
Virtual CPU Count
Adjust the number of virtual CPUs allocated to this virtual machine, using the <vcpu> element. Use "auto" placement:
<vcpu placement='auto'>2</vcpu>
CDROM
Remove the "cdrom" disk(s) if they are not going to be used.
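A typical CDROM <disk> element to remove looks similar to this (device and bus names may differ):
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <target dev='hda' bus='ide'/>
  <readonly/>
</disk>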
Storage
Edit the <disk> definitions and adjust the names of the new storage volumes that will be provisioned for the new virtual machine. The virtual machine needs at least a virtual machine image, usually stored in qcow2 format, and optionally other block storage devices, in raw format. Conventionally, the virtual machine image is stored in the main storage pool and is named based on the name of the VM. For more naming conventions, see Linux Virtualization Naming Conventions.
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/main-storage-pool/new-vm-name.qcow2'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/main-storage-pool/new-vm-name-docker.raw'/>
  <target dev='vdb' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</disk>
If an additional disk is needed, it can be created externally as a storage volume and then attached to the template, after the template has been shut down:
virsh attach-disk <template-name> /main-storage-pool/<template-name>-docker.raw vdb --config
Network
Locate the network interface and replace the value of the MAC address with a randomly generated value.
It is important to replace the MAC address with a value that is unique per virtualization host, otherwise conflicts will occur.
A shell script that generates random MAC addresses is available here: bash script that generates a random MAC address.
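Alternatively, a minimal inline sketch, assuming the locally administered 52:54:00 prefix conventionally used by KVM:
printf '52:54:00:%02x:%02x:%02x\n' $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))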
<interface type='network'>
  <mac address='52:54:00:79:03:0c'/>
  <source network='default'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
See the libvirt documentation for the <interface> element.
For virtual machines that need additional network interfaces, follow the procedure appropriate for the type of network interface the guest needs.
Provision the Storage
For each disk the virtual machine relies on, starting with the virtual machine image storage, clone the disks following the appropriate procedure. If the storage was originally empty, or if we want to start with empty storage, we can provision a new disk instead, as described below; a sketch of these operations follows the list. Note that this does not apply to the virtual machine image: its content is needed to clone the virtual machine.
- Clone a qcow2 virtual machine image
- Clone a raw volume
- Create a new raw block storage volume (a new filesystem will need to be built on this one later).
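As an illustration, and assuming the pool path and volume names used above, cloning the qcow2 virtual machine image and provisioning a new, empty raw volume might look like this (the 100G size is an arbitrary placeholder):
# clone the source virtual machine image for the new guest
qemu-img convert -O qcow2 /main-storage-pool/source_vm.qcow2 /main-storage-pool/new-vm-name.qcow2
# provision a new, empty raw volume; a filesystem must be built on it later
qemu-img create -f raw /main-storage-pool/new-vm-name-docker.raw 100G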
Create the Clone
Define the new guest from the adjusted XML:
virsh define .../vm-definition-repository/target_vm.xml
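To verify that the clone was defined and that a new UUID was generated:
virsh list --all
virsh domuuid target_vm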
Edit Virtual Machine Image before the First Boot
This step is optional: the virtual machine image can be edited off-line, before the first boot, with virt-edit, so that configuration details such as the IP address(es) can be adjusted and the virtual machine starts without IP address conflicts.
virt-edit -a /main-storage-pool/appproxy.qcow2 /etc/sysconfig/network-scripts/ifcfg-eth0
virt-edit -a /main-storage-pool/appproxy.qcow2 /etc/hosts
...
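The ifcfg-eth0 content to adjust might look similar to this (all values below are placeholders, not taken from an actual guest):
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.122.20
NETMASK=255.255.255.0
GATEWAY=192.168.122.1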
Boot the Clone and Finalize the Configuration
Boot the clone:
virsh start --console new-virtual-machine-name
Connect to it with virsh console, or via SSH using the IP address inherited from the template (unless it was already changed off-line as described above), and finalize the configuration:
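Typical finalization steps, sketched with placeholder names (the hostname and interface name depend on the guest):
hostnamectl set-hostname new-vm-name
vi /etc/sysconfig/network-scripts/ifcfg-eth0
systemctl restart network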