Linux Virtualization Cloning a KVM Guest Virtual Machine

=Internal=


* [[Linux_Virtualization_Operations#Cloning_a_Guest_Virtual_Machine|Linux Virtualization Operations]]
* [[Linux_KVM_Virtualization_Guest_Operations#Clone_a_Guest_Virtual_Machine|Linux Virtualization Guest Operations]]
 
=Overview=
 
The goal of a cloning operation is to create an entirely new guest, based on the configuration of an existing guest. Entirely new storage must be provisioned during the cloning operation, and the O/S image must be transferred onto the new storage and updated as described below. Care should also be taken to avoid conflicts over shared resources: memory, CPUs and network devices.


=Procedure=

==Shut Down the Source Virtual Machine==

Shut down the guest to be cloned:


 [[virsh shutdown]] ''source_vm''

 virsh shutdown ocp36.basic-template
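Shutdown is asynchronous, so it may take a few moments to complete. A minimal check, using this page's example guest, that waits until the domain reports "shut off":

<syntaxhighlight lang='bash'>
# Poll the domain state until the guest has fully shut down.
while [ "$(virsh domstate ocp36.basic-template)" != "shut off" ]; do
    sleep 2
done
</syntaxhighlight>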


==Export the Configuration==

Export the configuration of the guest to be cloned:


 [[virsh dumpxml]] ''source_vm'' > ''.../vm-definition-repository/source_vm.xml''

 virsh dumpxml ocp36.basic-template > /root/environments/ocp36/ocp36.basic-template.xml


==Adjust the Configuration==


Copy the XML definition under a new name, conventionally the name of the guest being built:

 cp ocp36.basic-template.xml ocp36.generic-template.xml

Edit the XML as needed. These are some things you may want to change:


====Name of the Guest====

Update the name of the guest:

 <domain type='kvm'>
   <name>the_new_name</name>
   ...

For more details on the constraints associated with the guest name, see the <name> element documentation.

====UUID====

Remove the <uuid> line; a new UUID will be generated.

====Memory====

Adjust the amount of memory, specified as <memory>. Remove the <currentMemory> line.

====Virtual CPU Count====

Adjust the number of virtual CPUs allocated to this virtual machine, using the <vcpu> element. Use "auto" placement:

 <vcpu placement='auto'>2</vcpu>

====CDROM====

Remove the "cdrom" disk(s), if they are not going to be used.

====Storage====


Edit the <disk> definitions and adjust the names for the new storage volumes that will be provisioned for the new virtual machine. The virtual machine will definitely need a [[Linux_Virtualization_Concepts#Virtual_Machine_Image|virtual machine image]], stored in (usually) [[Linux_Virtualization_Concepts#qcow2|qcow2]] format, and optionally other block storage devices in [[Linux_Virtualization_Concepts#raw|raw]] format. Conventionally, the virtual machine image is stored in the main storage pool and it is [[Linux_Virtualization_Naming_Conventions#Virtual_Machine_Image_Naming_Convention|named based on the name of the VM]]. For more naming conventions, see [[Linux Virtualization Naming Conventions|Linux Virtualization Naming Conventions]].


 <disk type='file' device='disk'>
   <driver name='qemu' type='qcow2'/>
   &lt;source file='/main-storage-pool/''new-vm-name''.qcow2'/>
   <target dev='vda' bus='virtio'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
 </disk>

 <disk type='file' device='disk'>
   <driver name='qemu' type='raw'/>
   &lt;source file='/main-storage-pool/''new-vm-name''-docker.raw'/>
   <target dev='vdb' bus='virtio'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
 </disk>
If an additional disk is needed, it can be created externally as a storage volume, and then attached to the template, after the template has been shut down:

 virsh attach-disk ''<template-name>'' /main-storage-pool/''<template-name>''-docker.raw vdb --config
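An end-to-end sketch of this, assuming a directory-backed storage pool named "main-storage-pool" and this page's example guest names (the volume size is illustrative):

<syntaxhighlight lang='bash'>
# Create a 10 GiB raw volume in the storage pool.
virsh vol-create-as main-storage-pool ocp36.generic-template-docker.raw 10G --format raw

# Attach it persistently to the shut-down guest as vdb.
virsh attach-disk ocp36.generic-template \
    /main-storage-pool/ocp36.generic-template-docker.raw vdb --config
</syntaxhighlight>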


====Network====


The two most common situations are configuring the guest with a network interface that connects to a [[KVM_Virtual_Networking_Concepts#Virtual_Network|virtual network]] or with a network interface that attaches directly to one of the virtualization host's physical interfaces using a [[KVM_Virtual_Networking_Concepts#macvtap_Driver|macvtap driver]]. The background behind these options is covered extensively in the [[KVM_Virtual_Networking_Concepts|Virtual Networking Concepts]] document. From the guest's perspective, the corresponding configurations are:
 
1. Virtual Network:
 
<syntaxhighlight lang='xml'>
<interface type='network'>
  <source network='default'/>
</interface>
</syntaxhighlight>
 
where the network name should be replaced with the actual name of the virtual network to connect to.
 
More details here: [[KVM_Virtual_Networking_Concepts#Guest_Configuration_for_Virtual_Network_Mode|Guest Configuration for Virtual Network]].
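To list the virtual networks available on the virtualization host, including the inactive ones:

 virsh net-list --all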
 
2. Direct Attachment to the Virtualization Host Physical Network Device:


<syntaxhighlight lang='xml'>
<interface type='direct'>
  <source dev='em3' mode='private'/>
</interface>
</syntaxhighlight>


The source device should be replaced with the name of the virtualization host network interface corresponding to the physical device we bind to. Specifying a MAC address is optional; if none is specified, one will be generated. A shell script that generates random MAC addresses is available here: [[Bash_script_that_generates_a_random_MAC_address|bash script that generates a random MAC address]].
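A minimal sketch of such a generator, using the 52:54:00 prefix conventionally assigned to KVM/QEMU guests:

<syntaxhighlight lang='bash'>
#!/usr/bin/env bash
# Emit a random MAC address in the KVM/QEMU 52:54:00 range.
printf '52:54:00:%02x:%02x:%02x\n' \
    $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))
</syntaxhighlight>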


Also see [[KVM_Virtual_Machine_XML_Configuration_Example#interface|<interface>]].
====USB Controllers====


All USB controller definitions can be removed.


==Provision the Storage==

For each disk the virtual machine relies on, starting with the virtual machine image storage, clone the disks following the appropriate procedure. If the storage was originally empty, or we want to start with empty storage, we can provision a new disk, as described below. Note that this does not apply to the virtual machine image: we need the content of the virtual machine image to clone the virtual machine:
* [[Virsh_vol-clone#Cloning_a_Raw_Volume|Clone a raw volume]]
* [[Virsh_vol-create-as#Create_a_New_Raw_Block_Storage_Volume|Create a new raw block storage volume]] (a new filesystem will need to be built on this one later).
When creating logical volumes, follow the [[Linux_Virtualization_Naming_Conventions#Storage_Volume_Naming_Convention|storage volume naming conventions]]. For VirtualBox, the procedure for creating and installing a new virtual disk is [[VirtualBox_Operations#Creating_and_Installing_a_new_Virtual_Disk|available here]].
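For the virtual machine image itself, the clone can be made with [[Virsh_vol-clone|virsh vol-clone]]; a sketch using this page's example names, assuming the image volumes live in a pool named "main-storage-pool":

<syntaxhighlight lang='bash'>
# Clone the template's qcow2 image into the new guest's image.
virsh vol-clone ocp36.basic-template.qcow2 ocp36.generic-template.qcow2 \
    --pool main-storage-pool
</syntaxhighlight>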


==Create the Clone==


 [[virsh define]] ''.../vm-definition-repository/target_vm.xml''

 virsh define /root/environments/ocp36/ocp36.generic-template.xml
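The newly defined guest should now appear as "shut off" in the domain list:

 virsh list --all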
 
==Edit Virtual Machine Image before the First Boot==
 
This step is optional: the virtual machine image can be edited offline, before the first boot, with [[virt-edit]], so configuration details such as the IP address(es) can be adjusted and the virtual machine starts without IP address conflicts.
 
  virt-edit -a /main-storage-pool/appproxy.qcow2 /etc/sysconfig/network-scripts/ifcfg-eth0
  virt-edit -a /main-storage-pool/appproxy.qcow2 /etc/hosts
  ...
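Within ifcfg-eth0, the change typically amounts to setting the static IP configuration. A hedged example of the relevant lines (the values are illustrative, on libvirt's default 192.168.122.0/24 network):

 BOOTPROTO=static
 IPADDR=192.168.122.60
 NETMASK=255.255.255.0
 GATEWAY=192.168.122.1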


==Boot the Clone and Finalize the Configuration==
Boot the clone:


 [[virsh start]] --console ''new-virtual-machine-name''

 virsh start --console ocp36.generic-template


Connect to it with [[virsh console]], or via SSH using the IP address of the template, and finalize the configuration:


{{Internal|Reconfigure Linux VM Guest Image|Reconfigure Linux VM Guest Image}}
==Update the XML Definition==

Update the XML definition of the VM that has just been created, for reference:

 virsh dumpxml ''target_vm'' > ''.../vm-definition-repository/target_vm.xml''
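Following this page's example:

 virsh dumpxml ocp36.generic-template > /root/environments/ocp36/ocp36.generic-template.xml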
