Virtualization Concepts
=Internal=
* https://home.feodorov.com:9443/wiki/Wiki.jsp?page=Virtualization
* [[Linux Virtualization]]
=Virtualization=
''Virtualization'' is the practice of running software, usually multiple operating systems, concurrently and in isolation from other programs on a single system, called the [[Virtualization_Concepts#Host_Operating_System|host]]. The software entity that controls virtualization is called the [[#Hypervisor|hypervisor]]. The [[#Virtual_Machine|virtual machines]] executing on the host under the control of the hypervisor are known as [[#Guest_Operating_System|guests]] or [[#Guest_Operating_System|guest operating systems]].
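The relationship between host, hypervisor and guests can be seen programmatically. The following is a minimal sketch, assuming the libvirt Python bindings and a local qemu:///system hypervisor (the connection URI is an example, not something mandated by this page): it connects to the hypervisor running on the host and lists its guests.
<syntaxhighlight lang="python">
# Minimal sketch: connect to the hypervisor on the local host and list guests.
# Assumes the libvirt Python bindings and a qemu:///system hypervisor.
import libvirt

conn = libvirt.openReadOnly('qemu:///system')  # read-only is enough for listing

# Each libvirt "domain" is a guest (virtual machine) managed by the hypervisor.
for dom in conn.listAllDomains():
    state, _, _, vcpus, _ = dom.info()
    running = 'running' if state == libvirt.VIR_DOMAIN_RUNNING else 'not running'
    print(f'{dom.name()}: {running}, {vcpus} vCPU(s)')

conn.close()
</syntaxhighlight>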
There are several types of virtualization:
==Full Virtualization==
''Full virtualization'' allows an unmodified guest operating system to run. The guest addresses the host's CPU and other hardware resources via a channel created by the hypervisor. This is the most performant virtualization type, because the guest operating system communicates directly with the physical CPU.
==Paravirtualization==
''Paravirtualization'' requires a modified guest operating system, which communicates with the hypervisor. The hypervisor passes the unmodified calls from the guest to the CPU and other devices, and the guest is capable of generating these calls because hypervisor-specific support for translation was compiled into it. Also see [[#Paravirtualized_Devices|paravirtualized devices]].
==Software Virtualization==
''Software virtualization or emulation'' uses binary translation and other emulation techniques to run unmodified guest operating systems. The hypervisor translates the guest calls to a format that can be understood by the host operating system. Also see [[#Emulated_Devices|emulated devices]].
==Containers and Virtualization==
{{Internal|Docker_Concepts#Containers_and_Virtualization|Docker Concepts | Containers and Virtualization}}
=Virtualization Platforms=
* [[Linux_Virtualization_Concepts#Red_Hat_Enterprise_Linux_as_Virtualization_Platform|Red Hat Enterprise Linux as Virtualization Platform]]
* [[Linux_Virtualization_Concepts#Red_Hat_Virtualization|Red Hat Virtualization]]
* [[Linux_Virtualization_Concepts#Red_Hat_OpenStack_Platform|Red Hat OpenStack Platform]]
* VMware vSphere
* Microsoft Hyper-V
=Hypervisor=
The software entity that controls virtualization is referred to as the ''hypervisor''. The hypervisor manages the hardware resources of the host system and makes them available to the [[#Guest_Operating_System|guest operating systems]].
Hypervisors: [[Linux_Virtualization_Concepts#KVM_.28Kernel-based_Virtual_Machine.29|KVM]], [[Linux_Virtualization_Concepts#Xen|Xen]], VMware ESX.
=Host Operating System=
The ''host operating system'' (or the host OS) is the operating system of the physical computer on which the hypervisor is installed.
=Guest Operating System=
The ''guest operating system'' (or the guest OS) is the operating system that is running inside the virtual machine.
=Virtual Machine=
{{Internal|Linux Virtualization Concepts#KVM_Virtual_Machine_States|KVM Virtual Machine States}}
=Hardware Virtualization Extensions=
{{External|[https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/sect-Troubleshooting-Enabling_Intel_VT_x_and_AMD_V_virtualization_hardware_extensions_in_BIOS.html RHEL 7 Virtualization Administration Guide - Hardware Extensions in BIOS]}}
''Hardware virtualization extensions'' provide hardware assistance to the virtualization software, reducing its size and complexity. Areas of particular interest are CPU virtualization, which allows software in the VM to run without a performance or compatibility penalty, as if it were running natively on a dedicated CPU; memory virtualization; and I/O virtualization, which offloads packet processing to network adapters. Intel packages its hardware virtualization extensions as "Intel Virtualization Technology (VT-x) Extensions", and AMD as "AMD-V".
==Checking/Enabling Virtualization Extensions==
{{Internal|Linux_CPU_Info#Virtualization_Extensions|Linux CPU Info - Virtualization Extensions}}
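A quick check can also be scripted directly on the host. The following is a minimal sketch (Linux-only, standard library only): it looks for the Intel VT-x ('vmx') or AMD-V ('svm') CPU flags in /proc/cpuinfo and for the presence of the /dev/kvm device node.
<syntaxhighlight lang="python">
# Minimal sketch: report whether hardware virtualization extensions are visible.
import os

def virtualization_extensions():
    flags = set()
    with open('/proc/cpuinfo') as f:
        for line in f:
            if line.startswith('flags'):
                flags.update(line.split(':', 1)[1].split())
    return {
        'vmx (Intel VT-x)': 'vmx' in flags,
        'svm (AMD-V)': 'svm' in flags,
        '/dev/kvm present': os.path.exists('/dev/kvm'),
    }

if __name__ == '__main__':
    for name, present in virtualization_extensions().items():
        print(f'{name}: {"yes" if present else "no"}')
</syntaxhighlight>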
=Hardware Devices and Virtualization=
The host's physical hardware can be exposed to the guest operating systems in at least three different ways: [[#Emulated_Devices|emulated (or virtualized) devices]], [[#Paravirtualized_Devices|paravirtualized devices]] and [[#Physically_Shared_Devices|physically shared devices]]. All these hardware devices appear as being physically attached to the virtual machine, but the device drivers exposing them to the guest operating system work in different ways.
==Emulated Devices==
An ''emulated (or virtualized) device'' is a piece of software running in the virtual machine that exposes the underlying hardware to the virtual machine via emulated drivers. The virtual machine sees the hardware as physically attached, though. The emulated driver is a translation layer sitting between the virtual machine and the host's kernel, which manages the source device. The device-level instructions are completely translated by the hypervisor. Any device of the same type (storage, network, keyboard, mouse) that is recognized by the host's kernel can be used as the backing source device for the emulated drivers. Examples of emulated components: Intel i440FX host PCI bridge, PIIX3 PCI to ISA bridge, PS/2 mouse and keyboard, PCI UHCI USB controllers, EHCI controllers, etc. Storage devices and storage pools can use emulated drivers to attach storage to virtual machines; the guest uses the emulated storage driver to access the underlying storage pool. An example of an emulated storage driver is the [[Linux Virtualization Concepts#IDE|emulated IDE driver]], which can be used to attach any combination of up to four virtualized IDE hard disks or CD-ROM drives. A typical emulated network device is e1000, which corresponds to an Intel E1000 network adapter (Intel 82540EM, 82573L, etc.). Also see [[#Software_Virtualization|software virtualization]].
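As a rough illustration, the following sketch attaches an emulated e1000 network interface to a running guest via libvirt. It assumes the libvirt Python bindings, a guest named 'guest1' and a libvirt network named 'default'; both names are placeholders.
<syntaxhighlight lang="python">
# Minimal sketch: attach an emulated Intel E1000 NIC to a running guest.
# 'guest1' and the 'default' network are placeholders.
import libvirt

E1000_NIC_XML = """
<interface type='network'>
  <source network='default'/>
  <model type='e1000'/>   <!-- fully emulated Intel E1000 adapter -->
</interface>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('guest1')
# VIR_DOMAIN_AFFECT_LIVE applies the change to the running guest.
dom.attachDeviceFlags(E1000_NIC_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
conn.close()
</syntaxhighlight>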
==Paravirtualized Devices==
A ''paravirtualized device'' is a virtual device that contains hypervisor-specific code, deployed on the guest system, and knows how to make hypervisor-specific calls. In the case of KVM, the paravirtualized devices are implemented on top of the [[Linux_Virtualization_Concepts#virtio|virtio API]], which is a layer between the hypervisor and the guest. In general, paravirtualized devices decrease I/O latency and increase I/O throughput to near bare-metal levels. If available, it is recommended to use paravirtualized devices instead of [[#Emulated_Devices|emulated devices]]. Also see [[#Paravirtualization|paravirtualization]].
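For comparison with the emulated example above, the following sketch (same assumptions: libvirt Python bindings, placeholder guest 'guest1' and network 'default') attaches a paravirtualized NIC; the only difference is the device model, which selects the virtio driver instead of a fully emulated adapter.
<syntaxhighlight lang="python">
# Minimal sketch: attach a paravirtualized (virtio) NIC to a running KVM guest.
import libvirt

VIRTIO_NIC_XML = """
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>  <!-- paravirtualized virtio-net device -->
</interface>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('guest1')
dom.attachDeviceFlags(VIRTIO_NIC_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
conn.close()
</syntaxhighlight>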
==Physically Shared Devices==
A ''physically shared device'' is an actual hardware device installed on the host that is directly accessed by the virtual machine, in a process known as ''device assignment'' or ''passthrough''. Examples: [[Linux Virtualization Concepts#VFIO|VFIO]], [[Linux Virtualization Concepts#USB_PCI_SCSI_Passthrough|USB, PCI and SCSI passthrough]], [[Linux Virtualization Concepts#SR-IOV|SR-IOV]], [[Linux Virtualization Concepts#NPIV|NPIV]].
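The following sketch assigns a host PCI device directly to a guest. It assumes the libvirt Python bindings, a guest named 'guest1' and a host PCI device at address 0000:06:00.0 (all placeholders), and that the host already has the IOMMU/VFIO prerequisites in place.
<syntaxhighlight lang="python">
# Minimal sketch: PCI passthrough of a host device to a guest via <hostdev>.
import libvirt

PCI_HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('guest1')
dom.attachDeviceFlags(PCI_HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
conn.close()
</syntaxhighlight>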
=Migration=
''Migration'' describes the process of moving a guest virtual machine from one host to another. There are two types of migration: <span id='Offline_Migration'>'''Offline migration'''</span> suspends the guest virtual machine and then moves the image to the destination host. The virtual machine is then resumed on the destination host. <span id='Live_Migration'>'''Live migration'''</span> is the process of migrating an active virtual machine from one host to another.
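A live migration can be driven through libvirt. The following is a minimal sketch, assuming the libvirt Python bindings, a guest named 'guest1' and a destination host 'desthost' reachable over SSH (all placeholders): VIR_MIGRATE_LIVE requests a live migration; suspending the guest and moving its image instead corresponds to offline migration.
<syntaxhighlight lang="python">
# Minimal sketch: live-migrate a running guest to another hypervisor.
import libvirt

src = libvirt.open('qemu:///system')
dst = libvirt.open('qemu+ssh://desthost/system')   # destination host (placeholder)

dom = src.lookupByName('guest1')
# migrate(destination connection, flags, new name, destination URI, bandwidth)
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
</syntaxhighlight>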
=Overcommitting=
''Overcommitting'' is the allocation to guests of more virtualized CPU and memory than the physical resources actually available on the host system. This way, resources are dynamically swapped when needed by one guest and not used by another. Overcommitting can improve resource utilization efficiency, but it also poses risks to system stability.
{{External|[https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/chap-Overcommitting_with_KVM.html RHEL 7 Virtualization Administration Guide]}}
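Whether a host is overcommitted can be estimated by comparing the resources defined for all guests against the host's physical resources. A minimal sketch, assuming the libvirt Python bindings and a local qemu:///system hypervisor:
<syntaxhighlight lang="python">
# Minimal sketch: compare guest vCPU/memory totals against host resources.
import libvirt

conn = libvirt.openReadOnly('qemu:///system')

# getInfo() -> [model, memory MiB, CPUs, MHz, NUMA nodes, sockets, cores, threads]
host = conn.getInfo()
host_mem_mib, host_cpus = host[1], host[2]

guest_vcpus = 0
guest_mem_mib = 0
for dom in conn.listAllDomains():
    _, max_mem_kib, _, vcpus, _ = dom.info()
    guest_vcpus += vcpus
    guest_mem_mib += max_mem_kib // 1024

print(f'Host:   {host_cpus} CPUs, {host_mem_mib} MiB')
print(f'Guests: {guest_vcpus} vCPUs, {guest_mem_mib} MiB')
print('CPU overcommitted:   ', guest_vcpus > host_cpus)
print('Memory overcommitted:', guest_mem_mib > host_mem_mib)

conn.close()
</syntaxhighlight>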
=Kernel Same-page Merging (KSM)=
Kernel Same-page Merging (KSM) is a technique enabling guests to share identical memory pages. These shared pages are usually common libraries or other similar high-use data. By avoiding memory duplication, KSM allows for a greater density of identical or similar guests operating on the same host.
{{External|[https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/chap-KSM.html RHEL 7 Virtualization Tuning and Optimization Guide]}}
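The kernel exposes KSM state and counters under /sys/kernel/mm/ksm. A minimal sketch (Linux-only, standard library only) that reads those counters and gives a rough estimate of the memory being deduplicated:
<syntaxhighlight lang="python">
# Minimal sketch: read KSM counters and estimate memory savings.
import os

KSM_DIR = '/sys/kernel/mm/ksm'

def read_counter(name):
    with open(os.path.join(KSM_DIR, name)) as f:
        return int(f.read().strip())

page_size = os.sysconf('SC_PAGE_SIZE')
running = read_counter('run') == 1          # 1 means KSM is actively merging
pages_shared = read_counter('pages_shared')    # de-duplicated pages kept in memory
pages_sharing = read_counter('pages_sharing')  # additional sites mapping shared pages,
                                               # i.e. roughly the pages saved

saved_mib = pages_sharing * page_size / (1024 * 1024)
print(f'KSM running: {running}')
print(f'Shared pages: {pages_shared}, approximate memory saved: {saved_mib:.1f} MiB')
</syntaxhighlight>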
=Disk I/O Throttling=
''Disk I/O throttling'' provides the ability to set a limit on disk I/O requests sent from individual VMs to the host machine. This prevents a virtual machine from over-utilizing shared resources, and thus impacting the performance of other VMs.
{{External|[https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-BlockIO-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-BlockIO-IO_Throttling RHEL 7 Virtualization Tuning and Optimization Guide]}}
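With libvirt, per-disk limits can be applied to a running guest via block I/O tuning. A minimal sketch, assuming the libvirt Python bindings and a running guest 'guest1' whose disk is exposed as 'vda' (both placeholders); it caps the disk at 1000 I/O operations and 10 MiB of data per second.
<syntaxhighlight lang="python">
# Minimal sketch: throttle a guest's disk I/O via libvirt block I/O tuning.
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('guest1')

dom.setBlockIoTune('vda',
                   {'total_iops_sec': 1000,               # I/O requests per second
                    'total_bytes_sec': 10 * 1024 * 1024}, # bytes per second
                   libvirt.VIR_DOMAIN_AFFECT_LIVE)
conn.close()
</syntaxhighlight>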
=Automatic NUMA Balancing=
''Automatic non-uniform memory access (NUMA) balancing'' is a technique that moves tasks, which can be threads or processes, closer to the memory they are accessing. This improves the performance of applications running on NUMA hardware systems, without the need for manual tuning.
{{External|[https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-NUMA-Auto_NUMA_Balancing.html RHEL 7 Virtualization Tuning and Optimization Guide]}}
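Automatic NUMA balancing is controlled by the kernel.numa_balancing sysctl, exposed as /proc/sys/kernel/numa_balancing. A minimal sketch (Linux-only) that reports whether it is enabled on the host:
<syntaxhighlight lang="python">
# Minimal sketch: check whether automatic NUMA balancing is enabled.
def numa_balancing_enabled():
    try:
        with open('/proc/sys/kernel/numa_balancing') as f:
            return f.read().strip() != '0'
    except FileNotFoundError:
        # Kernel built without NUMA balancing support (or a non-NUMA system).
        return False

print('Automatic NUMA balancing enabled:', numa_balancing_enabled())
</syntaxhighlight>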
=Virtual CPU Hot Add=
''Virtual CPU hot add'' is the capability to increase the processing power allocated to virtual machines without shutting down the guests.
{{External|[https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/sect-Managing_guest_virtual_machines_with_virsh-Displaying_per_guest_virtual_machine_information.html RHEL 7 Virtualization Administration Guide]}}
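A minimal sketch of a vCPU hot add through libvirt, assuming the libvirt Python bindings and a running guest 'guest1' (a placeholder) whose definition allows more than the currently assigned number of vCPUs:
<syntaxhighlight lang="python">
# Minimal sketch: hot-add one vCPU to a running guest.
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('guest1')

current = dom.info()[3]      # vCPUs currently assigned to the guest
maximum = dom.maxVcpus()     # upper bound from the guest definition

if current < maximum:
    # VIR_DOMAIN_AFFECT_LIVE applies the change to the running guest.
    dom.setVcpusFlags(current + 1, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    print(f'vCPUs increased from {current} to {current + 1}')
else:
    print('Guest is already at its maximum vCPU count')

conn.close()
</syntaxhighlight>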
=Virtualization and Networking=
Also see: {{Internal|Linux_Virtualization_Concepts#Networking_and_KVM_Virtualization|Linux Virtualization Concepts - Networking and KVM Virtualization}}
==Virtual Network Switch==
{{External|[https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/chap-Virtual_Networking.html#sect-Virtual_Networking-Virtual_network_switches RHEL7 Virtualization Administration Guide - Virtual Network Switch]}}
==Network Address Translation==
{{External|[https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/sect-Virtual_Networking-Network_Address_Translation.html RHEL7 Virtualization Administration Guide - Network Address Translation]}}
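As an illustration of both concepts, the sketch below creates a libvirt virtual network whose switch operates in NAT mode; it assumes the libvirt Python bindings, and the network name, bridge name and address range are placeholders modeled on libvirt's usual NAT-based 'default' network.
<syntaxhighlight lang="python">
# Minimal sketch: create and start a transient NAT-mode virtual network.
import libvirt

NAT_NETWORK_XML = """
<network>
  <name>natnet</name>
  <forward mode='nat'/>                   <!-- virtual switch operating in NAT mode -->
  <bridge name='virbr10' stp='on' delay='0'/>
  <ip address='192.168.150.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.150.2' end='192.168.150.254'/>
    </dhcp>
  </ip>
</network>
"""

conn = libvirt.open('qemu:///system')
net = conn.networkCreateXML(NAT_NETWORK_XML)   # creates and starts the network
print('Created network:', net.name())
conn.close()
</syntaxhighlight>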
=Linux Virtualization Concepts=
{{Internal|Linux Virtualization Concepts|Linux Virtualization Concepts}}
=Thin Provisioning=
{{External|https://en.wikipedia.org/wiki/Thin_provisioning}}
''Thin provisioning'' involves using virtualization technology to give the appearance of having more physical resources than are actually available.
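The idea can be demonstrated with a sparse file, the simplest form of thin-provisioned storage. A minimal sketch (standard library only, the path is a placeholder): the file advertises 10 GiB to its consumer while initially occupying almost no space on disk.
<syntaxhighlight lang="python">
# Minimal sketch: a sparse file as an example of thin-provisioned storage.
import os

path = '/tmp/thin-disk.img'          # placeholder path
apparent_size = 10 * 1024**3         # 10 GiB promised to the consumer

with open(path, 'wb') as f:
    f.truncate(apparent_size)        # sets the size without allocating data blocks

st = os.stat(path)
print('Apparent size:     ', st.st_size // 1024**2, 'MiB')
print('Actually allocated:', st.st_blocks * 512 // 1024**2, 'MiB')  # st_blocks is in 512-byte units
</syntaxhighlight>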