Logical Volume Management and Virtualization

=Internal=

* [[Linux Logical Volume Management#Subjects|Logical Volume Management]]


=TODO=


Objective: to be able to shrink the disk space of the guest. I am at the stage where I can access an off-line block device of a guest VM and "explode" it with <tt>kpartx</tt>, but I cannot yet interact with the physical device that contains the guest's volume group, because of the name conflict between the host's "VolGroup00" and the guest's "VolGroup00". Needs more investigation.


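A possible direction, noted here strictly as an untested assumption: LVM can rename a volume group by its UUID, so the guest's "VolGroup00" could be given a temporary, non-conflicting name from the host while it is being manipulated. A minimal sketch (the temporary name "VolGroup00_guest" is made up for illustration):

<pre>
# list volume group names together with their UUIDs; once kpartx has exposed the
# guest's physical volume, its VolGroup00 shows up with a different UUID than the host's
vgs -o vg_name,vg_uuid

# rename the guest volume group by UUID to a temporary, non-conflicting name
vgrename <guest-VG-UUID> VolGroup00_guest

# activate it so its logical volumes become accessible from the host
vgchange -ay VolGroup00_guest
</pre>

Note that the guest's own configuration (<tt>/etc/fstab</tt>, the bootloader) still refers to the original name, so the rename would have to be reverted before the guest is booted again; this is part of what still needs investigation.
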
=Overview=


One of the most common configurations of disk allocation for Xen guest virtual machines during installation is to specify a ''logical volume'' as the main (and only) "disk" for the guest VM.
Let's consider the case of a physical host ('tahiti') that runs multiple guest virtual machines and allocates for each of its virtual machines a single ''logical volume''.  


All tahiti's logical volumes are part of a single Volume Group (<tt>/dev/VolGroup00</tt>) installed on a 700 GB physical SCSI device (<tt>/dev/sda2</tt>). We are going to install the "test" virtual machine, and we create a 20 GB "test" logical volume for it:


<pre>
  LV Name                /dev/VolGroup00/test
  VG Name                VolGroup00
  LV UUID                o9im2e-haPN-PKqa-RALQ-YALw-O4sA-WOXThJ
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                20.00 GB
  Current LE             640
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:7
</pre>


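For reference, a minimal sketch of how such a volume is created and inspected; this is standard LVM2 usage with the names and size from this example, not commands captured on 'tahiti':

<pre>
# create the 20 GB "test" logical volume in the host's volume group
lvcreate -L 20G -n test VolGroup00

# display its attributes (the output shown above)
lvdisplay /dev/VolGroup00/test
</pre>
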
During the virtual machine installation procedure we specified {{/dev/VolGroup00/test}} as the "disk" and this was pretty much all we specified. The <tt>virt-install</tt> installation utility "partitions" the disk as necessary.


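A minimal sketch of such a <tt>virt-install</tt> invocation for a paravirtualized Xen guest; the memory size and the installation tree URL are made-up illustrative values, and the exact options vary between <tt>virt-install</tt> versions:

<pre>
# illustrative values only; adjust the name, memory and install location as needed
virt-install \
    --name test \
    --ram 512 \
    --paravirt \
    --nographics \
    --file /dev/VolGroup00/test \
    --location http://install.server.example/rhel5/os
</pre>
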
As a result of the installation, <tt>/dev/VolGroup00/test</tt> becomes a ''partitioned block device'', similar to a physical disk with its own partition table.


<tt>fdisk</tt> can be used to take a look at the partition table from the host system:


<pre>
[root@tahiti mnt]# fdisk /dev/VolGroup00/test

[...]

Disk /dev/VolGroup00/test: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

               Device Boot      Start         End      Blocks   Id  System
/dev/VolGroup00/test1   *           1          13      104391   83  Linux
/dev/VolGroup00/test2              14        2610    20860402+  8e  Linux LVM
</pre>


The installation utility creates two partitions ('test1' and 'test2'). 'test1' is a bootable partition of type 83 (Linux native partition) of about 100 MB, and 'test2' is a Linux Logical Volume Manager partition (8e) of 19.88 GB. For a list of partition identifiers, take a look here: http://www.win.tue.nl/~aeb/partitions/partition_types-1.html


The logical volume <tt>/dev/VolGroup00/test</tt> is seen by the "test" guest virtual machine as <tt>/dev/xvda</tt>. <tt>fdisk</tt> can also be used from the guest virtual machine and returns the following:


<pre>
[root@test dev]# fdisk /dev/xvda

[...]

Disk /dev/xvda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1   *           1          13      104391   83  Linux
/dev/xvda2              14        2610    20860402+  8e  Linux LVM
</pre>


Upon boot, the "test" guest virtual machine uses these two partitions as follows:


<tt>/dev/xvda1</tt> is mounted as "/boot".


<tt>/dev/xvda2</tt> is used entirely for the volume group VolGroup00 (19.88 GB), containing two logical volumes:


<tt>/dev/VolGroup00/LogVol00</tt> of 18.84 GB. It contains an ext3 file system and is mounted as the root file system.


and


<tt>/dev/VolGroup00/LogVol01</tt> of 1.03 GB. It is used as swap.  


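Inside the guest, this layout can be confirmed with standard tools; a minimal sketch (outputs omitted):

<pre>
# run inside the "test" guest
vgdisplay VolGroup00    # the guest-side volume group built on /dev/xvda2
lvs                     # lists LogVol00 and LogVol01
df -h /                 # root file system on /dev/VolGroup00/LogVol00
swapon -s               # swap on /dev/VolGroup00/LogVol01
</pre>
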
[[File:LogicalVolumeManagementAndVirtualization.png]]


=Accessing the "guest" VolumeGroup from the host=

The guest's main (and only) Volume Group can be accessed and manipulated from the virtualization host via a ''device map'' created with <tt>kpartx</tt>. <tt>kpartx</tt> is a tool that can be used to make partitions of arbitrary partitioned block devices accessible via <tt>/dev/mapper</tt>.


In order to access the guest VolumeGroup:

1. Shut down the 'test' virtual machine.

2. Display /dev/VolGroup00/test's partitions:


<pre>
[root@tahiti mapper]# cd /dev/mapper
[root@tahiti mapper]# kpartx /dev/VolGroup00/test
test1 : 0 208782 /dev/VolGroup00/test 63
test2 : 0 41720805 /dev/VolGroup00/test 208
</pre>

3. Use <tt>kpartx</tt> to create the device map and "expose" the "/dev/xvda" partitions for manipulation:

<pre>
cd /dev/mapper
kpartx -a /dev/VolGroup00/test
</pre>

The command will create <tt>/dev/mapper/test1</tt> and <tt>/dev/mapper/test2</tt>, which are block devices exposing the partitions known to the test virtual machine as "/dev/xvda1" and "/dev/xvda2".


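The newly created mappings can be verified from the host; a minimal sketch:

<pre>
# list the device-mapper nodes; test1 and test2 should now be present
ls -l /dev/mapper

# alternatively, list the device-mapper targets directly
dmsetup ls
</pre>
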
The reverse operation, which removes the device mappings, is:


<pre>
kpartx -d /dev/VolGroup00/test
</pre>

4. <tt>/dev/mapper/test2</tt> is the physical device for the virtual machine's volume group:


<pre>
[root@tahiti mapper]# pvdisplay /dev/mapper/test2
  --- Physical volume ---
  PV Name               /dev/mapper/test2
  VG Name               VolGroup00
  PV Size               19.89 GB / not usable 19.49 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              636
  Free PE               0
  Allocated PE          636
  PV UUID               R4qj4x-FeMY-UmmQ-atKm-PQ2B-OmCg-lvY8Qo
</pre>
 


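With the guest's physical volume visible, the host can in principle scan for and activate the guest's volume group and logical volumes; a minimal sketch, and exactly the point where the name conflict described in the TODO section gets in the way, since the host already has a "VolGroup00" of its own:

<pre>
# rescan for volume groups and logical volumes now that /dev/mapper/test2 is visible
vgscan
lvscan

# activating the guest's VolGroup00 from the host is where the clash with the
# host's own VolGroup00 becomes a problem (see the TODO section)
vgchange -ay VolGroup00
</pre>
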
=Organizatorium=


<pre>
1: kpartx -a /dev/myvg/mylv
2: Resize partition using fdisk (delete old partition, recreate new one
with a larger size, note that the file system remains on the partition)
3: Reboot Dom0
4: kpartx -a /dev/myvg/mylv  (again)
5: e2fsck -y /dev/myvg/mylv5
6: resize2fs -p /dev/myvg/mylv5
</pre>


* http://linuxwave.blogspot.com/2008/08/resizing-your-xen-domu-using-lvm.html
* http://www.azhowto.com/2009/02/07/how-to-resize-lvm-running-xen-part-2-decrease-disk-size/
