KVM
Latest revision as of 19:12, 12 June 2017
DNS
dnsmasq and named won't run on the same machine without some tweaks, since both want to bind to port 53 on all interfaces. The solution is to alter each config so that it listens on specific IPs only.
/etc/named.conf - named should listen on the external IP and localhost.
listen-on port 53 { 127.0.0.1; 213.229.103.79; };
listen-on-v6 port 53 { ::1; 2a02:af8:3:2000::7982; };
/etc/dnsmasq.conf - dnsmasq should listen on the virbr0 interface only
listen-address=192.168.122.1
bind-interfaces
If you prefer, you can use interface=virbr0 instead of listen-address=192.168.122.1
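After restarting both daemons, it's worth confirming what is actually bound to port 53. A guarded sketch (assumes the iproute2 `ss` tool is installed; it just counts listening sockets, since the full output format varies by version):

```shell
# Count sockets listening on port 53; reports 0 when ss is absent.
if command -v ss >/dev/null 2>&1; then
    listeners=$(ss -ln 2>/dev/null | grep -c ':53 ' || true)
else
    listeners=0
fi
echo "port-53 listening sockets: $listeners"
```

With both daemons configured as above you would expect one listener per configured address; zero listeners after a restart means one of the configs didn't take.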
Networking
Configuring KVM networking for individually routed IPs (or a small routed subnet) that are unrelated to the host's primary IP involves creating a virtual bridge, enabling some firewall rules, and manually creating routes on both the host and the guest.
Virtual bridge configuration
The virtual bridge is defined as follows. The IP address can be any private address, since it's only used internally for routing.
<network>
  <name>routed</name>
  <forward mode='route'/>
  <bridge name='virbr1' dev='eth0' delay='0' />
  <ip address='192.168.123.1' netmask='255.255.255.255'>
  </ip>
</network>
Save the above as net-routed.xml and then create/start the network.
# virsh net-define net-routed.xml
# virsh net-start routed
# virsh net-autostart routed
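To confirm the network came up, a guarded check (safe to run on a machine without libvirt — it just reports that virsh is missing):

```shell
# Report the state of the `routed` network, or note virsh's absence.
if command -v virsh >/dev/null 2>&1; then
    state=$(virsh net-list --all 2>/dev/null | awk '$1=="routed"{print $2}')
    [ -n "$state" ] || state="not defined"
else
    state="virsh not installed; run this on the KVM host"
fi
echo "routed network: $state"
```

On the host you should see `active` once net-start has succeeded, and `virbr1` should appear in `ip link` output.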
Startup hooks
Define the IP address(es) to be routed in /etc/libvirt/hooks/routed-ips
ROUTED_GW="192.168.123.1"
ROUTED_DEV="virbr1"
ROUTED_IPS="92.48.112.177 92.48.112.178 92.48.112.179"
This qemu/libvirt hook script uses the above file and should be created as /etc/libvirt/hooks/qemu (don't forget to make it executable with chmod +x). The iptables handling is my addition; the original script only added the routes.
#!/bin/sh
# Found at http://blog.gadi.cc/single-ip-routing-in-libvirt/
# Add individual IPs for our routed network to the routing table
#
# Since no hook exists for net-start, the best we can do is check if
# all the IPs are added every time a VM is launched, without re-adding.
# When a net-destroy occurs, the routes will be automatically removed.

. `dirname $0`/routed-ips

if [ "$2" = "start" ]; then
    for IP in $ROUTED_IPS ; do
        if [ "`ip route list | grep $IP`" = "" ] ; then
            ip route add $IP via $ROUTED_GW dev $ROUTED_DEV
        fi
        # Remove the old firewall rules if present
        iptables -D FORWARD -d $IP -o virbr1 -j ACCEPT 2>/dev/null
        iptables -D FORWARD -s $IP -i virbr1 -j ACCEPT 2>/dev/null
        # Add them back in
        iptables -I FORWARD -d $IP -o virbr1 -j ACCEPT
        iptables -I FORWARD -s $IP -i virbr1 -j ACCEPT
    done
fi

exit 0
The script is run like this during the startup phase of virtual machines.
# /etc/libvirt/hooks/qemu guest_name start begin -
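The hook's core decision — add a route only when one isn't already present — can be exercised in isolation. Everything here is illustrative: `route_missing` is a hypothetical helper, and the route-table text is a canned example rather than live `ip route list` output.

```shell
#!/bin/sh
# route_missing ROUTES IP: succeed when IP has no entry in the given
# route-table text (mirrors the grep test in the hook above).
route_missing() {
    ! printf '%s\n' "$1" | grep -q "^$2 "
}

# Canned route-table text: one IP already routed via the bridge.
routes="92.48.112.177 via 192.168.123.1 dev virbr1"

for ip in 92.48.112.177 92.48.112.178; do
    if route_missing "$routes" "$ip"; then
        echo "would add route for $ip"
    else
        echo "route for $ip already present"
    fi
done
```

This is why launching a second guest is harmless: IPs that already have routes are skipped, and only missing ones get added.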
Ubuntu networking
Edit /etc/network/interfaces and configure the public IP as static.
auto ens4
iface ens4 inet static
    address a.b.c.d       # Replace with your public IP
    netmask 255.255.255.255
    dns-nameservers 8.8.8.8 8.8.4.4
    post-up ip route add 192.168.123.1 dev ens4
    post-up ip route add default via 192.168.123.1
Guest kickstart
Guest kickstart config should contain the following sections. The post-install script creates default routing via the virtual bridge internal IP.
network --device eth0 --bootproto static --ip=92.48.112.178 --netmask=255.255.255.255 --nameserver=213.229.103.79
%post
cat > /etc/sysconfig/network-scripts/route-eth0 <<EOF
192.168.123.1 dev eth0
default via 192.168.123.1 dev eth0
EOF
Adding serial console after install
CentOS 6.6
You'll need to make sure the grub config has console=ttyS0 on the kernel line. To start a serial getty immediately, without a reboot:

initctl start serial DEV=ttyS0 SPEED=9600

All being well, you should then be able to use virsh console to connect and get a login prompt.
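On CentOS 6 the grub config is /boot/grub/grub.conf; an illustrative kernel line is below. The kernel version string and root device here are placeholders, not values from this document — yours will differ.

```
kernel /vmlinuz-2.6.32-504.el6.x86_64 ro root=/dev/mapper/vg_root console=ttyS0,9600
```

The 9600 speed matches the SPEED= argument given to initctl above.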
Hot-add disk to running VM
# qemu-img create -f qcow2 newdisk.img 10G

# cat > newdisk.xml <<EOF
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/path/to/newdisk.img'/>
  <target dev='vdb' bus='virtio'/>
</disk>
EOF

# virsh attach-device <domain name> newdisk.xml
Check the guest to confirm the disk was hot-plugged; the kernel should detect the new device, as dmesg shows:
virtio-pci 0000:00:06.0: irq 30 for MSI/MSI-X
 vdb: unknown partition table
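Once the guest sees the disk it still needs a filesystem. A guarded sketch of the guest-side steps (the device name vdb comes from the XML above; the mount point is my example, and the snippet refuses to act unless /dev/vdb actually exists and is not in use):

```shell
# Format and mount the hot-added disk inside the guest.
if [ -b /dev/vdb ] && ! grep -q '^/dev/vdb ' /proc/mounts; then
    mkfs.ext4 -q /dev/vdb
    mkdir -p /mnt/newdisk
    mount /dev/vdb /mnt/newdisk
    msg="mounted /dev/vdb on /mnt/newdisk"
else
    msg="/dev/vdb absent or busy; run this inside the guest"
fi
echo "$msg"
```

Putting the filesystem straight on the whole disk (no partition table) is a common choice for hot-added data disks; partition it first with fdisk if you prefer.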
Alternatively, if you have an existing disk image, you can attach it as a specific device like this:
# virsh attach-disk centos6a-vm --source /home/kvm/spare10gb.dsk --target vdb
Disk attached successfully
And detach it like this (make sure the filesystem is unmounted in the guest first!):
# virsh detach-disk centos6a-vm /home/kvm/spare10gb.dsk --live
Disk detached successfully
Snapshot disk images
Disk images must be in qcow2 format to support snapshots, so first convert the raw disk to qcow2. With the VM powered off:
# qemu-img convert -p -O qcow2 vmname.dsk vmname.dsk.qcow2
# virsh edit vmname
Change
<driver name='qemu' type='raw' cache='none'/>
<source file='/kvm/vmname.dsk'/>
to
<driver name='qemu' type='qcow2' cache='none'/>
<source file='/kvm/vmname.dsk.qcow2'/>
To create a snapshot:
# virsh snapshot-list vmname
 Name                 Creation Time             State
------------------------------------------------------------
 Before OS upgrade    2014-08-04 13:27:55 +0100 shutoff

# virsh snapshot-create-as vmname "After OS upgrade"
Domain snapshot After OS upgrade created

# virsh snapshot-list vmname
 Name                 Creation Time             State
------------------------------------------------------------
 After OS upgrade     2014-08-14 13:32:49 +0100 running
 Before OS upgrade    2014-08-04 13:27:55 +0100 shutoff
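To roll back, virsh snapshot-revert takes the same snapshot name. A guarded sketch — vmname is the example domain from above and won't exist on an arbitrary machine, so the command failure is tolerated and reported:

```shell
# Revert the example domain to a named snapshot, tolerating its absence.
if command -v virsh >/dev/null 2>&1; then
    if virsh snapshot-revert vmname "Before OS upgrade" 2>/dev/null; then
        msg="reverted to 'Before OS upgrade'"
    else
        msg="revert failed (does the vmname domain exist here?)"
    fi
else
    msg="virsh not installed; run this on the KVM host"
fi
echo "$msg"
```

Note that reverting to a snapshot taken while shut off leaves the domain shut off; you'll need to start it again afterwards.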
Ubuntu 16.04 console via virsh
Edit /etc/default/grub and change the line
GRUB_CMDLINE_LINUX_DEFAULT=""
to read
GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0,38400n8 console=tty0"
Then run update-grub and reboot the guest. This should allow virsh console $domain to work.
Accessing a disk image from the host
This can be useful if for some reason the guest can't be powered up and you need to get at the files. Make sure the virtual machine is powered off, otherwise you run the risk of corrupting the filesystem. Map the disk image to a loop device:
# losetup -f
/dev/loop1
The -f parameter says to show the first available loop device. This is the one we'll use.
# losetup /dev/loop1 ./vmdisk.dsk
The fdisk -l command should show the partitions available within the loop device.
# fdisk -l /dev/loop1
Disk /dev/loop1: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000888b4

      Device Boot      Start         End      Blocks   Id  System
/dev/loop1p1   *           1          20      153600   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/loop1p2              20         275     2048000   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/loop1p3             275        1959    13526016   8e  Linux LVM
Now we need to create device maps from the partition table.
# kpartx -av /dev/loop1
add map loop1p1 (253:0): 0 307200 linear /dev/loop1 2048
add map loop1p2 (253:1): 0 4096000 linear /dev/loop1 309248
add map loop1p3 (253:2): 0 27052032 linear /dev/loop1 4405248
# ls -lF /dev/mapper
crw-rw---- 1 root root 10, 58 Apr  6 20:34 control
lrwxrwxrwx 1 root root      7 Jun 12 19:37 loop1p1 -> ../dm-0
lrwxrwxrwx 1 root root      7 Jun 12 19:37 loop1p2 -> ../dm-1
lrwxrwxrwx 1 root root      7 Jun 12 19:37 loop1p3 -> ../dm-2
Since the main partition was an LVM type (8e Linux LVM) rather than a basic Linux partition (83 Linux) we need to do some extra work to access it.
# lvdisplay
  --- Logical volume ---
  LV Path                /dev/sys/root
  LV Name                root
  VG Name                sys
  LV UUID                XaGlMc-3axP-Ce1b-lRJx-8NBw-DYSF-FMoWLS
  LV Write Access        read/write
  LV Creation host, time vm.localdomain, 2015-03-07 17:24:27 +0000
  LV Status              NOT available
  LV Size                12.88 GiB
  Current LE             412
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
Provided there's no other volume group called 'sys', you can activate it.
# vgchange -ay sys
  1 logical volume(s) in volume group "sys" now active
# lvdisplay
  ...
  LV Status              available
  ...
Finally, we can mount the filesystem and get to the files!
# mount -t ext4 /dev/sys/root /mnt/
After you've finished accessing the filesystem, reverse the process to free up the disk image.
# umount /mnt
# vgchange -an sys
# kpartx -dv /dev/loop1
# losetup -d /dev/loop1
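The attach/detach half of this cycle can be rehearsed against a throwaway image, which is a safe way to check the loop-device tooling before touching a real VM disk. A guarded sketch (needs root and losetup; skips cleanly otherwise — the image path is a temp file, not a real guest disk):

```shell
# Create a scratch image, attach it to a free loop device, then detach.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=4 2>/dev/null
if [ "$(id -u)" = "0" ] && command -v losetup >/dev/null 2>&1; then
    loop=$(losetup -f 2>/dev/null)
    if [ -n "$loop" ] && losetup "$loop" "$img" 2>/dev/null; then
        losetup -d "$loop"
        result="attach/detach cycle OK on $loop"
    else
        result="no usable loop device available"
    fi
else
    result="skipped (needs root and losetup)"
fi
rm -f "$img"
echo "$result"
```

Running losetup -f again after the detach should hand back the same device, confirming nothing was left attached.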