Installation (KVM)

From vwiki
Revision as of 18:27, 10 April 2020 by Sstrutt (talk | contribs) (→‎Networking: Added Ubuntu 18 instructions)

Prerequisites

This guide assumes you have a working Ubuntu Server with a single physical NIC, which will be used both for networking to the host server and for a bridged network for guest virtual machines.

Ensure your server's CPUs support hardware virtualisation. The command below should print one flags line per logical CPU, each including either svm (AMD) or vmx (Intel). If it prints nothing, reboot the server into the BIOS and look for an option to enable CPU virtualisation features (often labelled VT). If there is no such option, your hardware may be too old to support virtualisation. If your hardware is recent, it should support it, so consult your vendor's documentation (either for the server, or for the server's motherboard).

 egrep '(vmx|svm)' /proc/cpuinfo
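The check above can be wrapped in a small script that simply counts the matching flags lines (a sketch; the CPUINFO variable is not part of the original command and is only there so the logic can be pointed at a sample file):

```shell
#!/bin/sh
# Count logical CPUs advertising hardware virtualisation flags:
# vmx = Intel VT-x, svm = AMD-V.
CPUINFO="${CPUINFO:-/proc/cpuinfo}"
count=$(grep -c -E '(vmx|svm)' "$CPUINFO" || true)
if [ "$count" -gt 0 ]; then
    echo "Hardware virtualisation flags found on $count logical CPU(s)"
else
    echo "No vmx/svm flags found - check the BIOS for a VT/SVM option"
fi
```

Note that grep -c still prints 0 when nothing matches, so the count is always usable in the comparison.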


Ensure your server can use KVM hardware acceleration. Install cpu-checker and run kvm-ok as follows...

 apt install cpu-checker
 kvm-ok

...which should return...

INFO: /dev/kvm exists
KVM acceleration can be used

Installation

Install using the following command

apt install qemu qemu-kvm libvirt-bin bridge-utils virt-manager

Once completed, ensure libvirtd is running

systemctl status libvirtd
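With the daemon running, a couple of quick virsh queries confirm that libvirt is answering and that the default NAT network exists (exact output will vary by installation; a fresh install should show an empty guest list and a network named default):

```shell
# Confirm libvirtd is answering and list any defined guests
virsh list --all
# Show libvirt networks - a fresh install has a NAT'd network
# named 'default', backed by the virbr0 bridge
virsh net-list --all
```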

Networking

By default, you'll end up with a new virbr0 interface, onto which all virtual machines will be deployed. These VMs will not be accessible to the outside world, which is sometimes fine for a private lab environment, but otherwise fairly useless. In order for all your VMs to be accessible, you need to create a new bridge interface and move your server's IP address onto it. Once done, VMs can also be provisioned onto the same interface, and will be as accessible as your KVM server.

Ubuntu 18.04

Network config is achieved via Netplan. For an existing config such as that shown below...

root@kvm-svr:~# cat /etc/netplan/50-cloud-init.yaml.orig
# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    ethernets:
        eno1:
            addresses:
            - 192.168.1.50/24
            gateway4: 192.168.1.1
            nameservers:
                addresses:
                - 192.168.1.1
                - 8.8.8.8
                - 8.8.4.4
                search:
                - vwiki.co.uk
    version: 2

...add a new bridge network and move most of the config over...

root@kvm-svr:~# cat /etc/netplan/50-cloud-init.yaml
# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    ethernets:
        eno1:
            dhcp4: no
            dhcp6: no
    bridges:
        br0:
            interfaces:
            - eno1
            dhcp4: no
            addresses:
            - 192.168.1.50/24
            gateway4: 192.168.1.1
            nameservers:
                addresses:
                - 192.168.1.1
                - 8.8.8.8
                - 8.8.4.4
                search:
                - vwiki.co.uk

...and apply using...

netplan apply
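If you're working over SSH, netplan try is a safer first step than applying blind: it activates the config but rolls it back automatically unless you confirm within the timeout. Afterwards, check that the host address has moved onto the bridge (a sketch; interface names are as per the example config above):

```shell
# Apply with automatic rollback if connectivity is lost
netplan try
# Make the change permanent
netplan apply
# The host address should now sit on br0, with eno1 enslaved to it
ip -br addr show br0
```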

Configuration

Allow VNC Console Access

By default, virtual machine consoles are bound to 127.0.0.1 on the KVM host, so you can't connect from a remote machine using VNC to see a VM's console (unless you tunnel through SSH). Bind to 0.0.0.0 to enable remote console access. Note that each VM's configuration also needs to be changed to listen on 0.0.0.0.

  1. Edit /etc/libvirt/qemu.conf and uncomment the following line
    • vnc_listen = "0.0.0.0"
  2. Restart libvirtd
    • systemctl restart libvirtd
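The per-VM side of this lives in the graphics element of the guest's definition, edited with virsh edit followed by the VM's name. A minimal sketch of what the element might look like once set to listen on all addresses (the port/autoport values here are illustrative defaults, not a requirement):

```xml
<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'>
  <listen type='address' address='0.0.0.0'/>
</graphics>
```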