VMware Introduction


Physical vs. Virtual

All servers have one purpose: to serve. To do this, one or more applications or services need to run, and it is these applications or services that provide the server’s function to its clients.

Physical Server

All standard physical servers contain the same layers of components. The Hardware provides the physical resources to be used. Running on this, the Basic Input/Output System (BIOS) provides an interface to the hardware. Device Drivers then provide common, standard interfaces into the hardware, which allow the server’s Operating System to run.


Finally, through the layers of abstraction that the Operating System and the layers below provide, the Application(s) (or Services) are able to run without any specific requirement or knowledge of the hardware they’re ultimately running on.


Virtual Server (Virtual Machine)

Virtual servers must provide the same service to their clients, and so run the same Applications as their physical counterparts. For this to be achieved easily, they need the same Operating System to run on, so that the software interfaces to the system's resources that the applications are designed for are identical in either the physical or virtual scenario.

So, despite being run on a Virtual server, an application still needs access to physical resources, which are provided by server Hardware. Therefore a Virtual server contains the same layers as a standard Physical server; however, to enable the virtualisation, extra layers are required.


In order to virtualise a Physical server, its resources need to be shared so that more than one Virtual Server (also known as a Virtual Machine) can run on it. This divvying up of resources is done by the VM Kernel, which runs directly on top of the Hardware’s BIOS and balances the requests of the Virtual Machines (VM’s) it’s serving. [See below for more on the VM Kernel]

Each Virtual Machine starts with a virtual BIOS, which to the layers above appears just as any normal BIOS running directly on a piece of hardware would. It’s nothing more than a standard Phoenix BIOS, altered by VMware to run on the VM Kernel instead of physical hardware.

As with a Physical server, on top of the BIOS run the Device Drivers, which provide a common interface for the Operating System to utilise the system’s resources. The Drivers are common device manufacturers’ drivers which are, for example, installed automatically by a Windows 2003 install. However, for performance reasons, a few of the Drivers have VMware-modified versions which can be installed once the guest OS is up and running.

At the Operating System layer any signs of the virtualisation below are non-existent; the common interfaces provided by the drivers to the OS mean that it runs exactly as it would on a Physical server. Therefore the Application(s) running on the OS are oblivious to any differences in the way the physical resources below are being managed.
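
The virtual hardware that the virtual BIOS, Drivers and OS see is defined per-VM in its configuration (.vmx) file. As a rough illustration only (the VM name, file name and sizes below are hypothetical, not taken from any particular system), a minimal configuration looks something like this:

  displayName = "testvm01"
  guestOS = "winnetstandard"
  memsize = "1024"
  numvcpus = "1"
  scsi0.present = "TRUE"
  scsi0:0.fileName = "testvm01.vmdk"
  ethernet0.present = "TRUE"
  ethernet0.networkName = "VM Network"

Each entry describes a piece of the virtual hardware presented to the guest: the amount of RAM (in MB), the number of virtual CPUs, the virtual disk file, and the port group the virtual network card connects to.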


VM Kernel

The VM Kernel is a lightweight layer of software (reportedly about 200k lines of code) written especially for the task of managing the demands of VM’s on the physical hardware below. As it sits directly on the hardware’s BIOS, without manufacturers’ drivers in between, the VMK has to be designed for every variety of hardware it runs on, and so runs only on a restricted list of manufacturers’ kit.

Often referred to as a ‘bare-metal’ architecture, the VMKernel has direct control of all the hardware it’s running on. It also has no user interface; any configuration changes have to be made via a separate management console. The VMK’s lack of complexity, and its focus on only the job in hand, should therefore mean efficient use of the hardware’s performance.


VMware Virtual Infrastructure

Host Server (ESX)

A VMware Virtual Machine host server is known as an ESX server. This has two components: the VM Kernel, which provides the hardware virtualisation (overview above), and the Service Console, which is used to manage and report on the operation of the VM Kernel.

Service Console

The Service Console is a butchered version of Red Hat Linux that runs on the ESX server in a similar fashion to a VM (although it cannot be managed in the same way); it is best considered as a management agent. The Service Console is not required for VM operation; it can be restarted without affecting the guest systems’ operation or performance.

It is responsible for management of the local ESX and its config. As the VMkernel can't be accessed directly, the Service Console is the only route into it. Whilst VMware recommend against installing any additional software into the Service Console, it's not unusual to install some, often to allow features such as hardware alerting, by installing DELL OpenManage or HP Insight agents into the Console (note that since VI4 this is less of a requirement, as hardware monitoring is now available via CIM).
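
As a rough sketch only (the package file name is a placeholder, and the exact agent and procedure depend on the hardware vendor's documentation), installing such an agent is just a normal Linux package installation performed from the Service Console:

  # Install the vendor's hardware agent package (copied onto the Service Console first)
  rpm -ivh <vendor-hardware-agent>.rpm
  # List the packages now installed, to confirm the agent is present
  rpm -qa | sort | less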



Virtual Infrastructure Management

Management of an estate of ESX’s is done by a central management server, known as the Virtual Centre. This standalone system connects to the Service Console of each ESX so that it can manage/monitor them. The Virtual Centre keeps a copy of each ESX’s configuration (in an SQL database), and provides centralised licensing.

To manage the entire estate of ESX servers, an engineer should connect to the Virtual Centre, either via a web browser or via the VMware Virtual Infrastructure Client software. Standard NT user account permissions apply to connections to the Virtual Centre server.

To manage an individual ESX server, an engineer can connect to the Service Console of the ESX, again either via a web browser or via the VMware Virtual Infrastructure Client software. Unix user account permissions configured in the Service Console’s Unix OS apply.
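
As a sketch of this kind of direct management (the host name is hypothetical; the commands are standard ESX 3.x Service Console tools), an engineer could open an SSH session to the Service Console and inspect the host from the command line:

  ssh user@esx01.example.local   # log on to the Service Console (hypothetical host name)
  vmware -v                      # report the ESX version and build
  vmware-cmd -l                  # list the VMs registered on this ESX
  esxcfg-nics -l                 # list the physical network cards and their link state
  esxcfg-vswitch -l              # list the virtual switches and their port groups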

To manage a Virtual Machine, connect via standard remote management software (RDP, VNC, PCAnywhere). If this is not possible, then VM’s can be connected to via the Virtual Centre or Service Console, but this is not recommended for day-to-day operations.



ESX Clusters

Overview

ESX's can be grouped together into clusters. A cluster of ESX's provides a pool of resources for use by Virtual Machines. Therefore, rather than assigning VM's to specific, individual ESX's and having to manage that arrangement, VM's are simply assigned to a cluster. This simplifies the administration of a Virtual Infrastructure, and also makes available other features such as High Availability and DRS.


For a number of ESX's to be set up as a cluster, they must be configured identically. The whole point of a cluster of ESX's is that you don't care exactly which ESX a particular VM is running on. Therefore if there were a difference in the network connections available between two ESX's, for example, it would be possible for a virtual machine to be running happily on ESX A one day, then fail to work when running on ESX B. The similarity requirements for an ESX to be in a cluster (and so allow a VM to run on any of the ESX's in the cluster) are:

  • Shared SAN storage - This allows a virtual machine's hard drive(s) to be located centrally, independent of the ESX it's running on
  • Identical network config - Both in terms of network connection (eg to the 172.17 or 192.168 networks) and naming (even if two ESX's have a connection to the 172.17 network, if it's called Priv172.17 on one ESX and, say, Private_172.17 on the other, the VM won't work when moved from one ESX to the other); a quick way of checking this is sketched below.
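
As a rough way of checking the naming consistency between two hosts (host names are hypothetical; esxcfg-vswitch is run on each ESX's Service Console), list the virtual switches and port groups on both and compare the output - any mismatch in port group names will show up in the diff:

  # Capture the virtual switch / port group configuration of each ESX
  ssh user@esxa.example.local esxcfg-vswitch -l > esxa-vswitch.txt
  ssh user@esxb.example.local esxcfg-vswitch -l > esxb-vswitch.txt
  # Differences (including port group naming mismatches) appear here
  diff esxa-vswitch.txt esxb-vswitch.txt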


HA - High Availability

HA takes advantage of the fact that, with a cluster, your virtual machines can run on any of the ESX's within that cluster. Therefore, should an ESX fail, there's no reason why you shouldn't be able to automatically power up the VM's that were on the failed ESX on other ESX's within the cluster - which is exactly what HA does.

It works by installing an HA agent within the Service Console on each ESX in the cluster. Should the agents detect that an ESX has failed, all the VM's that were on that ESX are powered up on the remaining ESX's.


This failover is not seamless. A VM on the failed ESX will fail itself, but will be powered on again once HA has detected the failure and moved the VM to another ESX. In testing I performed some time back (when ESX v3.0.1 was current), it took approx 1 min for an ESX fail to be detected. It then took between 2 and 5 mins for the VM to be fully booted up. In a test of failing 12x VM's over to a DELL 1950 2x dual 2GHz server, it took 5min 45sec for all VM's to fail over, power up, and applications to start and accept incoming customer connections. The speed of the failover, once the fail has been detected, is limited by the spare CPU capacity on the ESX's the VM's are being powered up on (obviously the time it takes for your application to start/recover will vary).
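
A simple way of measuring this sort of outage yourself (the guest's host name is hypothetical) is to run a timestamped ping loop against a VM from another machine, and note the gap between the last response before the ESX failure and the first response once HA has restarted the VM:

  # Print a timestamped up/DOWN line every second; the run of DOWN lines is the outage
  while true; do
    if ping -c 1 -W 1 testvm01.example.local > /dev/null 2>&1; then
      echo "$(date '+%H:%M:%S') up"
    else
      echo "$(date '+%H:%M:%S') DOWN"
    fi
    sleep 1
  done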


DRS - Distributed Resource Scheduler

Work in progress - to be completed


VMotion

VMotion furthers the functionality provided by having similar ESX's in clusters. It's worth noting that two ESX's that you wish to VMotion between do not need to be in the same cluster; it's just that many of the requirements for a cluster are also requirements for enabling VMotion. In RTS's implementation you cannot VMotion between two ESX's that are not in the same cluster.

Using VMotion it's possible to move a VM from one ESX to another without powering the VM off. In practical terms, when hot migrating or VMotion'ing a VM, you'll notice one ping drop whilst this occurs, but otherwise the migration is completely transparent.

In addition to the requirements for ESX's to be in a cluster, the ESX's involved in a VMotion must have identical CPU architectures. When a Windows system starts up, it adapts itself to the instruction set available on the CPU it's running on; therefore you can't migrate a running VM between two otherwise identical ESX's if they have different CPU models.
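
A quick way of checking this in advance (host names are hypothetical; the commands are run against each ESX's Service Console) is to compare the CPU model and feature flags that each host reports, which will show up any difference between the two:

  # Compare the CPU model and instruction-set flags of the two ESX's
  ssh user@esxa.example.local "egrep 'model name|flags' /proc/cpuinfo | sort -u"
  ssh user@esxb.example.local "egrep 'model name|flags' /proc/cpuinfo | sort -u"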


Glossary

Common VMware terminology; for more detailed explanations, see above.

  • DRS - Distributed Resource Scheduler, ESX cluster load balancing functionality
  • ESX - Physical host server; the VMware ESX software runs directly on top of the server's hardware and allows multiple Virtual Machines to be run on a single piece of physical server hardware
  • HA - High Availability, the ESX failover technology
  • VC - Virtual Centre, the central management server of a VI
  • VI - Virtual Infrastructure, the collection of components that provide virtualisation, and so provide the services that allow VM's to run
  • VIM - Virtual Infrastructure Manager, new name for the Virtual Centre
  • VM - Virtual Machine, one or more VM's can run on an ESX server. The VM is the virtual server that mimics a physical server and runs applications.
  • VMotion - The ability to hot migrate a virtual machine from one ESX to another, without shutting it down

Further Reading

  • VMware Overview - An introduction to virtualisation by the software vendor
  • VMware VI - An overview of the features of VI