VCP5
This page shows my crib notes created in order to obtain the VCP5 certification. I've been using the technology for a while now, and I've only bothered to document the gaps in my own knowledge, so it is by no means concise and unlikely to cover much that would also apply to previous certs. Having taken the exam, it's as rigorous as ever: you need to understand the fundamentals and older features (those that were also applicable in VCP3 and VCP4) as much as the new features.
VMware's What's New course covers the following...
- List and describe key enhancements in vSphere 5.0
- Upgrade a deployment from vSphere 4.x to vSphere 5.0
- Use Image Builder to modify and export an image profile as part of Auto Deploy
- Use Auto Deploy to Install a stateless ESXi host
- Manage a version 8 virtual machine with the next-generation Web-based VMware vSphere Client
- List and describe key networking enhancements, including the ESXi firewall and new features in vNetwork distributed switches
- Upgrade and manage a VMware vSphere VMFS5 datastore
- Understand and configure policy-driven storage management
- Create a datastore cluster and configure Storage DRS
- Configure a VMware High Availability cluster based on the new Fault Domain Manager agents
- Use the Linux-based VMware vCenter Server Appliance
Plan, Install, Configure and Upgrade vCenter Server and VMware ESXi
Install and Configure vCenter Server
vCentre Server Appliance
- Deployed via OVF, requires 7 GB disk (max 80 GB)
- Supports up to 5 ESXs / 50 VMs with embedded db
vCentre Server Components
- Web Client (Server)
- Update Manager - requires a 32-bit DSN for its database
- ESXi Dump Collector - requires IPv4
- Syslog Collector - requires IPv4
- Auto Deploy - deploys ESXi image direct to ESX memory
- Authentication Proxy - allows ESXi servers to join AD domain
vCentre Availability
- Must meet any availability requirements needed to support Auto Deploy
- Must run on an ESXi host that is not itself provisioned by Auto Deploy
Client version use cases
- vSphere Client - Primary vSphere management tool (for infrastructure sys admins)
- vSphere Web Client - Primarily intended for inventory display, and VM deployment/configuration (for infrastructure mgrs and consumers)
Install and Configure VMware ESXi
vSphere Auto Deploy Image Builder
- Creates ESXi images in VIB packages
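As a rough illustration, building a custom image with the Image Builder PowerCLI cmdlets looks something like this (depot path, profile and package names are made up):

```powershell
# Load an ESXi offline bundle into the Image Builder session
Add-EsxSoftwareDepot C:\depot\ESXi500-offline-bundle.zip

# Clone a stock profile so it can be customised, then add an extra VIB
New-EsxImageProfile -CloneProfile "ESXi-5.0.0-standard" -Name "ESXi5-custom" -Vendor "lab"
Add-EsxSoftwarePackage -ImageProfile "ESXi5-custom" -SoftwarePackage "net-mydriver"

# Export as an installable ISO (use -ExportToBundle for a zip depot instead)
Export-EsxImageProfile -ImageProfile "ESXi5-custom" -ExportToIso -FilePath C:\depot\esxi5-custom.iso
```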
Auto Deploy rules
- Rules identify an ESX by...
- boot MAC (as seen in PXE boot)
- SMBIOS info
- BIOS UUID
- Vendor / Model
- Fixed DHCP IP address
- Active Rule Set - used to match ESXs at boot time
- Working Rule Set - used to test compliance prior to adding a rule to the Active Rule Set (see the PowerCLI sketch below)
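A minimal PowerCLI sketch of the rule workflow (rule name, item and pattern are made up):

```powershell
# Create a rule mapping an image profile to hosts matching a vendor pattern
New-DeployRule -Name "lab-esxi5" -Item "ESXi5-custom" -Pattern "vendor=Dell Inc."

# -NoActivate adds the rule to the Working Rule Set only, for compliance testing;
# omit it to add the rule to the Active Rule Set as well
Add-DeployRule -DeployRule "lab-esxi5" -NoActivate
```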
On an ESX's 1st boot
- ESX boots, gets IP from DHCP server
- ESX downloads and runs gPXE from TFTP server
- gPXE gets image from Auto-Deploy server over HTTP
- ESX boots the image and registers with the vCenter that the Auto Deploy server is registered with
- If Host Profile requires user entry, ESX will boot into Maintenance Mode
- If ESX is part of DRS cluster, ESX may receive VM's as soon as online
Memory Compression Cache
- `Mem.MemZipEnable` - set to `0` to disable
- `Mem.MemZipMaxPct` - changes the percentage of VM memory allowed to be compressed (default is 10%)
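These can be changed per host from PowerCLI as well as the GUI; a sketch (host name is made up):

```powershell
$esx = Get-VMHost esx01.sandfordit.local

# Disable memory compression entirely
Get-AdvancedSetting -Entity $esx -Name Mem.MemZipEnable | Set-AdvancedSetting -Value 0 -Confirm:$false

# Allow up to 20% of a VM's memory to be compressed (default is 10%)
Get-AdvancedSetting -Entity $esx -Name Mem.MemZipMaxPct | Set-AdvancedSetting -Value 20 -Confirm:$false
```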
Plan and Perform Upgrades of vCenter Server and VMware ESXi
ESX
- ESXs can be upgraded from any ESX v4 via the following (the upgraded ESX retains MS-DOS based partitioning)
- Update Manager
- ISO installer
- Script
- Or installed fresh via the following (ESX will use GUID partition tables)
- esxcli
- Auto Deploy (not an upgrade, ESX is re-provisioned from scratch)
- ISO installer
VMFS
- v3 -> v5 Upgrade...
- Requires exclusive access to datastore
- No VM downtime required, but will retain MBR and block format
- Original block size is retained
vCenter Appliance
- Import new Appliance
- Open browsers to both old and new appliances
- Set old vCentre as source, and new appliance as destination
- Exchange appliance keys
- Once completed, old appliance is shutdown
Secure vCenter Server and ESXi
Role | Type | ESX / VC | Description |
---|---|---|---|
No Access | System | ESX & VC | No view or do. Can be used to stop permissions propagating. |
Read Only | System | ESX & VC | View all except Console, no do. |
Administrator | System | ESX & VC | Full rights |
VM User | Sample | VC only | VM start/stop, console, insert media (CD) |
VM Power User | Sample | VC only | As user plus hardware and snapshot operations |
Resource Pool Admin | Sample | VC only | Akin to an OU admin, full rights for child objects. Cannot create new VMs without additional VM and datastore privileges. |
Datastore Consumer | Sample | VC only | Allows creation of VMDKs or snapshots in a datastore (additional VM privileges needed to action) |
Network Consumer | Sample | VC only | Allows assignment of VMs to networks (additional VM privileges needed to action) |
vCentre Access
- Disabled logged-in users lose access at the next validation period (default is 24 hrs)
ESXi Firewall
- New for v5
- Rule set XML files found in `/etc/vmware/firewall/`
- Should be edited via the GUI
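The rule sets can also be inspected and toggled from PowerCLI (host name is made up):

```powershell
$esx = Get-VMHost esx01.sandfordit.local

# List rule sets and whether they're enabled
Get-VMHostFirewallException -VMHost $esx | Select-Object Name, Enabled

# Enable a specific rule set
Get-VMHostFirewallException -VMHost $esx -Name "SSH Client" | Set-VMHostFirewallException -Enabled:$true
```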
ESXi and Active Directory
- ESX FQDN must match AD domain
- ESX and AD should be synced to same time
- ESX's DNS must be able to resolve the AD domain
- Add to an OU container using the domain name format, e.g. `sandfordit.local/SiliconOU1/MondeoOU2`
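Joining a host to AD can be scripted; a sketch (the credentials are obviously made up):

```powershell
$esx = Get-VMHost esx01.sandfordit.local

# Join the host to the AD domain
Get-VMHostAuthentication -VMHost $esx |
  Set-VMHostAuthentication -JoinDomain -Domain sandfordit.local -Username administrator -Password 'Passw0rd!'
```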
Identify vSphere Architecture and Solutions
vSphere Editions
Edition | vRAM | vCPU/VM | Features |
---|---|---|---|
Essentials | 32 GB | 8 vCPU | Max 3 ESX's with 2 CPUs each, managed by vCentre for Essentials |
Essentials Plus | 32 GB | 8 vCPU | + HA, Data Recovery, vMotion |
Standard | 32 GB | 8 vCPU | HA, Data Recovery, vMotion |
Enterprise | 64 GB | 8 vCPU | + DRS, DPM, SvMotion, FT, Hot Add, vShield |
Enterprise Plus | 96 GB | 32 vCPU | + dvSwitch, IO Control, Host Profiles, Auto-Deploy, sDRS |
Desktop | n/a | ?? | VDI only |
Public Clouds
- Service Offerings
- Basic - Pilot projects etc, pay-for-use
- Committed - Reserved allocation (subscription for resources)
- Dedicated - Dedicated hardware, aka virtual private cloud
Private Cloud
- Workloads
- Transient - temporary, suitable for pay as you go allocation pool model
- Highly Elastic - dynamic (customer led), suitable for allocation pool model
- Infrastructure - core services such as AD, print, email, etc, suitable for reservation pool model
- Service Tiers
- Premium - infrastructure and high perf workloads
- Standard - highly elastic workloads
- Development
Plan and Configure vSphere Networking
Configure vNetwork Standard Switches
- vSS - vNetwork Standard Switch (vSwitch)
Configure vNetwork Distributed Switches
- vDS - vNetwork Distributed Switch (dvSwitch)
Load Balancing
- Route based on originating virtual port
- Route based on IP hash - Must be used if using EtherChannel
- Route based on source MAC hash
- Route based on physical NIC load - Must be used for I/O Control
- Use explicit failover order
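Setting the policy from PowerCLI, e.g. IP hash on a standard switch port group (host and port group names are made up):

```powershell
# LoadBalanceIP requires EtherChannel on the physical switch side
Get-VirtualPortGroup -VMHost (Get-VMHost esx01) -Name "VM Network" |
  Get-NicTeamingPolicy |
  Set-NicTeamingPolicy -LoadBalancingPolicy LoadBalanceIP
```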
Configure vSS and vDS Policies
- TCP Segmentation Offload - enabled by replacing the VM's NIC with Enhanced vmxnet
- NetFlow is available for dvSwitch (vDS requires its own IP address)
Plan and Configure vSphere Storage

Configure Shared Storage for vSphere
- Max no of storage paths per ESX - 1024
- Max no of VMFS per ESX - 256
- Max no of software FCoE adapters - 4
iSCSI Adapters
- Software Adapter - ESX software implementation using VMkernel
- Dependent Hardware Adapter - Standard NIC with iSCSI offload functionality, reliant on ESX's networking
- Independent Hardware Adapter - Full iSCSI card, not reliant on ESX networking
Boot from software iSCSI is now supported. If the adapter is disabled, it re-enables itself for boot.
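Enabling the software adapter and adding a discovery target from PowerCLI (host and target address are made up):

```powershell
$esx = Get-VMHost esx01.sandfordit.local

# Turn on the software iSCSI initiator
Get-VMHostStorage -VMHost $esx | Set-VMHostStorage -SoftwareIScsiEnabled $true

# Add a send-targets (dynamic) discovery address to the software HBA
$hba = Get-VMHostHba -VMHost $esx -Type iScsi | Where-Object { $_.Model -match "Software" }
New-IScsiHbaTarget -IScsiHba $hba -Address 10.0.0.50 -Type Send
```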
Storage I/O Control
Allows VMs to be assigned storage I/O shares and IOPS limits that take effect when datastore latency becomes too great (30 ms by default)
- Requirements
- Datastores must be managed by single VC
- Fibre Channel, iSCSI, or NFS storage (RDMs not supported)
- Single-extent datastores only
- SAN's with automated storage tiering must be certified compatible with Storage I/O
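Enabling SIOC on a datastore via PowerCLI (datastore name is made up; this leaves the default 30 ms congestion threshold in place):

```powershell
# Enable Storage I/O Control on the datastore
Get-Datastore "ds01" | Set-Datastore -StorageIOControlEnabled $true
```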
Configure the Storage Virtual Appliance for vSphere
Allows the local storage of 2 or 3 ESXs to be pooled together into a virtual datastore
- VSA - vSphere Storage Appliance - deployed to each ESX providing storage to VSA cluster
- Cannot be deployed to an ESX that also hosts vCentre
- Has 3 IP addresses (mgmt, datastore, backend/replication)
- ESX
- Must have static IP on same subnet as vCentre
- Disks
- 2, 4 or 8 identical disks (same model & capacity, JBOD not supported)
- Must be in RAID10 on ESX, then RAID1'ed by VSA to another ESX
- VMs
- Cannot be running on the ESX prior to VSA cluster creation
- Cannot swap memory when running on VSA datastores, so use memory reservations
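Setting a full memory reservation on such a VM (VM name and size are made up):

```powershell
# Reserve the VM's configured memory so it never needs to swap
Get-VM "vm01" | Get-VMResourceConfiguration |
  Set-VMResourceConfiguration -MemReservationMB 4096
```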
Create and Configure VMFS and NFS Datastores
VMFS5
- Supports > 2TB per extent, max 64 TB datastore size
- Supports max 64TB RDM
- Max VMDK size is 2TB less 512B
- 1 MB block size
- Uses hardware assisted ATS (Atomic Test and Set) locking where available
- Locks a disk sector rather than the whole disk, instead of using SCSI reservations
VMFS3 -> VMFS5 Upgrade
- VM downtime not required
Storage DRS - sDRS
- 32 datastores per cluster
Datastore removal
- Removes from all ESX
- Deletes any contents on DS (destroys the VMFS)
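From PowerCLI this is a one-liner, hence the caution (names are made up):

```powershell
# Destroys the VMFS and everything on it - make sure the datastore really is empty
Remove-Datastore -Datastore (Get-Datastore "old-ds") -VMHost (Get-VMHost esx01) -Confirm:$false
```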
Deploy and Administer Virtual Machines and vApps
Create and Deploy Virtual Machines
VM hardware version is now v8, but may be v7 if

- VM migrated from ESX v4
- A virtual disk created on ESX v4 is used in the VM
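Upgrading the hardware version from PowerCLI (a sketch; assumes Set-VM exposes -Version in this PowerCLI release, and the VM is powered off):

```powershell
# Upgrade virtual hardware to version 8
Set-VM -VM (Get-VM "vm01") -Version v8 -Confirm:$false
```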
Create and Deploy vApps
Manage Virtual Machine Clones and Templates
Administer Virtual Machines and vApps
Memory Overhead - amount is determined by
- No of vCPU's
- Configured memory
SplitRx
- Allows multiple ESX CPU's to process incoming network traffic for a VM
- Requires VMXNET3 adapter
- Enabled in VMX/Config Parameters: `ethernetX.emuRxMode` set to `1` (e.g. use `ethernet1` for NIC 1)
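The parameter can be written without editing the VMX by hand (VM name is made up):

```powershell
# Enable SplitRx on the VM's second NIC (ethernet1)
New-AdvancedSetting -Entity (Get-VM "vm01") -Name "ethernet1.emuRxMode" -Value 1 -Confirm:$false
```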
VMX Swap - VM Executable swap
- Allows part of the VM memory overhead to be swapped to disk (saves 10 - 50 MB of ESX RAM per machine)
vMotion
- vCentre tries to reserve resources on source and dest ESX's, shared by concurrent vMotions
- Migrations always proceed on versions newer than ESX 4.0 (on older versions the migration fails if resources cannot be reserved)
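For reference, a vMotion from PowerCLI is just (names are made up):

```powershell
# Live-migrate a powered-on VM to another host in the same cluster
Move-VM -VM (Get-VM "vm01") -Destination (Get-VMHost esx02.sandfordit.local)
```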
Establish and Maintain Service Levels
Create and Configure VMware Clusters
High Availability

Now uses the Fault Domain Manager (FDM) agent rather than AAM
- 1 master host, remaining ESX's are all slaves
- Uses both network and datastore heartbeating as a fallback should network isolation occur
- Reliant on vCentre for reconfiguration (only)
- Communication by IP (not DNS!)
- Logs to standard syslog
In the case of network problems an ESX will be one of:
- Isolated ESX - No mgmt network connectivity to any other ESXs in cluster
- Partitioned ESX - Lost connectivity to master, but can see other ESX's
- One ESX in partition will be elected as master
EVC
- All ESX CPU's must be from single vendor
- AMD-V / VT and NX / XD must be enabled in BIOS
- All ESX's must be ESX v3.5U2 or later
- All ESX's must be enabled for vMotion
- All VMs with a feature set greater than EVC must be evacuated/powered down from the cluster (in practice, just power down all VMs!)
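Enabling an EVC baseline from PowerCLI (a sketch; -EVCMode only appears on Set-Cluster in later PowerCLI releases, and the mode key is illustrative):

```powershell
# Set the cluster's EVC baseline; all hosts must satisfy the requirements above
Set-Cluster -Cluster (Get-Cluster "prod") -EVCMode "intel-westmere" -Confirm:$false
```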
Plan and Implement VMware Fault Tolerance
Unsupported
- Snapshots
- Therefore also Storage vMotion, VM backups, and Linked Clones
- SMP - Only single vCPU supported
- Physical RDM (virtual OK)
- Thin disks (must be eager-zeroed/support cluster features)
- NPIV
- IP v6
- NIC Passthrough
- Hot-plug devices
- Paravirtualised guests
- CD-ROM attached ISO not on shared storage
Limitations / Requirements
- DRS EVC must be enabled
- VM's can be hosted, but won't be automated by DRS in non-EVC cluster
- Max 4 FT VMs per ESX
  - Override via `das.maxftvmsperhost`
- Max 64 GB per VM
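The override is an HA advanced option, settable from PowerCLI (cluster name and value are made up):

```powershell
# Raise the per-host FT VM limit from the default of 4
New-AdvancedSetting -Entity (Get-Cluster "prod") -Type ClusterHA -Name das.maxftvmsperhost -Value 8 -Confirm:$false
```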