VCP4

* [http://communities.vmware.com/community/vmtn/certedu/certification/vcp VMware VCP Forum]
* [http://mylearn.vmware.com/lcms/mL_faq/2726/VMware%20Certified%20Professional%20on%20vSphere%204%20Blueprint%208.13.09.pdf VCP4 Blueprint]
* VMware vSphere Documentation: [http://www.vmware.com/support/pubs/vs_pages/vsp_pubs_esx40_vc40.html PDF] [http://pubs.vmware.com/vsp40 HTML] (HTML version is good for searching)
* [http://thinkvirtually.co.uk/#/overview/4535842936 Scott Vessey]
* [http://www.simonlong.co.uk/blog/vcp-vsphere-upgrade-study-notes/ Simon Long blog]


|-
|  || <code> vmkcore </code> || 1.25GB || Core debugging dumps
|}
'''Optional Partitions'''
{|cellpadding="4" cellspacing="0" border="1"
|- style="background-color:#bbddff;"
! Mount              !! Type !! Size  !! Description 
|-
| <code>/home</code> || ext3 || 512MB  || ESX user accounts
|-
| <code>/tmp</code>  || ext3 || 1024MB || Temp files
|-
| <code>/usr </code> || ext3 ||        || User programs and data (3rd party apps)
|-
|<code>/var/log</code>|| ext3 || 2000MB || Log files
|-
|}


| Standard        || Essentials + HA  
|-
| Advanced        || Standard + 12 cores/CPU, Hot Add, FT, VMotion, vShield, Data Recovery
|-
| Enterprise      || Advanced + 6 cores/CPU, Storage vMotion, Data Recovery, DRS
|-
| Enterprise Plus || 12 cores/CPU, 8way vSMP, 1TB/ESX, vNetwork Distributed Switch, Host Profiles, 3rd Party Multipathing
|-
| vCentre Foundation || Fully featured, but limited to managing 3 ESX's


== Upgrade VMware ESX/ESXi ==
'''Prerequisites'''
* <code> /boot </code> partition must be at least 100 MB
'''Pre-Upgrade Backups'''
* Backup ESX Host Config


== Secure VMware ESX/ESXi ==
* ESX firewall - primary source of protection for Service Console
* Weak ciphers are disabled, all communications are secured by SSL certificates
* Tomcat Web service has been modified to limited functionality (to avoid general Tomcat vulnerabilities)
* Insecure services (eg FTP, Telnet) are not installed, and ports blocked by the firewall
* TCP 443 - Service Console, vmware-authd
* TCP 902 - VMkernel, vmkauthd
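The Service Console firewall can be inspected and adjusted from the command line; a minimal sketch, run on the ESX Service Console (the service and port names here are illustrative, not from a real config):

```shell
# Query the Service Console firewall and open what is needed
esxcfg-firewall -q                        # show current settings and open ports
esxcfg-firewall -e sshClient              # enable a named service
esxcfg-firewall -o 8080,tcp,in,webAgent   # open a port: port,protocol,direction,name
```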
== Install VMware ESX/ESXi on SAN Storage ==
'''Boot from SAN'''
* HBA must be located in lowest PCI bus and slot number
* HBA BIOS must designate the FC card as a boot controller
* The FC card must initiate a primitive connection to the boot LUN
* Each ESX must have its own boot LUN
** SAN storage paths can be masked using <code> esxcli corestorage claimrule </code> (PSA claim) rules to select which available LUN's are claimed
* iSCSI must use a hardware initiator (impossible to boot using software iSCSI)


'''FC boot from SAN set-up'''
* Configure/create boot LUN
* Enable boot from HBA in system's BIOS and in HBA's BIOS
* Select the LUN to boot from in HBA BIOS


'''iSCSI boot from SAN set-up'''
* Configure storage ACL so that only correct ESX has access to correct boot LUN (must be LUN 0 or LUN 255)
* Enable boot from HBA in system's BIOS and in HBA's BIOS
* Configure target to boot from in HBA's BIOS


== Identify vSphere Architecture and Solutions ==
* Server
* ESXi (standalone, free)
'''vSphere Features etc'''
* '''VMsafe''' - API to enable 3rd party security products to control and protect
** Memory and CPU - Introspection of VM memory pages and CPU states
** Networking - Filtering of packets inside hypervisor (vSwitches)
** Process Execution - In guest (VM), in process API's effectively allowing monitoring and control of process execution (agent-less AV)
** Storage - VM disks can be mounted etc (agent-less AV)
* '''vShield''' - Appliance utilising VMsafe to provide security and compliance


'''Datacentre Solutions'''
* VLAN - Traditional single VLAN assignment to a port group
* VLAN Trunking - Multiple VLAN's can be assigned to a dv Port Group
* Private VLAN - Allows Private VLANs
** VLANs over a VLAN, the VLAN equivalent of subnetting.  Hosts on differing sub-VLANs may be in the same IP range, but must go via a router to communicate.
** Primary (promiscuous) VLAN uplinks to rest of network
** See http://blog.internetworkexpert.com/2008/07/14/private-vlans-revisited/
 
'''Traffic Shaping'''
* Can be applied to both inbound and outbound traffic
* Can be set per dvPort (dvPort Group must allow overrides)


'''Service Console ports'''
= Configure ESX/ESXi Storage =
== Configure FC SAN Storage ==
'''Storage Device Naming'''
* '''Name''' - A ''friendly'' name based on storage type and manufacturer.  User changeable, kept consistent across ESX's
* '''Identifier''' - Globally unique, human unintelligible.  Persistent through reboot and consistent across ESX's
* '''Runtime Name''' - The first path to a device, created by host and not persistent.  Of format '''<code>vmhba#:C#:T#:L#</code>'''
** vmhba - Storage Adapter number
** C - Storage Channel number (software iSCSI uses this to represent multiple paths to same target)
** T - Target
** L - LUN (provided by storage system; if only 1 LUN it's always L0)
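Since the runtime name is colon-delimited, its components can be recovered with a plain shell split; a minimal sketch (vmhba2:C0:T1:L4 is a hypothetical path, not from a real host):

```shell
# Split a runtime name into adapter, channel, target and LUN
runtime_name="vmhba2:C0:T1:L4"   # hypothetical example path
IFS=: read -r adapter channel target lun <<EOF
$runtime_name
EOF
echo "adapter=$adapter channel=$channel target=$target lun=$lun"
# -> adapter=vmhba2 channel=C0 target=T1 lun=L4
```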
'''PSA - Pluggable Storage Architecture'''
* Manages storage multipathing
* Native Multipathing Plugin (NMP) provided by default, can have sub-plugins (can be either VMware or 3rd party)
** Storage Array Type Plugin (SATP) - unique to a particular array (effectively an array driver, like a standard PC hardware driver)
** Path Selection Plugin (PSP) - default assigned by NMP based on the SATP
* Multipathing Plugin (MPP) - 3rd party, can run alongside or in addition to Native Multipathing Plugin


'''PSA operations'''
* Handles physical path discovery and removal
* Provides logical device and physical path I/O stats


'''MPP / NMP operations'''
** Depending on storage device, perform specific actions necessary to handle path failures and I/O cmd retries
* Support management tasks, EG abort or reset of logical devices
'''PSP types'''
Default (VMware) PSP Types (3rd party PSP's can be installed)...
* '''Most Recently Used''' - Good for either Active/Active or Active/Passive
* '''Fixed''' - Can cause path thrashing when used with Active/Passive
* '''Round Robin''' - Load balanced
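The active PSP for a device can be checked and changed from the CLI; a minimal sketch (the <code><device_id></code> placeholder stands for a real naa identifier):

```shell
# List devices and their current path selection policy
esxcli nmp device list
# Switch a device to Round Robin; device ID is a placeholder
esxcli nmp device setpolicy -d <device_id> --psp VMW_PSP_RR
```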


'''PSA Claim Rules'''
* Used to define which paths should be used by a particular plugin module


'''LUN Masking'''
* Used to prevent an ESX from seeing LUN's or using individual paths to a LUN
* Add and load a claim rule to apply
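The add-and-load steps above can be sketched with the PSA claim rule commands (the rule number, the vmhba1:C0:T0 path, LUN 20 and the <code><device_id></code> placeholder are all illustrative):

```shell
# Mask LUN 20 on path vmhba1:C0:T0 using the MASK_PATH plugin
esxcli corestorage claimrule add -r 110 -t location -A vmhba1 -C 0 -T 0 -L 20 -P MASK_PATH
esxcli corestorage claimrule load                    # load the rule into the VMkernel
esxcli corestorage claiming reclaim -d <device_id>   # unclaim existing paths so the rule applies
esxcli corestorage claimrule list                    # verify the rule is loaded
```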


== Configure iSCSI SAN Storage ==
'''''Most of the FC SAN Storage info above is also applicable here'''''
'''CHAP Authentication'''
* '''One-way CHAP''' - Unidirectional, iSCSI target authenticates the initiator (ESX) only
* '''Mutual CHAP''' - Bidirectional, ESX also authenticates the iSCSI target (''Software iSCSI only'')
'''Multipathing (software iSCSI)'''
# Set-up a vSwitch with two VMkernel ports and two uplinks
# For each VMkernel port, edit ''NIC Teaming'' | ''Override vSwitch failover order'' to bind one uplink each
# Connect the iSCSI initiator to each VMkernel port
#* <code> esxcli swiscsi nic add -n <vmk_port_name> -d <vmhba_no> </code>
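Step 3 above, for a host with two VMkernel ports bound to the software initiator, might look like this (the vmk1/vmk2 port names and vmhba33 adapter are illustrative):

```shell
# Bind both VMkernel ports to the software iSCSI adapter
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
esxcli swiscsi nic list -d vmhba33   # verify both bindings
```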
== Configure NFS Datastores ==
* ESX supports NFS v3 on TCP ''only''
* ESX's manage exclusive access to files via <code> .lc-XXX </code> lock files
* To use jumbo frames, enable on the vSwitch and the VMkernel port(s)
** Frames up to 9kB are supported
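In vSphere 4 a jumbo-frame VMkernel port has to be created from the CLI; a minimal sketch (the switch name, port group name and addresses are illustrative):

```shell
# Raise the vSwitch MTU, then create a VMkernel port with a 9000-byte MTU
esxcfg-vswitch -m 9000 vSwitch1
esxcfg-vmknic -a -i 10.0.0.10 -n 255.255.255.0 -m 9000 "NFS-VMkernel"
```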


== Configure and Manage VMFS Datastores ==
{|cellpadding="4" cellspacing="0" border="1"
|- style="background-color:#bbddff;"
! Plug-In              !! Description  
|-
| Storage Monitoring    || [Default]    
|-
| Service Status        || [Default] Displays health of services on the VC
|-
| Hardware Status       || [Default] Displays ESX hardware health (CIM monitoring)
|-
| Update Manager        || 
* (Win) Sysprep must be installed on VC
* (Linux) Guest OS must have Perl installed
'''vCenter Maps'''
* Provide an overview of relationships between
** Host Resources
** VM Resources
** Datastore Resources


== Configure Access Control ==
* VM Hardware v4 runs on ESX3 or ESX4, v7 runs on ESX4 only
* VM's running MS Windows should have SCSI TimeoutValue changed to 60 secs to allow Windows to tolerate delayed SAN I/O from path failovers
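Inside the Windows guest this is a single registry value; a sketch of setting it from cmd.exe (VMware Tools may set this for you):

```shell
# In the Windows guest: set the disk I/O timeout to 60 seconds (reboot to apply)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeoutValue /t REG_DWORD /d 60 /f
```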


'''Disk Types'''
* Thick - traditional (can convert to Thin via Storage vMotion)
* Thin - minimal space usage (conversion to Thick requires VM downtime)
Can't specify for NFS stores (controlled by the NFS server itself)
 


'''Memory'''
* Minimum of 4MB, increments of 4MB
* Maximum for best performance - threshold over which a VM's performance will be degraded if memory size exceeded (varies dependent on load on ESX)


'''SCSI Controller Types'''
** Only VM h/ware v7 with Win2k3, Win2k8 or Red Hat Ent v5
** Not supported with
*** Record/replay
*** Fault Tolerance
*** MSCS Clustering (so also SQL clusters)
*** ''[Boot disks - not an issue since ESX4.0 Update 1]''


'''N-port ID virtualization (NPIV)'''
* ESX's HBA's must support NPIV
* NPIV enabled VM's are assigned 4 NPIV WWN's
* Storage vMotion is not supported


'''vNICs'''
* '''VMXNET2''' - Aka enhanced VMXNET, supports jumbo frames and TSO, limited OS support
* '''VMXNET3''' - Performance driver, only supported on VM hardware v7, and limited OS's
'''VMDirectpath'''
Allows direct access to PCI devices (aka passthrough devices); using it inhibits
* VMotion
* Hot add
* Suspend and resume, Record and replay
* Fault Tolerance
* HA
An orange icon when trying to add a passthrough device indicates that the device has changed and the ESX must be bounced before it can be used.
'''VMI Paravirtualisation'''
Enables improved performance for supported VM (Linux only currently), by allowing VM to communicate with hypervisor
* Uses 1 of VM's 6 vPCI slots
* Must be supported by ESX (VM can be cold migrated to unsupported ESX, with perf hit)
'''vCenter Converter'''
Features/functionality...
* P2V
* Convert/import other format VM's (eg VMware Workstation, MS Virtual Server)
* Convert 3rd party backup or disk images
* Restore VCB backup images
* Export VM's to other VMware VM formats
* Make VM's bootable
* Customise existing VM's
Requires the following ports
* Windows: TCP 139, 443, 445, 902
* Linux: TCP 22, 443, 902, 903
'''Guided Consolidation'''
* Active Domains - Systems being analysed need to be a member of an active domain
* Add to Analysis to analyse new systems, max 100 concurrent, can take 1hr for new analysis to start
* Confidence - Degree to which VC collected perf data, and how good a candidate
** High confidence is shown after 24 hrs, if workload varies over greater interval, further analysis is required
* New VM's disk size = Amount used on physical x 1.25
* Convert manually to be able to specify new VM's settings
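The sizing rule above works out as used space x 1.25; a trivial worked example (40GB used is a hypothetical figure):

```shell
# Guided Consolidation disk sizing: new VM disk = physical used space x 1.25
used_gb=40   # hypothetical figure for a candidate physical server
awk -v u="$used_gb" 'BEGIN { printf "%g\n", u * 1.25 }'   # prints 50
```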


== Manage Virtual Machines ==
* VM hardware version is v7
* vCPU's can only be added if "CPU Hot Plug" is enabled in the VM's options
'''Virtualized Memory Management Unit (MMU)'''
* Maintains mapping between VM's guest OS ''physical'' memory to underlying hosts ''machine'' memory
* Intercepts VM instructions that would manipulate memory, so that CPU's MMU is not updated directly.


== Deploy vApps ==
vApp - An enhanced resource pool to run a contained group of VM's, can be created under the following conditions
* A host is selected in the inventory that is running ESX3 or later
* A DRS-enabled cluster is selected in the inventory
* Name up to 80 chars
'''Deploying an OVF template'''
* Non-OVF format appliances can be converted using the VMware vCentre Converter module
** Transient - VCentre manages a pool of available IP's
** DHCP


= Manage Compliance =


== Establish and Apply ESX Host Profiles ==
* ESX 4 supported only
* Used to ensure consistent configuration across ESX's
* Create a profile from a reference ESX, then apply to Cluster or ESX
** Reference ESX can be changed
** Profile can be refreshed (if reference ESX config has been updated)
* ESX must be in maintenance mode for a profile to be applied (resolve compliance discrepancies)
* Can be imported/exported as .vpf files


= Establish Service Levels =
'''Prerequisites'''
* Cluster
** HA and host monitoring must be enabled (if monitoring isn't enabled new Secondary VM's aren't created)
** Host certificate checking must be enabled
* ESX's
** Host BIOS must have Hardware Virtualisation (eg Intel VT) enabled
* VM's
** VMDK files must be thick provisioned with Cluster Features enabled and not Physical RDM
** Run supported OS (generally all, may require reboot to enable FT)


# Turn on FT for appropriate VM's


'''Not Protected''' caused by Secondary VM not running, because...
* VM's are still starting up
* Secondary VM cannot start, possible causes...
** No suitable host on which to start secondary
** A fail-over has occurred but FT network link down, so new secondary not started
* Can quiesce guest file system (req VMTools) to ensure consistent disk state
* Independent disks are excluded from snapshots (Persistent writes to disk, Nonpersistent writes to redo log, discarded at power off)
* '''Migrating a VM with Snapshots'''
** Cannot use Storage VMotion
** All VM files must reside in single directory if being moved by cold storage migration
** Reversion after VMotion may cause VM to fail - only occurs if discrepancies in ESX hardware


'''VMware Data Recovery'''
* Max 8 VM backups can run concurrently
* Max 2 backup destinations used concurrently
* Max 100 VM's per backup appliance
* Backup's won't start if ESX CPU usage >90%


'''VMware Data Recovery Setup'''


[[Category:VMware]]
[[Category:VCP]]
