
What’s new in VMware vSphere 5.5

Written by Marco on . Posted in VMware

At VMworld 2013, VMware announced the new VMware vSphere ESXi 5.5 release, along with several other new products. This blog post describes the new features of vSphere ESXi 5.5 and VMware vCenter Server 5.5.

vSphere ESXi Hypervisor Enhancements

Hot-Pluggable PCIe SSD Devices

The ability to hot-swap traditional storage devices such as SATA and SAS hard disks on a running vSphere host has been a huge benefit to systems administrators in reducing the amount of downtime for virtual machine workloads. Solid-state disks (SSDs) are becoming more prevalent in the enterprise datacenter, and this same capability has been expanded to support SSD devices. As with SATA and SAS hard disks, users can now hot-add or hot-remove an SSD device while a vSphere host is running, and the underlying storage stack detects the operation.
Support for Reliable Memory Technology

The most critical component of vSphere ESXi Hypervisor is the VMkernel, a purpose-built operating system (OS) for running virtual machines. Because the VMkernel runs directly in memory, a memory error affecting it can potentially crash the hypervisor and the virtual machines running on the host. To provide greater resiliency and to protect against memory errors, vSphere ESXi Hypervisor can now take advantage of Reliable Memory Technology, a CPU hardware feature through which a region of memory is reported from the hardware to vSphere ESXi Hypervisor as being more “reliable.” This information is then used to optimize the placement of the VMkernel and other critical components such as the initial thread, hostd and the watchdog process, and it helps guard against memory errors.
Enhancements to CPU C-States

In vSphere 5.1 and earlier, the balanced policy for host power management leveraged only the performance state (P-state), which kept the processor running at a lower frequency and voltage. In vSphere 5.5, the deep processor power state (C-state) is also used, providing additional power savings. Reduced power consumption can also bring a performance benefit: turbo mode frequencies on Intel chipsets can be reached more quickly while other CPU cores in the physical package are in deep C-states.
Virtual Machine Enhancements

Virtual Machine Compatibility with VMware ESXi 5.5

vSphere 5.5 introduces a new virtual machine compatibility with several new features such as LSI SAS support for Oracle Solaris 11 OS, enablement for new CPU architectures, and a new advanced host controller interface (AHCI). This new virtual-SATA controller supports both virtual disks and CD-ROM devices that can connect up to 30 devices per controller, with a total of four controllers. This enables a virtual machine to have as many as 120 disk devices, compared to the previous limit of 60.

The following table summarizes the virtual machine compatibility levels supported in vSphere 5.5.

vSphere Release              Virtual Machine Hardware Version   Compatibility
Virtual Infrastructure 3.5   Version 4                          VMware ESX/ESXi 3.5 and later
vSphere 4.0                  Version 7                          VMware ESX/ESXi 4.0 and later
vSphere 4.1                  Version 7                          VMware ESX/ESXi 4.0 and later
vSphere 5.0                  Version 8                          VMware ESXi 5.0 and later
vSphere 5.1                  Version 9                          VMware ESXi 5.1 and later
vSphere 5.5                  Version 10                         VMware ESXi 5.5 and later

Expanded vGPU Support

vSphere 5.1 was the first vSphere release to provide support for hardware-accelerated 3D graphics—virtual graphics processing unit (vGPU)—inside of a virtual machine. That support was limited to only NVIDIA-based GPUs. With vSphere 5.5, vGPU support has been expanded to include both Intel- and AMD-based GPUs. Virtual machines with graphic-intensive workloads or applications that typically have required hardware-based GPUs can now take advantage of additional vGPU vendors, makes and models. See the VMware Compatibility Guide for details on supported GPU adapters.

There are three supported rendering modes for a virtual machine configured with a vGPU: automatic, hardware and software. Virtual machines can still leverage VMware vSphere vMotion® technology, even across a heterogeneous mix of vGPU vendors, without any downtime or interruptions to the virtual machine. If automatic mode is enabled and a GPU is not available at the destination vSphere host, software rendering is automatically enabled. If hardware mode is configured and a GPU does not exist at the destination vSphere host, the vSphere vMotion migration is not attempted.
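The rendering-mode rules above can be summarized as a small decision function. This is an illustrative sketch only (the function and return values are our own names, not VMware code):

```python
# Sketch of the vMotion admission logic for vGPU rendering modes described
# above. Names and return values are hypothetical, for illustration only.

def vmotion_decision(render_mode: str, dest_has_gpu: bool) -> str:
    """Return the outcome for a vMotion of a vGPU-enabled VM.

    render_mode: 'automatic', 'hardware' or 'software'
    dest_has_gpu: whether the destination host has a compatible GPU
    """
    if render_mode == "software":
        return "migrate"  # software rendering works on any host
    if render_mode == "automatic":
        # fall back to software rendering when no GPU exists at the destination
        return "migrate" if dest_has_gpu else "migrate-with-software-rendering"
    if render_mode == "hardware":
        # hardware mode requires a GPU; otherwise the migration is not attempted
        return "migrate" if dest_has_gpu else "refuse"
    raise ValueError(f"unknown rendering mode: {render_mode}")
```

For example, a VM in automatic mode moving to a GPU-less host keeps running with software rendering, while the same move is refused outright in hardware mode.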

vGPU support can be enabled using both the vSphere Web Client and VMware Horizon View™ for Microsoft Windows 7 OS and Windows 8 OS. The following Linux OSs also are supported: Fedora 17 or later, Ubuntu 12 or later and Red Hat Enterprise Linux (RHEL) 7. Controlling vGPU use in Linux OSs is supported using the vSphere Web Client.
Graphic Acceleration for Linux Guests

With vSphere 5.5, graphic acceleration is now possible for Linux guest OSs. Leveraging a GPU on a vSphere host can help improve the performance and scalability of all graphics-related operations. In providing this support, VMware also is the first to develop a new guest driver that accelerates the entire Linux graphics stack for modern Linux distributions. VMware also is contributing 100 percent of the Linux guest driver code back to the open-source community. This means that any modern GNU/Linux distribution can package the VMware guest driver and provide out-of-the-box support for accelerated graphics without any additional tools or package installation.

The following Linux distributions are supported:

  • Ubuntu: 12.04 and later
  • Fedora: 17 and later
  • RHEL 7

With the new guest driver, modern Linux distributions are enabled to support technologies such as the following:

  • OpenGL 2.1
  • DRM kernel mode setting
  • Xrandr
  • XRender
  • Xv

VMware vCenter Server Enhancements

vCenter Single Sign-On

vCenter Single Sign-On server 5.5, the authentication services of the vSphere management platform, can now be configured to connect to its Microsoft SQL Server database without requiring the customary user IDs and passwords, as found in previous versions. This enables customers to maintain a higher level of security when authenticating with a Microsoft SQL Server environment that also houses the vCenter Single Sign-On server database. The only requirement is that the virtual machine used for vCenter Single Sign-On server be joined to a Microsoft Active Directory domain. In this configuration, vCenter Single Sign-On server interacts with the database using the identity of the machine where it is running.
vSphere Web Client

The platform-agnostic vSphere Web Client, which replaces the traditional vSphere Client™, continues to exclusively feature all-new vSphere 5.5 technologies and to lead the way in VMware virtualization and cloud management technologies.

Increased platform support – With vSphere 5.5, full client support for Mac OS X is now available in the vSphere Web Client. This includes native remote console for a virtual machine. Administrators and end users can now access and manage their vSphere environment using the desktop platform they are most comfortable with. Fully supported browsers include both Firefox and Chrome.

Improved usability experience – The vSphere Web Client includes the following key new features that improve overall usability and provide the administrator with a more native application feel:

  • Drag and drop – Administrators now can drag and drop objects from the center panel onto the vSphere inventory, enabling them to quickly perform bulk actions. Default actions begin when the “drop” occurs, helping accelerate workflow actions. This enables administrators to perform “bulk” operations with ease. For example, to move multiple virtual machines, grab and drag them to the new host to start the migration workflow.
  • Filters – Administrators can now select properties on a list of displayed objects and selected filters to meet specific search criteria. Displayed objects are dynamically updated to reflect the specific filters selected. Using filters, administrators can quickly narrow down to the most significant objects. For example, two checkbox filters can enable an administrator to see all virtual machines on a host that are powered on and running Windows Server 2008.
  • Recent items – Administrators spend most of their day working on a handful of objects. The new recent-items navigation aid enables them to navigate with ease, typically by using one click between their most commonly used objects.

vCenter Server Appliance

The popularity of vCenter Server Appliance has grown over the course of its previous releases. Although it offers functionality matching the installable Windows version of vCenter Server, its widespread adoption has been limited. One area of concern has been the embedded database, which was previously targeted at small datacenter environments. With the release of vSphere 5.5, vCenter Server Appliance now uses a reengineered, embedded vPostgres database that can support as many as 500 vSphere hosts or 5,000 virtual machines. With new scalability maximums and simplified vCenter Server deployment and management, vCenter Server Appliance offers an attractive alternative to the Windows version of vCenter Server.
vSphere App HA

In versions earlier than vSphere 5.5, it was possible to enable virtual machine monitoring, which checks for the presence of “heartbeats” from VMware Tools™ as well as I/O activity from the virtual machine. If neither of these is detected in the specified amount of time, vSphere HA resets the virtual machine. In addition to virtual machine monitoring, users can leverage third-party application monitoring agents or create their own agents to work with vSphere HA using the VMware vSphere Guest SDK.

In vSphere 5.5, VMware has simplified application monitoring for vSphere HA with the introduction of vSphere App HA. This new feature works in conjunction with vSphere HA host monitoring and virtual machine monitoring to further improve application uptime. vSphere App HA can be configured to restart an application service when an issue is detected. It is possible to protect several commonly used, off-the-shelf applications.

vSphere HA can also reset the virtual machine if the application fails to restart.
Architecture Overview

vSphere App HA leverages VMware vFabric™ Hyperic® to monitor applications. Deploying vSphere App HA begins with provisioning two virtual appliances per vCenter Server: vSphere App HA and vFabric Hyperic.

vSphere App HA virtual appliance stores and manages vSphere App HA policies. vFabric Hyperic monitors applications and enforces vSphere App HA policies, which are discussed in greater detail in the following section. It is possible to deploy these virtual appliances to a cluster other than the one running the protected applications; for example, a management cluster.

After the simple process of deploying the vFabric Hyperic and vSphere App HA virtual appliances, vFabric Hyperic agents are installed in the virtual machines containing applications that will be protected by vSphere App HA. These agents must be able to reliably communicate with the vFabric Hyperic virtual appliance.
vSphere App HA Policies

vSphere App HA policies are easily configured in the administration section of the vSphere Web Client. Policies define items such as the number of times vSphere App HA will attempt to restart a service, the number of minutes it will wait for the service to start, and the options to reset the virtual machine if the service fails to start and to reset the virtual machine when the service is unstable. Policies also can be configured to trigger vCenter Server alarms when the service is down and the virtual machine is reset. Email notification is also available.
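The policy items above can be pictured as a simple data structure driving a remediation loop. This is an illustrative sketch under assumed names, not the vSphere App HA API:

```python
# Hypothetical sketch of how an App HA policy like the one described above
# could drive remediation. Field and function names are ours, not VMware's.
from dataclasses import dataclass

@dataclass
class AppHAPolicy:
    restart_attempts: int      # how many times to try restarting the service
    wait_minutes: int          # how long to wait for the service to come up
    reset_vm_on_failure: bool  # reset the VM if the service never starts

def remediate(policy: AppHAPolicy, try_restart) -> str:
    """try_restart() returns True if the service came back up."""
    for _ in range(policy.restart_attempts):
        if try_restart():
            return "service restarted"
    if policy.reset_vm_on_failure:
        return "vm reset"   # vSphere HA resets the virtual machine
    return "alarm only"     # e.g. trigger a vCenter Server alarm or email
```

A policy with two restart attempts and VM reset enabled would, for a service that never recovers, exhaust both attempts and then fall through to the VM reset.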
vSphere HA and vSphere Distributed Resource Scheduler Virtual Machine–Virtual Machine Affinity Rules

vSphere DRS can configure DRS affinity rules, which help maintain the placement of virtual machines on hosts within a cluster. Various rules can be configured. One such rule, a virtual machine–virtual machine affinity rule, specifies whether selected virtual machines should be kept together on the same host or kept on separate hosts.

A rule that keeps selected virtual machines on separate hosts is called a virtual machine–virtual machine antiaffinity rule and is typically used to manage the placement of virtual machines for availability purposes.

In versions earlier than vSphere 5.5, vSphere HA did not detect virtual machine–virtual machine antiaffinity rules, so it might have violated one during a vSphere HA failover event. vSphere DRS, if fully enabled, evaluates the environment, detects such violations and attempts a vSphere vMotion migration of one of the virtual machines to a separate host to satisfy the virtual machine–virtual machine antiaffinity rule. In a large majority of environments, this operation is acceptable and does not cause issues. However, some environments might have strict multitenancy or compliance restrictions that require consistent virtual machine separation. Another use case is an application with high sensitivity to latency; for example, a telephony application, where migration between hosts might cause adverse effects.

To address the need for maintaining placement of virtual machines on separate hosts—without vSphere vMotion migration—after a host failure, vSphere HA in vSphere 5.5 has been enhanced to conform with virtual machine–virtual machine antiaffinity rules. Application availability is maintained by controlling the placement of virtual machines recovered by vSphere HA without migration. This capability is configured as an advanced option in vSphere 5.5.
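Conceptually, the rule-aware failover placement described above amounts to rejecting any candidate host that already runs a peer from the same anti-affinity group. A minimal sketch, with names of our own choosing:

```python
# Illustrative sketch: choosing a failover host for a VM while honoring a
# VM-VM anti-affinity rule, as vSphere HA in 5.5 now does. Not VMware code.

def pick_failover_host(vm, hosts, placements, antiaffinity_groups):
    """placements: {vm: host}. antiaffinity_groups: list of sets of VMs
    that must stay on separate hosts. Returns a compliant host or None."""
    for host in hosts:
        compliant = True
        for group in antiaffinity_groups:
            if vm in group:
                # reject the host if any peer from the group already runs there
                peers_on_host = {v for v, h in placements.items()
                                 if h == host and v in group and v != vm}
                if peers_on_host:
                    compliant = False
                    break
        if compliant:
            return host
    return None  # no compliant host available for this VM
```

With vm-a on host1 and vm-b on host2 in the same anti-affinity group, a failed vm-c from that group can only be restarted on a third host.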
vSphere Big Data Extensions

vSphere Big Data Extensions (BDE) is a new addition in vSphere 5.5 for VMware vSphere Enterprise Edition™ and VMware vSphere Enterprise Plus Edition™. BDE is a tool that enables administrators to deploy and manage Hadoop clusters on vSphere from a familiar vSphere Web Client interface. It simplifies the provisioning of the infrastructure and software services required for multinode Hadoop clusters. BDE is based on technology from Project Serengeti, the VMware open-source virtual Hadoop management tool.

BDE is available as a plug-in for the vSphere Web Client. Administrators can deploy virtual Hadoop clusters through BDE, customizing variables such as number of Hadoop nodes in the cluster, size of Hadoop virtual machines, and choice of local or shared storage. BDE supports the deployment of all major Hadoop distributions, as well as ecosystem components such as Apache Pig, Apache Hive and Apache HBase.

BDE performs the following functions on the virtual Hadoop clusters it manages:

  • Creates, deletes, starts, stops and resizes clusters
  • Controls resource usage of Hadoop clusters
  • Specifies physical server topology information
  • Manages the Hadoop distributions available to BDE users
  • Automatically scales clusters based on available resources and in response to other workloads on the vSphere cluster

Using BDE, administrators can provide multiple tenants with elastic, virtual Hadoop clusters that scale as needed to share resources efficiently. Another benefit of Hadoop on vSphere is that critical services in these Hadoop clusters can be protected easily using vSphere HA and VMware vSphere Fault Tolerance (vSphere FT).

BDE offers ease of management and operational simplicity by automating many of these tasks for virtual Hadoop clusters.
vSphere Storage Enhancements

Support for 62TB VMDK

VMware is increasing the maximum size of a virtual machine disk file (VMDK) in vSphere 5.5. The previous limit was 2TB minus 512 bytes. The new limit is 62TB. The maximum size of a virtual Raw Device Mapping (RDM) is also increasing, from 2TB minus 512 bytes to 62TB. Virtual machine snapshots also support this new size for delta disks that are created when a snapshot is taken of the virtual machine.

This new size meets the scalability requirements of all application types running in virtual machines.
MSCS Updates

Microsoft Cluster Service (MSCS) continues to be deployed in virtual machines for application availability purposes. VMware is introducing a number of additional features to continue supporting customers that implement this application in their vSphere environments. In vSphere 5.5, VMware supports the following features related to MSCS:

  • Microsoft Windows Server 2012
  • Round-robin path policy for shared storage
  • iSCSI protocol for shared storage
  • Fibre Channel over Ethernet (FCoE) protocol for shared storage

Historically, shared storage was supported in MSCS environments only if the protocol used was Fibre Channel (FC). With the vSphere 5.5 release, this restriction has been relaxed to include support for FCoE and iSCSI.

With regard to the introduction of round-robin support, a number of changes were made concerning the SCSI locking mechanism used by MSCS when a failover of services occurs. To facilitate this new path policy, changes have been implemented that make it irrelevant which path is used to place the SCSI reservation; any path can free the reservation.
16Gb E2E Support

In vSphere 5.0, VMware introduced support for 16Gb FC HBAs; however, these HBAs were throttled down to run at 8Gb. In vSphere 5.1, VMware introduced support to run these HBAs at 16Gb, but there was no support for full, end-to-end 16Gb connectivity from host to array. To get full bandwidth, a number of 8Gb connections had to be created from the switch to the storage array.

In vSphere 5.5, VMware introduces 16Gb end-to-end FC support. Both the HBAs and array controllers can run at 16Gb as long as the FC switch between the initiator and target supports it.
PDL AutoRemove

Permanent device loss (PDL) is a situation that can occur when a disk device either fails or is removed from the vSphere host in an uncontrolled fashion. PDL detects if a disk device has been permanently removed—that is, the device will not return—based on SCSI sense codes. When the device enters this PDL state, the vSphere host can take action to prevent directing any further, unnecessary I/O to this device. This alleviates other conditions that might arise on the host as a result of this unnecessary I/O. With vSphere 5.5, a new feature called PDL AutoRemove is introduced. This feature automatically removes a device from a host when it enters a PDL state.

Because vSphere hosts have a limit of 255 disk devices per host, a device that is in a PDL state can no longer accept I/O but can still occupy one of the available disk device spaces. Therefore, it is better to remove the device from the host.

PDL AutoRemove occurs only if there are no open handles left on the device. The auto-remove takes place when the last handle on the device closes. If the device recovers, or if it is re-added after having been inadvertently removed, it is treated as a new device.
vSphere Replication Interoperability

In vSphere 5.0, there were interoperability concerns with VMware vSphere Replication and VMware vSphere Storage vMotion®, as well as with VMware vSphere Storage DRS™. There were considerations to be made at both the primary site and the replica site.

At the primary site, because of how vSphere Replication works, there are two separate cases of support for vSphere Storage vMotion and vSphere Storage DRS to be considered:

  • Moving a subset of the virtual machine’s disks
  • Moving the virtual machine’s home directory

This works fine in the first case—moving a subset of the virtual machine’s disks with vSphere Storage vMotion or vSphere Storage DRS. From the vSphere Replication perspective, the vSphere Storage vMotion migration is a “fast suspend/resume” operation, which vSphere Replication handles well.

The second case—a vSphere Storage vMotion migration of a virtual machine’s home directory—creates the issue with primary site migrations. In this case, the vSphere Replication persistent state files (.psf) are deleted rather than migrated. vSphere Replication detects this as a power-off operation, followed by a power-on of the virtual machine without the “.psf” files. This triggers a vSphere Replication “full sync,” wherein the disk contents are read and checksummed on each side, a fairly expensive and time-consuming task. vSphere 5.5 addresses this scenario.

At the primary site, migrations now move the persistent state files that contain pointers to the changed blocks along with the VMDKs in the virtual machine’s home directory, thereby removing the need for a full synchronization. This means that replicated virtual machines can now be moved between datastores, by vSphere Storage vMotion or vSphere Storage DRS, without incurring a penalty on the replication. Because the .psf files are retained, the virtual machine can be moved to the new datastore or directory while keeping its current replication data, and the “fast suspend/resume” handling of individual VMDK moves continues to apply.

At the replica site, the interaction is less complicated because vSphere Storage vMotion is not supported for the replicated disks. vSphere Storage DRS cannot detect the replica disks: They are simply “disks”—there is no “virtual machine.” While the .vmx file describing the virtual machine is there, the replicated disks are not actually attached until test or failover occurs. Therefore, vSphere Storage DRS cannot move these disks because it only detects registered virtual machines. This means that there are no low-level interoperability problems, but there is a high-level one because it is preferable that vSphere Storage DRS detect the replica disks and be able to move them out of the way if a datastore is filling up at the replica site. This scenario remains the same in the vSphere 5.5 release. With vSphere Replication, moving the target virtual machines is a manual procedure:

  • Pause replication. Do not “stop” it, which deletes the replica VMDK.
  • Clone the VMDK into another directory, using VMware vSphere Command-Line Interface.
  • Reconfigure vSphere Replication to point to the new target.
  • Wait for a full sync to complete.
  • Delete the old replica files.
vSphere Replication Multi-Point-in-Time (MPIT) Snapshot Retention

vSphere Replication through vSphere 5.1 worked by creating a redo log on the disk at the target location. When a replication was taking place, the vSphere Replication appliance received the changed blocks from the source host and immediately wrote them to the redo log on the target disk.

Because any given replication has a fixed size according to the number of changed blocks, vSphere Replication could determine when the complete replication bundle (the “lightweight delta”) had been received. Only then did it commit the redo log to the target VMDK file.

vSphere Replication then retained the most recent redo log as a snapshot, which would be automatically committed during a failover. This snapshot was retained in case of error during the commit; this would ensure that during crash or corruption, there was always a “last-known good snapshot” ready to be committed or recommitted. This prevents finding only corrupted data when recovering a virtual machine.

Historically, the snapshot was retained but the redo log was discarded. Each new replication overwrote the previous redo log, and each commit of the redo log overwrote the active snapshot. The recoverable point in time was always the most recent complete replication.

A new feature is introduced in vSphere 5.5 that enables retention of historical points in time. The old redo logs are not discarded; instead, they are retained and cleaned up on a schedule according to the MPIT retention policy.

For example, if the MPIT retention policy dictates that 24 snapshots must be kept over a one-day period, vSphere Replication retains 24 snapshots. If there is a 1-hour recovery-point objective (RPO) set for replication, vSphere Replication likely retains every replication during the day, because roughly 24 replicas will be made during that day.

If, however, a 15-minute RPO is set, approximately 96 replications will take place over a 24-hour period, thereby creating many more snapshots than are required for retention. On the basis of the retention policy cycle (for example, hourly—24 retained per day), vSphere Replication scans through the retained snapshots and discards those deemed unnecessary. If it finds four snapshots per hour (on a 15-minute RPO) but is retaining only one per hour (24-per-day retention policy), it retains the earliest replica snapshot in the retention cycle and discards the rest.
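The pruning logic described above (keep the earliest snapshot in each retention slot, discard the rest) can be illustrated with a short sketch. This is our own simplified model of the behavior, not vSphere Replication code:

```python
# Illustrative sketch of MPIT retention pruning: with a 15-minute RPO and a
# 24-per-day (hourly) retention policy, keep the earliest snapshot per hour.

def prune_snapshots(snapshot_minutes, slot_minutes=60):
    """snapshot_minutes: snapshot timestamps in minutes since midnight.
    Returns the retained timestamps: the earliest one in each slot."""
    kept, seen_slots = [], set()
    for t in sorted(snapshot_minutes):
        slot = t // slot_minutes
        if slot not in seen_slots:   # first (earliest) snapshot in this slot
            seen_slots.add(slot)
            kept.append(t)
    return kept

# A 15-minute RPO produces roughly 96 snapshots per day (4 per hour);
# an hourly retention policy keeps 24 of them.
day = [h * 60 + m for h in range(24) for m in (0, 15, 30, 45)]
retained = prune_snapshots(day)
```

Running this over a full day's worth of 15-minute snapshots leaves exactly one retained snapshot per hour, matching the 24-per-day example in the text.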

The most recent complete snapshot is always retained, to provide the most up-to-date data available for failover. This most recent complete point in time is always used for failover; there is no way to select an earlier point in time for failover. At the time of failover, the replicated VMDK is attached to the virtual machine within the replicated vmx, and the virtual machine is powered on. After failover, an administrator opens the snapshot manager for that virtual machine and selects from the retained historical points in time, as with any other snapshot.
Additional vSphere 5.5 Storage Feature Enhancements

VAAI UNMAP Improvements

vSphere 5.5 introduces a new and simpler VAAI UNMAP/Reclaim command:

# esxcli storage vmfs unmap -l <volume-label> [-n <reclaim-unit-in-VMFS-blocks>]

As before, this command creates temporary files and uses the UNMAP primitive to inform the array that the blocks backing these temporary files can be reclaimed. This enables a correlation between what the array reports as free space on a thin-provisioned datastore and what vSphere reports as free space. Previously, there was a mismatch between the host and the storage regarding the reporting of free space on thin-provisioned datastores.

There are two major enhancements in vSphere 5.5: the reclaim size can be specified in blocks rather than as a percentage value, and dead space can now be reclaimed in increments rather than all at once.
VMFS Heap Improvements

In previous versions of vSphere, there was an issue with VMware vSphere VMFS heap: There were concerns when accessing open files of more than 30TB from a single vSphere host. vSphere 5.0 p5 and vSphere 5.1 Update 1 introduced a larger heap size to address this. In vSphere 5.5, VMware introduces a much improved heap eviction process, so there is no need for the larger heap size, which consumes memory. vSphere 5.5, with a maximum of 256MB of heap, enables vSphere hosts to access all address space of a 64TB VMFS.
vSphere Flash Read Cache

vSphere 5.5 introduces a new storage solution called vSphere Flash Read Cache, a new Flash-based storage solution that is fully integrated with vSphere. Its design is based on a framework that enables the virtualization and management of local Flash-based devices in vSphere.

vSphere Flash Read Cache framework design is based on two major components:

  • vSphere Flash Read Cache infrastructure
  • vSphere Flash Read Cache software

vSphere Flash Read Cache enables the pooling of multiple Flash-based devices into a single consumable vSphere construct called vSphere Flash Resource, which is consumed and managed in the same way CPU and memory are managed in vSphere today.

The vSphere Flash Read Cache infrastructure is responsible for integrating the vSphere hosts’ locally attached Flash-based devices into the vSphere storage stack. This integration delivers a Flash management platform that enables the pooling of Flash-based devices into a vSphere Flash Resource.

vSphere hosts consume the vSphere Flash Resource as vSphere Flash Swap Cache, which replaces the Swap to SSD feature previously introduced with vSphere 5.0.

The vSphere Flash Read Cache software is natively built into the core vSphere ESXi Hypervisor.

vSphere Flash Read Cache provides a write-through cache mode that enhances virtual machine performance without requiring modification of applications or OSs. The cache is transparent: virtual machines cannot detect either the performance enhancement or the allocation of vSphere Flash Read Cache.

The performance enhancements are introduced to virtual machines based on the placement of the vSphere Flash Read Cache, which is situated directly in the virtual machine’s virtual disk data path.

vSphere Flash Read Cache enhances virtual machine performance by accelerating read-intensive workloads in vSphere environments.

The tight integration of vSphere Flash Read Cache with vSphere 5.5 also delivers support and compatibility with vSphere Enterprise Edition features such as vSphere vMotion, vSphere HA and vSphere DRS.
vSphere Networking Enhancements

vSphere 5.5 introduces some key networking enhancements and capabilities to further simplify operations, improve performance and provide security in virtual networks. VMware vSphere Distributed Switch™ is a centrally managed, datacenter-wide switch that provides advanced networking features on the vSphere platform. Having one virtual switch across the entire vSphere environment greatly simplifies management. The following are some of the key benefits of the features in this release:

  • The enhanced link aggregation feature provides choice in hashing algorithms and also increases the limit on number of link aggregation groups.
  • Additional port security is enabled through traffic filtering support.
  • Prioritizing traffic at layer 3 increases quality of service support.
  • A packet-capture tool provides monitoring at the various layers of the virtual switching stack.
  • Other enhancements include improved single-root I/O virtualization (SR-IOV) support and 40Gb NIC support.

Link Aggregation Control Protocol (LACP) Enhancements

LACP support was introduced in vSphere 5.1. LACP is a standards-based method to control the bundling of several physical network links together to form a logical channel for increased bandwidth and redundancy purposes.

It dynamically negotiates link aggregation parameters such as hashing algorithms, number of uplinks, and so on, across vSphere Distributed Switch and physical access layer switches. In case of any link failures or cabling mistakes, LACP automatically renegotiates parameters across the two switches. This reduces the manual intervention required to debug cabling issues.

The following key enhancements are available on vSphere Distributed Switch with vSphere 5.5:

  • Comprehensive load-balancing algorithm support – 22 new hashing algorithm options are available. For example, source and destination IP address and VLAN field can be used as the input for the hashing algorithm.
  • Support for multiple link aggregation groups (LAGs) – 64 LAGs per host and 64 LAGs per VMware vSphere VDS.
  • Template-based configuration workflows – Because LACP configuration is applied per host, configuring it can be very time consuming for large deployments. In this release, new workflows to configure LACP across a large number of hosts are made available through templates.
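The idea behind the hashing-algorithm options is that a hash over chosen packet fields deterministically pins each flow to one uplink in the LAG, so packets of a flow are never reordered across links. A minimal sketch of one such option (source/destination IP plus VLAN ID); the function name and hash choice are ours, not the ESXi implementation:

```python
# Illustrative sketch, not ESXi code: hashing source/destination IP and
# VLAN ID to select a LAG uplink. Any stable hash works for the idea.
import zlib

def select_uplink(src_ip: str, dst_ip: str, vlan: int, uplinks: list) -> str:
    key = f"{src_ip}|{dst_ip}|{vlan}".encode()
    index = zlib.crc32(key) % len(uplinks)  # same flow always maps to the same uplink
    return uplinks[index]
```

Because the hash is deterministic, a given flow always lands on the same uplink, while different flows spread across the group for aggregate bandwidth.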

Traffic Filtering

Traffic filtering is the ability to filter packets based on the various parameters of the packet header. This capability is also referred to as access control lists (ACLs), and it is used to provide port-level security.

The VDS supports packet classification, based on the following three different types of qualifiers:

  • MAC SA and DA qualifiers
  • System traffic qualifiers – vSphere vMotion, vSphere management, vSphere FT, and so on
  • IP qualifiers – Protocol type, IP SA, IP DA, and port number

After the qualifier has been selected and packets have been classified, users have the option to either filter or tag those packets.

When the classified packets have been selected for filtering, users can choose to filter ingress traffic, egress traffic, or traffic in both directions.
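The qualifier-then-action model above can be sketched as a first-match rule table. This is an illustrative model only; the field names and rule shape are ours, not the vSphere API:

```python
# Illustrative sketch of qualifier-based packet classification as described
# above: first matching rule wins. Field names are hypothetical.

def classify(packet: dict, rules: list) -> str:
    """rules: list of (qualifier, action) pairs. A qualifier matches when
    every listed field equals the packet's value for that field."""
    for qualifier, action in rules:
        if all(packet.get(field) == value for field, value in qualifier.items()):
            return action
    return "allow"  # default when no rule matches

rules = [
    ({"src_mac": "00:50:56:aa:bb:cc"}, "drop"),     # MAC SA qualifier
    ({"traffic_type": "vmotion"}, "tag"),           # system traffic qualifier
    ({"protocol": "tcp", "dst_port": 23}, "drop"),  # IP qualifier (block telnet)
]
```

For example, a TCP packet to port 23 matches the third rule and is dropped, vMotion traffic is tagged, and anything unmatched passes through.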
Quality of Service Tagging

Two types of Quality of Service (QoS) marking/tagging common in networking are 802.1p Class of Service (CoS), applied on Ethernet/layer 2 packets, and Differentiated Service Code Point (DSCP), applied on IP packets.

The physical network devices use these tags to identify important traffic types and provide QoS based on the value of the tag. Because business-critical and latency-sensitive applications are virtualized and are run in parallel with other applications on an ESXi host, it is important to enable the traffic management and tagging features on VDS.

The traffic management feature on VDS helps reserve bandwidth for important traffic types, and the tagging feature enables the external physical network to detect the level of importance of each traffic type. It is a best practice to tag traffic near the source to help achieve end-to-end QoS. During network congestion, traffic carrying a higher-priority tag is less likely to be dropped, giving that traffic type a higher QoS.

VMware has supported 802.1p tagging on VDS since vSphere 5.1. The 802.1p tag is inserted in the Ethernet header before the packet is sent out on the physical network. In vSphere 5.5, the DSCP marking support enables users to insert tags in the IP header. IP header–level tagging helps in layer 3 environments, where physical routers function better with an IP header tag than with an Ethernet header tag.

After the packets are classified based on the qualifiers described in the “Traffic Filtering” section, users can choose to perform Ethernet (layer 2) or IP (layer 3) header–level marking. The markings can be configured at the port group level.
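To make the two tag types concrete (an illustration only, not a VMware command): the CoS value occupies three bits of the 802.1Q VLAN tag, while the DSCP value occupies the top six bits of the IP header's DS/ToS byte, so the byte actually seen on the wire is the DSCP value shifted left by two:

```shell
# Illustration: Expedited Forwarding (EF) is DSCP 46; shifted into the
# top six bits of the DS/ToS byte it becomes 0xB8 on the wire.
dscp=46
printf 'DSCP %d -> ToS byte 0x%02X\n' "$dscp" $((dscp << 2))
# → DSCP 46 -> ToS byte 0xB8
```

This is why a packet capture shows EF-marked traffic with a ToS byte of 0xB8 rather than the literal value 46.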
SR-IOV Enhancements

Single-root I/O virtualization (SR-IOV) is a standard that enables one PCI Express (PCIe) adapter to be presented as multiple, separate logical devices to virtual machines. In this release, the workflow of configuring the SR-IOV–enabled physical NICs is simplified. Also, a new capability is introduced that enables users to communicate the port group properties defined on the vSphere standard switch (VSS) or VDS to the virtual functions.

The new control path through VSS and VDS communicates the port group–specific properties to the virtual functions. For example, if promiscuous mode is enabled in a port group, that configuration is then passed to virtual functions, and the virtual machines connected to the port group will receive traffic from other virtual machines.
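As a hedged illustration of the host-side half of SR-IOV setup (not taken from the source document): on drivers that support it, virtual functions are typically exposed through a driver module parameter. The module name (ixgbe) and the max_vfs value below are assumptions that depend on your NIC and driver:

```shell
# Hypothetical example, run in the ESXi shell: expose 4 virtual functions
# on an ixgbe NIC, then reboot the host for the parameter to take effect.
esxcli system module parameters set -m ixgbe -p "max_vfs=4"
esxcli system module parameters list -m ixgbe | grep max_vfs
```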
Enhanced Host-Level Packet Capture

Troubleshooting any network issue requires various sets of tools. In the vSphere environment, the VDS provides standard monitoring and troubleshooting tools, including NetFlow, Switched Port Analyzer (SPAN), Remote Switched Port Analyzer (RSPAN) and Encapsulated Remote Switched Port Analyzer (ERSPAN). In this release, an enhanced host-level packet capture tool is introduced. The packet capture tool is equivalent to the command-line tcpdump tool available on the Linux platform.

The following are some of the key capabilities of the packet capture tool:

  • Available as part of the vSphere platform and can be accessed through the vSphere host command prompt
  • Can capture traffic on VSS and VDS
  • Captures packets at the following levels
    • Uplink
    • Virtual switch port
    • vNIC
  • Can capture dropped packets
  • Can trace the path of a packet with time stamp details
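A hedged sketch of what invoking the tool can look like from the ESXi shell; the flag names below are from memory, so verify them with pktcap-uw -h on your host:

```shell
# Capture traffic at the uplink level into a pcap file (hypothetical
# vmnic name and output paths):
pktcap-uw --uplink vmnic0 -o /tmp/vmnic0.pcap

# Capture at a specific virtual switch port:
pktcap-uw --switchport 33554481 -o /tmp/port.pcap

# Capture packets dropped by the networking stack:
pktcap-uw --capture Drop -o /tmp/drops.pcap
```

The resulting .pcap files can be copied off the host and opened in Wireshark.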

40GB NIC Support

Support for 40GB NICs on the vSphere platform enables users to take advantage of higher bandwidth pipes to the servers. In this release, the functionality is delivered via Mellanox ConnectX-3 VPI adapters configured in Ethernet mode.
Source VMware document: What’s New in VMware vSphere 5.5 Platform

Template options grayed out

Written by Marco on . Posted in VMware

Today I was trying to remove an old template from my home lab. This was not an easy task, because all the options were grayed out. See the screenshot of the vSphere Client.


This is also the case in the vSphere Web Client. See second screenshot.

My home lab is running the latest version of VMware vSphere, 5.1U1. I checked whether the permissions were set correctly; in my case they were. I was using the root account of my vCenter appliance, which also has the rights to remove a template. I tried another account, with the same result.

I started Googling and found some old posts that described the same problem.

Arne Fokkema created a PowerShell script to solve this problem, but in his case it was for a vSphere 4 environment. See http://ict-freak.nl/2009/08/06/vsphere-deploy-template-grayed-out/
I am using a vSphere 5.1 environment, so I tried his solution with some adjustments. Because only one of my templates was misbehaving, I stripped his script down to two separate commands:

Set-Template Win2k8R2_Template -ToVM

and

Set-VM Win2k8R2_Template -ToTemplate

After running these commands I was able to delete my template. Problem solved.

A general system error occurred: pending vpxa update.

Written by Marco on . Posted in VMware

When you upgrade a VMware vSphere environment, the following error can occur just after you have updated the VMware vCenter Server.


In my case this was an upgrade from VMware vCenter 4.1 to VMware vCenter 5.1 with VMware vSphere ESX 4.1 hosts. During the installation of vCenter I checked the option to upgrade the vCenter agents on the hosts automatically. After the installation of vCenter I checked the status of the hosts, and the following error was shown.


This sometimes happens, so I started searching for a solution. I found VMware KB1002672 and VMware KB1003714, which describe my problem.

Reading both KB articles, my first try was to manually restart the vpxa agent and the management agents. See VMware KB1003490.

By the time I had read through the articles, my ESX host was already showing as unreachable in vCenter, although network communication to the host itself was not a problem.

So I started an SSH session and restarted the management agents, followed by the vpxa agent service. Both restarts were successful. I then selected Connect in vCenter; the installation of the new vpxa agent started again, and after a couple of minutes it completed successfully. Problem solved.

If there are still problems, see VMware KB1003714 for a manual installation of the vpxa agent.
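For reference, the restarts described above look roughly like this on classic ESX 4.x, run from the Service Console (commands per VMware KB1003490; they differ on ESXi):

```shell
# Restart the host management agents (hostd), then the vCenter agent (vpxa):
service mgmt-vmware restart
service vmware-vpxa restart
```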

What’s New in VMware vSphere 5.1?

Written by Marco on . Posted in VMware

vSphere 5.1 is VMware’s latest release of its industry-leading virtualization platform. This new release contains the following new features and enhancements:

Compute

  • Larger virtual machines – Virtual machines can grow two times larger than in any previous release to support even the most advanced applications. Virtual machines can now have up to 64 virtual CPUs (vCPUs) and 1TB of virtual RAM (vRAM).
  • New virtual machine format – New features in the virtual machine format (version 9) in vSphere 5.1 include support for larger virtual machines, CPU performance counters and virtual shared graphics acceleration designed for enhanced performance.

Storage

  • Flexible, space-efficient storage for virtual desktop infrastructure (VDI) – A new disk format enables the correct balance between space efficiency and I/O throughput for the virtual desktop.

Network

  • vSphere Distributed Switch – Enhancements such as Network Health Check, Configuration Backup and Restore, Roll Back and Recovery, and Link Aggregation Control Protocol support deliver more enterprise-class networking functionality and a more robust foundation for cloud computing.
  • Single-root I/O virtualization (SR-IOV) support – Support for SR-IOV optimizes performance for sophisticated applications.

Availability

  • vSphere vMotion® – Leverage the advantages of vMotion (zero-downtime migration) without the need for shared storage configurations, by combining vMotion and Storage vMotion into a single migration over the network.
  • vSphere Data Protection – Simple and cost-effective backup and recovery for virtual machines. vSphere Data Protection is a newly architected solution based on EMC Avamar technology that allows administrators to back up virtual machine data to disk without the need for agents and with built-in deduplication. This feature replaces the vSphere Data Recovery product available with previous releases of vSphere.
  • vSphere Replication – vSphere Replication enables efficient array-agnostic replication of virtual machine data over the LAN or WAN. vSphere Replication simplifies management by enabling replication at the virtual machine level, and enables RPOs as low as 15 minutes.
  • Zero-downtime upgrade for VMware Tools – After you upgrade to the VMware Tools available with version 5.1, no reboots will be required for subsequent VMware Tools upgrades.

Security

  • VMware vShield Endpoint™ – Delivers a proven endpoint security solution to any workload with an approach that is simplified, efficient, and cloud-aware. vShield Endpoint enables 3rd party endpoint security solutions to eliminate the agent footprint from the virtual machines, offload intelligence to a security virtual appliance, and run scans with minimal impact.

Automation

  • vSphere Storage DRS™ and Profile-Driven Storage – New integration with VMware vCloud® Director™ enables further storage efficiencies and automation in a private cloud environment.
  • vSphere Auto Deploy™ – Two new methods for deploying new vSphere hosts to an environment make the Auto Deploy process more highly available than ever before.

Management (with vCenter Server)

  • vSphere Web Client – The vSphere Web Client is now the core administrative interface for vSphere. This new flexible, robust interface simplifies vSphere control through shortcut navigation, custom tagging, enhanced scalability, and the ability to manage from anywhere with Internet Explorer or Firefox-enabled devices.
  • vCenter Single Sign-On – Dramatically simplify vSphere administration by allowing users to log in once to access all instances or layers of vCenter without the need for further authentication.
  • vCenter Orchestrator – Orchestrator simplifies installation and configuration of the powerful workflow engine in vCenter Server. Newly designed workflows enhance ease of use, and can also be launched directly from the new vSphere Web Client.

Learn More

For information on upgrading to vSphere 5.1, visit the vSphere Upgrade Center at http://www.vmware.com/products/vsphere/upgrade-center/overview.html

vSphere is also available with the new vCloud suites from VMware. For more information, visit http://www.vmware.com/go/vcloud-suite

 

Source: VMware What’s new in VMware vSphere 5.1 document. Link

Device Manager is running in read-only mode

Written by Marco on . Posted in Microsoft, VMware

Today I was creating a template for my VMware environment. When I tried to change the graphics adapter, I ran into some problems. This is the message Device Manager gave me.

The result is that I cannot change any drivers or devices.

The solution is very simple but not obvious: my computer name was longer than 15 characters. NetBIOS computer names are limited to 15 characters, and it turns out this limit also affects Device Manager. So I changed my computer name to one with fewer than 15 characters, and now everything works normally again.

VMware vSphere 5 links

Written by Marco on . Posted in VMware

Eric Siebert created a list of links about all the VMware vSphere 5 content he could find. See vSphere-Land for the complete list. I’ve selected a few that are important to me or are worth reading.

What’s New Whitepapers

What’s New in VMware vSphere 5.0 Platform (VMware)
What’s New in VMware vSphere 5.0 Storage (VMware)
What’s New in VMware vSphere 5.0 Performance (VMware)
What’s New in VMware vSphere 5.0 Networking (VMware)
What’s New in VMware vSphere 5.0 Availability (VMware)
What’s New in VMware vCloud Director 1.5 (VMware)
What’s New in VMware vCenter Site Recovery Manager 5.0 (VMware)
What’s New in VMware Data Recovery 2.0 (VMware)

Documentation

VMware vSphere product documentation (VMware)
VMware vSphere Basics Guide
vSphere Installation and Setup Guide
vSphere Upgrade Guide
vCenter Server and Host Management Guide
vSphere Virtual Machine Administration Guide
vSphere Host Profiles Guide
vSphere Networking Guide
vSphere Storage Guide
vSphere Security Guide
vSphere Resource Management Guide
vSphere Availability Guide
vSphere Monitoring and Performance Guide
vSphere Troubleshooting
VMware vSphere Examples and Scenarios Guide

Licensing

VMware vSphere 5.0 Licensing, Pricing and Packaging (VMware)
vSphere 5 Purchase Advisor (VMware)
vSphere 5 Entitlement Mapping (VMware)
Upgrading to VMware vSphere License Keys (VMware)
vSphere Desktop – vSphere Edition to host Desktop Virtualization FAQ (VMware)
Understanding the vSphere 5 vRAM Licensing Model (VMware – Rethink IT)

Storage

vSphere 5.0 Storage Features Part 1 – VMFS-5 (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 2 – Storage vMotion (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 3 – VAAI (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 4 – Storage DRS – Initial Placement (VMware vSphere Blog)

Upgrade

VMware vSphere Upgrade Center (VMware)
Ivo Beerens upgrade blog (IvoBeerens.nl)

Best Practices

Performance Best Practices for VMware vSphere 5.0
VMware vSphere vMotion Architecture, Performance and Best Practices in VMware vSphere 5
VMware vSphere 5.0 Upgrade Best Practices

vCenter Server

vSphere 5 – vCenter as a linux VM (ESX Virtualization)
vSphere vCenter 5 Design Considerations (Kendrick Coleman)
vCenter 5 – To Appliance or Not? (Kendrick Coleman)
vSphere 5 vCenter Server Virtual Appliance Quick-Start Guide (VMwire)
VMware vCenter Server Virtual Appliance (VCSA) features and benefits (VMwire)

Books

Announcing Mastering VMware vSphere 5 (Scott Lowe)
Hot of the press: vSphere 5.0 Clustering Technical Deepdive (Yellow Bricks)

Certification & Training

vSphere 5 – New Training Courses: What’s New [V5.0] and VCP5 (NTPro.nl)
VMware vSphere: Install, Configure, Manage [V5.0] Training Course by VMware Education (ESX Virtualization)
VCP5 vs VCP4: Comparing exam blueprints (vmDK)

Download

vSphere 5 download link.

ESX and ESXi logfile locations

Written by Marco on . Posted in VMware

Location of ESX log files

You can see ESX logs:
  • From the Service Console
  • From the vSphere Client connected directly to the ESX host (click Home > Administration > System Logs)
  • From the VMware Infrastructure Client connected directly to the ESX host (click Administration > System Logs)
The vmkernel logs (which log everything related to the kernel/core of the ESX) are located at /var/log/vmkernel
The vmkwarning logs (which log warnings from the vmkernel) are located at /var/log/vmkwarning
The vmksummary logs (which provide a summary of system activities such as uptime, downtime, reasons for downtime) are located at /var/log/vmksummary
The hostd log (the log of the ESX management service) is located at /var/log/vmware/hostd.log
The messages log (which logs activity on the Service Console operating system) is located at /var/log/messages
The VirtualCenter Agent log is located at /var/log/vmware/vpx/vpxa.log
The Automatic Availability Manager (AAM) logs are located at /var/log/vmware/aam/vmware_<hostname>-xxx.log
The SW iSCSI logs are located at /var/log/vmkiscsid.log
The System boot log is located at /var/log/boot-logs/sysboot.log
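With the locations above, day-to-day troubleshooting from the Service Console is usually a matter of tailing or searching these files, for example:

```shell
# Follow the kernel log live, or search recent storage warnings
# (example paths from the list above):
tail -f /var/log/vmkernel
grep -i scsi /var/log/vmkwarning
```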

Additional Information

For related information, see the main article in this series, Location of log files for VMware products (1021806).

Location of ESXi log files

The VMkernel, vmkwarning, and hostd logs are located at /var/log/messages
The Host Management service (hostd = Host daemon) log is located at /var/log/vmware/hostd.log
The VirtualCenter Agent log is located at /var/log/vmware/vpx/vpxa.log
The System boot log is located at /var/log/sysboot.log
The Automatic Availability Manager (AAM) logs are located at /var/log/vmware/aam/vmware_<hostname>-xxx.log
Note: The logs on an ESXi host can be rolled over and/or removed after an ESXi host reboot. VMware recommends configuring the ESXi host with a syslog server. For more information on syslog server configuration, see your product version's Basic System Administration guide.


Best practices for virtual machine snapshots in the VMware environment

Written by Marco on . Posted in VMware

After troubleshooting some problems with snapshots, I ran into an article in the VMware Knowledge Base with some good info. See document ID 1025279 at http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&externalId=1025279

This is the article.

Purpose

This article provides best practice information for snapshots. It also provides links to resources that help you understand snapshots and troubleshoot snapshot issues.

Resolution

Best practices

  • Snapshots are not backups. As the snapshot file is only a change log of the original virtual disk, do not rely upon it as a direct backup process. The virtual machine is running on the most current snapshot, not the original vmdk disk files.
  • The maximum supported number of snapshots in a chain is 32. However, VMware recommends that you use only 2-3 snapshots in a chain.
  • Use no single snapshot for more than 24-72 hours.
    • This prevents snapshots from growing so large as to cause issues when deleting/committing them to the original virtual machine disks. Take the snapshot, make the changes to the virtual machine, and delete/commit the snapshot as soon as you have verified the proper working state of the virtual machine.
    • Be especially diligent with snapshot use on high-transaction virtual machines such as email and database servers. These snapshots can very quickly grow in size, filling datastore space. Commit snapshots on these virtual machines as soon as you have verified the proper working state of the process you are testing.
  • If using a third party product that takes advantage of snapshots (such as virtual machine backup software), regularly monitor systems configured for backups to ensure that no snapshots remain active for extensive periods of time.
    • Snapshots should only be present for the duration of the backup process.
    • Snapshots taken by third-party software (called via API) may not show up in the vCenter Snapshot Manager. Routinely check for snapshots via the command line.
  • An excessive number of snapshots in a chain or snapshots large in size may cause decreased virtual machine and host performance.
  • Configure automated vCenter Server alarms to trigger when a virtual machine is running from snapshots. For more information, see Configuring VMware vCenter Server to send alarms when virtual machines are running from snapshots (1018029).
  • Confirm that no snapshots are present (via the command line) before a Storage vMotion. If snapshots are present, delete them prior to the Storage vMotion. For more information, see Migrating an ESX 3.x virtual machine with snapshots in powered-off or suspended state to another datastore might cause data loss and make the virtual machine unusable (1020709).
  • Confirm that there are no snapshots present (via command line) before increasing the size of any Virtual Machine virtual disk or virtual RDM. If snapshots are present, delete them prior to increasing the size of the disk/s. Increasing the size of a disk with snapshots present can lead to corruption of the snapshots and potential data loss. For more information, see Increasing the Size of a Virtual Disk.
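One hedged way to do the command-line snapshot check recommended above is to search the datastores for delta disks. On a real host the search root would be /vmfs/volumes; the sketch below demonstrates the same find pattern against a scratch directory with hypothetical names:

```shell
# Any *-delta.vmdk file indicates an active snapshot chain.
demo=$(mktemp -d)                       # stands in for /vmfs/volumes
mkdir -p "$demo/datastore1/mailsrv"
touch "$demo/datastore1/mailsrv/mailsrv.vmdk" \
      "$demo/datastore1/mailsrv/mailsrv-000001-delta.vmdk"
find "$demo" -name '*-delta.vmdk'       # prints only the delta disk
rm -rf "$demo"
```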

Renaming a virtual machine and its files

Written by Marco on . Posted in VMware

When I was renaming some virtual machines in my test lab, I discovered that the file names are not renamed with the virtual machine. This may cause problems in the future, so I was looking for a method to rename the files as well. On the VMware site I found a knowledge base article that describes the correct way to rename the files too. See KB Article: 1029513. Original link: http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&externalId=1029513

Purpose

This article provides steps to rename a virtual machine and its files.
These steps may be useful if you rename a virtual machine, but its files retain the original file names. You may want to rename the virtual machine disk files to prevent possible confusion.

Resolution

The content of the displayName configuration option is updated when you rename a virtual machine. However, the underlying files are not renamed.

Renaming the files

To rename the files:
  1. Log into the VMware vSphere Client.
  2. Locate the virtual machine in your host inventory.
  3. Begin a Storage vMotion or an offline storage migration of the virtual machine.
  4. The destination copy's file names are updated to your desired values.

If this is not an option, you may also rename the files by hand after the virtual machine has been powered down.

Manually renaming virtual machine files

Warning: Before proceeding, ensure that:

  • The virtual machine has a current backup and that it has been powered down.
  • The virtual machine does not have snapshots or virtual disks shared with other virtual machines.

To manually rename the virtual machine’s files:

  1. Log into the VMware vSphere Client.
  2. Locate the virtual machine in your host inventory.
  3. Power down the virtual machine.
  4. Right-click on the virtual machine and choose Remove from inventory.
  5. Connect to the terminal of the ESX server via SSH, the System Management Interface, or directly at its console, and log in. Note: For additional instructions for ESXi, see Tech Support Mode for Emergency Support (1003677).
  6. Navigate to the directory containing the virtual machine. For example, cd /vmfs/volumes/<datastore>/<virtual machine>/.
  7. Run this command to rename the virtual disk files: vmkfstools -E "originalname.vmdk" "newname.vmdk" Note: It is unnecessary to rename the originalname-flat.vmdk file again after running the vmkfstools command.
  8. Copy the virtual machine configuration file (.vmx) using the command: cp "originalname.vmx" "newname.vmx"
  9. Edit the copied configuration file (newname.vmx) using the vi editor: vi "newname.vmx" Note: For VMware ESX hosts, the nano editor is also available. If you are uncomfortable using the vi editor, seek assistance from a Linux/UNIX administrator or file a Support Request and contact VMware Technical Support.
  10. Within the configuration file, modify all old instances of the virtual machine's file names to the new file names. There should be at least the following to adjust:
    nvram = "originalname.nvram"
    displayName = "originalname"
    extendedConfigFile = "originalname.vmxf"
    scsi0:0.fileName = "originalname.vmdk"
    [...]
    migrate.hostlog = "./originalname-UUID.hlog"

    Repeat this process for each virtual machine disk. Such as:

    scsi0:1.fileName = "originalname_1.vmdk"
    scsi0:2.fileName = "originalname_2.vmdk"

    Correct the VMkernel swap file reference from:

    sched.swap.derivedName = "/vmfs/volumes/DatastoreUUID/originalname/originalname-UUID.vswp"

    to:

    sched.swap.derivedName = "/vmfs/volumes/DatastoreUUID/newname/newname-UUID.vswp"

    Note: Be sure to rename both the .vswp file name and the virtual machine directory name within this path.

  11. Correct any other remaining lines referencing the original path or file names.
  12. Save the file and exit the editor.
  13. Rename all the remaining files, except for the .vmx configuration file, to the new names desired. For example: mv "originalname.nvram" "newname.nvram"
  14. Change directory to the parent directory using cd .. and rename the virtual machine's directory: mv "originalname" "newname"
  15. Using the VMware vSphere Client, browse the datastore and navigate to the renamed virtual machine directory.
  16. Right-click on the virtual machine's configuration file (.vmx) and choose Add to inventory. Alternatively, you can use this command to register the virtual machine: vmware-cmd -s register "/vmfs/volumes/DatastoreName/newname/newname.vmx"
  17. Power on the virtual machine.
  18. A question for the virtual machine displays in the Summary tab during power-on. Review the question by:
    • Clicking the Summary tab
    • Right-clicking the virtual machine in your inventory and selecting Answer question. When prompted, select I moved it, then click OK. Warning: Selecting I copied it results in a change of the virtual machine's UUID and MAC address, which may have detrimental effects on guest applications that are sensitive to MAC address changes, and on virtual machine backups that rely on UUIDs.
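The file-editing portion of the steps above (6 through 14) can be condensed into the following hedged sketch, using hypothetical names in a scratch directory. On a real host the .vmdk must be renamed with vmkfstools -E, since a plain mv does not update the disk descriptor:

```shell
old=originalname; new=newname
vmdir=$(mktemp -d)      # stands in for /vmfs/volumes/<datastore>/<vm>/
printf 'displayName = "%s"\nnvram = "%s.nvram"\nscsi0:0.fileName = "%s.vmdk"\n' \
    "$old" "$old" "$old" > "$vmdir/$old.vmx"
touch "$vmdir/$old.nvram"
cp "$vmdir/$old.vmx" "$vmdir/$new.vmx"        # step 8
sed -i "s/$old/$new/g" "$vmdir/$new.vmx"      # steps 10-11: fix every reference
mv "$vmdir/$old.nvram" "$vmdir/$new.nvram"    # step 13
rm "$vmdir/$old.vmx"
grep fileName "$vmdir/$new.vmx"               # → scsi0:0.fileName = "newname.vmdk"
rm -rf "$vmdir"
```

A sed pass over the .vmx is only a shortcut for the manual vi edit in step 9; review the result before re-registering the virtual machine.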

Additional Information

The command-line interpreter on ESX is case-sensitive and requires escaping of special characters used in some virtual machine file names. The examples above use quotation marks around command arguments to ensure that spaces and special characters are interpreted literally and do not require escape sequences.

For example, a virtual machine named “Original VM” is referenced either as:
“Original VM” with quotation marks, or Original\ VM.

Special characters such as opening and closing parentheses also require character escaping. For a virtual machine named “Original VM (1)”:
“Original VM (1)” with quotation marks, or Original\ VM\ \(1\).

The former quotation method simplifies the process considerably and improves readability.

Additional information on escape characters can be found in the Bash Reference Manual.

Upgrade paths for ESX/ESXi hosts

Written by Marco on . Posted in VMware

Purpose

There are several methods to upgrade ESX/ESXi. This article outlines the available upgrade paths. Note: This is not a comprehensive guide on how to upgrade ESX/ESXi. For more information on performing an upgrade, see the links in this article.

Resolution

This table lists the methods available to upgrade your ESX/ESXi host, and identifies the version to which you can upgrade:

Method key: *1 CD-ROM Installation Wizard; *2 esxupdate from the Service Console; *3 vSphere Remote CLI; *4 Host Update Utility or Infrastructure Update; *5 Update Manager; *6 Offline Upgrade from the Service Console. Each row lists the currently installed version, the version it can be upgraded to, and an "x" per supported method:

  • ESX 2.x → ESX 3.0.x: x x
  • ESX 3.0.x → ESX 3.0.3, 3.0.3 U1, 3.5, 3.5 U1 – U5: x x
  • ESX 3.0.2 → ESX 4.0: x x
  • ESX 3.0.3 → ESX 3.5, 3.5 U1 – U5: x x x
  • ESX 3.0.3 → ESX 4.0: x x x
  • ESX 3.5 → ESX 3.5 U1 – U5: x x x x
  • ESX 3.5 → ESX 4.1: x x
  • ESXi 3.5 → ESXi 3.5 U1 – U5: x x x
  • ESX 3.5 → ESX 4.0: x x x
  • ESXi 3.5 → ESXi 4.0: x x
  • ESXi 3.5 → ESXi 4.1: x
  • ESX 4.0 → ESX 4.0 U1 – U2: x x x
  • ESX 4.0 → ESX 4.1: x x x x
  • ESXi 4.0 → ESXi 4.0 U1 – U2: x x x
  • ESXi 4.0 → ESXi 4.1: x x

For more information about upgrading using these methods, see the vSphere Upgrade Guide for your target version.

Note: The Host Update Utility replaced Infrastructure Update and is available when you install the vSphere Client. It is a tool for upgrading ESX/ESXi hosts from 3.x to 4.0.x and for patching ESXi hosts only. Patching ESX with this utility is not supported. Also, as of vSphere 4.1, the Host Update Utility has been discontinued.