Author Archives: Marco

About Marco

Marco works for ViaData as a Senior Technical Consultant. He has over 15 years of experience as a system engineer and consultant, specializing in virtualization. VMware VCP4, VCP5-DC & VCP5-DT. VMware vExpert 2013, 2014, 2015 & 2016. Microsoft MCSE & MCITP Enterprise Administrator. Veeam VMSP, VMTSP & VMCE.

VCAP-DCA Objective 2.4 – Administer vNetwork Distributed Switch Settings

Knowledge
  • Explain relationship between vDS and logical vSSes
Skills and Abilities
  • Understand the use of command line tools to configure appropriate vDS settings on an ESX/ESXi host
  • Determine use cases for and apply Port Binding settings
  • Configure Live Port Moving
  • Given a set of network requirements, identify the appropriate distributed switch technology to use
  • Use command line tools to troubleshoot and identify configuration items from an existing vDS
Tools
  • ESX Configuration Guide
  • ESXi Configuration Guide
  • vSphere Command-Line Interface Installation and Scripting Guide
  • Product Documentation
  • vSphere Client
  • vSphere CLI
    • vicfg-*

 

Notes

Explain relationship between vDS and logical vSSes

vDS stands for vNetwork Distributed Switch
vSS stands for vNetwork Standard Switch

Both standard (vSS) and distributed (vDS) switches can exist at the same time.

You can view the switch configuration on a host (both vSS and vDS) using esxcfg-vswitch -l. It won't show the 'hidden' switches used under the hood by the vDS, although you can read more about those in this useful article at RTFM or at Geeksilver's blog.
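For reference, the listing command mentioned above can be run straight from the host console; the exact columns differ per ESX/ESXi version, so treat this as a sketch:

esxcfg-vswitch -l          # lists the standard vSwitches first, then a DVS section for each vDS the host is a member of
esxcfg-vswitch -l | more   # handy on hosts with many port groups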

Source Geeksilver’s Blog: http://geeksilver.wordpress.com/2010/05/21/vds-vnetwork-distributed-switch-my-understanding-part-1/ and http://geeksilver.wordpress.com/2010/05/21/vds-vnetwork-distributed-switch-my-understanding-part-2/

So what is a vDS? And what is the difference between a vSS and a vDS in terms of configuration file structure?

The vDS is a newer type of virtual switch introduced by VMware. The old vSS is more of a local host property: all switch data is saved on the local host, and no host is aware of what kind of vSS another host has. Not only can vCenter do little about it, it also causes trouble when you do vMotion. A vDS is saved in both vCenter and on the host. The vCenter copy is stored in the SQL database, while the local host keeps another local database cache copy at /etc/vmware/dvsdata.db. This local cache is updated by vCenter every 5 minutes.
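A quick way to confirm that the local cache exists on a host (the path is the one mentioned above) is:

ls -l /etc/vmware/dvsdata.db    # binary local cache of the vDS configuration, refreshed by vCenter roughly every 5 minutes

The file itself is binary; the unsupported net-dvs tool covered later in this post can be used to look at the host's local view of the vDS.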

What is the difference between a vSS and a vDS at the control level?

With a vSS, everything is controlled on the local host: basically, you go to Local Host -> Configuration -> Networking and start everything from there. A vDS is different; it divides control into two levels, which I call the high level and the low level.

High level: this is where you create/remove the switch and manage teaming, distributed port groups, etc. This level sits at vCenter -> Inventory -> Networking.

Low level: this is where you connect your VMs, VMkernel ports, and local physical NICs to the vDS. Be aware that your VMs, VMkernel ports, etc. connect to a distributed port group. Unlike a local vSS (where you have to create the same vSwitch and the same vSwitch port groups on all hosts), a vDS is pushed from vCenter to all hosts. As long as hosts are connected to the same vDS, they will have the same distributed port groups.

image

Local physical NIC cards need to connect to the dvUplink side. You can connect any number of local NICs, or even no NIC at all. What you can't do at this level is set up teaming (teaming only works between NICs from the same host), traffic shaping, or VLANs, because those need to be set up at the high level.

How does vDS work?

What will your instructor tell you? "Please don't think of the vDS as a switch connecting to hosts. The vDS is just a template." Well, that's what you always hear from instructors, but a template of what? The answer: the vDS is a template for the HIDDEN vSwitch sitting on your local host. The vDS (the template) is managed by vCenter (high-level operations) and your local host (low-level operations). Let's look at a diagram.

image

From this diagram you can see there are two hosts. Each host has a hidden switch which received its template (the vDS) from vCenter. The local template is updated every 5 minutes, as mentioned in Part 1.

Now, let’s open this hidden switch and see what’s happening in there.

image

As you can see, the hidden switch has a forwarding engine and a teaming engine, which are configured and controlled by the settings in vCenter. There are also two I/O filters (not just one), which are used by VMsafe. What VMsafe does is let third-party software vendors (for example, Trend Micro) build a VM appliance that is certified by VMware to prove it won't do any damage. That special VM uses a special API to monitor traffic (like a firewall) or check for viruses. This means that if you want to use a VMsafe product, you have to use a vDS, which in turn means you have to buy an Enterprise Plus license! I guess that's why VMsafe products are not popular.

OK, back to the vDS. A small conclusion: a vDS is essentially also a vSS, but one that is hidden on the host. This hidden vSS uses a template made by vCenter and the local host, so you can control traffic and share switch data between hosts.

 

Understand the use of command line tools to configure appropriate vDS settings on an ESX/ESXi host

See VMware KB1008127 Configuring vSwitch or vNetwork Distributed Switch from the command line in ESX/ESXi 4.x.

Apply these commands to vNetwork Distributed Switches:

esxcfg-vswitch -Q <vmnic> -V <dvPort ID of vmnic> <dvSwitch> #unlink a DVS uplink
esxcfg-vswitch -P <vmnic> -V <unused dvPort ID> <dvSwitch> #add a DVS uplink
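As a worked example, moving uplink vmnic1 from one dvPort to another on a dvSwitch named dvSwitch0 could look like this (the switch name and dvPort IDs are placeholders; look up the real values with esxcfg-vswitch -l first):

esxcfg-vswitch -Q vmnic1 -V 256 dvSwitch0 #remove vmnic1 from dvPort 256
esxcfg-vswitch -P vmnic1 -V 257 dvSwitch0 #attach vmnic1 to the unused dvPort 257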

To create the vswif and uplink it to the DVS port:
esxcfg-vswif -a -i <IP-address> -n <Netmask> -V <dvSwitch> -P <DVPort Id> vswif0

There are a few more commands, but not many for the vDS.

esxcfg-nics shows the physical NIC information of the ESX host.

net-dvs is a debugging utility for the Distributed Switch. This is an unsupported command.

 

Determine use cases for and apply Port Binding settings

There are three types of Port Binding settings. Source: VMware KB1010593

  • Static Binding (default): the dvPort is assigned to the virtual machine at configuration time. When all the ports are taken by virtual machines, it is not possible to connect any more virtual machines, regardless of whether the connected virtual machines are powered on or not, and an error message is displayed.
  • Dynamic Binding: the dvPort is assigned at the moment the virtual machine is powered on. This option allows overcommitting the number of dvPorts.
  • None (Ephemeral Ports, or No Binding): this behavior resembles the behavior of the standard vSwitch. If you select this option, the number of ports is automatically set to 0, and the portgroup allocates one port for each connected virtual machine, up to the maximum number of ports available in the switch.

Some more info and advantages and disadvantages can be found at the vexperienced.co.uk blog.

  • Static port binding
    • Default binding method for a dvPortGroup
    • Assigned to a VM when it’s added to the dvPortGroup
    • Conceptually like a static IP address
    • Port assignment persists to the VM across reboots, vMotions etc
  • Dynamic port binding
    • Used when you approach port limits (either on the particular dvPortGroup or on the vDS itself, which has a maximum of 6,000 dvPorts). If you have 10,000 VMs, you only allocate a dvPort to powered-on VMs
    • Conceptually like DHCP for a pool of desktops
    • dvPort assignment can change when the VM is powered off. vCenter will attempt to use the same dvPort, but there is no guarantee.
    • LIMITATION: Not all VMs can be powered on at the same time if you have more than 6000.
    • LIMITATION: vCenter must be available when powering on the VM, as it needs to assign a dvPort.
  • Ephemeral port binding
    • Port binding does NOT persist.
    • Number of VMs can exceed the number of ports on a given dvPortGroup (but are still bound by the total number of dvPorts on a vDS)
    • Equivalent to standard vSwitch behaviour
    • You can power on a VM using either vCenter or the VI client connected directly to a host.

 

Configure Live Port Moving

Live port migration means that a standalone dvPort can be moved into a dvPortGroup, thereby acquiring all the configuration of that dvPortGroup, and that a dvPort which is part of a dvPortGroup can be moved out of the dvPortGroup, after which subsequent configuration changes to the dvPortGroup no longer apply to that dvPort.

 

Given a set of network requirements, identify the appropriate distributed switch technology to use

Learn the differences between using the Nexus 1000v vs. VMware distributed virtual switch (vDS).

See http://searchnetworking.techtarget.com.au/articles/38282-VMware-vSwitch-vs-Cisco-Nexus-1-V for more information about this.

Also take a look at a whitepaper from VMware and Cisco called: Virtual Networking features of the VMware vNetwork Distributed Switch and Cisco Nexus 1000V Switch. This whitepaper can be found at: http://www.vmware.com/files/pdf/technology/cisco_vmware_virtualizing_the_datacenter.pdf

 

Use command line tools to troubleshoot and identify configuration items from an existing vDS

See the Trainsignal Troubleshooting vSphere course, lessons 14 and 15.

Another tool that can be used for troubleshooting is the net-dvs command-line tool. This is an unsupported command; a short example follows the list below.

  • Located in /usr/lib/vmware/bin (not in the PATH variable so just typing net-dvs won’t work)
  • Can be used to see the vDS settings saved locally on an ESX/ESXi host:
    • dvSwitch ID
    • dvPort assignments to VMs
    • VLAN, CDP information etc
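A minimal run might look like this (unsupported tool, so the output format can change between builds):

cd /usr/lib/vmware/bin
./net-dvs | more #dumps the locally cached vDS data: switch ID, dvPort assignments, VLAN and CDP settings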

 

Links

http://www.seancrookston.com/2010/09/09/vcap-dca-objective-2-4-administer-vnetwork-distributed-switch-settings/

http://www.kendrickcoleman.com/index.php?/Tech-Blog/vcap-datacenter-administration-exam-landing-page-vdca410.html

http://www.vexperienced.co.uk/2011/04/01/vcap-dca-study-notes-2-4-administer-vnetwork-distributed-switches/

http://damiankarlson.com/vcap-dca4-exam/objective-2-4-administer-vnetwork-distributed-switch-settings/

Documents and manuals

ESX Configuration Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_esx_server_config.pdf

ESXi Configuration Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_esxi_server_config.pdf

vSphere Command-Line Interface Installation and Scripting Guide: www.vmware.com/pdf/vsphere4/r41/vsp4_41_vcli_inst_script.pdf

Source

Reset Domain Administrator Password

A client of our company had a problem: they got into an argument with their IT management company, and eventually the IT management company wouldn't give up the administrator account password for the entire domain. So this client was locked out of his own network. We were asked if there was a way to reset this password, so I started looking on the internet for some solutions.

The first one I tried in my own lab was the one that Daniel Petri describes in his blog post at: http://www.petri.co.il/reset_domain_admin_password_in_windows_server_2003_ad.htm

There are a few requirements for this trick:

  • Local access to the domain controller (DC).
  • The Local Administrator password.
  • Two tools provided by Microsoft in their Resource Kit: SRVANY and INSTSRV. Download them from HERE (24kb).

The first requirement was no problem, but the second one was: we didn't know the local Administrator password.

So this is how I did it. First of all, download the DART tools (Diagnostics and Recovery Toolset) from the Microsoft website; they are available to MDOP license owners. See http://www.microsoft.com/windows/enterprise/products/mdop/dart.aspx

There are other ways to get your hands on this tool: go to the TechNet site and get a TechNet subscription. And if you don't have access to either source, go search Google.

I restarted the domain controller and booted into the DART tool. Go to the Locksmith tool and reset the password of the local Administrator account. Then reboot the server into Directory Services Restore Mode. This is important, because now the local accounts are available for logon; the local accounts are disabled on a domain controller by design. In restore mode, log on with your new local Administrator password. Now do the trick that Daniel Petri describes in his post.

In broad terms, this is how it works.

Step 1

Restart Windows 2003 in Directory Service Restore Mode.

Note: At startup, press F8 and choose Directory Service Restore Mode. It disables Active Directory. When the login screen appears, log on as Local Administrator. You now have full access to the computer resources, but you cannot make any changes to Active Directory.

clip_image001

Step 2

You are now going to install SRVANY. This utility can run virtually any program as a service. The interesting point is that the program will have SYSTEM (LSA) privileges, as it inherits the SRVANY security descriptor, i.e. it will have full access to the system. That is more than enough to reset a Domain Admin password. You will configure SRVANY to start the command prompt (which will run the 'net user' command).

Copy SRVANY and INSTSRV to a temporary folder; mine is called D:\temp. Copy cmd.exe to this folder too (cmd.exe is the command prompt, usually located at %WINDIR%\System32).

Start a command prompt, change to d:\temp (or whatever you called it), and type:

instsrv PassRecovery "d:\temp\srvany.exe"

(change the path to suit your own).

It is now time to configure SRVANY.

Start Regedit, and navigate to

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\PassRecovery

Create a new subkey called Parameters and add two new values:

name: Application

type: REG_SZ (string)

value: d:\temp\cmd.exe

name: AppParameters

type: REG_SZ (string)

value: /k net user administrator 123456 /domain

Replace 123456 with the password you want. Keep in mind that the default domain policy requires complex passwords (including digits, a minimal length, etc.), so unless you've changed the default domain policy, use a complex password such as P@ssw0rd.
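If you prefer to script the registry part instead of clicking through Regedit, the equivalent reg.exe commands would look roughly like this (same key, value names and example password as above; an untested sketch, so double-check before relying on it):

reg add HKLM\System\CurrentControlSet\Services\PassRecovery\Parameters /v Application /t REG_SZ /d "d:\temp\cmd.exe"
reg add HKLM\System\CurrentControlSet\Services\PassRecovery\Parameters /v AppParameters /t REG_SZ /d "/k net user administrator P@ssw0rd /domain"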

Now open the Services applet (Control Panel\Administrative Tools\Services) and open the PassRecovery properties. Check that the startup type is set to Automatic.

clip_image002

Go to the Log On tab and enable the option Allow service to interact with the desktop.

Restart Windows normally; SRVANY will run the NET USER command and reset the domain admin password.

Step 3

Log on with the Administrator’s account and the password you’ve set in step #2.

Use this command prompt to uninstall SRVANY (do not forget to do it!) by typing:

net stop PassRecovery

sc delete PassRecovery

Now delete d:\temp and change the admin password if you fancy.

Done!

VCAP-DCA Objective 2.3 – Deploy and Maintain Scalable Virtual Networking

Knowledge
  • Identify VMware NIC Teaming policies
  • Identify common network protocols
Skills and Abilities
  • Understand the NIC Teaming failover types and related physical network settings
  • Determine and apply Failover settings
  • Configure explicit failover to conform with VMware best practices
  • Configure port groups to properly isolate network traffic
Tools
  • ESX Configuration Guide
  • ESXi Configuration Guide
  • vSphere Command-Line Interface Installation and Scripting Guide
  • Product Documentation
  • vSphere Client
  • vSphere CLI
    • vicfg-*

Notes

Identify VMware NIC Teaming policies

There are 5 different NIC Teaming Policies.

  • Route based on the originating virtual port ID
  • Route based on IP hash
  • Route based on source MAC hash
  • Route based on physical NIC load (vSphere 4.1 only)
  • Use explicit failover order

Route based on the originating virtual switch port ID

Choose an uplink based on the virtual port where the traffic entered the virtual switch. This is the default configuration and the one most commonly deployed. When you use this setting, traffic from a given virtual Ethernet adapter is consistently sent to the same physical adapter unless there is a failover to another adapter in the NIC team. Replies are received on the same physical adapter as the physical switch learns the port association. This setting provides an even distribution of traffic if the number of virtual Ethernet adapters is greater than the number of physical adapters. A given virtual machine cannot use more than one physical Ethernet adapter at any given time unless it has multiple virtual adapters. This setting places slightly less load on the ESX Server host than the MAC hash setting.

Note: If you select either srcPortID or srcMAC hash, you should not configure the physical switch ports as any type of team or bonded group.

Route based on IP hash

Choose an uplink based on a hash of the source and destination IP addresses of each packet. (For non-IP packets, whatever is at those offsets is used to compute the hash.) Evenness of traffic distribution depends on the number of TCP/IP sessions to unique destinations. There is no benefit for bulk transfer between a single pair of hosts. You can use link aggregation — grouping multiple physical adapters to create a fast network pipe for a single virtual adapter in a virtual machine. When you configure the system to use link aggregation, packet reflections are prevented because aggregated ports do not retransmit broadcast or multicast traffic. The physical switch sees the client MAC address on multiple ports. There is no way to predict which physical Ethernet adapter will receive inbound traffic. All adapters in the NIC team must be attached to the same physical switch or an appropriate set of stacked physical switches. (Contact your switch vendor to find out whether 802.3ad teaming is supported across multiple stacked chassis.) That switch or set of stacked switches must be 802.3ad-compliant and configured to use that link-aggregation standard in static mode (that is, with no LACP). All adapters must be active. You should make the setting on the virtual switch and ensure that it is inherited by all port groups within that virtual switch.

Route based on source MAC hash

Choose an uplink based on a hash of the source Ethernet MAC address. When you use this setting, traffic from a given virtual Ethernet adapter is consistently sent to the same physical adapter unless there is a failover to another adapter in the NIC team. Replies are received on the same physical adapter as the physical switch learns the port association. This setting provides an even distribution of traffic if the number of virtual Ethernet adapters is greater than the number of physical adapters. A given virtual machine cannot use more than one physical Ethernet adapter at any given time unless it uses multiple source MAC addresses for traffic it sends.

Route based on physical NIC load

Source Frank Denneman blog: http://frankdenneman.nl/2010/07/load-based-teaming/

The option “Route based on physical NIC load” takes the virtual machine network I/O load into account and tries to avoid congestion by dynamically reassigning and balancing the virtual switch port to physical NIC mappings. The three existing load-balancing policies, Port-ID, Mac-Based and IP-hash use a static mapping between virtual switch ports and the connected uplinks. The VMkernel assigns a virtual switch port during the power-on of a virtual machine, this virtual switch port gets assigned to a physical NIC based on either a round-robin- or hashing algorithm, but all algorithms do not take overall utilization of the pNIC into account. This can lead to a scenario where several virtual machines mapped to the same physical adapter saturate the physical NIC and fight for bandwidth while the other adapters are underutilized. LBT solves this by remapping the virtual switch ports to a physical NIC when congestion is detected. After the initial virtual switch port to physical port assignment is completed, Load Based teaming checks the load on the dvUplinks at a 30 second interval and dynamically reassigns port bindings based on the current network load and the level of saturation of the dvUplinks. The VMkernel indicates the network I/O load as congested if transmit (Tx) or receive (Rx) network traffic is exceeding a 75% mean over a 30 second period. (The mean is the sum of the observations divided by the number of observations). An interval period of 30 seconds is used to avoid MAC address flapping issues with the physical switches. Although an interval of 30 seconds is used, it is recommended to enable port fast (trunk fast) on the physical switches, all switches must be a part of the same layer 2 domain.

Use explicit failover order

This allows you to override the default ordering of failover on the uplinks. The only time I can see this being useful is if the uplinks are connected to multiple physical switches and you want to use them in a particular order, or if you think a pNIC in the ESX(i) host is not working correctly. If you use this setting, it is best to configure those vmnics or adapters as standby adapters, as any active adapters will be used from the highest in the order and then down.

For more information see Simon Greaves' blog at: http://simongreaves.co.uk/drupal/NIC_Teaming_Design

Identify common network protocols

On Wikipedia there is a complete list of network protocols. See http://en.wikipedia.org/wiki/List_of_network_protocols

 

Understand the NIC Teaming failover types and related physical network settings

The five available policies are:

  • Route based on virtual port ID (default)
  • Route based on IP Hash (MUST be used with static Etherchannel – no LACP). No beacon probing.
  • Route based on source MAC address
  • Route based on physical NIC load (vSphere 4.1 only)
  • Explicit failover

NOTE: These only affect outbound traffic. Inbound load balancing is controlled by the physical switch.

Failover types and related physical network settings

Failover types:

  • Cable pull/failure
  • Switch failure
  • Upstream switch failure
  • Change NIC teaming for FT logging (use IP hash), see VMware KB1011966

Use uplink failure detection (also known as link state tracking) to handle physical network failures outside direct visibility of the host.

With blades you typically don't use NIC teaming, as each blade has a 1-to-1 mapping from its multiple pNICs to the blade chassis switch. That switch in turn may use an EtherChannel to an upstream switch, but from the blade (and hence the ESX) perspective it simply has multiple independent NICs (hence route based on virtual port ID is the right choice).

Source: http://www.vexperienced.co.uk/2011/04/01/vcap-dca-study-notes-2-3-deploy-and-maintain-scalable-virtual-networks/

Determine and apply Failover settings

Configurable from the NIC teaming tab of the vSwitch.

See page 46 of the ESXi Configuration Guide or page 48 of the ESX Configuration Guide.

Load Balancing Settings

  • Route based on the originating port ID. Choose an uplink based on the virtual port where the traffic entered the virtual switch.
  • Route based on ip hash. Choose an uplink based on a hash of the source and destination IP addresses of each packet. For non-IP packets, whatever is at those offsets is used to compute the hash.
  • Route based on source MAC hash. Choose an uplink based on a hash of the source Ethernet MAC address.
  • Use explicit failover order. Always use the highest order uplink from the list of Active adapters which passes failover detection criteria.

NOTE IP-based teaming requires that the physical switch be configured with EtherChannel. For all other options, EtherChannel should be disabled.

Network Failover Detection

  • Link Status only. Relies solely on the link status that the network adapter provides. This option detects failures such as cable pulls and physical switch power failures, but not configuration errors, such as a physical switch port being blocked by spanning tree, being misconfigured to the wrong VLAN, or cable pulls on the other side of a physical switch.
  • Beacon Probing. Sends out and listens for beacon probes on all NICs in the team and uses this information, in addition to link status, to determine link failure. This detects many of the failures previously mentioned that are not detected by link status alone.

Notify Switches

Select Yes or No to notify switches in the case of failover. If you select Yes, whenever a virtual NIC is connected to the vSwitch or whenever that virtual NIC’s traffic would be routed over a different physical NIC in the team because of a failover event, a notification is sent out over the network to update the lookup tables on physical switches. In almost all cases, this process is desirable for the lowest latency of failover occurrences and migrations with VMotion.

NOTE Do not use this option when the virtual machines using the port group are using Microsoft Network Load Balancing in unicast mode. No such issue exists with NLB running in multicast mode.

Failback

Select Yes or No to disable or enable failback.

This option determines how a physical adapter is returned to active duty after recovering from a failure. If failback is set to Yes (default), the adapter is returned to active duty immediately upon recovery, displacing the standby adapter that took over its slot, if any. If failback is set to No, a failed adapter is left inactive even after recovery until another currently active adapter fails, requiring its replacement.

Failover Order

Specify how to distribute the workload for uplinks. If you want to use some uplinks but reserve others for emergencies in case the uplinks in use fail, set this condition by moving them into different groups:

  • Active Uplinks. Continue to use the uplink when the network adapter connectivity is up and active.
  • Standby Uplinks. Use this uplink if one of the active adapters' connectivity is down.
  • Unused Uplinks. Do not use this uplink.

Configure explicit failover to conform with VMware best practices

To configure explicit failover, go to the NIC Teaming tab of the vSwitch properties. Set Load Balancing to 'Use explicit failover order' and configure the appropriate order for the NICs in your environment.

Configure port groups to properly isolate network traffic

The following are generally accepted best practices. (Source vexperienced.co.uk blog)

  • Separate VM traffic and infrastructure traffic (vMotion, NFS, iSCSI, FT)
  • Use separate pNICs and vSwitches where possible
  • VLANs can be used to isolate traffic (both from a broadcast and a security perspective)
  • When using NIC teams, use pNICs from separate buses (i.e., don't have a team comprising two pNICs on the same PCI card; use one onboard adapter and one from an expansion card)
  • Keep FT logging on a separate pNIC and vSwitch (ideally 10Gb)
  • Use dedicated network infrastructure (physical switches etc) for storage (iSCSI and NFS)

When you move to 10Gb networks, isolation is implemented differently (often using some sort of I/O virtualisation like FlexConnect, Xsigo, or UCS), but the principles are the same. VMworld 2010 session TA8440 covers the move to 10Gb and FCoE.

Links

http://www.seancrookston.com/2010/09/08/vcap-dca-objective-2-3-deploy-and-maintain-scalable-virtual-networking/

http://www.kendrickcoleman.com/index.php?/Tech-Blog/vcap-datacenter-administration-exam-landing-page-vdca410.html

http://damiankarlson.com/vcap-dca4-exam/objective-2-3-deploy-and-maintain-scalable-virtual-networking/

http://www.vexperienced.co.uk/2011/04/01/vcap-dca-study-notes-2-3-deploy-and-maintain-scalable-virtual-networks/

Documents and manuals

ESX Configuration Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_esx_server_config.pdf

ESXi Configuration Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_esxi_server_config.pdf

 

Source

VCAP-DCA Objective 2.2 – Configure and Maintain VLANs, PVLANs and VLAN Settings

Knowledge
  • Identify types of VLANs and PVLANs
Skills and Abilities
  • Determine use cases for and configure VLAN Trunking
  • Determine use cases for and configure PVLANs
  • Use command line tools to troubleshoot and identify VLAN configurations
Tools
  • vSphere Command-Line Interface Installation and Scripting Guide
  • ESX Configuration Guide
  • ESXi Configuration Guide
  • Product Documentation
  • vSphere Client
  • vSphere CLI
    • vicfg-*
Notes

Identify types of VLANs and PVLANs

What is a VLAN?

Source Wikipedia, http://en.wikipedia.org/wiki/Virtual_LAN

A virtual LAN, commonly known as a VLAN, is a group of hosts with a common set of requirements that communicate as if they were attached to the same broadcast domain, regardless of their physical location. A VLAN has the same attributes as a physical LAN, but it allows for end stations to be grouped together even if they are not located on the same network switch. Network reconfiguration can be done through software instead of physically relocating devices.

To physically replicate the functions of a VLAN, it would be necessary to install a separate, parallel collection of network cables and switches/hubs which are kept separate from the primary network. However, unlike a physically separate network, VLANs must share bandwidth; two separate one-gigabit VLANs using a single one-gigabit interconnection can both suffer reduced throughput and congestion. A VLAN-capable switch virtualizes this behavior (configuring switch ports, tagging frames when they enter a VLAN, looking up the MAC table to switch/flood frames to trunk links, and untagging them when they exit the VLAN).

VLANs are created to provide the segmentation services traditionally provided by routers in LAN configurations. VLANs address issues such as scalability, security, and network management. Routers in VLAN topologies provide broadcast filtering, security, address summarization, and traffic flow management. By definition, switches may not bridge IP traffic between VLANs as it would violate the integrity of the VLAN broadcast domain.

This is also useful if someone wants to create multiple Layer 3 networks on the same Layer 2 switch. For example, if a DHCP server (which will broadcast its presence) is plugged into a switch it will serve any host on that switch that is configured to get its IP from a DHCP server. By using VLANs you can easily split the network up so some hosts won’t use that DHCP server and will obtain link-local addresses, or obtain an address from a different DHCP server.

VLANs are essentially Layer 2 constructs, compared with IP subnets which are Layer 3 constructs. In an environment employing VLANs, a one-to-one relationship often exists between VLANs and IP subnets, although it is possible to have multiple subnets on one VLAN or have one subnet spread across multiple VLANs. VLANs and IP subnets provide independent Layer 2 and Layer 3 constructs that map to one another and this correspondence is useful during the network design process.

By using VLANs, one can control traffic patterns and react quickly to relocations. VLANs provide the flexibility to adapt to changes in network requirements and allow for simplified administration.

What is a PVLAN?

Source Wikipedia, http://en.wikipedia.org/wiki/Private_VLAN

A private VLAN is a technique in computer networking where a VLAN contains switch ports that are restricted, such that they can only communicate with a given “uplink”. The restricted ports are called “private ports”. Each private VLAN typically contains many private ports, and a single uplink. The uplink will typically be a port (or link aggregation group) connected to a router, firewall, server, provider network, or similar central resource.

The switch forwards all frames received on a private port out the uplink port, regardless of VLAN ID or destination MAC address. Frames received on an uplink port are forwarded in the normal way (i.e., to the port hosting the destination MAC address, or to all VLAN ports for unknown destinations or broadcast frames). “Peer-to-peer” traffic is blocked. Note that while private VLANs provide isolation at the data link layer, communication at higher layers may still be possible.

A typical application for a private VLAN is a hotel or Ethernet to the home network where each room or apartment has a port for Internet access. Similar port isolation is used in Ethernet-based ADSL DSLAMs. Allowing direct data link layer communication between customer nodes would expose the local network to various security attacks, such as ARP spoofing, as well as increasing the potential for damage due to misconfiguration.

Another application of private VLANs is to simplify IP address assignment. Ports can be isolated from each other at the data link layer (for security, performance, or other reasons), while belonging to the same IP subnet. In such a case direct communication between the IP hosts on the protected ports is only possible through the uplink connection by using MAC-Forced Forwarding or a similar Proxy ARP based solution.

VMware has created a KB document about Private VLAN (PVLAN) on vNetwork Distributed Switch – Concept Overview. See VMware KB:1010691

The definition of Private VLAN is:

  • Virtual LAN (VLAN) is a mechanism to divide a broadcast domain into several logical broadcast domains.
  • Private VLAN is an extension to the VLAN standard, already available in several (most recent) physical switches. It adds a further segmentation of the logical broadcast domain, to create “Private” groups.
  • Private means that the hosts in the same PVLAN are not able to be seen by the others, except the selected ones in the promiscuous PVLAN.
  • Standard 802.1Q Tagging indicates there is no encapsulation of a PVLAN inside a VLAN, everything is done with one tag per packet.
  • No Double Encapsulation indicates that the packets are tagged according to the switch port configuration (EST mode), or they arrive already tagged if the port is a trunk (VST mode).
  • Switch software decides which ports to forward the frame to, based on the tag and the PVLAN tables.

A Private VLAN is further divided into the groups:

  • Primary PVLAN – The original VLAN that is being divided into smaller groups is called Primary, and all the secondary PVLANs exist only inside the primary.
  • Secondary PVLANs – The secondary PVLANs exist only inside the primary. Each Secondary PVLAN has a specific VLAN ID associated to it, and each packet travelling through it is tagged with an ID as if it were a normal VLAN, and the physical switch associates the behavior (Isolated, Community or Promiscuous) depending on the VLAN ID found in each packet.

Note: Depending upon the type of secondary PVLAN involved, hosts may not be able to communicate with each other even if they belong to the same group (this is the case in an Isolated PVLAN, for example).

Three types of Secondary PVLANs:

  • Promiscuous – A node attached to a port in a promiscuous secondary PVLAN may send and receive packets to any node in any other secondary PVLAN associated with the same primary. Routers are typically attached to promiscuous ports.
  • Isolated – A node attached to a port in an isolated secondary PVLAN may only send to and receive packets from the promiscuous PVLAN.
  • Community – A node attached to a port in a community secondary PVLAN may send to and receive packets from other ports in the same secondary PVLAN, as well as send to and receive packets from the promiscuous PVLAN.

Notes:

  • Promiscuous PVLANs have the same VLAN ID both for Primary and Secondary VLAN.
  • Community and Isolated PVLANs traffic travels tagged as the associated Secondary PVLAN.
  • Traffic inside PVLANs is not encapsulated (no Secondary PVLAN encapsulated inside a Primary PVLAN Packet).
  • Traffic between virtual machines on the same PVLAN but on different ESX hosts goes through the physical switch. Therefore, the physical switch must be PVLAN aware and configured appropriately, to allow the secondary PVLANs to reach their destination.
  • Switches discover MAC addresses per VLAN. This can be a problem for PVLANs because each virtual machine appears to the physical switch to be in more than one VLAN, or at least, it appears that there is no reply to the request, because the reply travels back in a different VLAN. For this reason, it is a requirement that each physical switch, where ESX with PVLANs are connected, must be PVLAN aware.

More information on how to configure Private VLAN (PVLAN) on vNetwork Distributed Switch see VMware KB:1010703

Determine use cases for and configure VLAN Trunking

See VMware KB:1010778 Configuring Virtual Switch VLAN Tagging (VST) mode on a vNetwork Distributed Switch.

Set the physical port connection between ESX and physical switch to TRUNK mode. ESX only supports IEEE 802.1Q (dot1q) trunking.

VLAN configuration is required on the ESX side. Define the ESX VLANs on the physical switch. Set the ESX dvPortgroup to belong to a certain VLAN ID.

Caution: Native VLAN ID on ESX VST Mode is not supported. Do not assign a VLAN to a portgroup that is the same as the native VLAN ID of the physical switch.

Native VLAN packets are not tagged with a VLAN ID on the outgoing traffic toward the ESX host. Therefore, if ESX is set to VST mode, it drops the packets that are lacking a VLAN tag.

To configure VST on dvPortGroup:

  1. In vCenter, go to Home > Inventory > Networking.
  2. Right-click dvPortGroup and click Edit Settings.
  3. Under dvPortGroup > Settings > VLAN > Policies, set the VLAN type to VLAN.
  4. Select a VLAN ID between 1 and 4094.
    Note: Do not use VLAN ID 4095.
  5. Click OK.

For an explanation of why not to use VLAN ID 4095, see Duncan Epping's blog Yellow Bricks. He has written an article about VLAN ID 4095, see http://www.yellow-bricks.com/2010/06/10/vlan-id-4095/

This particular VLAN ID is only to be used for "Virtual Guest Tagging" (VGT). It basically means that the VLAN ID is stripped off at the Guest OS layer and not at the portgroup layer. In other words, the VLAN trunk (multiple VLANs on a single wire) is extended to the virtual machine, and the virtual machine will need to deal with it.

When would you use this? To be honest, there aren't many use cases any more. In the past it was used to increase the number of VLANs for a VM. The limit of 4 NICs for VI3 meant a maximum of 4 portgroups/VLANs per VM. However, with vSphere the maximum number of NICs went up to 10, and as such the number of VLANs for a single VM also went up to 10.

Also see VMware KB:1004074 Sample configuration of virtual switch VLAN tagging (VST Mode) and ESX

To configure Virtual Switch (vSwitch) VLAN Tagging (VST) on ESX host:

  1. Assign the VLAN on vSwitch and or portgroup. Supported VLAN range (1-4094)
  2. Set the switch NIC teaming policy to Route based on originating virtual port ID; this is set by default.
    • VLAN ID 0 (Zero) Disables VLAN tagging on port group (EST Mode)
    • VLAN ID 4095 enables trunking on port group ( VGT Mode)

Note: NIC teaming for incoming traffic is handled by the physical switch (EtherChannel/LACP). For more information, see Sample configuration using EthernetChannel, ESX 3.0 and a Cisco switch (1004048).

To configure VLAN on the portgroup within the Virtual Infrastructure Client:

  1. Highlight the ESX host.
  2. Click the Configuration tab.
  3. Click the Networking link.
  4. Click Properties.
  5. Highlight the virtual switch in the Ports tab and click Edit.
  6. Click the General tab.
  7. Assign a VLAN number in VLAN ID (optional).
  8. Click the NIC Teaming tab.
  9. From the Load Balancing dropdown, choose Route based on originating virtual port ID.
  10. Verify that there is at least one network adapter listed under Active Adapters.
  11. Verify the VST configuration by using the ping command to confirm the connection between the ESX host, the gateway interfaces, and other hosts on the same VLAN.

Note: For additional information on VLAN configuration of a VirtualSwitch (vSwitch) port group, see Configuring a VLAN on a portgroup (VMware KB:1003825).

To configure via command line:

esxcfg-vswitch -p “<portgroup name>” -v <VLAN_ID> <virtual switch name>
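For example, to tag a portgroup named VM Network with VLAN 105 on vSwitch0 (the names and VLAN ID are placeholders):

esxcfg-vswitch -p "VM Network" -v 105 vSwitch0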

The illustration attached to this article shows a sample VST mode topology and configuration with two ESX hosts, each with two NICs connecting to the Cisco switch.

Other good reads for VLAN Trunking are:

 

Determine use cases for and configure PVLANs

See VMware KB:1010703 Configuration of Private VLAN (PVLAN) on vNetwork Distributed Switch.

For more information about PVLAN concept, see Private VLAN (PVLAN) on vNetwork Distributed Switch concept (1010691).

To create the PVLAN table in the dvSwitch:

  1. In vCenter, go to Home > Inventory > Networking.
  2. Click Edit Setting for the dvSwitch.
  3. Choose the Private VLAN tab.
  4. On the Primary tab, add the VLAN that is used outside the PVLAN domain. Enter a private VLAN ID and/or choose one from the list.
  5. On the Secondary tab, create the PVLANs of the desired type. Enter a VLAN ID in the VLAN ID field.
  6. Select the Type for the Secondary VLAN ID. Choose one of the options from the dropdown menu:
    • Isolated
    • Community
  7. Click OK.

Note: There can be only one Promiscuous PVLAN, and it is created automatically for you.

Beware: Before deleting any primary/secondary PVLANs, make sure that they are not in use, or the operation is not performed.

To set PVLAN in the dvPortGroup:

  1. Highlight dvPortGroup and click Edit Settings.
  2. Click General> VLAN > Policies.
  3. Using the dropdown, set the VLAN type to Private.
  4. Select VLAN from the Private VLAN Entry dropdown.

Note: The VLANs created in step 1 are listed here.

Eric Sloof from www.ntpro.nl has created a training video on his blog at: http://www.ntpro.nl/blog/archives/1465-Online-Training-Configure-Private-VLAN-IDs.html 

See also Trainsignal VMware vSphere Troubleshooting Training, lesson 17.

 

Use command line tools to troubleshoot and identify VLAN configurations

See the vSphere Command-Line Interface Installation and Scripting Guide, chapter 10 Managing vSphere Networking.

The important commands are:

  • vicfg-vswitch
  • vicfg-vmknic
  • vicfg-nics

vicfg-vswitch

vicfg-vswitch – create and configure virtual switches and port groups.

The vicfg-vswitch command adds or removes virtual switches or modifies virtual switch settings. A virtual switch is an abstracted network device. It can route traffic internally between virtual machines and link to external networks. The ESX Configuration Guide and the ESXi Configuration Guide discuss virtual switches, vNetwork Distributed Switches (vDS), port groups, and vDS port groups. The vSphere CLI manual presents some sample scenarios.

By default, each ESX/ESXi host has a single virtual switch called vSwitch0.
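A few representative invocations (the host name, credentials, portgroup and VLAN values below are placeholders):

vicfg-vswitch --server esxhost01 --username root -l #list all vSwitches and port groups on the host
vicfg-vswitch --server esxhost01 --username root -A "Test PG" vSwitch0 #add a port group to vSwitch0
vicfg-vswitch --server esxhost01 --username root -p "Test PG" -v 105 vSwitch0 #set VLAN 105 on that port group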

See for more information about this command, http://www.vmware.com/support/developer/vcli/vcli41/doc/reference/vicfg-vswitch.html

vicfg-vmknic

vicfg-vmknic – configure virtual network adapters

The vicfg-vmknic command configures VMkernel NICs (virtual network adapters).

Use the esxcli swiscsi nic command to specify NIC bindings for VMkernel NICs.
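For example (placeholder host name, credentials, IP address and portgroup):

vicfg-vmknic --server esxhost01 --username root -l #list the existing VMkernel NICs
vicfg-vmknic --server esxhost01 --username root -a -i 192.168.1.50 -n 255.255.255.0 "VMkernel-PG" #add a VMkernel NIC to the portgroup VMkernel-PG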

See for more information about this command, http://www.vmware.com/support/developer/vcli/vcli41/doc/reference/vicfg-vmknic.html

vicfg-nics

vicfg-nics – get information, set speed and duplex for ESX/ESXi physical NICs

The vicfg-nics command manages uplink adapters, that is, the physical network adapters used by an ESX/ESXi host. You can use vicfg-nics to list the VMkernel name for the uplink adapter, its PCI ID, driver, link state, speed, duplex setting, MAC address and a short PCI description of the card. You can also specify speed and duplex settings for an uplink adapter.
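For example (placeholder host name, credentials and vmnic):

vicfg-nics --server esxhost01 --username root -l #list the pNICs with driver, link state, speed and duplex
vicfg-nics --server esxhost01 --username root -s 1000 -d full vmnic1 #force vmnic1 to 1000/full
vicfg-nics --server esxhost01 --username root -a vmnic1 #set vmnic1 back to autonegotiate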

See for more information about this command, http://www.vmware.com/support/developer/vcli/vcli41/doc/reference/vicfg-nics.html

 
 
Links

http://www.seancrookston.com/2010/09/07/vcap-dca-objective-2-2-configure-and-maintain-vlans-pvlans-and-vlan-settings/

http://www.kendrickcoleman.com/index.php?/Tech-Blog/vcap-datacenter-administration-exam-landing-page-vdca410.html

http://damiankarlson.com/vcap-dca4-exam/objective-2-2-configure-and-maintain-vlans-pvlans-and-vlan-settings/

 
Video

Carlos Vargas has created a video about this objective see: http://virtual-vargi.blogspot.com/2011/02/vcap-dca-section-2-22.html 

 
Documents and manuals

ESX Configuration Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_esx_server_config.pdf

ESXi Configuration Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_esxi_server_config.pdf

vSphere Command-Line Interface Installation and Scripting Guide: www.vmware.com/pdf/vsphere4/r41/vsp4_41_vcli_inst_script.pdf

 
Source

VCAP-DCA Objective 2.1 – Implement and Manage Complex Virtual Networks

Knowledge
  • Identify common virtual switch configurations
Skills and Abilities
  • Determine use cases for and apply IPv6
  • Configure NetQueue
  • Configure SNMP
  • Determine use cases for and apply VMware DirectPath I/O
  • Migrate a vSS network to a Hybrid or Full vDS solution
  • Configure vSS and vDS settings using command line tools
  • Analyze command line output to identify vSS and vDS configuration details
Tools
  • vSphere Command-Line Interface Installation and Scripting Guide
  • vNetwork Distributed Switch: Migration and Configuration
  • ESX Configuration Guide
  • ESXi Configuration Guide
  • Product Documentation
  • vSphere Client
  • vSphere CLI
    • vicfg-*
Notes

Identify common virtual switch configurations

See VMware document: VMware Virtual Networking Concepts.

See VMware document: VMware Networking Best Practices.

See VMware document: What’s new in VMware vSphere 4: Virtual Networking. 

Kendrick Coleman has created some designs with different network cards.

  • Design with 6 network cards.
  • Design with 10 network cards.
  • Design with 12 network cards.

Determine use cases for and apply IPv6

VMware KB document 1021769. Configure IPv6 with ESX and ESXi 4.1.

In vSphere 4.1, IPv6 is disabled by default for the COS, the VMkernel, and ESXi.

You can enable IPv6 for the COS and VMkernel from the command line and from Networking Properties.

To enable IPv6 from the command line:

  1. Run this command:
    • For ESX 4.1 – #esxcfg-vswif -6 true
    • For ESXi 4.1 – #esxcfg-vmknic -6 true
  2. Reboot.

To enable IPv6 from Networking Properties:

  1. In vCenter Server, select the host, click Configuration > Networking > Properties.
  2. Select Enable IPv6 support on this host system.
  3. Click OK.

IPsec

Internet Protocol Security (IPsec) is used for secure communication and is included in the TCP/IP protocol stack. Secure communication between two partners is based on cryptographic algorithms with Public or Pre-Shared keys. To establish a secure communication, keys must be exchanged Manually or Automatically:

  • Manual Key Exchange (Pre-Shared) – Keys are exchanged through a different communication channel
  • Automated Key Exchange – Public keys are exchanged during initialization sequence using the same channel

Notes:

  • ESX supports IPv4 with Internet Key Exchange Version 2 (IKEv2). However, IPv6 does not support IKEv2. IPv6 only supports manual keying.
  • IPv6 in ESX supports IPsec with manual keying.

A new set of commands (esxcfg-ipsec and vicfg-ipsec) provides an interface to configure IPsec properties:

  • The command # esxcfg-ipsec -h gives further information
  • To add a Security Association (SA), run the command:
    # esxcfg-ipsec --add-sa --sa-src x:x:x:: --sa-dst x:x:x:: --sa-mode transport --ealgo null --spi 0x200 --ialgo hmac-sha1 --ikey key saname
  • To add a Security Policy (SP), run the command:
    # esxcfg-ipsec --add-sp --sp-src x:x::/x --sp-dst x:x::/x --src-port 100 --dst-port 200 --ulproto tcp --dir out --action ipsec --sp-mode transport --sa-name saname spname
  • To add a generic SP with default options, run the command:
    # esxcfg-ipsec --add-sp --sp-src any --sp-dst any --src-port any --dst-port any --ulproto any --dir out --action ipsec --sp-mode transport --sa-name saname spname
  • To add a SP (such as a firewall rule), run the command:
    # esxcfg-ipsec --add-sp --sp-src x:x::/x --sp-dst x:x::/x --src-port 100 --dst-port 200 --ulproto tcp --dir out --action discard spname
  • To delete an SA, run the command:
    esxcfg-ipsec --remove-sa saname
  • To delete a SP, run the command:
    esxcfg-ipsec --remove-sp spname
  • To flush all SPs, run the command:
    esxcfg-ipsec --flush-sp

Note: IPsec cannot be configured through vSphere Client.

Another article, VMware KB 1010812, describes how to configure IPv6 on ESX 4.0.x.

IPv6 support in ESX:

  • ESX 3.5 supports virtual machines configured for IPv6.
  • ESX 4.0 supports IPv6 with the following restrictions:
    • IPv6 Storage (software iSCSI and NFS) is experimental in ESX 4.0.
    • ESX does not support TCP Segmentation Offload (TSO) with IPv6.
    • VMware High Availability and Fault Tolerance do not support IPv6.

Guest operating systems.

  • Windows Vista and Windows Server 2008 fully support IPv6.
  • Windows 2003 SP1 and Windows XP SP2 have the infrastructure for IPv6, but components of the system and applications are not fully compliant. For more information, see http://technet.microsoft.com/en-us/library/cc776103.aspx
  • Linux version 2.6 is fully compliant.

Configure NetQueue

NetQueue in ESX/ESXi takes advantage of the ability of some network adapters to deliver network traffic to the system in multiple receive queues that can be processed separately, allowing processing to be scaled to multiple CPUs, improving receive-side networking performance.

For how to configure NetQueue on an ESXi host, see the VMware ESXi Configuration Guide, page 63.

Enable NetQueue on an ESXi Host

NetQueue is enabled by default. To use NetQueue after it has been disabled, you must reenable it.

Prerequisites

Familiarize yourself with the information on configuring NIC drivers in the VMware vSphere Command-Line Interface Installation and Reference guide.

Procedure

  1. In the VMware vSphere CLI, use the command vicfg-advcfg --set true VMkernel.Boot.netNetQueueEnable.
  2. Use the VMware vSphere CLI to configure the NIC driver to use NetQueue.
  3. Reboot the ESXi host.

Disable NetQueue on an ESXi Host

NetQueue is enabled by default.

Procedure

  1. In the VMware vSphere CLI, use the command vicfg-advcfg --set false VMkernel.Boot.netNetQueueEnable.
  2. To disable NetQueue on the NIC driver, use the vicfg-module -s "" <module name> command.
    For example, if you are using the s2io NIC driver, use vicfg-module -s "" s2io.
  3. Reboot the host.

For how to configure NetQueue on an ESX host, see the VMware ESX Configuration Guide, page 65.

Enable NetQueue on an ESX Host

NetQueue is enabled by default. To use NetQueue after it has been disabled, you must reenable it.

Prerequisites

Familiarize yourself with the information on configuring NIC drivers in the VMware vSphere Command-Line Interface Installation and Reference guide.

Procedure

  1. In the VMware vSphere CLI, use the command vicfg-advcfg --set true VMkernel.Boot.netNetQueueEnable.
  2. Use the VMware vSphere CLI to configure the NIC driver to use NetQueue.
  3. Reboot the ESX host.

Disable NetQueue on an ESX Host

NetQueue is enabled by default. A quick way to verify the current value of this setting is shown after this procedure.

Procedure

  1. In the VMware vSphere CLI, use the command vicfg-advcfg --set false VMkernel.Boot.netNetQueueEnable.
  2. To disable NetQueue on the NIC driver, use the vicfg-module -s "" <module name> command.
    For example, if you are using the s2io NIC driver, use vicfg-module -s "" s2io.
  3. Reboot the host.
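To verify the current value of the boot option after enabling or disabling NetQueue, something like this should work with the same vSphere CLI (connection options as in the earlier vicfg examples):

vicfg-advcfg --server esxhost01 --username root --get VMkernel.Boot.netNetQueueEnable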

Configure SNMP

See vSphere Datacenter Administration guide page 129.

Simple Network Management Protocol (SNMP) allows management programs to monitor and control a variety of networked devices.

Managed systems run SNMP agents, which can provide information to a management program in at least one of the following ways:

  • In response to a GET operation, which is a specific request for information from the management system.
  • By sending a trap, which is an alert sent by the SNMP agent to notify the management system of a particular event or condition.

Management Information Base (MIB) files define the information that can be provided by managed devices.

The MIB files contain object identifiers (OIDs) and variables arranged in a hierarchy.

vCenter Server and ESX/ESXi have SNMP agents. The agent provided with each product has differing capabilities.

Configure SNMP for ESX/ESXi

ESX/ESXi includes an SNMP agent embedded in hostd that can both send traps and receive polling requests such as GET requests. This agent is referred to as the embedded SNMP agent.

Versions of ESX prior to ESX 4.0 included a Net-SNMP-based agent. You can continue to use this Net-SNMP-based agent in ESX 4.0 with MIBs supplied by your hardware vendor and other third-party management applications. However, to use the VMware MIB files, you must use the embedded SNMP agent.

By default, the embedded SNMP agent is disabled. To enable it, you must configure it using the vSphere CLI command vicfg-snmp. For a complete reference to vicfg-snmp options, see vSphere Command-Line Interface Installation and Scripting Guide and vSphere Command-Line Interface Reference.

Prerequisites

SNMP configuration for ESX/ESXi requires the vSphere CLI. For information on installing and using the vSphere CLI, see vSphere Command-Line Interface Installation and Scripting Guide and vSphere Command-Line Interface Reference.

Procedure

  1. Configure SNMP Communities on page 131
    Before you enable the ESX/ESXi embedded SNMP agent, you must configure at least one community for the agent.
  2. Configure the SNMP Agent to Send Traps on page 131
    You can use the ESX/ESXi embedded SNMP agent to send virtual machine and environmental traps to management systems. To configure the agent to send traps, you must specify a target address and community.
  3. Configure the SNMP Agent for Polling on page 132
    If you configure the ESX/ESXi embedded SNMP agent for polling, it can listen for and respond to requests from SNMP management client systems, such as GET requests.
  4. Configure SNMP Management Client Software on page 133
    After you have configured a vCenter Server system or an ESX/ESXi host to send traps, you must configure your management client software to receive and interpret those traps.

Configure SNMP Communities

Before you enable the ESX/ESXi embedded SNMP agent, you must configure at least one community for the agent.

An SNMP community defines a group of devices and management systems. Only devices and management systems that are members of the same community can exchange SNMP messages. A device or management system can be a member of multiple communities.

Prerequisites

SNMP configuration for ESX/ESXi requires the vSphere CLI. For information on installing and using the vSphere CLI, see vSphere Command-Line Interface Installation and Scripting Guide and vSphere Command-Line Interface Reference.

Procedure

  • From the vSphere CLI, type
    vicfg-snmp.pl --server hostname --username username --password password -c com1
    Replace com1 with the community name you wish to set. Each time you specify a community with this command, the settings you specify overwrite the previous configuration. To specify multiple communities, separate the community names with a comma.
    For example, to set the communities public and internal on the host host.example.com, you might type vicfg-snmp.pl --server host.example.com --username user --password password -c public,internal.

Configure the SNMP Agent to Send Traps

You can use the ESX/ESXi embedded SNMP agent to send virtual machine and environmental traps to management systems. To configure the agent to send traps, you must specify a target address and community. To send traps with the SNMP agent, you must configure the target (receiver) address, community, and an optional port. If you do not specify a port, the SNMP agent sends traps to UDP port 162 on the target management system by default.

Prerequisites

SNMP configuration for ESX/ESXi requires the vSphere CLI. For information on installing and using the vSphere CLI, see vSphere Command-Line Interface Installation and Scripting Guide and vSphere Command-Line Interface Reference.

Procedure

  1. From the vSphere CLI, type
    vicfg-snmp.pl –server hostname –username username –password password – t target_address@port/community.
    Replace target_address, port, and community with the address of the target system, the port number to send the traps to, and the community name, respectively. Each time you specify a target with this command, the settings you specify overwrite all previously specified settings. To specify multiple targets, separate them with a comma.
    For example, to send SNMP traps from the host host.example.com to port 162 on target.example.com using the public community, type
    vicfg-snmp.pl –server host.example.com –username user –password password –t target.example.com@162/public.
  2. (Optional) Enable the SNMP agent by typing
    vicfg-snmp.pl –server hostname –username username –password password –enable.
  3. (Optional) Send a test trap to verify that the agent is configured correctly by typing
    vicfg-snmp.pl --server hostname --username username --password password --test. The agent sends a warmStart trap to the configured target.

Configure the SNMP Agent for Polling

If you configure the ESX/ESXi embedded SNMP agent for polling, it can listen for and respond to requests from SNMP management client systems, such as GET requests. By default, the embedded SNMP agent listens on UDP port 161 for polling requests from management systems. You can use the vicfg-snmp command to configure an alternative port. To avoid conflicting with other services, use a UDP port that is not defined in /etc/services.

IMPORTANT Both the embedded SNMP agent and the Net-SNMP-based agent available in the ESX service console listen on UDP port 161 by default. If you enable both of these agents for polling on an ESX host, you must change the port used by at least one of them.

Prerequisites

SNMP configuration for ESX/ESXi requires the vSphere CLI. For information on installing and using the vSphere CLI, see vSphere Command-Line Interface Installation and Scripting Guide and vSphere Command-Line Interface Reference.

Procedure

  1. From the vSphere CLI, type vicfg-snmp.pl --server hostname --username username --password password -p port.
    Replace port with the port for the embedded SNMP agent to use for listening for polling requests (an example follows this procedure).
  2. (Optional) If the SNMP agent is not enabled, enable it by typing
    vicfg-snmp.pl --server hostname --username username --password password --enable.
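
For example (the host name and port below are only illustrative), to have the embedded agent on host.example.com listen on UDP port 171 instead of the default 161, you might type vicfg-snmp.pl --server host.example.com --username user --password password -p 171.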

Configure SNMP Settings for vCenter Server

To use SNMP with vCenter Server, you must configure SNMP settings using the vSphere Client.

Prerequisites

To complete the following task, the vSphere Client must be connected to a vCenter Server. In addition, you need the DNS name and IP address of the SNMP receiver, the port number of the receiver, and the community identifier.

Procedure

  1. Select Administration > vCenter Server Settings.
  2. If the vCenter Server is part of a connected group, in Current vCenter Server, select the appropriate server.
  3. Click SNMP in the navigation list.
  4. Enter the following information for the Primary Receiver of the SNMP traps.
  5. (Optional) Enable additional receivers in the Enable Receiver 2, Enable Receiver 3, and Enable Receiver 4 options.
  6. Click OK.
Option descriptions:

  • Receiver URL: The DNS name or IP address of the SNMP receiver.
  • Receiver port: The port number of the receiver to which the SNMP agent sends traps. If the port value is empty, vCenter Server uses the default port, 162.
  • Community: The community identifier.

The vCenter Server system is now ready to send traps to the management system you have specified.

Determine use cases for and apply VMware DirectPath I/O

On the VMware VROOM! Blog there is an article called Performance and Use Cases of VMware DirectPath I/O for Networking. This article explains DirectPath I/O and its performance in more detail.

VMware DirectPath I/O is a technology, available from vSphere 4.0 and higher that leverages hardware support (Intel VT-d and AMD-Vi) to allow guests to directly access hardware devices. In the case of networking, a VM with DirectPath I/O can directly access the physical NIC instead of using an emulated (vlance, e1000) or a para-virtualized (vmxnet, vmxnet3) device. While both para-virtualized devices and DirectPath I/O can sustain high throughput (beyond 10Gbps), DirectPath I/O can additionally save CPU cycles in workloads with very high packet count per second (say > 50k/sec). However, DirectPath I/O does not support many features such as physical NIC sharing, memory overcommit, vMotion and Network I/O Control. Hence, VMware recommends using DirectPath I/O only for workloads with very high packet rates, where CPU savings from DirectPath I/O may be needed to achieve desired performance.

DirectPath I/O for Networking

VMware vSphere 4.x provides three ways for guests to perform network I/O: device emulation, para-virtualization and DirectPath I/O. A virtual machine using DirectPath I/O directly interacts with the network device using its device drivers. The vSphere host (running ESX or ESXi) is only involved in virtualizing interrupts of the network device. In contrast, a virtual machine (VM) using an emulated or para-virtualized device (referred to as virtual NIC or virtualized mode henceforth) interacts with a virtual NIC that is completely controlled by the vSphere host. The vSphere host handles the physical NIC interrupts, processes packets, determines the recipient of the packet and copies them into the destination VM, if needed. The vSphere host also mediates packet transmissions over the physical NIC.

In terms of network throughput, a para-virtualized NIC such as vmxnet3 matches the performance of DirectPath I/O in most cases. This includes being able to transmit or receive 9+ Gbps of TCP traffic with a single virtual NIC connected to a 1-vCPU VM. However, DirectPath I/O has some advantages over virtual NICs such as lower CPU costs (as it bypasses execution of the vSphere network virtualization layer) and the ability to use hardware features that are not yet supported by vSphere, but might be supported by guest drivers (e.g., TCP Offload Engine or SSL offload). In the virtualized mode of operation, the vSphere host completely controls the virtual NIC and hence it can provide a host of useful features such as physical NIC sharing, vMotion and Network I/O Control. By bypassing this virtualization layer, DirectPath I/O trades off virtualization features for potentially lower networking-related CPU costs. Additionally, DirectPath I/O needs memory reservation to ensure that the VM’s memory has not been swapped out when the physical NIC tries to access the VM’s memory.

Source: VMware Configuration Examples and Troubleshooting for VMDirectPath

ESX Host Requirements

VMDirectPath supports a direct device connection for virtual machines running on Intel Xeon 5500 systems, which feature an implementation of the I/O memory management unit (IOMMU) called Virtual Technology for Directed I/O (VT‐d). VMDirectPath can work on AMD platforms with I/O Virtualization Technology (AMD IOMMU), but this configuration is offered as experimental support. Some machines might not have this technology enabled in the BIOS by default. Refer to your hardware documentation to learn how to enable this technology in the BIOS.

Enable or Disable VMDirectPath

Enable or disable VMDirectPath through the hardware advanced settings page of the vSphere Client. Reboot the ESX host after enabling or disabling VMDirectPath. Disable VMDirectPath and reboot the ESX host before removing physical devices.

To find the VMDirectPath Configuration page in the vSphere Client

  1. Select the ESX host from Inventory.
  2. Select the Configuration tab.
  3. Select Advanced Settings under Hardware.

To disable and disconnect the PCI Device

  1. Use the vSphere Client to disable or remove the VMDirectPath configuration.
  2. Reboot the ESX host.
  3. Physically remove the device from the ESX host.

More info about how to enable DirectPath I/O can be found here: http://www.petri.co.il/vmware-esxi4-vmdirectpath.htm

Migrate a vSS network to a Hybrid or Full vDS solution

See VMware document: VMware vNetwork Distributed Switch: Migration and Configuration

Read it to gain a better understanding of vDS and reasoning on why a Hybrid solution may or may not work. This is a good excerpt from the document below, see page 4:

In a hybrid environment featuring a mixture of vNetwork Standard Switches and vNetwork Distributed Switches, VM networking should be migrated to vDS in order to take advantage of Network VMotion. As Service Consoles and VMkernel ports do not migrate from host to host, these can remain on a vSS. However, if you wish to use some of the advanced capabilities of the vDS for these ports, such as Private VLANs or bi-directional traffic shaping, or, team with the same NICs as the VMs (for example, in a two port 10GbE environment), then you will need to migrate all ports to the vDS.

Scaling maximums should be considered when migrating to a vDS. The following virtual network configuration maximums are supported in the release of vSphere 4.1:

  • 350 ESX/ESXi Hosts per vDS
  • 32 Distributed Switches (vDS or Nexus 1000V) per vCenter Server
  • 5000 Static or Dynamic Port groups per vCenter
  • 1016 Ephemeral Port groups per vCenter
  • 20000 Distributed Virtual Switch Ports per vCenter
  • 4096 total vSS and vDS virtual switch ports per host

Note: These configuration maximums are subject to change. Consult the Configuration Maximums for vSphere 4 documents at vmware.com for the most current scaling information. See Configuration Maximums vSphere 4.1

Configure vSS and vDS settings using command line tools

See VMware KB1008127 Configuring vSwitch or vNetwork Distributed Switch from the command line in ESX/ESXi 4.x.

For more information about the command line tools, see chapter 10 of the vSphere Command-Line Interface Installation and Scripting Guide.

Setting Up vSphere Networking with vNetwork Standard Switches

Create or manipulate virtual switches using vicfg-vswitch. By default, each ESX/ESXi host has one virtual switch, vSwitch0.

Make changes to the uplink adapter using vicfg-nics

Use vicfg-vswitch to add port groups to the virtual switch.

Use vicfg-vswitch to establish VLANs by associating port groups with VLAN IDs.

Use vicfg-vmknic to configure the VMkernel network interfaces.
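
As a rough sketch of that workflow (the vSwitch, NIC, port group name, VLAN ID and IP settings below are placeholders, and the --server/--username/--password connection options are omitted for brevity):

vicfg-vswitch -a vSwitch1

vicfg-vswitch -L vmnic1 vSwitch1

vicfg-vswitch -A "VMkernel-Storage" vSwitch1

vicfg-vswitch -v 20 -p "VMkernel-Storage" vSwitch1

vicfg-vmknic -a -i 192.168.1.10 -n 255.255.255.0 "VMkernel-Storage"

This creates the vSwitch, links an uplink NIC to it, adds a port group, tags that port group with VLAN 20 and adds a VMkernel NIC to it. Afterwards, vicfg-vswitch -l can be used to verify the result.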

For more info about the vicfg-vswitch command line see: http://www.vmware.com/support/developer/vcli/vcli41/doc/reference/vicfg-vswitch.html

For more info about the vicfg-nics command line see: http://www.vmware.com/support/developer/vcli/vcli41/doc/reference/vicfg-nics.html

For more info about the vicfg-vmknic command line see: http://www.vmware.com/support/developer/vcli/vcli41/doc/reference/vicfg-vmknic.html


Analyze command line output to identify vSS and vDS configuration details

vicfg-vswitch -l (to get DVSwitch, DVPort, and vmnic names)

esxcfg-vswif -l (get vswif IP address, netmask, dvPort id, etc. ESX Only)
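
Based on the KB article mentioned above (KB1008127), a host's uplink assignment on a vDS can also be adjusted from the local console; a hedged sketch, where the uplink, dvPort ID and switch name are placeholders and the free/used dvPort IDs come from the esxcfg-vswitch -l output:

esxcfg-vswitch -P vmnic2 -V 256 dvSwitch01 (link vmnic2 to dvPort 256 of dvSwitch01)

esxcfg-vswitch -Q vmnic2 -V 256 dvSwitch01 (remove vmnic2 from dvPort 256 again)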

Links

http://www.seancrookston.com/2010/08/30/vcap-dca-objective-2-1-implement-and-manage-complex-virtual-networks/

http://www.kendrickcoleman.com/index.php?/Tech-Blog/vcap-datacenter-administration-exam-landing-page-vdca410.html

http://damiankarlson.com/vcap-dca4-exam/objective-2-1-implement-and-manage-complex-virtual-networks/


Documents and manuals

vSphere Command-Line Interface Installation and Scripting Guide: www.vmware.com/pdf/vsphere4/r41/vsp4_41_vcli_inst_script.pdf

vNetwork Distributed Switch: Migration and Configuration: www.vmware.com/files/pdf/vsphere-vnetwork-ds-migration-configuration-wp.pdf

ESX Configuration Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_esx_server_config.pdf

ESXi Configuration Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_esxi_server_config.pdf

vSphere Command-Line Interface Installation and Scripting Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp4_41_vcli_inst_script.pdf

Video

Carlos Vargas has created a video about this objective see: http://virtual-vargi.blogspot.com/2011/02/vcap-dca-section-2-21.html

Source

VCAP-DCA Objective 1.3 – Configure and Manage Complex Multipathing and PSA Plug-ins

Knowledge
  • Explain the Pluggable Storage Architecture (PSA) layout
Skills and Abilities
  • Install and Configure PSA plug-ins
  • Understand different multipathing policy functionalities
  • Perform command line configuration of multipathing options
  • Change a multipath policy
  • Configure Software iSCSI port binding
Tools
  • vSphere Command-Line Interface Installation and Scripting Guide
  • ESX Configuration Guide
  • ESXi Configuration Guide
  • Fibre Channel SAN Configuration Guide
  • iSCSI SAN Configuration Guide
  • Product Documentation
  • vSphere Client
  • vSphere CLI
    • esxcli

 

Notes

Explain the Pluggable Storage Architecture (PSA) layout

What is the PSA? See VMware KB1011375, What is Pluggable Storage Architecture (PSA) and Native Multipathing (NMP)?

Pluggable Storage Architecture (PSA)

To manage storage multipathing, ESX/ESXi uses a special VMkernel layer, Pluggable Storage Architecture (PSA). The PSA is an open modular framework that coordinates the simultaneous operation of multiple multipathing plugins (MPPs). PSA is a collection of VMkernel APIs that allow third party hardware vendors to insert code directly into the ESX storage I/O path. This allows 3rd party software developers to design their own load balancing techniques and failover mechanisms for a particular storage array. The PSA coordinates the operation of the NMP and any additional 3rd party MPP.

Native Multipathing Plugin (NMP)

The VMkernel multipathing plugin that ESX/ESXi provides, by default, is the VMware Native Multipathing Plugin (NMP). The NMP is an extensible module that manages subplugins. There are two types of NMP subplugins: Storage Array Type Plugins (SATPs), and Path Selection Plugins (PSPs). SATPs and PSPs can be built-in and provided by VMware, or can be provided by a third party.

If more multipathing functionality is required, a third party can also provide an MPP to run in addition to, or as a replacement for, the default NMP.

VMware provides a generic Multipathing Plugin (MPP) called Native Multipathing Plugin (NMP).

What does NMP do?

  • Manages physical path claiming and unclaiming.
  • Registers and de-registers logical devices.
  • Associates physical paths with logical devices.
  • Processes I/O requests to logical devices:
    • Selects an optimal physical path for the request (load balance)
    • Performs actions necessary to handle failures and request retries.
  • Supports management tasks such as abort or reset of logical devices.

 

More information about PSA can be found in VMware document: Fibre Channel SAN Configuration Guide

To manage storage multipathing, ESX/ESXi uses a special VMkernel layer, the Pluggable Storage Architecture (PSA). The PSA is an open, modular framework that coordinates the simultaneous operation of multiple multipathing plug-ins (MPPs).

The VMkernel multipathing plug-in that ESX/ESXi provides by default is the VMware Native Multipathing Plug-In (NMP). The NMP is an extensible module that manages sub plug-ins. There are two types of NMP sub plug-ins, Storage Array Type Plug-Ins (SATPs), and Path Selection Plug-Ins (PSPs). SATPs and PSPs can be built-in and provided by VMware, or can be provided by a third party.

If more multipathing functionality is required, a third party can also provide an MPP to run in addition to, or as a replacement for, the default NMP.

When coordinating the VMware NMP and any installed third-party MPPs, the PSA performs the following tasks:

  • Loads and unloads multipathing plug-ins.
  • Hides virtual machine specifics from a particular plug-in.
  • Routes I/O requests for a specific logical device to the MPP managing that device.
  • Handles I/O queuing to the logical devices.
  • Implements logical device bandwidth sharing between virtual machines.
  • Handles I/O queueing to the physical storage HBAs.
  • Handles physical path discovery and removal.
  • Provides logical device and physical path I/O statistics.

As the figure illustrates, multiple third-party MPPs can run in parallel with the VMware NMP. When installed, the third-party MPPs replace the behavior of the NMP and take complete control of the path failover and the load-balancing operations for specified storage devices.

image

Figure. Pluggable Storage Architecture

The multipathing modules perform the following operations:

  • Manage physical path claiming and unclaiming.
  • Manage creation, registration, and deregistration of logical devices.
  • Associate physical paths with logical devices.
  • Support path failure detection and remediation.
  • Process I/O requests to logical devices:
    • Select an optimal physical path for the request.
    • Depending on a storage device, perform specific actions necessary to handle path failures and I/O command retries.
  • Support management tasks, such as abort or reset of logical devices.

VMware Multipathing Module

By default, ESX/ESXi provides an extensible multipathing module called the Native Multipathing Plug-In (NMP).

Generally, the VMware NMP supports all storage arrays listed on the VMware storage HCL and provides a default path selection algorithm based on the array type. The NMP associates a set of physical paths with a specific storage device, or LUN. The specific details of handling path failover for a given storage array are delegated to a Storage Array Type Plug-In (SATP). The specific details for determining which physical path is used to issue an I/O request to a storage device are handled by a Path Selection Plug-In (PSP). SATPs and PSPs are sub plug-ins within the NMP module.

Upon installation of ESX/ESXi, the appropriate SATP for an array you use will be installed automatically. You do not need to obtain or download any SATPs.

VMware SATPs

Storage Array Type Plug-Ins (SATPs) run in conjunction with the VMware NMP and are responsible for array-specific operations.

ESX/ESXi offers a SATP for every type of array that VMware supports. It also provides default SATPs that support non-specific active-active and ALUA storage arrays, and the local SATP for direct-attached devices.

Each SATP accommodates special characteristics of a certain class of storage arrays and can perform the array-specific operations required to detect path state and to activate an inactive path. As a result, the NMP module itself can work with multiple storage arrays without having to be aware of the storage device specifics.

After the NMP determines which SATP to use for a specific storage device and associates the SATP with the physical paths for that storage device, the SATP implements the tasks that include the following:

  • Monitors the health of each physical path.
  • Reports changes in the state of each physical path.
  • Performs array-specific actions necessary for storage fail-over. For example, for active-passive devices, it can activate passive paths.

VMware PSPs

Path Selection Plug-Ins (PSPs) run with the VMware NMP and are responsible for choosing a physical path for I/O requests.

The VMware NMP assigns a default PSP for each logical device based on the SATP associated with the physical paths for that device. You can override the default PSP.

By default, the VMware NMP supports the following PSPs:

  • Most Recently Used (VMW_PSP_MRU): Selects the path the ESX/ESXi host used most recently to access the given device. If this path becomes unavailable, the host switches to an alternative path and continues to use the new path while it is available. MRU is the default path policy for active-passive arrays.
  • Fixed (VMW_PSP_FIXED): Uses the designated preferred path, if it has been configured. Otherwise, it uses the first working path discovered at system boot time. If the host cannot use the preferred path, it selects a random alternative available path. The host reverts back to the preferred path as soon as that path becomes available. Fixed is the default path policy for active-active arrays. CAUTION: If used with active-passive arrays, the Fixed path policy might cause path thrashing.
  • Fixed AP (VMW_PSP_FIXED_AP): Extends the Fixed functionality to active-passive and ALUA mode arrays.
  • Round Robin (VMW_PSP_RR): Uses a path selection algorithm that rotates through all available active paths, enabling load balancing across the paths.

VMware NMP Flow of I/O

When a virtual machine issues an I/O request to a storage device managed by the NMP, the following process takes place.

  1. The NMP calls the PSP assigned to this storage device.
  2. The PSP selects an appropriate physical path on which to issue the I/O.
  3. The NMP issues the I/O request on the path selected by the PSP.
  4. If the I/O operation is successful, the NMP reports its completion.
  5. If the I/O operation reports an error, the NMP calls the appropriate SATP.
  6. The SATP interprets the I/O command errors and, when appropriate, activates the inactive paths.
  7. The PSP is called to select a new path on which to issue the I/O.

 

On the GeekSilverBlog there is a good article about PSA and NMP, see: http://geeksilver.wordpress.com/2010/08/17/vmware-vsphere-4-1-psa-pluggable-storage-architecture-understanding/

Duncan Epping has written an article on his Yellow-Bricks website. http://www.yellow-bricks.com/2009/03/19/pluggable-storage-architecture-exploring-the-next-version-of-esxvcenter/

Install and Configure PSA plug-ins

This info is already discussed under the other objectives. I found an article with a video on the blog of Eric Sloof, called StarWind iSCSI multi pathing with Round Robin and esxcli.

Another example can be found at the NTG Consult Weblog and is called: vSphere4 ESX4: How to configure iSCSI Software initiator on ESX4 against a HP MSA iSCSI Storage system

 

Understand different multipathing policy functionalities

vStorage Multi Paths Options in vSphere, see: http://blogs.vmware.com/storage/2009/10/vstorage-multi-paths-options-in-vsphere.html (the original post isn't available at this time; the content below was recovered from the Google cache).

Multi Path challenges

In most SAN deployments it is considered best practice to have redundant connections configured between the storage and the server. This often includes redundant host based adaptors, Fibre Channel switches, and controller ports to the storage array. This results in having four or more separate paths connecting the server to the same storage target. These multiple paths offer protection against a single point of failure and permit load balancing.

In a VMware virtualization environment a single datastore can have active IO on only a single path to the underlying storage target (LUN or NFS mountpoint) at a given time. Prior to vSphere, the storage stack in an ESX server could be configured to use any of the available paths, but only one at a time. In the event of the active path failing, the ESX server will detect the failure and fail over to an alternate path.

The two challenges that VMware Native Multi Path addresses have been 1) the presentation of a single path when several are available (aggregation) and 2) the handling of failover and failback when the active path is compromised (failover). In addition, the ability to alternate which path is active was supported through the introduction of round robin path selection in ESX release 3.5. Through the command line one can change the default parameters to have the path switched after a certain number of blocks or a set number of transactions going across an active path. These values are not currently available in the vCenter interface and can only be adjusted via the command line.
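
A hedged sketch of that command-line tuning in vSphere 4.x (the device identifier below is a placeholder; check the current settings before changing anything):

# esxcli nmp roundrobin getconfig --device naa.60060160xxxxxxxxxxxxxxxxxxxxxxxx

# esxcli nmp roundrobin setconfig --device naa.60060160xxxxxxxxxxxxxxxxxxxxxxxx --type iops --iops 1000

The second command switches to the next path after every 1000 I/O operations on the active path; --type bytes together with --bytes can be used instead to switch after a certain amount of data.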

Another issue which the ESX server storage stack has to address is how to treat a given storage array, as some offer active/active (A/A) controllers and others have active/passive (A/P) controllers. In an active/active array a given LUN can be addressed by more than one storage controller at the same time, whereas an active/passive array may access a LUN from one controller at a time and requires a transfer of control to the other controller when access through that controller is needed. To deal with this difference in array capabilities, VMware had to place logic in the code for each type of storage array supported. That logic defined what capabilities the array had with regard to active/active or active/passive behavior, as well as other attributes.

Path selection and failback options include Fixed, Most Recently Used and Round Robin. The Fixed path policy is recommended for A/A arrays. It allows the user to configure static load balancing of paths across ESX hosts. The MRU policy is recommended for A/P arrays. It prevents the possibility of path thrashing due to a partitioned SAN environment. Round Robin may be configured for either A/A or A/P arrays, but care must be taken to ensure that A/P arrays are not configured to automatically switch controllers. If A/P arrays are configured to automatically switch controllers (sometimes called pseudo A/A), performance may be degraded due to path ping-ponging.

vStorage API for MultiPathing

Pluggable storage architecture (PSA) was introduced in vSphere 4.0 to improve code modularisation and code encapsulation, and to enable 3rd party storage vendors' multipathing software to be leveraged in the ESX server. It provides support for the storage devices on the existing ESX storage HCL through the use of Storage Array Type Plugins (SATP) for the new Native Multipathing Plugin (NMP) module. It also introduces an API for 3rd party storage vendors to build and certify multipath modules to be plugged into the VMware storage stack.

In addition to the VMware NMP, 3rd party storage vendors can create their own pluggable storage modules: they can replace the SATP that VMware offers, or write a Path Selection Plugin (PSP) or MultiPath Plugin (MPP) that leverages their own storage array specific intelligence for increased performance and availability.

As with earlier versions of ESX server, the VI admin can set certain path selection preferences through vCenter. The path selection options of Fixed, MRU and Round Robin still exist in vSphere. The default value, set by the SATP that matches the storage array, can be changed, even if that is not advised as a best practice. With a 3rd party SATP, the vendor can set defaults that are optimized for their array. Through this interface, ALUA became an option that could be supported.

The VMware Native MultiPath module has sub-plugins for failover and load balancing.

  • Storage Array Type Plug-in (SATP) to handle failover and the
  • Path Selection Plug-in (PSP) to handle load-balancing.
    • NMP “associates” a SATP with the set of paths from a given type of array.
    • NMP “associates” a PSP with a logical device.
    • NMP specifies a default PSP for every logical device based on the SATP associated with the physical paths for that device.
    • NMP allows the default PSP for a device to be overridden.

 

Rules can be associated with the NMP as well as with 3rd party provided MPP modules. Those rules are stored in the /etc/vmware/esx.conf file.

Rules govern the operation of both the NMP and the MPP modules.

In ESX 4.0 rules are configurable only through the CLI and not through vCenter.
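
For example (the rule number, adapter and channel/target/LUN values below are placeholders), a new claim rule can be added and activated from the vSphere CLI like this:

# esxcli corestorage claimrule add --rule 110 -t location -A vmhba2 -C 0 -T 0 -L 4 -P MASK_PATH

# esxcli corestorage claimrule load

The first command defines the rule, the second loads it into the VMkernel; the list command below can then be used to verify it.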

To see the list of rules defined use the following command:

# esxcli corestorage claimrule list

To see the list of rules associated with a certain SATP use:

# esxcli nmp satp listrules -s <specific SATP>

The primary functions of an SATP are to:

  • Implement the switching of physical paths to the array when a path has failed.
  • Determine when a hardware component of a physical path has failed.
  • Monitor the hardware state of the physical paths to the storage array.

VMware provides a default SATP for each supported array as well as a generic SATP (an active/active version and an active/passive version) for non-specified storage arrays.

To see the complete list of defined Storage Array Type Plugins (SATP) for the VMware Native Multipath Plugin (NMP), use the following commands:

# esxcli nmp satp list

Name Default PSP Description

VMW_SATP_ALUA_CX VMW_PSP_FIXED Supports EMC CX that use the ALUA protocol

VMW_SATP_SVC VMW_PSP_FIXED Supports IBM SVC

VMW_SATP_MSA VMW_PSP_MRU Supports HP MSA

VMW_SATP_EQL VMW_PSP_FIXED Supports EqualLogic arrays

VMW_SATP_INV VMW_PSP_FIXED Supports EMC Invista

VMW_SATP_SYMM VMW_PSP_FIXED Supports EMC Symmetrix

VMW_SATP_LSI VMW_PSP_MRU Supports LSI and other arrays compatible with the SIS 6.10 in non-AVT mode

VMW_SATP_EVA VMW_PSP_FIXED Supports HP EVA

VMW_SATP_DEFAULT_AP VMW_PSP_MRU Supports non-specific active/passive arrays

VMW_SATP_CX VMW_PSP_MRU Supports EMC CX that do not use the ALUA protocol

VMW_SATP_ALUA VMW_PSP_MRU Supports non-specific arrays that use the ALUA protocol

VMW_SATP_DEFAULT_AA VMW_PSP_FIXED Supports non-specific active/active arrays

VMW_SATP_LOCAL VMW_PSP_FIXED Supports direct attached devices

To take advantage of certain storage specific characteristics of an array, one can install a 3rd party SATP provided by the vendor of the storage array, or by a software company specializing in optimizing the use of your storage array.

Path Selection Plug-in (PSP).

The primary function of the PSP module is to determine which physical path is to be used for an I/O request to a storage device. The PSP is a sub-plug-in to the NMP and handles load balancing. VMware offers three PSPs, which are described below.

To see a complete list of Path Selection Plugins (PSP) use the following command:

# esxcli nmp psp list

Name Description

VMW_PSP_MRU Most Recently Used Path Selection

VMW_PSP_RR Round Robin Path Selection

VMW_PSP_FIXED Fixed Path Selection

Fixed — Uses the designated preferred path, if it has been configured. Otherwise, it uses the first working path discovered at system boot time. If the ESX host cannot use the preferred path, it selects a random alternative available path. The ESX host automatically reverts back to the preferred path as soon as the path becomes available.

Most Recently Used (MRU) — Uses the first working path discovered at system boot time. If this path becomes unavailable, the ESX host switches to an alternative path and continues to use the new path while it is available.

Round Robin (RR) – Uses an automatic path selection rotating through all available paths and enabling load balancing across the paths. It only uses active paths and is of most use on Active/Active arrays; in Active/Passive arrays, it will load balance between ports on the same Storage Processor. Note that the Round Robin policy is not supported in MSCS environments.

3rd party vendors can provide their own PSP to take advantage of more complex I/O load balancing algorithms that their storage array might offer. However, 3rd party storage vendors are not required to create plug-ins. All the storage array specific code supported in ESX release 3.x has been ported to ESX release 4 as a SATP plug-in.

MultiPath Plugins (MPP) or Multipath Enhancement Module (MEM)

Multipath Plugins (also referred to as Multipath Enhancement Modules, or MEMs) enable storage partners to leverage intelligence within their array to present one connection that is backed by an aggregation of several connections. Using the intelligence in the array with their own vendor module in the ESX server, these modules can increase performance with load balancing and increase the availability of the multiple connections. These modules offer the most resilient and highly available multipathing for VMware ESX servers by providing coordinated path management between both the ESX server and the storage.

An MPP “claims” a physical path and “manages” or “exports” a logical device. The MPP is the only code that can associate a physical path with a logical device, so a path cannot be managed by both an NMP and an MPP at the same time.

The PSA introduces support for two more features of the SCSI 3 (SPC-3) protocol: TPGS and ALUA.

However, although SCSI-3 is supported for some VMFS functions (LSI emulation for MSCS environments) the volume manager functions within VMFS are still SCSI-2 compliant. As such, the 2TB per LUN limit still applies.

Target Port Group Support (TPGS) – A mechanism that gives storage devices the ability to specify path performance and other characteristics to a host, such as an ESX server. A Target Port Group (TPG) is defined as a set of target ports that are in the same target port Asymmetric Access State (AAS) at all times with respect to a given LUN.

Hosts can use AAS of a TPG to prioritize paths and make failover and load balancing decisions.

Control and management of ALUA AASs can operate in 1 of 4 defined modes:

  1. Not Supported (Report and Set TPGs commands invalid).
  2. Implicit (TPG AASs are set and managed by the array only, and reported with the Report TPGs command)
  3. Explicit (TPG AASs are set and managed by the host only with the Set TPGs command, and reported with the Report TPGs command)
  4. Both (TPG AASs can be set and managed by either the array or the host)

Asymmetric Logical Unit Access (ALUA) is a standard-based method for discovering and managing multiple paths to a SCSI LUN. It is described in the T10 SCSI-3 specification SPC-3, section 5.8. It provides a standard way to allow devices to report the states of their respective target ports to hosts. Hosts can prioritize paths and make failover/load balancing decisions.

Since target ports could be on different physical units, ALUA allows different levels of access for target ports to each LUN. ALUA will route I/O to a particular port to achieve best performance.

ALUA Follow-over Feature:

A follow-over scheme is implemented to minimize path thrashing in cluster environments with shared LUNs. Follow-over is supported for explicit ALUA only and is not part of the ALUA standard.

When an ESX host detects a TPG AAS change that it did not cause:

  • It will not try to revert this change even if it only has access to non-optimized paths
  • Thus, it follows the TPG of the array

For more information see two blogposts at VirtualAusterity.

Perform command line configuration of multipathing options

See VMware KB 1003973, Obtaining LUN pathing information for ESX hosts.

To obtain LUN multipathing information from the ESX host command line:

  1. Log in to the ESX host console.
  2. Type esxcfg-mpath -l and press Enter.

The output appears similar to:

Runtime Name: vmhba1:C0:T0:L0

Device: naa.6006016095101200d2ca9f57c8c2de11

Device Display Name: DGC Fibre Channel Disk (naa.6006016095101200d2ca9f57c8c2de11)

Adapter: vmhba1 Channel: 0 Target: 0 LUN: 0

Adapter Identifier: fc.2000001b32865b73:2100001b32865b73

Target Identifier: fc.50060160b020f2d9:500601603020f2d9

Plugin: NMP

State: active

Transport: fc

Adapter Transport Details: WWNN: 20:00:00:1b:32:86:5b:73 WWPN: 21:00:00:1b:32:86:5b:73

Target Transport Details: WWNN: 50:06:01:60:b0:20:f2:d9 WWPN: 50:06:01:60:30:20:f2:d9

To list all paths with abbreviated information:

# esxcfg-mpath -L

vmhba2:C0:T0:L0 state:active naa.6006016043201700d67a179ab32fdc11 vmhba2 0 0 0 NMP active san fc.2000001b32017d07:2100001b32017d07 fc.50060160b021b9df:500601603021b9df

vmhba35:C0:T0:L0 state:active naa.6000eb36d830c008000000000000001c vmhba35 0 0 0 NMP active san iqn.1998-01.com.vmware:cs-tse-f116-6a88c8f1 00023d000001,iqn.2003-10.com.lefthandnetworks:vi40:28:vol0,t,1

List all Paths with adapter and device mappings.

# esxcfg-mpath -m

vmhba2:C0:T0:L0 vmhba2 fc.2000001b32017d07:2100001b32017d07 fc.50060160b021b9df:500601603021b9df naa.6006016043201700d67a179ab32fdc11

vmhba35:C0:T0:L0 vmhba35 iqn.1998-01.com.vmware:cs-tse-f116-6a88c8f1 00023d000001,iqn.2003-10.com.lefthandnetworks:vi40:28:vol0,t,1 naa.6000eb36d830c008000000000000001c

List all devices with their corresponding paths.

# esxcfg-mpath -b

naa.6000eb36d830c008000000000000001c

vmhba35:C0:T0:L0

naa.6006016043201700f008a62dec36dc11

vmhba2:C0:T1:L2

vmhba2:C0:T0:L2

Configuring multipath settings for your storage in vSphere Client

To configure multipath settings for your storage in vSphere Client:

  1. Click Storage.
  2. Select a datastore or mapped LUN.
  3. Click Properties.
  4. In the Properties dialog, select the desired extent, if necessary.
  5. Click Extent Device > Manage Paths and configure the paths in the Manage Path dialog.

Change a multipath policy

Generally, you do not have to change the default multipathing settings your host uses for a specific storage device. However, if you want to make any changes, you can use the Manage Paths dialog box to modify a path selection policy and specify the preferred path for the Fixed policy.

Procedure

  1. Open the Manage Paths dialog box either from the Datastores or Devices view.
  2. Select a path selection policy. By default, VMware supports the following path selection policies. If you have a third-party PSP installed on your host, its policy also appears on the list.
    • Fixed (VMW_PSP_FIXED)
    • Fixed AP (VMW_PSP_FIXED_AP)
    • Most Recently Used (VMW_PSP_MRU)
    • Round Robin (VMW_PSP_RR)
  3. For the fixed policy, specify the preferred path by right-clicking the path you want to assign as the preferred path, and selecting Preferred.
  4. Click OK to save your settings and exit the dialog box.
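
The same change can also be made from the vSphere CLI; a hedged sketch (the device identifier below is a placeholder):

# esxcli nmp device setpolicy --device naa.60060160xxxxxxxxxxxxxxxxxxxxxxxx --psp VMW_PSP_RR

# esxcli nmp device list --device naa.60060160xxxxxxxxxxxxxxxxxxxxxxxx

The second command shows the device's current Path Selection Policy so the change can be verified.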

Disable Paths

You can temporarily disable paths for maintenance or other reasons. You can do so using the vSphere Client.

Procedure

  1. Open the Manage Paths dialog box either from the Datastores or Devices view.
  2. In the Paths panel, right-click the path to disable, and select Disable.
  3. Click OK to save your settings and exit the dialog box.

You can also disable a path from the adapter’s Paths view by right-clicking the path in the list and selecting Disable.

Configure Software iSCSI port binding

See: iSCSI SAN Configuration Guide.

With the software-based iSCSI implementation, you can use standard NICs to connect your host to a remote iSCSI target on the IP network. The software iSCSI adapter that is built into ESX/ESXi facilitates this connection by communicating with the physical NICs through the network stack.

When you connect to a vCenter Server or a host with the vSphere Client, you can see the software iSCSI adapter on the list of your storage adapters. Only one software iSCSI adapter appears. Before you can use the software iSCSI adapter, you must set up networking, enable the adapter, and configure parameters such as discovery addresses and CHAP. The software iSCSI adapter configuration workflow includes these steps:

  1. Configure the iSCSI networking by creating ports for iSCSI traffic.
  2. Enable the software iSCSI adapter.
  3. If you use multiple NICs for the software iSCSI multipathing, perform the port binding by connecting all iSCSI ports to the software iSCSI adapter.
  4. If needed, enable Jumbo Frames. Jumbo Frames must be enabled for each vSwitch through the vSphere CLI (a command example follows this list).
  5. Configure discovery addresses.
  6. Configure CHAP parameters.
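
For step 4, jumbo frames are enabled per vSwitch by raising its MTU from the vSphere CLI; a hedged example (the vSwitch name is a placeholder and the connection options are omitted):

vicfg-vswitch -m 9000 vSwitch1

vicfg-vswitch -l

The second command lists the vSwitches so the new MTU value can be verified.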

Configure the iSCSI networking by creating ports for iSCSI traffic

If you use the software iSCSI adapter or dependent hardware iSCSI adapters, you must set up the networking for iSCSI before you can enable and configure your iSCSI adapters. Networking configuration for iSCSI involves opening a VMkernel iSCSI port for the traffic between the iSCSI adapter and the physical NIC. Depending on the number of physical NICs you use for iSCSI traffic, the networking setup can be different.

  • If you have a single physical NIC, create one iSCSI port on a vSwitch connected to the NIC. VMware recommends that you designate a separate network adapter for iSCSI. Do not use iSCSI on 100Mbps or slower adapters.
  • If you have two or more physical NICs for iSCSI, create a separate iSCSI port for each physical NIC and use the NICs for iSCSI multipathing.

image

Create iSCSI Port for a Single NIC

Use this task to connect the VMkernel, which runs services for iSCSI storage, to a physical NIC. If you have just one physical network adapter to be used for iSCSI traffic, this is the only procedure you must perform to set up your iSCSI networking.

Procedure

  1. Log in to the vSphere Client and select the host from the inventory panel.
  2. Click the Configuration tab and click Networking.
  3. In the Virtual Switch view, click Add Networking.
  4. Select VMkernel and click Next.
  5. Select Create a virtual switch to create a new vSwitch.
  6. Select a NIC you want to use for iSCSI traffic.
    IMPORTANT If you are creating a port for the dependent hardware iSCSI adapter, make sure to select the NIC that corresponds to the iSCSI component.
  7. Click Next.
  8. Enter a network label. Network label is a friendly name that identifies the VMkernel port that you are creating, for example, iSCSI.
  9. Click Next.
  10. Specify the IP settings and click Next.
  11. Review the information and click Finish.

 

Using Multiple NICs for Software and Dependent Hardware iSCSI

If your host has more than one physical NIC for iSCSI, create a separate iSCSI port for each physical NIC, using 1:1 mapping.

To achieve the 1:1 mapping, designate a separate vSwitch for each network adapter and iSCSI port pair.

image

An alternative is to add all NIC and iSCSI port pairs to a single vSwitch. You must override the default setup and make sure that each port maps to only one corresponding active NIC.

image

After you map iSCSI ports to network adapters, use the esxcli command to bind the ports to the iSCSI adapters.

With dependent hardware iSCSI adapters, perform port binding, whether you use one NIC or multiple NICs.

Create Additional iSCSI Ports for Multiple NICs

Use this task if you have two or more NICs you can designate for iSCSI and you want to connect all of your iSCSI NICs to a single vSwitch. In this task, you associate VMkernel iSCSI ports with the network adapters using 1:1 mapping.

You now need to connect additional NICs to the existing vSwitch and map them to corresponding iSCSI ports.

NOTE If you use a vNetwork Distributed Switch with multiple dvUplinks, for port binding, create a separate dvPort group per each physical NIC. Then set the team policy so that each dvPort group has only one active dvUplink.

For detailed information on vNetwork Distributed Switches, see the Networking section of the ESX/ESXi Configuration Guide.

Prerequisites

You must create a vSwitch that maps an iSCSI port to a physical NIC designated for iSCSI traffic.

Procedure

  1. Log in to the vSphere Client and select the host from the inventory panel.
  2. Click the Configuration tab and click Networking.
  3. Select the vSwitch that you use for iSCSI and click Properties.
  4. Connect additional network adapters to the vSwitch.
    • In the vSwitch Properties dialog box, click the Network Adapters tab and click Add.
    • Select one or more NICs from the list and click Next. With dependent hardware iSCSI adapters, make sure to select only those NICs that have a corresponding iSCSI component.
    • Review the information on the Adapter Summary page, and click Finish. The list of network adapters reappears, showing the network adapters that the vSwitch now claims.
  5. Create iSCSI ports for all NICs that you connected. The number of iSCSI ports must correspond to the number of NICs on the vSwitch.
    • In the vSwitch Properties dialog box, click the Ports tab and click Add.
    • Select VMkernel and click Next.
    • Under Port Group Properties, enter a network label, for example iSCSI, and click Next.
    • Specify the IP settings and click Next. When you enter subnet mask, make sure that the NIC is set to the subnet of the storage system it connects to.
    • Review the information and click Finish. CAUTION If the NIC you use with your iSCSI adapter, either software or dependent hardware, is not in the same subnet as your iSCSI target, your host is not able to establish sessions from this network adapter to the target.
  6. Map each iSCSI port to just one active NIC. By default, for each iSCSI port on the vSwitch, all network adapters appear as active. You must override this setup, so that each port maps to only one corresponding active NIC. For example, iSCSI port vmk1 maps to vmnic1, port vmk2 maps to vmnic2, and so on.
    • On the Ports tab, select an iSCSI port and click Edit.
    • Click the NIC Teaming tab and select Override vSwitch failover order.
    • Designate only one adapter as active and move all remaining adapters to the Unused Adapters category.
  7. Repeat the last step for each iSCSI port on the vSwitch.

What to do next

After performing this task, use the esxcli command to bind the iSCSI ports to the software iSCSI or dependent hardware iSCSI adapters.

Bind iSCSI Ports to iSCSI Adapters

Bind an iSCSI port that you created for a NIC to an iSCSI adapter. With the software iSCSI adapter, perform this task only if you set up two or more NICs for the iSCSI multipathing. If you use dependent hardware iSCSI adapters, the task is required regardless of whether you have multiple adapters or one adapter.

Procedure

image

  1. Identify the name of the iSCSI port assigned to the physical NIC. The vSphere Client displays the port’s name below the network label. In the following graphic, the ports’ names are vmk1 and vmk2.
  2. Use the vSphere CLI command to bind the iSCSI port to the iSCSI adapter.
    esxcli swiscsi nic add -n port_name -d vmhba
    IMPORTANT For software iSCSI, repeat this command for each iSCSI port connecting all ports with the software iSCSI adapter. With dependent hardware iSCSI, make sure to bind each port to an appropriate corresponding adapter.
  3. Verify that the port was added to the iSCSI adapter.
    esxcli swiscsi nic list -d vmhba
  4. Use the vSphere Client to rescan the iSCSI adapter.

 

Binding iSCSI Ports to iSCSI Adapters

Review examples about how to bind multiple ports that you created for physical NICs to the software iSCSI adapter or multiple dependent hardware iSCSI adapters.

Example 1. Connecting iSCSI Ports to the Software iSCSI Adapter

This example shows how to connect the iSCSI ports vmk1 and vmk2 to the software iSCSI adapter vmhba33.

  1. Connect vmk1 to vmhba33: esxcli swiscsi nic add -n vmk1 -d vmhba33.
  2. Connect vmk2 to vmhba33: esxcli swiscsi nic add -n vmk2 -d vmhba33.
  3. Verify vmhba33 configuration: esxcli swiscsi nic list -d vmhba33.
    Both vmk1 and vmk2 should be listed.

If you display the Paths view for the vmhba33 adapter through the vSphere Client, you see that the adapter uses two paths to access the same target. The runtime names of the paths are vmhba33:C1:T1:L0 and vmhba33:C2:T1:L0. C1 and C2 in this example indicate the two network adapters that are used for multipathing.

Example 2. Connecting iSCSI Ports to Dependent Hardware iSCSI Adapters

This example shows how to connect the iSCSI ports vmk1 and vmk2 to the corresponding hardware iSCSI adapters vmhba33 and vmhba34.

  1. Connect vmk1 to vmhba33: esxcli swiscsi nic add -n vmk1 -d vmhba33.
  2. Connect vmk2 to vmhba34: esxcli swiscsi nic add -n vmk2 -d vmhba34.
  3. Verify vmhba33 configuration: esxcli swiscsi nic list -d vmhba33.
  4. Verify vmhba34 configuration: esxcli swiscsi nic list -d vmhba34.

Disconnect iSCSI Ports from iSCSI Adapters

If you need to make changes in the networking configuration that you use for iSCSI traffic, for example, remove a NIC or an iSCSI port, make sure to disconnect the iSCSI port from the iSCSI adapter.

IMPORTANT If active iSCSI sessions exist between your host and targets, you cannot disconnect the iSCSI port.

Procedure

  1. Use the vSphere CLI to disconnect the iSCSI port from the iSCSI adapter.
    esxcli swiscsi nic remove -n port_name -d vmhba
  2. Verify that the port was disconnected from the iSCSI adapter.
    esxcli swiscsi nic list -d vmhba
  3. Use the vSphere Client to rescan the iSCSI adapter.

 

Links

http://www.seancrookston.com/2010/09/12/vcap-dca-objective-1-3-configure-and-manage-complex-multipathing-and-psa-plug-ins-2/

http://www.kendrickcoleman.com/index.php?/Tech-Blog/vcap-datacenter-administration-exam-landing-page-vdca410.html

Documents and manuals

vSphere Command-Line Interface Installation and Scripting Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp4_41_vcli_inst_script.pdf

ESX Configuration Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_esx_server_config.pdf

ESXi Configuration Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_esxi_server_config.pdf

Fibre Channel SAN Configuration Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_san_cfg.pdf

iSCSI SAN Configuration Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_iscsi_san_cfg.pdf

Source

If there are things missing or incorrect please let me know.

VCAP-DCA Objective 1.2 – Manage Storage Capacity in a vSphere Environment

Knowledge
  • Identify storage provisioning methods
  • Identify available storage monitoring tools, metrics and alarms
Skills and Abilities
  • Apply space utilization data to manage storage resources
  • Provision and manage storage resources according to Virtual Machine requirements
  • Understand interactions between virtual storage provisioning and physical storage provisioning
  • Apply VMware storage best practices
  • Configure datastore alarms
  • Analyze datastore alarms and errors to determine space availability
Tools
  • vSphere Datacenter Administration Guide
  • Fibre Channel SAN Configuration Guide
  • iSCSI SAN Configuration Guide
  • vSphere Command-Line Interface Installation and Scripting Guide
  • Product Documentation
  • vSphere Client
  • vSphere CLI
    • vmkfstools
Notes

Identify storage provisioning methods

There are two methods for provisioning storage.

  • Thin provisioning
  • Thick (FAT) provisioning

What is a thin provisioned disk?

When creating a virtual disk file, by default, VMware ESX uses a thick type of virtual disk. The thick disk pre-allocates all of the space specified during the creation of the disk. For example, if you create a 10 megabyte disk, all 10 megabytes are pre-allocated for that virtual disk. In contrast, a thin virtual disk does not pre-allocate all of the space. Blocks in the VMDK file are not allocated and backed by physical storage until they are written during the normal course of operation. A read of an unallocated block returns zeroes, but the block is not backed by physical storage until it is written. See VMware KB 1005418.
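
As a small illustration (the datastore path, folder and size below are placeholders), a thin provisioned disk can be created from the command line with vmkfstools:

vmkfstools -c 10g -d thin /vmfs/volumes/datastore1/testvm/testvm_1.vmdk

The -d option selects the disk format; using zeroedthick or eagerzeroedthick instead would create a thick disk.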

Considerations

The following are some considerations when implementing thin provisioning in your VMware environment:

  • Thin provisioned disks can grow to the full size specified at the time of virtual disk creation, but do not shrink. Once the blocks have been allocated, they cannot be un-allocated.
  • By implementing thin provisioned disks, you are able to over-allocate storage. If storage is over allocated, thin virtual disks can grow to fill an entire datastore if left unchecked.
  • VMware ESX 3.x is not aware of thin provisioning when reporting disk space usage using VMware Infrastructure Client and VirtualCenter.
  • VMware ESX 4.x is aware of thin provisioning in the form of the storage views plugin for vCenter.
  • When a thin provisioned disk grows, the VMFS metadata must be locked in order to make changes to a file. The VMware ESX host must make a SCSI reservation to make these changes. For more information about SCSI reservations, see Analysing SCSI Reservation conflicts on VMware Infrastructure 3.x and vSphere 4.x (1005009).
  • In order for a guest operating system to make use of a virtual disk, the guest operating system must first partition and format the disk to a file system it can recognize. Depending on the type of format selected within the guest operating system, the format may cause the thin provisioned disk to grow to a full size.

For example, if you present a thin provisioned disk to a Microsoft Windows operating system and format the disk, unless you explicitly select the Quick Format option, the Microsoft Windows format tool writes information to all of the sectors on the disk, which in turn inflates the thin provisioned disk.

The performance of a thin disk is the same as that of a thick disk, according to the document Performance Study of VMware vStorage Thin Provisioning; see http://www.vmware.com/pdf/vsp_4_thinprov_perf.pdf

 

 

Identify available storage monitoring tools, metrics and alarms

Storage can be monitored with a few tools from VMware; there are also a few nice ones from 3rd party software companies like Veeam or Vizioncore.

The VMware tools are:

  • ESXTOP and RESXTOP
  • vscsiStats
  • VMware vCenter
  • VMware vSphere Client.
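
As a quick pointer for the first of these (the host name below is a placeholder), resxtop can be started remotely and switched to its storage views with single keystrokes:

resxtop --server host.example.com

Once connected, press d for the storage adapter view, u for the storage device view and v for the virtual machine storage view.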

Eric Siebert has written an article about VMware vSphere’s built-in performance monitoring tools; see his article at http://searchvmware.techtarget.com/tip/VMware-vSpheres-built-in-performance-monitoring-tools

VMware Knowledgebase article 1008205 describes how to use ESXTOP and/or RESXTOP to identify performance issues. See http://kb.vmware.com/kb/1008205

For some more info about vscsiStats, see the Yellow-Bricks article http://www.yellow-bricks.com/2009/12/17/vscsistats/

There is also a special VMware training available, called VMware vSphere: Skills for Storage Administrators. See the course datasheet at http://mylearn.vmware.com/mgrreg/courses.cfm?ui=www_edu&a=det&id_course=85301

Scott Sauer has created an article about performance troubleshooting on storage; see http://www.virtualinsanity.com/index.php/2010/03/16/performance-troubleshooting-vmware-vsphere-storage/ He also discusses some tools used on different levels in the VMware infrastructure.

 

Apply space utilization data to manage storage resources

In general you never want to have less than 20% space free. Other than that, studying should be focused on how to check these statistics. Source: Sean Crookston's VCAP-DCA Study Guide.

For how to check these statistics, see the other study objectives in this chapter. One method is setting alarms on the datastores in vCenter.
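
Free space on a VMFS datastore can also be checked quickly from the command line; a hedged example (the datastore name is a placeholder):

vmkfstools -P /vmfs/volumes/datastore1

The -P (query) option reports, among other things, the capacity and free space of the volume.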

Jeremy Waldrop has written a blog article about the new alarms that are available in vCenter 4. See http://jeremywaldrop.wordpress.com/2010/01/24/vmware-vsphere-vcenter-storage-alarms/

 

Provision and manage storage resources according to Virtual Machine requirements

The blog Simple-talk.com published an article about Virtual Machine Storage Provisioning and best practices. See http://www.simple-talk.com/sysadmin/virtualization/virtual-machine-storage-provisioning-and-best-practises/

VMware has also created a document called Storage Workload Characterization and Consolidation in Virtualized Environments. You can find it here: http://www.vmware.com/files/pdf/partners/academic/vpact-workloads.pdf

Microsoft has a document on analysing storage performance; see http://blogs.technet.com/b/cotw/archive/2009/03/18/analyzing-storage-performance.aspx

Understand interactions between virtual storage provisioning and physical storage provisioning

Source: http://www.seancrookston.com/2010/09/12/vcap-dca-objective-1-2-manage-storage-capacity-in-a-vsphere-environment-2/

When you thin provision virtual machines you must account for the possibility of these virtual machines growing. It is common nowadays to overprovision storage with thin provisioning, and the risk is that you could run out of physical storage as a result. This is a very good use case for alarms in vCenter.

Additionally, the physical storage provisioned will affect the performance of the guest. Read the other topics on storage already covered in objective 1.1 to understand the different RAID levels and how they can affect performance.

Apply VMware storage best practices

Source: http://www.vmware.com/technical-resources/virtual-storage/best-practices.html

Many of the best practices for physical storage environments also apply to virtual storage environments. It is best to keep in mind the following rules of thumb when configuring your virtual storage infrastructure:

 

Configure and size storage resources for optimal I/O performance first, then for storage capacity.

This means that you should consider throughput capability and not just capacity. Imagine a very large parking lot with only one lane of traffic for an exit. Regardless of capacity, throughput is affected. It’s critical to take into consideration the size and storage resources necessary to handle your volume of traffic—as well as the total capacity.

Aggregate application I/O requirements for the environment and size them accordingly.

As you consolidate multiple workloads onto a set of ESX servers that have a shared pool of storage, don’t exceed the total throughput capacity of that storage resource. Looking at the throughput characterization of the physical environment prior to virtualization can help you predict what throughput each workload will generate in the virtual environment.

Base your storage choices on your I/O workload.

Use an aggregation of the measured workload to determine what protocol, redundancy protection and array features to use, rather than using an estimate. The best results come from measuring your applications’ I/O throughput and capacity for a period of several days prior to moving them to a virtualized environment.

Remember that pooling storage resources increases utilization and simplifies management, but can lead to contention.

There are significant benefits to pooling storage resources, including increased storage resource utilization and ease of management. However, at times, heavy workloads can have an impact on performance. It’s a good idea to use a shared VMFS volume for most virtual disks, but consider placing heavy I/O virtual disks on a dedicated VMFS volume or an RDM to reduce the effects of contention.

VMware document: Dynamic Storage Provisioning: Considerations and Best Practices for Using Virtual Disk Thin Provisioning

VMware document: Best Practices for Running vSphere on NFS Storage

 

 

Configure datastore alarms

See the vSphere Datacenter Administration Guide, chapter 13, Working with Alarms: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_dc_admin_guide.pdf

Jeremy Waldrop has written a blog article about the new alarms that are available in vCenter 4. See http://jeremywaldrop.wordpress.com/2010/01/24/vmware-vsphere-vcenter-storage-alarms/

 

 

Analyse datastore alarms and errors to determine space availability

There are several posts on the VMware Communities site covering this topic. See the links below.

 

Links

http://www.seancrookston.com/2010/09/12/vcap-dca-objective-1-2-manage-storage-capacity-in-a-vsphere-environment-2/

http://www.kendrickcoleman.com/index.php?/Tech-Blog/vcap-datacenter-administration-exam-landing-page-vdca410.html

Documents and manuals

vSphere Datacenter Administration Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_dc_admin_guide.pdf

Fibre Channel SAN Configuration Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_san_cfg.pdf

iSCSI SAN Configuration Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_iscsi_san_cfg.pdf

vSphere Command-Line Interface Installation and Scripting Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp4_41_vcli_inst_script.pdf

Sources

 

If there are things missing or incorrect please let me know.

VCAP-DCA Objective 1.1 – Implement and Manage Complex Storage Solutions

Knowledge
  • Identify RAID levels
  • Identify supported HBA types
  • Identify virtual disk format types
Skills and Abilities
  • Determine use cases for and configure VMware DirectPath I/O
  • Determine requirements for and configure NPIV
  • Determine appropriate RAID level for various Virtual Machine workloads
  • Apply VMware storage best practices
  • Understand use cases for Raw Device Mapping
  • Configure vCenter Server storage filters
  • Understand and apply VMFS resignaturing
  • Understand and apply LUN masking using PSA-related commands
  • Analyze I/O workloads to determine storage performance requirements
Tools
  • Fibre Channel SAN Configuration Guide
  • iSCSI SAN Configuration Guide
  • ESX Configuration Guide
  • ESXi Configuration Guide
  • vSphere Command-Line Interface Installation and Scripting Guide
  • Product Documentation
  • vSphere Client
  • vscsiStats
  • vSphere CLI
    • vicfg-*
    • vifs
    • vmkfstools
    • esxtop/resxtop
Notes.

Identify RAID levels.

Following is a brief textual summary of the most commonly used RAID levels. (Source: http://en.wikipedia.org/wiki/RAID)

  • RAID 0 (block-level striping without parity or mirroring) has no (or zero) redundancy. It provides improved performance and additional storage but no fault tolerance. Hence simple stripe sets are normally referred to as RAID 0. Any disk failure destroys the array, and the likelihood of failure increases with more disks in the array (at a minimum, catastrophic data loss is twice as likely compared to single drives without RAID). A single disk failure destroys the entire array because when data is written to a RAID 0 volume, the data is broken into fragments called blocks. The number of blocks is dictated by the stripe size, which is a configuration parameter of the array. The blocks are written to their respective disks simultaneously on the same sector. This allows smaller sections of the entire chunk of data to be read off the drive in parallel, increasing bandwidth. RAID 0 does not implement error checking, so any error is uncorrectable. More disks in the array means higher bandwidth, but greater risk of data loss.
  • In RAID 1 (mirroring without parity or striping), data is written identically to multiple disks (a “mirrored set”). Although many implementations create sets of 2 disks, sets may contain 3 or more disks. The array provides fault tolerance from disk errors or failures and continues to operate as long as at least one drive in the mirrored set is functioning. With appropriate operating system support, there can be increased read performance, and only a minimal write performance reduction. Using RAID 1 with a separate controller for each disk is sometimes called duplexing.
  • In RAID 2 (bit-level striping with dedicated Hamming-code parity), all disk spindle rotation is synchronized, and data is striped such that each sequential bit is on a different disk. Hamming-code parity is calculated across corresponding bits on disks and stored on one or more parity disks. Extremely high data transfer rates are possible.
  • In RAID 3 (byte-level striping with dedicated parity), all disk spindle rotation is synchronized, and data is striped such that each sequential byte is on a different disk. Parity is calculated across corresponding bytes on disks and stored on a dedicated parity disk. Very high data transfer rates are possible.
  • RAID 4 (block-level striping with dedicated parity) is identical to RAID 5 (see below), but confines all parity data to a single disk, which can create a performance bottleneck. In this setup, files can be distributed between multiple disks. Each disk operates independently which allows I/O requests to be performed in parallel, though data transfer speeds can suffer due to the type of parity. The error detection is achieved through dedicated parity and is stored in a separate, single disk unit.
  • RAID 5 (block-level striping with distributed parity) distributes parity along with the data and requires all drives but one to be present to operate; drive failure requires replacement, but the array is not destroyed by a single drive failure. Upon drive failure, any subsequent reads can be calculated from the distributed parity such that the drive failure is masked from the end user. The array will have data loss in the event of a second drive failure and is vulnerable until the data that was on the failed drive is rebuilt onto a replacement drive. A single drive failure in the set will result in reduced performance of the entire set until the failed drive has been replaced and rebuilt.
  • RAID 6 (block-level striping with double distributed parity) provides fault tolerance from two drive failures; array continues to operate with up to two failed drives. This makes larger RAID groups more practical, especially for high-availability systems. This becomes increasingly important as large-capacity drives lengthen the time needed to recover from the failure of a single drive. Single-parity RAID levels are as vulnerable to data loss as a RAID 0 array until the failed drive is replaced and its data rebuilt; the larger the drive, the longer the rebuild will take. Double parity gives time to rebuild the array without the data being at risk if a single additional drive fails before the rebuild is complete.
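As a rough capacity illustration: eight 600 GB disks give about 4.8 TB raw, roughly 4.2 TB usable in RAID 5 (one disk’s worth of parity), 3.6 TB in RAID 6 (two disks’ worth of parity) and 2.4 TB in RAID 10 (half the disks are mirrors), all before formatting overhead.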

Identify supported HBA types

There is a VMware document about all the supported HBA types. See the Storage/SAN Compatibility Guide on the VMware website. http://www.vmware.com/resources/compatibility/pdf/vi_san_guide.pdf

Identify virtual disk format types

The supported disk formats in ESX and ESXi are as follows (source: VMware KB article 1022242):
  • zeroedthick (default) – Space required for the virtual disk is allocated during the creation of the disk file. Any data remaining on the physical device is not erased during creation, but is zeroed out on demand at a later time on first write from the virtual machine. The virtual machine does not read stale data from disk.
  • eagerzeroedthick – Space required for the virtual disk is allocated at creation time. In contrast to zeroedthick format, the data remaining on the physical device is zeroed out during creation. It might take much longer to create disks in this format than to create other types of disks.
  • thick – Space required for the virtual disk is allocated during creation. This type of formatting does not zero out any old data that might be present on this allocated space. A non-root user cannot create disks of this format.
  • thin – Space required for the virtual disk is not allocated during creation, but is supplied and zeroed out, on demand at a later time.
  • rdm – Virtual compatibility mode for raw disk mapping.
  • rdmp – Physical compatibility mode (pass-through) for raw disk mapping.
  • raw – Raw device.
  • 2gbsparse – A sparse disk with 2GB maximum extent size. You can use disks in this format with other VMware products; however, you cannot power on a sparse disk on an ESX host until you re-import the disk with vmkfstools in a compatible format, such as thick or thin.
  • monosparse – A monolithic sparse disk. You can use disks in this format with other VMware products.
  • monoflat – A monolithic flat disk. You can use disks in this format with other VMware products.
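Most of these formats can be created or converted from the CLI with vmkfstools; a minimal sketch, where the sizes and paths are just examples:

# Create a 10 GB thin provisioned disk
vmkfstools -c 10g -d thin /vmfs/volumes/datastore1/testvm/testvm_1.vmdk

# Clone an existing disk into eagerzeroedthick format
vmkfstools -i /vmfs/volumes/datastore1/testvm/testvm.vmdk /vmfs/volumes/datastore1/testvm/testvm_ezt.vmdk -d eagerzeroedthick

# Inflate an existing thin disk to eagerzeroedthick in place
vmkfstools -j /vmfs/volumes/datastore1/testvm/testvm_1.vmdk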

Determine use cases for and configure VMware DirectPath I/O

VMDirectPath allows guest operating systems to directly access an I/O device, bypassing the virtualization layer. This direct path, or passthrough, can improve performance for VMware ESX and VMware ESXi systems that utilize high‐speed I/O devices, such as 10 Gigabit Ethernet.

ESX Host Requirements

VMDirectPath supports a direct device connection for virtual machines running on Intel Xeon 5500 systems, which feature an implementation of the I/O memory management unit (IOMMU) called Intel Virtualization Technology for Directed I/O (VT‐d). VMDirectPath can work on AMD platforms with I/O Virtualization Technology (AMD IOMMU), but this configuration is offered as experimental support. Some machines might not have this technology enabled in the BIOS by default. Refer to your hardware documentation to learn how to enable this technology in the BIOS.

Enable or Disable VMDirectPath

Enable or disable VMDirectPath through the hardware advanced settings page of the vSphere Client. Reboot the ESX host after enabling or disabling VMDirectPath.

Disable VMDirectPath and reboot the ESX host before removing physical devices.

To find the VMDirectPath Configuration page in the vSphere Client

1. Select the ESX host from Inventory.

2. Select the Configuration tab.

3. Select Advanced Settings under Hardware.

To disable and disconnect the PCI Device

1. Use the vSphere Client to disable or remove the VMDirectPath configuration.

2. Reboot the ESX host.

3. Physically remove the device from the ESX host.

For more information see the VMware document Configuration Examples and Troubleshooting for VMDirectPath.

See also VMware KB article 1010789 Configuring VMDirectPath I/O pass-through devices on an ESX host.

For more information see Petri IT Knowledgebase http://www.petri.co.il/vmware-esxi4-vmdirectpath.htm

Determine requirements for and configure NPIV

This information is from a blog article made by Simon Long. See http://www.simonlong.co.uk/blog/2009/07/27/npiv-support-in-vmware-esx4/

What does NPIV do? NPIV is a useful Fibre Channel feature which allows a physical HBA (Host Bus Adapter) to have multiple Node Ports. Normally, a physical HBA would have only 1 N_Port ID. The use of NPIV enables you to have multiple unique N_Port IDs per physical HBA. NPIV can be used by ESX4 to allow more Fibre Channel connections than the maximum physical allowance, which is currently 8 HBAs per host or 16 HBA ports per host.

What are the Advantages of using NPIV?

  • Standard storage management methodology across physical and virtual servers.
  • Portability of access privileges during VM migration.
  • Fabric performance, as NPIV provides quality of service (QoS) and prioritization for ensured VM-level bandwidth assignment.
  • Auditable data security due to zoning (one server, one zone).

Can NPIV be used with VMware ESX4? Yes! But NPIV can only be used with RDM disks and will not work for virtual disks. VMs with regular virtual disks use the WWNs of the host’s physical HBAs. To use NPIV with ESX4 you need the following:

  • FC Switches that are used to access storage must be NPIV-Aware.
  • The ESX Server host’s physical HBAs must support NPIV.

How does NPIV work with VMware ESX4? When NPIV is enabled on a virtual machine, 8 WWN (Worldwide Name) pairs (WWPN (Port) & WWNN (Node)) are specified for that VM at creation. Once the VM has been powered on, the VMkernel initiates a VPORT (Virtual Port) on the physical HBA, which is used to access the Fibre Channel network. Once the VPORT is ready, the VM uses each of these WWN pairs in sequence to try to discover an access path to the Fibre Channel network.

A VPORT appears to the FC network as a physical HBA because of its unique WWNs, but an assigned VPORT is removed from the ESX host when the VM is powered off.

How is NPIV configured in VMware ESX4?

Before you try to enable NPIV on a VM, the VM must have an RDM added. If your VM does not, the NPIV options are greyed out and you will see a warning.

For a document with some screenshots on an ESX4 server see the Brocade Technical Brief: How to Configure NPIV on VMware vSphere 4.0

http://www.brocade.com/downloads/documents/white_papers/white_papers_partners/NPIV_ESX4_0_GA-TB-145-01.pdf

Determine appropriate RAID level for various Virtual Machine workloads

There are some very interesting sites and documents to check out.

There is a VMware document about the best practices for VMFS, http://www.vmware.com/pdf/vmfs-best-practices-wp.pdf

The Yellow Bricks Blog article IOps?, http://www.yellow-bricks.com/2009/12/23/iops/

The VMToday blog article about Storage Basics – Part VI: Storage Workload Characterization, http://vmtoday.com/2010/04/storage-basics-part-vi-storage-workload-characterization/

All these blogs provide information about how to select the best RAID level for Virtual Machine workloads.
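As a rough worked example, assuming about 175 IOPS per 15k disk and the usual write penalties (RAID 10 = 2, RAID 5 = 4, RAID 6 = 6): eight disks deliver about 1400 raw IOPS; with a 70/30 read/write workload that comes down to roughly 1400 / (0.7 + 0.3 × 4) ≈ 735 front-end IOPS on RAID 5, versus roughly 1400 / (0.7 + 0.3 × 2) ≈ 1075 on RAID 10. The Yellow Bricks IOps? article above walks through this kind of calculation in more detail.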

Apply VMware storage best practices

Best Practices for Configuring Virtual Storage. Source http://www.vmware.com/technical-resources/virtual-storage/best-practices.html

Many of the best practices for physical storage environments also apply to virtual storage environments. It is best to keep in mind the following rules of thumb when configuring your virtual storage infrastructure:

Configure and size storage resources for optimal I/O performance first, then for storage capacity.

This means that you should consider throughput capability and not just capacity. Imagine a very large parking lot with only one lane of traffic for an exit. Regardless of capacity, throughput is affected. It’s critical to take into consideration the size and storage resources necessary to handle your volume of traffic—as well as the total capacity.

Aggregate application I/O requirements for the environment and size them accordingly.

As you consolidate multiple workloads onto a set of ESX servers that have a shared pool of storage, don’t exceed the total throughput capacity of that storage resource. Looking at the throughput characterization of the physical environment prior to virtualization can help you predict what throughput each workload will generate in the virtual environment.

Base your storage choices on your I/O workload.

Use an aggregation of the measured workload to determine what protocol, redundancy protection and array features to use, rather than using an estimate. The best results come from measuring your applications’ I/O throughput and capacity for a period of several days prior to moving them to a virtualized environment.

Remember that pooling storage resources increases utilization and simplifies management, but can lead to contention.

There are significant benefits to pooling storage resources, including increased storage resource utilization and ease of management. However, at times, heavy workloads can have an impact on performance. It’s a good idea to use a shared VMFS volume for most virtual disks, but consider placing heavy I/O virtual disks on a dedicated VMFS volume or an RDM to reduce the effects of contention.

There is also a VMware document named Performance Best Practices for VMware vSphere 4.1

http://www.vmware.com/pdf/Perf_Best_Practices_vSphere4.1.pdf (page 11 describes some storage considerations).

Understand use cases for Raw Device Mapping

The source for this article is http://www.douglaspsmith.com/home/2010/7/18/understand-use-cases-for-raw-device-mapping.html

Raw device mapping (RDM) is a method for a VM to have direct access to a LUN on a Fibre Channel or iSCSI system. RDM is a mapping file in a separate VMFS volume that acts as a proxy for a raw physical storage device. The RDM allows a virtual machine to directly access and use the storage device. The RDM contains metadata for managing and redirecting disk access to the physical device.

RDM offers several benefits:

  • User-Friendly Persistent Names
  • Dynamic Name Resolution
  • Distributed File Locking
  • File Permissions
  • File System Operations
  • Snapshots
  • vMotion
  • SAN Management Agents
  • N-Port ID Virtualization

Certain limitations exist when you use RDMs:

  • Not available for block devices or certain RAID devices
  • Available with VMFS-2 and VMFS-3 volumes only
  • No snapshots in physical compatibility mode
  • No partition mapping

You need to use raw LUNs with RDMs in the following situations:

  • When SAN snapshot or other layered applications are run in the virtual machine. The RDM better enables scalable backup offloading systems by using features inherent to the SAN.
  • In any MSCS clustering scenario that spans physical hosts — virtual-to-virtual clusters as well as physical-to-virtual clusters. In this case, cluster data and quorum disks should be configured as RDMs rather than as files on a shared VMFS.

Configure vCenter Server storage filters

The information was gathered from the ESX Configuration Guide.

vCenter Server provides storage filters to help you avoid storage device corruption or performance degradation that can be caused by an unsupported use of LUNs. These filters are available by default:

  • VMFS Filter – Filters out storage devices, or LUNs, that are already used by a VMFS datastore on any host managed by vCenter Server.
  • RDM Filter – Filters out LUNs that are already referenced by an RDM on any host managed by vCenter Server.
  • Same Host and Transports Filter – Filters out LUNs ineligible for use as VMFS datastore extents because of host or storage type incompatibility.
  • Host Rescan Filter – Automatically rescans and updates VMFS datastores after you perform datastore management operations.

Procedure

1. In the vSphere Client, select Administration > vCenter Server Settings.

2. In the settings list, select Advanced Settings.

3. In the Key text box, type a key.

config.vpxd.filter.vmfsFilter -> VMFS Filter
config.vpxd.filter.rdmFilter -> RDM Filter
config.vpxd.filter.SameHostAndTransportsFilter -> Same Host and Transports Filter
config.vpxd.filter.hostRescanFilter -> Host Rescan Filter

4. In the Value text box, type False for the specified key.

5. Click Add.

6. Click OK.

For a more background info see the Yellow Bricks blog article Storage Filters. http://www.yellow-bricks.com/2010/08/11/storage-filters/

Currently 4 filters have been made public:

  • config.vpxd.filter.hostRescanFilter
  • config.vpxd.filter.vmfsFilter
  • config.vpxd.filter.rdmFilter
  • config.vpxd.filter.SameHostAndTransportsFilter

The “Host Rescan Filter” makes it possible to disable the automatic storage rescan that occurs on all hosts after a VMFS volume has been created. The reason you might want to disable it is when you are adding multiple volumes and want to avoid multiple rescans, initiating just a single rescan after you create your final volume. By setting “config.vpxd.filter.hostRescanFilter” to false the automatic rescan is disabled. In short, the steps needed are:

1. Open up the vSphere Client

2. Go to Administration -> vCenter Server

3. Go to Settings -> Advanced Settings

4. If the key “config.vpxd.filter.hostRescanFilter” is not available, add it and set it to false

To be honest, this is the only storage filter I would personally recommend changing from its default. For instance, setting “config.vpxd.filter.rdmFilter” to “false” will enable you to add a LUN as an RDM to a VM while that LUN is already used as an RDM by a different VM. That can be useful in very specific situations, such as when MSCS is used, but in general it should be avoided as data could be corrupted when the wrong LUN is selected.

The filter “config.vpxd.filter.vmfsFilter” can be compared to the RDM filter: when set to false it enables you to overwrite a VMFS volume with VMFS or re-use it as an RDM. Again, not something I would recommend, as it could lead to loss of data, which has a serious impact on any organization.

Same goes for “config.vpxd.filter.SameHostAndTransportsFilter”. When it is set to “False” you can actually add an “incompatible LUN” as an extent to an existing volume. An example of an incompatible LUN is a LUN which is not presented to all hosts that have access to the VMFS volume it will be added to. To be honest, I can’t really think of a single reason to change the default on this setting besides troubleshooting, but it is good to know it is there.

Each of the storage filters has its specific use case. In general, disabling the storage filters should be avoided, except for “config.vpxd.filter.hostRescanFilter”, which has proven to be useful to disable in specific situations.

Understand and apply VMFS resignaturing

The information was gathered from the ESX Configuration Guide.

Resignaturing VMFS Copies.

Use datastore resignaturing to retain the data stored on the VMFS datastore copy. When resignaturing a VMFS copy, ESX assigns a new UUID and a new label to the copy, and mounts the copy as a datastore distinct from the original.

The default format of the new label assigned to the datastore is snap-snapID-oldLabel, where snapID is an integer and oldLabel is the label of the original datastore.

When you perform datastore resignaturing, consider the following points:

  • Datastore resignaturing is irreversible.
  • The LUN copy that contains the VMFS datastore that you resignature is no longer treated as a LUN copy.
  • A spanned datastore can be resignatured only if all its extents are online.
  • The resignaturing process is crash and fault tolerant. If the process is interrupted, you can resume it later.
  • You can mount the new VMFS datastore without a risk of its UUID colliding with UUIDs of any other datastore, such as an ancestor or child in a hierarchy of LUN snapshots.

Resignature a VMFS Datastore Copy.

Use datastore resignaturing if you want to retain the data stored on the VMFS datastore copy.

To resignature a mounted datastore copy, first unmount it. Before you resignature a VMFS datastore, perform a storage rescan on your host so that the host updates its view of LUNs presented to it and discovers any LUN copies.

Procedure

1. Log in to the vSphere Client and select the server from the inventory panel.

2. Click the Configuration tab and click Storage in the Hardware panel.

3. Click Add Storage.

4. Select the Disk/LUN storage type and click Next.

5. From the list of LUNs, select the LUN that has a datastore name displayed in the VMFS Label column and click Next. The name present in the VMFS Label column indicates that the LUN is a copy that contains a copy of an existing VMFS datastore.

6. Under Mount Options, select Assign a New Signature and click Next.

7. In the Ready to Complete page, review the datastore configuration information and click Finish.
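As an alternative to the Add Storage wizard, snapshot volumes can also be handled from the command line with esxcfg-volume (vicfg-volume in the vSphere CLI); a minimal sketch, where the label is just an example:

# List volumes that have been detected as snapshots or replicas
esxcfg-volume -l

# Resignature the detected copy
esxcfg-volume -r datastore1

# Or mount it persistently while keeping the existing signature
esxcfg-volume -M datastore1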

Understand and apply LUN masking using PSA-related commands

What is Lun Masking?

LUN (Logical Unit Number) Masking is an authorization process that makes a LUN available to some hosts and unavailable to other hosts.

LUN Masking is implemented primarily at the HBA (Host Bus Adapter) level. LUN Masking implemented at this level is vulnerable to any attack that compromises the HBA.

Some storage controllers also support LUN Masking.

LUN Masking is important because Windows based servers attempt to write volume labels to all available LUNs. This can render the LUNs unusable by other operating systems and can result in data loss.

A Blogpost from Jason Langer describes this. See http://virtuallanger.wordpress.com/2010/10/08/understand-and-apply-lun-masking-using-psa-related-commands/

Masking Paths will allow you to prevent an ESX/ESXi host from accessing storage devices or LUNs or from using individual paths to a LUN. When you mask paths, you create claim rules that assign the MASK_PATH plug-in to the specified paths. Use the vSphere CLI commands to mask the paths.

Look at the Multipath Plug-ins currently installed on your ESX/ESXi host:

esxcfg-mpath -G

The output indicates that there are, at a minimum, 2 plug-ins: the VMware Native Multipath Plug-in (NMP) and the MASK_PATH plug-in, which is used for masking LUNs.

List all the claimrules currently on the ESX/ESXi host:

esxcli corestorage claimrule list

There are two MASK_PATH entries: one of class runtime and the other of class file. The runtime is the rules currently running in the PSA. The file is a reference to the rules defined in /etc/vmware/esx.conf. These are identical, but they could be different if you are in the process of modifying the /etc/vmware/esx.conf.

Add a rule to hide the LUN with the command

esxcli corestorage claimrule add --rule <number> -t location -A <hba_adapter> -C <channel> -T <target> -L <lun> -P MASK_PATH

Note: Use the esxcfg-mpath -b and esxcfg-scsidevs -l commands to identify disk and LUN information

Verify that the rule has taken with the command:

esxcli corestorage claimrule list

Reload your claim rules so the new rule becomes active:

esxcli corestorage claimrule load

Re-examine your claim rules and verify that you can see both the file and runtime class:

esxcli corestorage claimrule list

Unclaim all paths to a device and then run the loaded claimrules on each of the paths to reclaim them:

esxcli corestorage claiming reclaim -d <naa.id>

Verify that the masked device is no longer used by the ESX/ESXi host:

esxcfg-scsidevs -m

The masked datastore does not appear in the list

To verify that a masked LUN is no longer an active device:

esxcfg-mpath -L | grep <naa.id>

Empty output indicates that the LUN is not active
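Putting it all together, a minimal sketch with hypothetical values (the rule number, adapter, channel/target/LUN and naa ID below are examples only):

esxcli corestorage claimrule add --rule 192 -t location -A vmhba2 -C 0 -T 1 -L 20 -P MASK_PATH

esxcli corestorage claimrule load

esxcli corestorage claimrule list

esxcli corestorage claiming reclaim -d naa.6006016055711d00cef95e65664ee011

esxcfg-scsidevs -m | grep naa.6006016055711d00cef95e65664ee011

If the masking worked, the final grep returns no output for that naa ID.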

Source: VMware KB 1009449 and KB 1014953

For an overview of PSA and commands, see VMware vSphere 4.1 PSA at http://geeksilver.wordpress.com/2010/08/17/vmware-vsphere-4-1-psa-pluggable-storage-architecture-understanding/

Analyze I/O workloads to determine storage performance requirements

There is a document from VMware called Storage Workload Characterization and Consolidation in Virtualized Environments, with a lot of info.

Josh Townsend created a series of blog posts (Storage Basics) on his blog VMToday about everything you need to know about storage.

Links

http://www.seancrookston.com/2010/09/12/vcap-dca-objective-1-1-implement-and-manage-complex-storage-solutions-2/

http://www.kendrickcoleman.com/index.php?/Tech-Blog/vcap-datacenter-administration-exam-landing-page-vdca410.html

Documents and manuals

Fibre Channel SAN Configuration Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_san_cfg.pdf

iSCSI SAN Configuration Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_iscsi_san_cfg.pdf

ESX Configuration Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_esx_server_config.pdf

ESXi Configuration Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_esxi_server_config.pdf

vSphere Command-Line Interface Installation and Scripting Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp4_41_vcli_inst_script.pdf

Sources

 

If there are things missing or incorrect please let me know.

ESX and ESXi logfile locations

Location of ESX log files

You can see ESX logs:
  • From the Service Console
  • From the vSphere Client connected directly to the ESX host (click Home > Administration > System Logs)
  • From the VMware Infrastructure Client connected directly to the ESX host (click Administration > System Logs)
The vmkernel logs (which log everything related to the kernel/core of the ESX) are located at /var/log/vmkernel
The vmkwarning logs (which log warnings from the vmkernel) are located at /var/log/vmkwarning
The vmksummary logs (which provide a summary of system activities such as uptime, downtime, reasons for downtime) are located at /var/log/vmksummary
The hostd log (the log of the ESX management service) is located at /var/log/vmware/hostd.log
The messages log (which log activity on the Service Console operating system) is located at /var/log/messages
The VirtualCenter Agent log is located at /var/log/vmware/vpx/vpxa.log
The Automatic Availability Manager (AAM) logs are located at /var/log/vmware/aam/vmware_<hostname>-xxx.log
The SW iSCSI logs are located at /var/log/vmkiscsid.log
The System boot log is located at /var/log/boot-logs/sysboot.log
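To follow these logs live from the Service Console you can simply tail or grep them; a minimal sketch:

# Follow the vmkernel log in real time
tail -f /var/log/vmkernel

# Search the vmkernel warnings for storage related messages (the pattern is just an example)
grep -i scsi /var/log/vmkwarning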

Additional Information

For related information, see the main article in this series, Location of log files for VMware products (1021806).

Location of ESXi log files

The VMkernel, vmkwarning, and hostd logs are located at /var/log/messages
The Host Management service (hostd = Host daemon) log is located at /var/log/vmware/hostd.log
The VirtualCenter Agent log is located at /var/log/vmware/vpx/vpxa.log
The System boot log is located at /var/log/sysboot.log
The Automatic Availability Manager (AAM) logs are located at /var/log/vmware/aam/vmware_<hostname>-xxx.log
Note: The logs on an ESXi host can be rolled over and/or removed after an ESXi host reboot. VMware recommends configuring the ESXi host with a syslog server. For more information on syslog server configuration, see your product version's Basic System Administration Guide.
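To point an ESXi host at a remote syslog server, the vSphere CLI includes vicfg-syslog; a minimal sketch, assuming a syslog host named syslog.example.local (replace the connection options and host names with your own):

vicfg-syslog --server esxi01.example.local --username root --setserver syslog.example.local --setport 514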


Free holiday gift from Veeam

Today at the Dutch VMUG, Veeam announced they are giving VCPs, vExperts and VCIs a FREE two-socket license of Veeam Backup & Replication v5 with vPower and/or the Veeam ONE Solution, which incorporates Veeam Monitor Plus and Business View. Of course, as these are NFR licenses, they can only be used for non-production!

To claim your free license pop over to Veeam’s website and register.

http://www.veeam.com/nfr/free-nfr-license

For more info on the why, see the Veeam blog: http://www.veeam.com/blog/happy-holidays-from-veeamnfrs-for-vcps-vcis-and-vexperts.html

Many Thanks Veeam!