VCAP-DCA Objective 2.1 – Implement and Manage Complex Virtual Networks

Knowledge
  • Identify common virtual switch configurations
Skills and Abilities
  • Determine use cases for and apply IPv6
  • Configure NetQueue
  • Configure SNMP
  • Determine use cases for and apply VMware DirectPath I/O
  • Migrate a vSS network to a Hybrid or Full vDS solution
  • Configure vSS and vDS settings using command line tools
  • Analyze command line output to identify vSS and vDS configuration details
Tools
  • vSphere Command-Line Interface Installation and Scripting Guide
  • vNetwork Distributed Switch: Migration and Configuration
  • ESX Configuration Guide
  • ESXi Configuration Guide
  • Product Documentation
  • vSphere Client
  • vSphere CLI
    • vicfg-*
Notes

Identify common virtual switch configurations

See VMware document: VMware Virtual Networking Concepts.

See VMware document: VMware Networking Best Practices.

See VMware document: What’s new in VMware vSphere 4: Virtual Networking. 

Kendrick Coleman has published some sample vSwitch designs for hosts with different numbers of network cards.

  • Design with 6 network cards.
  • Design with 10 network cards.
  • Design with 12 network cards.

Determine use cases for and apply IPv6

See VMware KB article 1021769, Configuring IPv6 with ESX and ESXi 4.1.

In vSphere 4.1, IPv6 is disabled by default for the COS and VMkernel on ESX, and for the VMkernel on ESXi.

You can enable IPv6 for the COS and VMkernel from the command line and from Networking Properties.

To enable IPv6 from the command line:

  1. Run this command:
    • For ESX 4.1 – # esxcfg-vswif -6 true
    • For ESXi 4.1 – # esxcfg-vmknic -6 true
  2. Reboot the host. (A quick verification step is shown below.)
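
After the reboot, you can verify the change by listing the VMkernel NICs; with IPv6 enabled, their IPv6 addresses appear in the listing. A quick check, assuming a standard 4.1 installation:

  # esxcfg-vmknic -l (or vicfg-vmknic -l from the vSphere CLI)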

To enable IPv6 from Networking Properties:

  1. In vCenter Server, select the host, click Configuration > Networking > Properties.
  2. Select Enable IPv6 support on this host system.
  3. Click OK.

IPsec

Internet Protocol Security (IPsec) is used for secure communication and is included in the TCP/IP protocol stack. Secure communication between two peers is based on cryptographic algorithms with public or pre-shared keys. To establish secure communication, keys must be exchanged manually or automatically:

  • Manual key exchange (pre-shared) – keys are exchanged through a separate communication channel
  • Automated key exchange – public keys are exchanged during the initialization sequence over the same channel

Notes:

  • ESX supports Internet Key Exchange Version 2 (IKEv2) for IPv4. IKEv2 is not supported for IPv6.
  • IPv6 in ESX supports IPsec with manual keying only.

A new set of commands (esxcfg-ipsec and vicfg-ipsec) provides an interface to configure IPsec properties:

  • The command # esxcfg-ipsec -h gives further information
  • To add a Security Association (SA), run the command:
    # esxcfg-ipsec --add-sa --sa-src x:x:x:: --sa-dst x:x:x:: --sa-mode transport --ealgo null --spi 0x200 --ialgo hmac-sha1 --ikey key saname
  • To add a Security Policy (SP), run the command:
    # esxcfg-ipsec --add-sp --sp-src x:x::/x --sp-dst x:x::/x --src-port 100 --dst-port 200 --ulproto tcp --dir out --action ipsec --sp-mode transport --sa-name saname spname
  • To add a generic SP with default options, run the command:
    # esxcfg-ipsec --add-sp --sp-src any --sp-dst any --src-port any --dst-port any --ulproto any --dir out --action ipsec --sp-mode transport --sa-name saname spname
  • To add an SP that acts as a firewall rule (discarding matching traffic), run the command:
    # esxcfg-ipsec --add-sp --sp-src x:x::/x --sp-dst x:x::/x --src-port 100 --dst-port 200 --ulproto tcp --dir out --action discard spname
  • To delete an SA, run the command:
    # esxcfg-ipsec --remove-sa saname
  • To delete an SP, run the command:
    # esxcfg-ipsec --remove-sp spname
  • To flush all SPs, run the command:
    # esxcfg-ipsec --flush-sp
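
Putting these together, a minimal worked example that creates an SA and a matching outbound SP (the addresses, the key, and the names sa1/sp1 are illustrative values, not defaults):

  # esxcfg-ipsec --add-sa --sa-src 2001:db8::1 --sa-dst 2001:db8::2 --sa-mode transport --ealgo null --spi 0x200 --ialgo hmac-sha1 --ikey 0x0102030405060708090a0b0c0d0e0f1011121314 sa1
  # esxcfg-ipsec --add-sp --sp-src 2001:db8::/64 --sp-dst 2001:db8::/64 --src-port any --dst-port any --ulproto any --dir out --action ipsec --sp-mode transport --sa-name sa1 sp1

Remember that with manual keying the peer host needs a matching SA and SP (with the direction reversed) before traffic will flow.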

Note: IPsec cannot be configured through the vSphere Client.

See also VMware KB article 1010812, Configuring IPv6 on ESX 4.0.x.

IPv6 support in ESX:

  • ESX 3.5 supports virtual machines configured for IPv6.
  • ESX 4.0 supports IPv6 with the following restrictions:
    • IPv6 Storage (software iSCSI and NFS) is experimental in ESX 4.0.
    • ESX does not support TCP Segmentation Offload (TSO) with IPv6.
    • VMware High Availability and Fault Tolerance do not support IPv6.

Guest operating systems.

  • Windows Vista and Windows Server 2008 fully support IPv6.
  • Windows 2003 SP1 and Windows XP SP2 have the infrastructure for IPv6, but components of the system and applications are not fully compliant. For more information, see http://technet.microsoft.com/en-us/library/cc776103.aspx
  • Linux kernel version 2.6 is fully compliant.

Configure NetQueue

NetQueue in ESX/ESXi takes advantage of the ability of some network adapters to deliver network traffic to the system in multiple receive queues that can be processed separately. This allows processing to be scaled across multiple CPUs, improving receive-side networking performance.

For instructions on configuring NetQueue on an ESXi host, see the VMware ESXi Configuration Guide, page 63.

Enable NetQueue on an ESXi Host

NetQueue is enabled by default. To use NetQueue after it has been disabled, you must reenable it.

Prerequisites

Familiarize yourself with the information on configuring NIC drivers in the VMware vSphere Command-Line Interface Installation and Reference guide.

Procedure

  1. In the VMware vSphere CLI, use the command vicfg-advcfg --set true VMkernel.Boot.netNetQueueEnable (a full remote invocation is shown below the procedure).
  2. Use the VMware vSphere CLI to configure the NIC driver to use NetQueue.
  3. Reboot the ESXi host.
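
For reference, a complete remote invocation of step 1 from the vSphere CLI would look something like this (hostname and credentials are placeholders):

  vicfg-advcfg --server hostname --username username --password password --set true VMkernel.Boot.netNetQueueEnable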

Disable NetQueue on an ESXi Host

NetQueue is enabled by default.

Procedure

  1. In the VMware vSphere CLI, use the command vicfg-advcfg --set false VMkernel.Boot.netNetQueueEnable.
  2. To disable NetQueue on the NIC driver, use the vicfg-module -s "" module_name command.
    For example, if you are using the s2io NIC driver, use vicfg-module -s "" s2io.
  3. Reboot the host.

For instructions on configuring NetQueue on an ESX host, see the VMware ESX Configuration Guide, page 65. The enable and disable procedures are identical to the ESXi steps above.

Configure SNMP

See the vSphere Datacenter Administration Guide, page 129.

Simple Network Management Protocol (SNMP) allows management programs to monitor and control a variety of networked devices.

Managed systems run SNMP agents, which can provide information to a management program in at least one of the following ways:

  • In response to a GET operation, which is a specific request for information from the management system.
  • By sending a trap, which is an alert sent by the SNMP agent to notify the management system of a particular event or condition.

Management Information Base (MIB) files define the information that can be provided by managed devices.

The MIB files contain object identifiers (OIDs) and variables arranged in a hierarchy.

vCenter Server and ESX/ESXi have SNMP agents. The agent provided with each product has differing capabilities.

Configure SNMP for ESX/ESXi

ESX/ESXi includes an SNMP agent embedded in hostd that can both send traps and receive polling requests such as GET requests. This agent is referred to as the embedded SNMP agent.

Versions of ESX prior to ESX 4.0 included a Net-SNMP-based agent. You can continue to use this Net-SNMP-based agent in ESX 4.0 with MIBs supplied by your hardware vendor and other third-party management applications. However, to use the VMware MIB files, you must use the embedded SNMP agent.

By default, the embedded SNMP agent is disabled. To enable it, you must configure it using the vSphere CLI command vicfg-snmp. For a complete reference to vicfg-snmp options, see vSphere Command-Line Interface Installation and Scripting Guide and vSphere Command-Line Interface Reference.
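
Before making changes, it is useful to display the agent's current configuration with the --show option (the connection values below are placeholders):

  vicfg-snmp.pl --server hostname --username username --password password --show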

Prerequisites

SNMP configuration for ESX/ESXi requires the vSphere CLI. For information on installing and using the vSphere CLI, see vSphere Command-Line Interface Installation and Scripting Guide and vSphere Command-Line Interface Reference.

Procedure

  1. Configure SNMP Communities
    Before you enable the ESX/ESXi embedded SNMP agent, you must configure at least one community for the agent.
  2. Configure the SNMP Agent to Send Traps
    You can use the ESX/ESXi embedded SNMP agent to send virtual machine and environmental traps to management systems. To configure the agent to send traps, you must specify a target address and community.
  3. Configure the SNMP Agent for Polling
    If you configure the ESX/ESXi embedded SNMP agent for polling, it can listen for and respond to requests from SNMP management client systems, such as GET requests.
  4. Configure SNMP Management Client Software
    After you have configured a vCenter Server system or an ESX/ESXi host to send traps, you must configure your management client software to receive and interpret those traps.

Configure SNMP Communities

Before you enable the ESX/ESXi embedded SNMP agent, you must configure at least one community for the agent.

An SNMP community defines a group of devices and management systems. Only devices and management systems that are members of the same community can exchange SNMP messages. A device or management system can be a member of multiple communities.

Prerequisites

SNMP configuration for ESX/ESXi requires the vSphere CLI. For information on installing and using the vSphere CLI, see vSphere Command-Line Interface Installation and Scripting Guide and vSphere Command-Line Interface Reference.

Procedure

  • From the vSphere CLI, type
    vicfg-snmp.pl --server hostname --username username --password password -c com1
    Replace com1 with the community name you wish to set. Each time you specify a community with this command, the settings you specify overwrite the previous configuration. To specify multiple communities, separate the community names with a comma.
    For example, to set the communities public and internal on the host host.example.com, you might type vicfg-snmp.pl --server host.example.com --username user --password password -c public,internal.

Configure the SNMP Agent to Send Traps

You can use the ESX/ESXi embedded SNMP agent to send virtual machine and environmental traps to management systems. To send traps, you must configure the target (receiver) address, community, and an optional port. If you do not specify a port, the SNMP agent sends traps to UDP port 162 on the target management system by default.

Prerequisites

SNMP configuration for ESX/ESXi requires the vSphere CLI. For information on installing and using the vSphere CLI, see vSphere Command-Line Interface Installation and Scripting Guide and vSphere Command-Line Interface Reference.

Procedure

  1. From the vSphere CLI, type
    vicfg-snmp.pl --server hostname --username username --password password -t target_address@port/community
    Replace target_address, port, and community with the address of the target system, the port number to send the traps to, and the community name, respectively. Each time you specify a target with this command, the settings you specify overwrite all previously specified settings. To specify multiple targets, separate them with a comma.
    For example, to send SNMP traps from the host host.example.com to port 162 on target.example.com using the public community, type
    vicfg-snmp.pl --server host.example.com --username user --password password -t target.example.com@162/public
  2. (Optional) Enable the SNMP agent by typing
    vicfg-snmp.pl --server hostname --username username --password password --enable
  3. (Optional) Send a test trap to verify that the agent is configured correctly by typing
    vicfg-snmp.pl --server hostname --username username --password password --test
    The agent sends a warmStart trap to the configured target.

Configure the SNMP Agent for Polling

If you configure the ESX/ESXi embedded SNMP agent for polling, it can listen for and respond to requests from SNMP management client systems, such as GET requests. By default, the embedded SNMP agent listens on UDP port 161 for polling requests from management systems. You can use the vicfg-snmp command to configure an alternative port. To avoid conflicting with other services, use a UDP port that is not defined in /etc/services.

IMPORTANT Both the embedded SNMP agent and the Net-SNMP-based agent available in the ESX service console listen on UDP port 161 by default. If you enable both of these agents for polling on an ESX host, you must change the port used by at least one of them.

Prerequisites

SNMP configuration for ESX/ESXi requires the vSphere CLI. For information on installing and using the vSphere CLI, see vSphere Command-Line Interface Installation and Scripting Guide and vSphere Command-Line Interface Reference.

Procedure

  1. From the vSphere CLI, type vicfg-snmp.pl --server hostname --username username --password password -p port.
    Replace port with the port for the embedded SNMP agent to use when listening for polling requests.
  2. (Optional) If the SNMP agent is not enabled, enable it by typing
    vicfg-snmp.pl --server hostname --username username --password password --enable.
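
For example, to move the agent on host.example.com to UDP port 171 (an arbitrary free port chosen for illustration), you might type:

  vicfg-snmp.pl --server host.example.com --username user --password password -p 171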

Configure SNMP Settings for vCenter Server

To use SNMP with vCenter Server, you must configure SNMP settings using the vSphere Client.

Prerequisites

To complete the following task, the vSphere Client must be connected to a vCenter Server. In addition, you need the DNS name and IP address of the SNMP receiver, the port number of the receiver, and the community identifier.

Procedure

  1. Select Administration > vCenter Server Settings.
  2. If the vCenter Server is part of a connected group, in Current vCenter Server, select the appropriate server.
  3. Click SNMP in the navigation list.
  4. Enter the following information for the Primary Receiver of the SNMP traps.
  5. (Optional) Enable additional receivers in the Enable Receiver 2, Enable Receiver 3, and Enable Receiver 4 options.
  6. Click OK.
Option descriptions for step 4:

  • Receiver URL – The DNS name or IP address of the SNMP receiver.
  • Receiver port – The port number of the receiver to which the SNMP agent sends traps. If the port value is empty, vCenter Server uses the default port, 162.
  • Community – The community identifier.

The vCenter Server system is now ready to send traps to the management system you have specified.

Determine use cases for and apply VMware DirectPath I/O

On the VMware VROOM! blog there is an article called Performance and Use Cases of VMware DirectPath I/O for Networking, which explains DirectPath I/O and its performance in more detail.

VMware DirectPath I/O is a technology, available in vSphere 4.0 and later, that leverages hardware support (Intel VT-d and AMD-Vi) to allow guests to access hardware devices directly. In the case of networking, a VM with DirectPath I/O can directly access the physical NIC instead of using an emulated (vlance, e1000) or a para-virtualized (vmxnet, vmxnet3) device. While both para-virtualized devices and DirectPath I/O can sustain high throughput (beyond 10Gbps), DirectPath I/O can additionally save CPU cycles in workloads with very high packet rates (say, more than 50,000 packets per second). However, DirectPath I/O does not support many features, such as physical NIC sharing, memory overcommitment, vMotion, and Network I/O Control. Hence, VMware recommends DirectPath I/O only for workloads with very high packet rates, where the CPU savings from DirectPath I/O may be needed to achieve the desired performance.

DirectPath I/O for Networking

VMware vSphere 4.x provides three ways for guests to perform network I/O: device emulation, para-virtualization and DirectPath I/O. A virtual machine using DirectPath I/O directly interacts with the network device using its device drivers. The vSphere host (running ESX or ESXi) is only involved in virtualizing interrupts of the network device. In contrast, a virtual machine (VM) using an emulated or para-virtualized device (referred to as virtual NIC or virtualized mode henceforth) interacts with a virtual NIC that is completely controlled by the vSphere host. The vSphere host handles the physical NIC interrupts, processes packets, determines the recipient of the packet and copies them into the destination VM, if needed. The vSphere host also mediates packet transmissions over the physical NIC.

In terms of network throughput, a para-virtualized NIC such as vmxnet3 matches the performance of DirectPath I/O in most cases. This includes being able to transmit or receive 9+ Gbps of TCP traffic with a single virtual NIC connected to a 1-vCPU VM. However, DirectPath I/O has some advantages over virtual NICs such as lower CPU costs (as it bypasses execution of the vSphere network virtualization layer) and the ability to use hardware features that are not yet supported by vSphere, but might be supported by guest drivers (e.g., TCP Offload Engine or SSL offload). In the virtualized mode of operation, the vSphere host completely controls the virtual NIC and hence it can provide a host of useful features such as physical NIC sharing, vMotion and Network I/O Control. By bypassing this virtualization layer, DirectPath I/O trades off virtualization features for potentially lower networking-related CPU costs. Additionally, DirectPath I/O needs memory reservation to ensure that the VM’s memory has not been swapped out when the physical NIC tries to access the VM’s memory.

Source: VMware Configuration Examples and Troubleshooting for VMDirectPath

ESX Host Requirements

VMDirectPath supports a direct device connection for virtual machines running on Intel Xeon 5500 systems, which feature an implementation of the I/O memory management unit (IOMMU) called Intel Virtualization Technology for Directed I/O (VT-d). VMDirectPath can work on AMD platforms with I/O Virtualization Technology (AMD IOMMU), but this configuration is offered as experimental support. Some machines might not have this technology enabled in the BIOS by default. Refer to your hardware documentation to learn how to enable it in the BIOS.

Enable or Disable VMDirectPath

Enable or disable VMDirectPath through the hardware advanced settings page of the vSphere Client. Reboot the ESX host after enabling or disabling VMDirectPath. Disable VMDirectPath and reboot the ESX host before removing physical devices.

To find the VMDirectPath Configuration page in the vSphere Client

  1. Select the ESX host from Inventory.
  2. Select the Configuration tab.
  3. Select Advanced Settings under Hardware.

To disable and disconnect the PCI Device

  1. Use the vSphere Client to disable or remove the VMDirectPath configuration.
  2. Reboot the ESX host.
  3. Physically remove the device from the ESX host.
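
After a passthrough device is added to a virtual machine, the assignment is recorded as pciPassthru entries in the VM's .vmx file. A sketch with illustrative IDs for a hypothetical NIC (the real values are filled in by the vSphere Client when you add the PCI device):

  pciPassthru0.present = "TRUE"
  pciPassthru0.deviceId = "0x10fb"
  pciPassthru0.vendorId = "0x8086"
  pciPassthru0.id = "04:00.0"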

More info about how to enable DirectPath I/O can be found here: http://www.petri.co.il/vmware-esxi4-vmdirectpath.htm

Migrate a vSS network to a Hybrid or Full vDS solution

See VMware document: VMware vNetwork Distributed Switch: Migration and Configuration

Read it to gain a better understanding of vDS and of the reasons why a hybrid solution may or may not work. The following is a good excerpt from page 4 of the document:

In a hybrid environment featuring a mixture of vNetwork Standard Switches and vNetwork Distributed Switches, VM networking should be migrated to vDS in order to take advantage of Network VMotion. As Service Consoles and VMkernel ports do not migrate from host to host, these can remain on a vSS. However, if you wish to use some of the advanced capabilities of the vDS for these ports, such as Private VLANs or bi-directional traffic shaping, or, team with the same NICs as the VMs (for example, in a two port 10GbE environment), then you will need to migrate all ports to the vDS.

Scaling maximums should be considered when migrating to a vDS. The following virtual network configuration maximums are supported in vSphere 4.1:

  • 350 ESX/ESXi Hosts per vDS
  • 32 Distributed Switches (vDS or Nexus 1000V) per vCenter Server
  • 5000 Static or Dynamic Port groups per vCenter
  • 1016 Ephemeral Port groups per vCenter
  • 20000 Distributed Virtual Switch Ports per vCenter
  • 4096 total vSS and vDS virtual switch ports per host

Note: These configuration maximums are subject to change. Consult the Configuration Maximums documents at vmware.com for the most current scaling information; see Configuration Maximums for vSphere 4.1.

Configure vSS and vDS settings using command line tools

See VMware KB1008127 Configuring vSwitch or vNetwork Distributed Switch from the command line in ESX/ESXi 4.x.

For more information about the command-line tools, see chapter 10 of the vSphere Command-Line Interface Installation and Scripting Guide.

Setting Up vSphere Networking with vNetwork Standard Switches

Create or manipulate virtual switches using vicfg-vswitch. By default, each ESX/ESXi host has one virtual switch, vSwitch0.

  • Make changes to the uplink adapters using vicfg-nics.
  • Use vicfg-vswitch to add port groups to the virtual switch.
  • Use vicfg-vswitch to establish VLANs by associating port groups with VLAN IDs.
  • Use vicfg-vmknic to configure the VMkernel network interfaces. (A combined example follows below.)
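
Putting these together, a minimal sketch that creates a new standard switch, links an uplink, adds a VLAN-tagged VM port group, and adds a VMkernel interface (switch, port group, VLAN, and IP values are illustrative; the --server/--username/--password connection options are omitted for brevity):

  vicfg-vswitch -a vSwitch1
  vicfg-vswitch -L vmnic2 vSwitch1
  vicfg-vswitch -A "VM Network 2" vSwitch1
  vicfg-vswitch -v 105 -p "VM Network 2" vSwitch1
  vicfg-vswitch -A VMkernel2 vSwitch1
  vicfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 VMkernel2

Running vicfg-vswitch -l afterwards confirms the resulting layout.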

For more info about the vicfg-vswitch command line see: http://www.vmware.com/support/developer/vcli/vcli41/doc/reference/vicfg-vswitch.html

For more info about the vicfg-nics command line see: http://www.vmware.com/support/developer/vcli/vcli41/doc/reference/vicfg-nics.html

For more info about the vicfg-vmknic command line see: http://www.vmware.com/support/developer/vcli/vcli41/doc/reference/vicfg-vmknic.html

Analyze command line output to identify vSS and vDS configuration details

vicfg-vswitch -l (to get DVSwitch, DVPort, and vmnic names)

esxcfg-vswif -l (to get the vswif IP address, netmask, dvPort ID, and so on; ESX only)
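
As an illustration, the standard switch portion of the -l output resembles the following (values are from a hypothetical host, and the exact columns vary between releases):

  Switch Name    Num Ports   Used Ports  Configured Ports  MTU     Uplinks
  vSwitch0       128         4           128               1500    vmnic0

    PortGroup Name      VLAN ID  Used Ports  Uplinks
    VM Network          0        1           vmnic0
    Service Console     0        1           vmnic0

From this you can read off the switch name, port counts, MTU, the uplink vmnics, and the VLAN ID of each port group; for a vDS, a similar block lists the DVS name and its DVPort IDs.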

Links

http://www.seancrookston.com/2010/08/30/vcap-dca-objective-2-1-implement-and-manage-complex-virtual-networks/

http://www.kendrickcoleman.com/index.php?/Tech-Blog/vcap-datacenter-administration-exam-landing-page-vdca410.html

http://damiankarlson.com/vcap-dca4-exam/objective-2-1-implement-and-manage-complex-virtual-networks/


Documents and manuals

vSphere Command-Line Interface Installation and Scripting Guide: www.vmware.com/pdf/vsphere4/r41/vsp4_41_vcli_inst_script.pdf

vNetwork Distributed Switch: Migration and Configuration: www.vmware.com/files/pdf/vsphere-vnetwork-ds-migration-configuration-wp.pdf

ESX Configuration Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_esx_server_config.pdf

ESXi Configuration Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_esxi_server_config.pdf

Video

Carlos Vargas has created a video about this objective; see: http://virtual-vargi.blogspot.com/2011/02/vcap-dca-section-2-21.html

Disclaimer.
The information in this article is provided “AS IS” with no warranties, and confers no rights. This article does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my opinion.

Marco

Marco works for ViaData as a Senior Technical Consultant. He has over 15 years of experience as a system engineer and consultant, specialized in virtualization. VMware VCP4, VCP5-DC & VCP5-DT. VMware vExpert 2013, 2014, 2015 & 2016. Microsoft MCSE & MCITP Enterprise Administrator. Veeam VMSP, VMTSP & VMCE.