NIC Teaming (also known as Link Aggregation or EtherChannel) was described in previous articles. Here we will focus directly on setting up NIC Teaming on VMware ESXi (specifically ESXi 4.0.0, part of the VMware vSphere family) and on combining it with a Cisco switch. ESXi works reliably and efficiently and offers all the necessary functions. We can significantly increase performance (especially in certain situations) by connecting through two or more network cards.
Note: Two abbreviations are used frequently in this article. VM - Virtual Machine, i.e. a virtual computer running inside the ESXi server. NIC - Network Interface Controller, i.e. a network card or network adapter. It can be a physical adapter located in the server and used to connect to the computer network (via some physical switch), or a virtual adapter inside a VM, used to connect the VM to the virtual switch (vSwitch), which can in turn be connected to the network via a physical adapter.
We get a noticeable difference in speed if we use more than one network adapter for operations that upload data to the ESXi server, for example when converting a physical machine to a virtual one or other actions using VMware vCenter Converter. With a single network adapter, ESXi apparently allocates most of the bandwidth to VM traffic, so other operations are very slow. When another NIC is added, the traffic is divided.
Note: The basic precondition for running NIC Teaming is, of course, that ESXi supports the given network card. Since ESXi version 4, only Gigabit NICs are supported. In terms of manufacturers and models, however, support is quite broad.
VMware NIC Teaming
The advantages of connecting an ESXi server to the network with multiple network adapters are probably clear. We gain increased throughput, where communication is divided according to certain parameters and flows through different physical NICs. We also gain resistance to failure, where communication switches to a backup link (or simply stops using the non-functional one). Both of these advantages are particularly valuable for virtualization, because we have consolidated multiple servers onto one piece of hardware. Moreover, practically every server today has at least two network cards, and if we don't use the others for anything else (such as connecting to iSCSI), then Teaming costs us almost nothing (of course, we need resources such as a free port on the switch, etc.).
Just like with NIC Teaming in Windows, we have two main options for setting up Teaming. This also determines what resources we will need.
- configure ESXi only - we can connect to any switch (even simple unmanaged ones), and we can even connect to different switches at the same time; load balancing is only one-way and failover is slower
- configure both the ESXi server and the switch - we get two-way Teaming, but we need a switch that supports it, and we usually have to lead all the links of the team to the same switch
VMware ESXi Configuration
We'll start from the initial state with a freshly installed ESXi 4.0 (a clean install is, of course, not a requirement), where the Management Network is already set up, i.e. one network adapter is assigned an IP address. We use VLANs, so a trunk is configured on the switch side of this link, and on the ESXi side the Management Network is assigned the VLAN in which its IP address resides.
The port configuration on the switch might look like this:
interface GigabitEthernet1/0/30
 description ESXi-1
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 200,400
 switchport mode trunk
 switchport nonegotiate
end
Now we'll add a second network adapter, which automatically turns on Teaming. It's important that the second port on the switch has the correct configuration, i.e. the same as the first one. For example:
interface GigabitEthernet1/0/31
 description ESXi-2
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 200,400
 switchport mode trunk
 switchport nonegotiate
end
Note: If the configuration differs, we won't see any error - it simply won't work.
We add the additional adapter as follows; the images illustrate the whole process.
- connect using vSphere Client to ESXi
- Configuration tab
- Hardware - Networking menu

- next to Virtual Switch: vSwitch0 click on Properties
- Network Adapters tab - click on Add
- check the adapters we want to add

- leave the adapters as active

- now on the Network Adapters tab we see more adapters

With this simple configuration, using the default values, we have already achieved redundancy. Communication is balanced in a certain way across all participating NICs, and in case of failure it switches over. We can try unplugging one network cable: communication switches to the other path after a few seconds (failover). After plugging it back in, it returns (failback).
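For reference, the uplinks can also be inspected and linked from the local console (Tech Support Mode) of ESXi 4. A sketch only: the adapter name vmnic1 and the default switch name vSwitch0 are assumptions, so check the actual names with the listing commands first.

```shell
esxcfg-nics -l                     # list physical adapters and their link state
esxcfg-vswitch -l                  # list vSwitches with their current uplinks
esxcfg-vswitch -L vmnic1 vSwitch0  # link vmnic1 as another uplink of vSwitch0
```

Running `esxcfg-nics -l` during the unplug test is also a quick way to see the link of one adapter go down while communication continues over the other.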

Changing Teaming Properties
Now we can adjust some Teaming parameters. Settings can be made globally for the entire virtual switch (vSwitch) or for individual Port Groups (what is set there overrides the global settings). Usually the vSwitch configuration is sufficient. We proceed as follows:
- connect using vSphere Client to ESXi
- Configuration tab
- Hardware - Networking menu
- next to Virtual Switch: vSwitch0 click on Properties
- select vSwitch and click on Edit

- switch to the NIC Teaming tab

Here we set the following parameters:
- Load Balancing - the method by which traffic is balanced across the individual physical NICs (i.e. how a virtual Ethernet adapter inside a VM is mapped through the vSwitch to a physical Ethernet adapter); the options are
- Route based on the originating virtual port ID - based on the port ID on the vSwitch where the VM is connected; the simplest and default method, recommended in various places but problematic for me (see below); traffic from a VM is always sent out of one physical NIC (unless there is a failure) and responses are expected on the same NIC
- Route based on source MAC hash - based on the last byte of the VM's MAC address; traffic from a VM is always sent out of one physical NIC (unless there is a failure) and responses are expected on the same NIC
- Route based on IP hash - based on the connection at the IP level; a simple hash is calculated from the source and destination IP addresses, so a VM can communicate through multiple NICs; the physical switch then sees the same MAC on multiple ports, so it is recommended to turn on static EtherChannel; the IP hash method should be set for the entire vSwitch and inherited by all Port Groups
- Use explicit failover order - provides failover only; uses the NICs in the given order and switches to the next one only in case of failure
- Network Failover Detection - how link failure is detected
- Link Status Only - uses only the link state (we can combine it with the switch's Link State Tracking function); a simple, recommended method
- Beacon Probing - sends broadcasts into the network in each VLAN and listens for them, determining the functional paths from the result
- Notify Switches - in case of failover, sends a notification to the switches so that they update their tables
- Failback - if an adapter recovers after a failure, it returns to active use
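To build intuition for the two hash-based Load Balancing methods, the following shell sketch mimics their selection logic. This is a simplified illustration, not the exact ESXi algorithm: it hashes only the last byte of the MAC address and the last octets of the IP addresses, and all addresses used are made-up examples.

```shell
#!/bin/sh
# Simplified illustration of how the hash-based methods map traffic
# to one of the team's physical NICs (not the real ESXi implementation).

UPLINKS=2   # number of physical NICs in the team

# Route based on source MAC hash: last byte of the VM's MAC address
# modulo the number of uplinks - one fixed NIC per VM.
mac_uplink() {
    last=${1##*:}                       # last byte, e.g. "10"
    echo $(( 0x$last % UPLINKS ))
}

# Route based on IP hash (simplified to last octets): XOR of source and
# destination modulo the number of uplinks - the same VM can use
# different NICs for different peers.
ip_uplink() {
    src=${1##*.}; dst=${2##*.}          # last octets
    echo $(( (src ^ dst) % UPLINKS ))
}

mac_uplink 00:50:56:ab:cd:10            # always the same uplink for this VM
ip_uplink 192.168.1.10 192.168.1.20     # may differ per destination...
ip_uplink 192.168.1.10 192.168.1.21     # ...so one VM can use both NICs
```

This also shows why, with IP hash, the physical switch can see the same VM MAC address on multiple ports, which is why static EtherChannel is recommended on the switch side.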
In the default state, the Teaming settings of the Management Network port group override the vSwitch settings, even though the values are the same. We can remove this override.

VMware Teaming and Cisco switches
VMware ESXi does not support any dynamic link aggregation negotiation protocol, such as PAgP or LACP. If we want to use two-way Teaming, we must use manual (in other words, static) EtherChannel. The Cisco switch is easy to configure: on the selected ports (they must have identical configuration) we set EtherChannel in "on" mode.
SWITCH(config)#int range g1/0/30,g1/0/31
SWITCH(config-if-range)#channel-group 1 mode on

SWITCH#show etherchannel 1 summary | begin Group
Group  Port-channel  Protocol    Ports
------+-------------+-----------+-----------------------------------------------
1      Po1(SU)           -       Gi1/0/30(P)  Gi1/0/31(P)
The switch also uses its own Load Balancing method, i.e. a rule for distributing traffic across the member ports. The default setting is by source MAC address. We can balance by source or destination MAC or IP address, or by an XOR operation over the source and destination MAC or IP addresses.
SWITCH(config)#port-channel load-balance src-dst-ip
Practice shows that when Teaming is used on both sides, the outage when switching to the other link is significantly shorter. In a simple continuous ping test, only one packet is lost (in the previous case it was five).
Problems in Practice with Different Load Balancing Methods
In practice, I encountered a problem that I spent some time on and eventually found its cause and solution. First, I'll describe the problem itself.
With the default setting - the Route based on the originating virtual port ID Load Balancing method - everything works well as long as Teaming is configured only on ESXi. When I set up EtherChannel on the switch, everything still worked (after a short outage while the configuration was applied). But if there was no communication on the Management interface for about 10 minutes, the interface stopped responding and could no longer be connected to. It was then necessary to disconnect one network cable or remove the EtherChannel on the switch. If I left a simple ping running against this interface, everything worked fine. The same was true when Network Failover Detection was switched to Beacon Probing: everything worked as it should, but I wasn't happy about sending out so many broadcasts. After some experimenting, it turned out that when the Load Balancing method is changed to Route based on IP hash, everything works without problems.
Only afterwards did I find the cause of this problem, and even a VMware document where the reason is described (I had read a whole series of similar materials, but nowhere else did I come across this explanation). The document states that the srcPortID and srcMAChash Load Balancing methods are not recommended in combination with Teaming on the physical switch, because they expect responses to arrive on the same NIC through which packets were sent, and discard traffic arriving on other NICs. With EtherChannel on the switch, however, the switch sends traffic to a port chosen by its own algorithm (which may differ from the port where the traffic arrived).
The document also states that if we use the IP hash method, EtherChannel should be turned on on the physical switch, otherwise some packets may be lost. People in various discussions recommend not using EtherChannel and having NIC Teaming with srcPortID only on ESXi, but my practical results are noticeably better with EtherChannel, and with IP hash set everything works well.
MAC addresses in Teaming
If one wants to look into the details of how Teaming works, it's definitely good to look at what MAC addresses communicate on individual network adapters.
In the simplest case we have only the Management Network; ESXi assigns the MAC address of the first physical NIC to this interface. Communication then flows through one NIC; if that link fails, the MAC address moves to the other NIC and communication continues. The switch always knows which port the MAC address is on, so it sends responses correctly.
When we start a VM and it begins to communicate over the network, its traffic is assigned to a certain physical NIC (and the VM's MAC address appears there). With the Route based on IP hash Load Balancing method, this assignment can change over time. In case of failure, the traffic moves to another functional NIC. If we don't use EtherChannel, the switch has the MAC address assigned to at most one port at a time and sends traffic there.
The MAC addresses appear on the switch ports where the NICs are connected. If we use EtherChannel, we don't see the MACs on the physical ports but only on the virtual interface Port-channel 1 (or whatever number we created). This interface has the physical ports beneath it and can communicate through all of them (according to the configured method), so the MAC addresses effectively belong to all the physical ports.
I agree... with EtherChannel and IP hash it works nicely... in HP terminology it's a trunk. I think it also worked with LACP. As far as networking goes, the guys at VMware have made real progress in the new ESX 4. I personally really like the Distributed Switch... and the option to use Cisco Nexus virtual switches in HA... you just need the highest license for it, Enterprise Plus, or the 60-day eval so you can try it out, and deployed on ESX + BLADE it's sexy :)
respond to [1]Wiper2: Well yes, deployed on ESX + Blade it's sexy, except for the small detail that in a blade enclosure each NIC leads to a different switch (we have the CBS3020), and they aren't interconnected. Unless you could somehow get StackWise working for them too. Another thing that comes to mind is that traffic from the virtual machines goes over one NIC while the other is reserved for VMotion, but that probably shouldn't matter.
Samuraj, have you tried joining several switches into a stack when they weren't 3750s? I haven't managed it so far.
respond to [2]Karel: Yes, I don't understand why the switches in the blades aren't designed as a stack, given that they are special models and are interconnected over the backplane by several ports. As it is, EtherChannel can only be set up from the ESX side...
I haven't heard of a stack working over, say, normal Ethernet ports (which is a pity).
Hello,
I have a question: is it possible to set up two network ranges on a VMware ESXi server? The thing is, my network is divided into VLAN1 and VLAN2. The server has two network cards and one VLAN leads to each of them. I need some of the virtual machines in the first VLAN range and some in the second. Can this be done somehow?
respond to [4]diatonix: It's very easy using VLANs on the ESXi side. I use it so that a trunk carrying multiple VLANs leads to the server, and ESXi works with them normally (each virtual machine is assigned its VLAN).
More in e.g. www.vmware.com/files/pdf/virtual_networking_concepts.pdf or www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/vmware/VMware.pdf.
Thanks for the advice.
I have one more problem. Does the ESXi server support fax modem cards? I need to install a fax server. Unfortunately, I haven't been able to find out anywhere whether anyone has any experience with this. Thanks :-(
Hello,
In Configuration - Networking I deleted the original vSwitch0 and didn't realize that it's what I connect to the vSphere Server through :-(. So now, from the computer with the vSphere Client, the address I used to connect can't even be pinged. Do you know how this can be fixed?
respond to [7]pam: Connect to the server via iLO, if you have the option, and open the Remote Console - the network adapter settings are there. I also cut myself off once during EtherChannel experiments... fortunately from a test machine.
On VMware ESXi 4, can I assign one specific physical network card to a given VM? I also need this VM to communicate with the other VMs. It's a very network-intensive machine, so I'm looking for a solution.
Thanks in advance.
Hi,
we're dealing with this too; I bought another pass-through LAN module for the BLc3000 blade enclosure.
According to info from VMware, the Management Network is used for VMotion and for services such as iSCSI, NFS... for connecting to storage. I dedicated 2x LAN just for the management interface and 2 LANs for the VM Network for the VMs.
But I have no idea whether what I'm writing here is right.
If not, thanks for the correction.
respond to [9]s_rybka: On VMware ESXi 4, can I assign one specific physical network card to a given VM? I also need this VM to communicate with the other VMs. It's a very network-intensive machine, so I'm looking for a solution.
In the virtual switch you create another Port Group, and in it you can set it to use a specific vmnic network card from the switch;
for the switch as a whole, that card can then serve as a standby.
Since vSphere 5.1, VMware supports the Link Aggregation Control Protocol (LACP), but only on the vSphere Distributed Switch (VDS).
kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2034277
Hi, is it necessary to configure anything on the Cisco switch, or is configuring only VMware enough?
I have a simple Cisco SG200-26 switch with a web interface.
I would understand the procedure in VMware, but I don't have much experience with Cisco.
Thanks
PS: And how do I verify that it is working correctly?
respond to [13]Choze: Just go through this series of my articles - everything is explained there!
If one-way load balancing and high availability are enough for you, you can configure things on the VMware side only.
Hi, could someone briefly advise me on the best practices for connecting several ESXi hosts to the network? I understand that VMotion should probably be on separate NICs, and ideally on a separate switch (or switches).
But what about the VM network for vCenter? Can it share one NIC with the VM servers' traffic, separated only by VLANs? What are the demands of this VM network traffic?
Thanks!