
This website is originally written in the Czech language. Most content is machine (AI) translated into English. The translation may not be exact and may contain errors.


Faster and more reliable connection

| Petr Bouška - Samuraj |
Demands on the data throughput of computer network links keep increasing. Today's network cards offer speeds of 1Gbps, and the speed of switch ports corresponds to this. But often we need to aggregate several such lines and connect them to another device, for example the uplink port of a switch, or when consolidating multiple servers onto one piece of hardware (virtualization).

I wrote this article for Connect magazine, where it was published in issue 12/09; I'm republishing it here with the kind permission of the editorial team.
It is a summary of information I described in articles previously published on this website.

Link Aggregation, EtherChannel, and NIC Teaming

While there are devices with 10Gbps speeds on the market, their price is significantly higher. That's why the more than 20-year-old link aggregation technology is widely used today. Its considerable advantage is also ensuring link redundancy and thus increased availability.

Link aggregation means combining multiple physical Ethernet network links into a single virtual (logical) link to achieve greater speed and redundancy for failover. We will refer to a network link as a network cable that is connected on both sides to a switch port or a server network adapter port. In practice, this can be a connection between two switches or a server connection to a switch.

We can encounter a whole range of terms that more or less refer to the same thing. The main terms are Link Aggregation, EtherChannel, and NIC Teaming, which we will use further in the article. Even though their result is the same, they are used in different places. Other terms we may encounter are Link Bundling, NIC bonding, Port Trunking, Port Teaming, etc.

Glossary
VM - Virtual Machine is a virtual computer running within an ESXi server.
NIC - Network Interface Controller is a term for a network card or network adapter. It can be a physical adapter in the server, through which the server is connected to the computer network (via a physical switch), or a virtual adapter inside a VM, through which the virtual machine is connected to a virtual switch (vSwitch), which in turn can be connected to the network through a physical adapter.
Failover refers to the ability to automatically switch to a redundant or backup link in case of failure of the active link.
Failback is the process that ensures a return to the original state before the failover, if the fault is corrected.

Cisco EtherChannel

The term EtherChannel comes from Cisco (the company obtained the technology through its acquisition of Kalpana) and is widely used today. It is used primarily with Cisco switches, and its original purpose was to connect two switches more reliably and at a higher speed than a single link allows. It is still quite common for a switch to be equipped with ports of a single speed (now most often 1Gbps, previously 100Mbps), so its connection to the backbone can be a bottleneck.

The EtherChannel technology has been greatly expanded because it not only solves the problem of data throughput to the network backbone, but also bypasses the most common faults on the port-cable-port route. EtherChannel can be created from two to eight physical interfaces, which must be of the same type and speed, be full duplex, and be assigned to the same VLAN or the same trunk type. On a single switch/stack, we can create up to 48 EtherChannels. All interfaces that are to be members of the EtherChannel must be on the same switch or stack. EtherChannel cannot be created between ports on different switches.

Creating an EtherChannel creates a virtual interface designated as a Port-channel and a number. This interface has a configuration common to all physical ports, and any configuration changes should be made to it. It shares MAC addresses from the physical ports, and all technologies (such as the Spanning Tree Protocol) work with this virtual port instead of the physical ones.

When data is transmitted through an EtherChannel, the selected algorithm determines which physical interface will carry it, so load balancing depends on the chosen algorithm. If a link is interrupted, the communication automatically moves to a functional physical interface.

Of course, it is not mandatory for a switch to be on the other side. Very often today, we use this technology where on the other side is a server and a configured NIC Teaming. It is important that the connected device accepts communication that can come through multiple physical ports.

In the EtherChannel configuration, we can specify how traffic is assigned to the physical interfaces. A hash is always calculated from the given value; the result is a value from 0 to 7, which determines the port to be used. The default load-balancing method is based on the source MAC address; alternatively, we can set the source or destination MAC or IP address, or an XOR operation over the source and destination MAC or IP addresses. One session should always send data through the same interface to prevent out-of-order delivery.

EtherChannel is a proprietary technology that provides link aggregation and communication over the aggregated links. As a supplement, there is the Port Aggregation Protocol (PAgP), which is used for dynamic negotiation of link aggregation. PAgP can operate in an active mode, where it actively tries to negotiate the establishment of an EtherChannel, which is referred to as desirable; or in a passive mode, where the EtherChannel negotiation starts only when a request comes from the other side (it never initiates the negotiation itself), which is the auto mode.
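In IOS, the two PAgP modes map directly onto the channel-group keywords. A minimal sketch (port and group numbers are illustrative):

```
SWITCH(config)#interface range g1/0/7,g1/0/8
SWITCH(config-if-range)#channel-group 1 mode desirable   ! actively negotiates via PAgP
! ...or, on a side that should only respond to requests:
SWITCH(config-if-range)#channel-group 1 mode auto        ! never initiates negotiation
```

At least one side must be desirable; with auto on both sides, no EtherChannel is ever formed.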

Schematic view of EtherChannel and NIC Teaming

IEEE 802.1ax Link Aggregation

Based on EtherChannel, the IEEE 802.3ad standard was developed in 2000; it was later moved into the standalone IEEE 802.1AX standard called Link Aggregation. This standard also describes the Link Aggregation Control Protocol (LACP), which is similar to PAgP and is used for dynamic negotiation of groups. These groups are referred to here as Link Aggregation Groups (LAG).

Most of the properties of Link Aggregation are the same as those of EtherChannel; today's Cisco switches support both technologies. There are slight differences, for example in terminology. LACP operates in an active mode, where it sends packets itself to negotiate the connection, and a passive mode, where it waits for the negotiation to start. Link Aggregation allows creating a LAG of up to eight active interfaces, plus up to eight standby interfaces, which take over in case of a failure.
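The LACP modes use the same channel-group command as PAgP, only with different keywords. A minimal sketch (port and group numbers are illustrative):

```
SWITCH(config)#interface range g1/0/7,g1/0/8
SWITCH(config-if-range)#channel-group 1 mode active    ! sends LACPDUs itself
! ...or, on a side that should only answer:
SWITCH(config-if-range)#channel-group 1 mode passive   ! waits for the peer's LACPDUs
```

As with PAgP, at least one side must be active; a passive-passive pair never forms a LAG.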

Difference between LACP/PAgP and Manual EtherChannel

Whether we use some dynamic protocol for link aggregation negotiation or set the aggregation manually, the main properties are the same and the configuration work is also similar. In both cases, we must configure both sides and all ports. When a link fails, the communication is automatically redirected to the others, and when it is restored, it starts being used again.

So what's the difference? In many practical situations, manual configuration is sufficient. Using a protocol adds one feature: frames (LACPDUs) are exchanged between the directly connected devices (more precisely, their ports) and are used to determine whether the link is usable (without the protocol, only the physical link status is checked). These frames are also used to verify that the other side is configured correctly.

Configuration in Cisco IOS

The following commands show the simplicity of basic EtherChannel configuration. We create a group from only two ports and choose a static EtherChannel, where neither PAgP nor LACP is used. The next step is to change the Load Balancing algorithm, which is set for the entire stack. Next is a command to display information about the EtherChannel, i.e., to check it. And finally, its removal (the physical ports switch to the shutdown state at that moment).

SWITCH(config)#interface range g1/0/7,g1/0/8
SWITCH(config-if-range)#channel-group 1 mode on
Creating a port-channel interface Port-channel 1
SWITCH(config)#port-channel load-balance src-dst-ip
SWITCH#show etherchannel 1 summary | begin Group
Group  Port-channel   Protocol    Ports                                   
------+-------------+-----------+-----------------------
1       Po1(SD)          -        Gi1/0/7(s)  Gi1/0/8(s)
SWITCH(config)#no interface port-channel 1

NIC Teaming in the World of Windows

Now let's look at the already mentioned option of connecting a server using link aggregation. In the case of a server, the term NIC Teaming (or just Teaming) is often used, and we say that we combine two or more network adapters into a logical team (or bundle) using aggregation. To be specific, we will consider MS Windows, although the situation is similar in other operating systems; certain specifics will be described for Intel network cards.

When we perform Teaming, we combine several physical links into one logical one. It's actually the opposite of when we use VLANs, when we transmit multiple networks through a single physical link, i.e., we divide it into multiple logical links. We can use both technologies together, combine adapters into a team and transmit multiple VLANs through it.

In the world of Windows, we have one problem: the kernel lacks support for technologies like Teaming and VLANs, so we need a special driver for the network adapter from its manufacturer. Server manufacturers usually provide such a driver (although not always for the cheapest models); an example is HP, where we can install the HP Network Config Utility application, through which the configuration is done. Among network card manufacturers, Intel probably has the best support, with most of their network adapters, including integrated ones, supporting Teaming. With Intel, the configuration is done directly on a tab in the network adapter properties.

NIC Teaming configuration on an Intel adapter

Different Types of Teaming in Windows

NIC Teaming has one property that differs from Link Aggregation, which we can consider a major advantage in certain situations: it can be operated in two main modes. In the first, we configure both sides, i.e., Teaming on the server and EtherChannel on the switch; then the conditions mentioned in the EtherChannel description must be met. With NIC Teaming, we can choose the Static Link Aggregation method, where we set a manual EtherChannel on the switch, or Dynamic Link Aggregation, which uses the LACP protocol according to IEEE 802.3ad.

The second option is to configure only one side, which is Teaming on the server. Then we can use any switch (without management and support for Link Aggregation), in fact, the members of one team can be connected to different switches, significantly increasing availability (we also solve the situation where the entire switch fails). The individual links can also have different speeds and properties. It is important that the configuration of all ports is the same, i.e., assignment to the same VLAN or trunk, etc. Of course, communication is only balanced in the outgoing direction, but the failure is resolved in both directions.

Again, we have several types that we can set on the server. Adaptive Load Balancing balances traffic across all adapters. Adapter Fault Tolerance uses only one link as active, with the others as backups in case of failure. Switch Fault Tolerance provides a connection to two different switches.

NIC Teaming and VMware ESXi

From connecting the server through aggregated links, we'll move a small step forward. Nowadays, virtualization is the topic of the day, and in my opinion, rightly so. When we consolidate multiple servers onto one physical hardware, because the CPU performance, memory capacity, and other parameters are absolutely sufficient, we may run into a network connection problem. Either the link speed is insufficient, or the risk of a single link failure, through which a larger number of servers communicate, is too high. A perfectly suitable solution to this situation is the use of NIC Teaming.

To be specific again, our description will relate to my favorite tool, VMware ESXi 4.0, which is provided for free.

When we use Teaming, we get not only an increase in data throughput, thanks to the load balancing of individual VM communications across different physical NICs, and increased availability, where in the event of a link failure the communication is automatically moved to a functional one, but also a significant increase in speed for operations that upload data to the ESXi server, such as converting a physical machine to a virtual one using VMware vCenter Converter.

Teaming Options in ESXi

Similar to Windows, we can configure Teaming either only on ESXi, or on both sides, i.e., also on the switch. Experience shows that with a two-sided configuration, the switchover in case of a failure is much faster.

In ESXi, we have more configuration options for Teaming. The main parameter is the Load Balancing method, i.e., according to what the communication will be distributed across the physical NICs. We can choose according to the source port ID, i.e., the vSwitch port where the VM is connected (this is the default setting), or according to the source MAC address. In these cases, the VM (more precisely its virtual NIC) always communicates through the same physical NIC.

Another option is to not use load balancing, but only failover. And the last option is by IP hash: the hash is calculated from the source and destination IP addresses of each session, and the physical NIC is assigned based on it. In this case, different sessions of the same VM can go through different physical NICs. This is also the only method with which we can set up aggregation on the switch, and it must be a static EtherChannel. The EtherChannel should be used here because the same MAC address can otherwise appear on multiple switch ports. This configuration should also be done for the entire vSwitch.
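The IP-hash behaviour can be sketched as follows. This is a simplified model only (ESXi's actual hash is internal to the product); the addresses and function name are illustrative:

```python
def nic_for_session(src_ip: str, dst_ip: str, nic_count: int) -> int:
    """Pick a physical NIC for one IP session.

    Simplified model of "route based on IP hash": combine the
    source and destination address and reduce modulo the number of
    uplinks. ESXi's real hash differs; this only shows why
    different sessions of one VM can leave through different NICs.
    """
    def ip_bits(ip: str) -> int:
        a, b, c, d = (int(x) for x in ip.split("."))
        return (a << 24) | (b << 16) | (c << 8) | d

    return (ip_bits(src_ip) ^ ip_bits(dst_ip)) % nic_count

# The same VM (10.0.0.5) talking to two servers uses two uplinks:
print(nic_for_session("10.0.0.5", "10.0.0.10", 2))  # → 1
print(nic_for_session("10.0.0.5", "10.0.0.11", 2))  # → 0
```

Because the NIC choice depends on both endpoints, the same source MAC appears on several switch ports, which is why the switch side must be a static EtherChannel.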

Another interesting parameter is Network Failover Detection, i.e., how the link interruption is detected. We can choose Link Status Only, which decides only based on the link status, or Beacon Probing, which transmits and listens for broadcasts.

Teaming Configuration on ESXi

Teaming is set up very easily on ESXi. Just add another network adapter and by default, NIC Teaming is used with load balancing by source port ID, with link state failover detection, and with failback functionality. The entire configuration can be done through the vSphere Client.

Teaming can be configured either globally for the entire virtual switch (vSwitch), which is usually recommended, or the values can be overridden for a specific Port Group.


If you want to write something about this article, use the comments.

Comments
  1. [1] joe07

    Could I ask for a brief explanation of how teaming on servers works together with the port settings on switches?

    Can an EtherChannel contain only trunk ports?

    The article http://www.samuraj-cz.com/clanek/windows-a-nic-teaming-aneb-pripojeni-pres-vice-sitovek/ says that we first set the ports as access ports in VLAN 100 and then put them into the EtherChannel.

    So how is it? I'm probably wrong that an EtherChannel can contain only trunks. Thank you.

    Thursday, 19.11.2009 07:59
  2. [2] Samuraj

    respond to [1]joe07: We can put both access ports and trunk ports into an EtherChannel. The only important thing is that all ports in the team have the same configuration.

    You may be confused by the fact that some companies refer to NIC Teaming as NIC Trunking.

    Thursday, 19.11.2009 09:09
  3. [3] petr

    respond to [1]joe07:

    A nice document that describes it: http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/vmware/VMware.pdf

    More on virtualization at www.windowsportal.cz

    Sunday, 22.11.2009 08:28
  4. [4] Radek

    Is it possible to set, for example, that Gi0/1 has a maximum throughput of 800Mbps and Gi0/2 500Mbps, and have the switch do load balancing accordingly? Two wireless links that need to be combined are connected to the ports.

    Wednesday, 08.10.2014 08:21
  5. [5] Petr

    Hello, I would like to ask you a question. In the official LAG documentation I read that two types of port-channel load-balance are used: a) src-dst-mac b) src-dest-mac-ip.

    I would like to ask when to use which method, and what the difference actually is?

    Thank you.

    Friday, 24.04.2020 09:27
  6. [6] Samuraj

    respond to [5]Petr: I wonder if you don't have an error there (src-dest-mac-ip should probably be src-dst-ip). In any case, it is about how communication is distributed across the individual ports. Cisco supports the source IP or MAC, the destination IP or MAC, or both the source and destination IP or MAC. The default setting is src-mac.

    Friday, 24.04.2020 14:19