Note: I'm not sure which term is best for port bundling. Cisco originally used EtherChannel, which was seen as a Cisco-specific term. A more general, and perhaps better, term is PortChannel (on the Nexus platform, the show etherchannel command has been replaced by show port-channel). Cisco mixes both terms in its newer documentation, and elsewhere you will find terms such as Link Aggregation, Port Bundle, Port Group, NIC Teaming, NIC Bonding, and others.
Link aggregation has numerous advantages: higher availability (fault tolerance), higher throughput (load balancing) under certain conditions, and no need to rely on Spanning Tree to prevent loops, because the bundled links are treated as a single path rather than several separate ones. The fact that we can spread the links across multiple switches increases availability even further (similar to using a stack).
Virtual Port Channel (vPC) allows links that are physically connected to two different Cisco Nexus switches to appear as a single PortChannel to a third device (a switch, server, or other network device). On the Nexus switches we create a local EtherChannel (PortChannel), which can be configured either statically (without a protocol) or, as recommended, negotiated with the Link Aggregation Control Protocol (LACP). Each switch can have up to 8 active links in a PortChannel.
Note: I've also read about a similar Cisco technology called Multichassis EtherChannel (MCEC), together with Multichassis LACP (mLACP), which is supposed to run on Cisco IOS. I've never encountered it in practice and couldn't work out which switches are supposed to support it.
Switches that we configure using vPC are referred to as vPC peer devices. We must interconnect these devices and create a vPC peer keepalive link and a vPC peer link.
To create the vPC peer link, we connect at least two Ethernet ports on each switch and bundle them into a PortChannel. It is recommended to configure the vPC peer link PortChannel as a trunk. This link is used to synchronize state between the two devices.
The vPC peer keepalive link is used only to monitor the health of the individual vPC peer devices; periodic keepalive messages are sent across it. It requires L3 connectivity, and we can use the management VRF and its IP address (the mgmt0 interface).
On both vPC peer devices we configure a vPC domain with the same ID; there can be only one domain per device. The domain ties together both vPC peer devices, the vPC peer keepalive link, the vPC peer link, and all PortChannels that use vPC. Global parameters are defined at the domain level.

Configuring vPC in NX-OS
Below is a step-by-step description of the configuration: first the setup required to enable vPC, followed by a sample configuration of one vPC. For more details, see the official documentation: Virtual PortChannel Quick Configuration Guide or Cisco Nexus 3000 - Configuring Virtual Port Channels.
Note: The configuration contains sample values, including the use of the LACP protocol (which is not mandatory).
Enabling vPC
First, we must enable the vPC feature.
SWITCH1(config)#feature vpc
SWITCH2(config)#feature vpc
vPC domain
We create a vPC domain with a specific ID, a number from 1 to 1000, which must be the same on both switches. If there are multiple vPC pairs in the same connected network, it is recommended to use a unique domain ID for each pair.
SWITCH1(config)#vpc domain 10
SWITCH2(config)#vpc domain 10
For the vPC domain we can optionally set the system priority manually, but if we do, it must be set the same on both switches. The value ranges from 1 to 65535, with a default of 32667; a lower value means higher priority.
SWITCH1(config-vpc-domain)#system-priority 4000
SWITCH2(config-vpc-domain)#system-priority 4000
We can also optionally set the role priority, which is the priority of the switch itself and therefore determines which switch becomes primary and which secondary. Each switch must have a different value; the value range is the same as for the system priority.
SWITCH1(config-vpc-domain)#role priority 1000
SWITCH2(config-vpc-domain)#role priority 2000
A mandatory setting is the destination IP address for the vPC peer keepalive link: on each switch we enter the management VRF IP address of the other switch (crosswise).
SWITCH1(config-vpc-domain)#peer-keepalive destination 192.168.1.20
SWITCH2(config-vpc-domain)#peer-keepalive destination 192.168.1.10
vPC peer link
We create the vPC peer link. First we connect at least two ports on each switch, for example E1/49 and E1/50, then bundle these ports into a PortChannel and mark it as the peer link.
SWITCH1(config)#interface Ethernet1/49-50
SWITCH1(config-if)#channel-group 1 mode active
SWITCH1(config)#interface port-channel 1
SWITCH1(config-if)#vpc peer-link
SWITCH1(config-if)#switchport mode trunk
SWITCH2(config)#interface Ethernet1/49-50
SWITCH2(config-if)#channel-group 1 mode active
SWITCH2(config)#interface port-channel 1
SWITCH2(config-if)#vpc peer-link
SWITCH2(config-if)#switchport mode trunk
Creating vPC
Now everything is prepared and we can create a PortChannel connected via vPC across the two switches. On both switches we join one or more ports into a PortChannel and assign it to a specific vPC. In the example, only one port on each switch is used, and both the PortChannel number and the vPC number are 10 throughout.
SWITCH1(config)#interface Ethernet1/1
SWITCH1(config-if)#channel-group 10 mode active
SWITCH1(config)#interface Po10
SWITCH1(config-if)#vpc 10
SWITCH2(config)#interface Ethernet1/1
SWITCH2(config-if)#channel-group 10 mode active
SWITCH2(config)#interface Po10
SWITCH2(config-if)#vpc 10
Then, of course, we need to configure the other side (a server or switch) and bundle its ports into the corresponding PortChannel (LACP is used here).
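As an illustration, assuming the other side is a classic Cisco IOS switch connected by ports Gi0/1 and Gi0/2 (the switch name SWITCH3, the interface numbers, and the channel-group number are assumptions; the channel-group number is locally significant and does not have to match the Nexus side), its configuration might look like this:

```
SWITCH3(config)#interface range GigabitEthernet0/1 - 2
SWITCH3(config-if-range)#channel-group 10 mode active
```

Note that on IOS we select multiple ports with the interface range command, whereas NX-OS accepts the range notation directly on the interface command.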
Checking the configuration
Since the basis is a standard EtherChannel (PortChannel), we can check it with the usual PortChannel commands, such as
SWITCH#show port-channel summary
There are also special commands for checking vPC:
SWITCH#show vpc
SWITCH#show vpc role
SWITCH#show vpc peer-keepalive
SWITCH#show vpc consistency-parameters vpc 10
And if we use LACP, we can display information about it:
SWITCH#show lacp neighbor interface po10
Where can this be used in practice? :) Nothing comes to mind. Otherwise, interesting reading as always :)
respond to [1]Karllan: It can be used anywhere in a datacenter where you are building a highly available (redundant) infrastructure. I always did something similar with Catalyst 3750 switches in a stack, connecting servers (or uplinks to other switches) with each path going to a different switch. Nexus switches cannot be stacked, so I achieve the same solution with vPC.
Isn't there an error in the classic PortChannel configuration when selecting the interfaces, shouldn't the range command be used? :)
respond to [3]Tomique34: The range command doesn't exist on Nexus (NX-OS). It isn't needed; we can address multiple ports directly, the same way range does on IOS.