The procedures and screenshots described here are from VMware vSphere Client 6 (the Web Client is not used in this article) and VMware ESXi 6.5, but they are identical when using VMware vCenter Server. A description of iSCSI technology and SAN networks can be found in previous articles; here we focus only on their use with VMware. Networking in this area is quite extensive, so the description covers the most common options.
Note: The article includes several recommended settings from practice, provided by my colleague who is a VMware specialist.
Network Connection
For iSCSI SAN networks there are several important recommendations, which VMware also states fairly consistently. Separate network adapters (not shared with the LAN) with a minimum speed of 1 Gbps should be used for connecting to the SAN network. In many cases it is recommended to use Jumbo Frames, i.e., Ethernet frames with a larger size (MTU) than the standard 1500 B, typically 9000 B. The SAN network should consist of two separate, isolated networks (fabrics). The VMware server then has two ports (two single-port network adapters or one dual-port adapter), each connected to a different network. Similarly, the storage array is connected with at least two ports (often four). This creates two (or four) paths between the server and the array. An example setup, which is the basis for the rest of the article, is shown below.

To connect an ESXi Server to a network (whether LAN or SAN), we need to create a vSphere Standard/Distributed Switch and configure the network adapter. The official documentation for the entire Storage area is vSphere Storage.
As a basic component on the ESXi server, we create a Storage (iSCSI) Adapter. VMware supports three types, depending on which network adapter we use. These are:
- Software - uses a standard network card (NIC), often used in practice and will be described in this article (even if we have an HBA in the server, we can use a SW adapter)
- Dependent Hardware - uses an adapter that has some functions implemented in HW but is dependent on VMware networking and iSCSI configuration
- Independent Hardware - uses an iSCSI HBA (implements the entire iSCSI stack in HW), then VMkernel Networking is not used on ESXi, but configuration is done under Storage Adapter
Note: Storage array manufacturers often have various plugins available for VMware; some can help us manage the storage array from the VMware console, others add support for various HW functions of the array and can be quite important for operation.
Finding the IQN on the ESXi server
This is provided here for reference. We must already have created an iSCSI adapter (which we will do later in the article).
- launch VMware vSphere Client and connect to our ESXi server (or vCenter server)
- select the server, switch to the Configuration tab, and in the Hardware section, choose Storage Adapters
- right-click on our iSCSI Software Adapter in the list and select Properties
- on the General tab in the iSCSI Properties section, you can see the IQN (Name)
Creating iSCSI vSwitches - connecting to the SAN network
The first step is network configuration. We must have two free network adapters (ports) connected to the iSCSI SAN network (per the previous description, into two different fabrics). The standard solution is then to create two vSphere Standard Switches (vSwitches), each on a different subnet and each with one physical adapter (vmnic). Mapping to the iSCSI adapter (vmhba, described in the second step of the article) is done through the virtual VMkernel adapter (VMkernel port, vmk). The mapping of the physical adapter (vmnic) to the virtual one (vmk) must always be 1:1. We will describe this method here.
The second option is to create only one vSwitch and add both physical adapters (vmnic) and both VMkernel adapters (vmk) to it. In this case, however, we must manually set the mapping of a specific VMkernel adapter to a specific physical adapter (again, it must be 1:1, we do this by moving other vmnics to the Unused area in NIC Teaming on vmk).
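For administrators who prefer the command line, the first variant can also be sketched with esxcli directly on the ESXi host. This is only a sketch matching the example setup; the switch, uplink, port-group, and VMkernel names and the addresses used below (vSwitch1, vmnic2, iSCSI VLAN 10, vmk1, 192.168.130.10) are illustrative assumptions, not fixed values:

```shell
# Sketch (run on the ESXi host): create the first iSCSI vSwitch with one
# uplink, one port group, and one VMkernel interface.
# All names and addresses below are assumed example values.
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name="iSCSI VLAN 10"
esxcli network ip interface add --interface-name=vmk1 --portgroup-name="iSCSI VLAN 10"
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static --ipv4=192.168.130.10 --netmask=255.255.255.0
```

The second vSwitch for the other fabric is created the same way with the other vmnic and the other subnet.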
Creating the first iSCSI vSwitch
- launch VMware vSphere Client and connect to our ESXi server (or vCenter server)
- select the server, switch to the Configuration tab, and in the Hardware section, choose Networking
- click on Add Networking

- for Connection Type, select VMkernel, click Next

- select Create a vSphere standard switch and check the network adapter connected to the SAN network, click Next

- enter the Port Group name, such as iSCSI VLAN 10, and the VLAN number (we probably don't use a trunk on the SAN but an access port, so leave None (0)), click Next

- enter the IP address of the interface in the SAN network (i.e., Initiator IP), click Next and Finish

Setting MTU - enabling Jumbo Frames
If we use a larger MTU, which is usually recommended for SAN networks, we need to adjust the settings on the virtual switch.
- for the new switch, click on Properties (here vSwitch1)
- on the Ports tab, select vSwitch and click on Edit
- on the first General tab, enter the desired MTU, such as 9000 and click OK
- switch to the created Port (in the example iSCSI VLAN 10) and click on Edit
- on the first General tab, enter the desired MTU, such as 9000, and click OK
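To verify that jumbo frames actually pass end to end, it is common practice to ping the array from the ESXi host with the don't-fragment flag set. A minimal sketch; the interface vmk1 and the target address 192.168.130.200 are assumed example values, and the payload is the MTU minus the 20-byte IP header and 8-byte ICMP header:

```shell
#!/bin/sh
# Largest unfragmented ICMP payload for a 9000-byte MTU:
# the 20 B IP header and 8 B ICMP header must fit inside the frame.
mtu=9000
payload=$((mtu - 20 - 8))
# Print the check to run on the ESXi host (-d sets don't-fragment);
# vmk1 and 192.168.130.200 are assumed example values.
echo "vmkping -d -s $payload -I vmk1 192.168.130.200"
```

If the ping fails at this size but works with the default size, some device along the path (vSwitch, physical switch, or array port) is not configured for jumbo frames.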

Creating the second iSCSI vSwitch (adding the second path)
Then we repeat the entire process for the second path (second physical adapter and subnet) and create a second virtual switch.

Creating and configuring the iSCSI Software Adapter - connecting to the array
- launch VMware vSphere Client and connect to our ESXi server (or vCenter server)
- select the server, switch to the Configuration tab, and in the Hardware section, choose Storage Adapters
- click the Add button, add an iSCSI Software Adapter, and confirm the warning

- right-click on the iSCSI Software Adapter in the list and select Properties (this takes us to the iSCSI Initiator configuration)
- if its status is Disabled, click on Configure, check Enabled here and confirm with OK
- its status will change and the IQN (iSCSI Name) will be displayed
Note: At the same time, we can modify the IQN. The beginning is the standard iqn.1998-01.com.vmware:, then the server name is used, and VMware appends a string to ensure uniqueness. Server names should be unique on the network, so we can keep just the name part. Optionally, we can enter an alias (but this serves only as a friendly name for us).

In the discussion, a reader pointed out the VMkernel Port Binding step (which I list below as optional), stating that according to VMware it should not be set in most cases. I offer some opinions on this in the discussion, but according to the official documentation it really should not be set in the case described here. According to the article Requirements for iSCSI Port Binding, the fixed assignment can be used only if all interfaces (ports on the array and on the VMware server) are in the same subnet.
The setup described here (which I consider standard) uses two separate networks (fabrics or VLANs). One physical network card port on the server (VMkernel adapter) and one port from each array controller (target iSCSI port) are connected to one SAN. In this case, the assignment should not be used, and VMware determines the VMkernel port based on the routing table.
This is described in more detail in the documentation Best Practices for Configuring Networking with Software iSCSI. When port binding is set, iSCSI traffic can only flow through these ports. The iSCSI Initiator creates a session from all assigned ports to all Target Portals (communication IP address and port on the storage array's network interface). So for 2 VMkernel ports and 4 addresses on the array, it's 8 sessions. But if two separate networks are used, VMware cannot communicate from one of its ports to all array addresses, and a problem arises.
If port binding is not set, VMware chooses the best VMkernel port based on the routing table. The iSCSI Initiator creates only one session for each Target Portal from the most optimal VMkernel port. So for 2 VMkernel ports and 4 addresses on the array, it's 4 sessions. The result is that there are situations where it's useful to create manual binding (to create more paths to the array if it has only one address) and equally situations where it's not useful (because communication is not possible there). Practical examples are provided in the referenced VMware documentation.
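The session arithmetic above can be made explicit with a small sketch (the counts of 2 VMkernel ports and 4 target portals come from the example in the text):

```shell
#!/bin/sh
# With port binding, the initiator opens a session from every bound
# VMkernel port to every target portal; without binding, it opens one
# session per target portal from the best port per the routing table.
vmk_ports=2
target_portals=4
echo "with binding:    $((vmk_ports * target_portals)) sessions"
echo "without binding: $target_portals sessions"
```

With two separate fabrics, half of the bound-mode sessions would have no usable route, which is exactly the problem the documentation describes.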
- (optional) switch to the Network Configuration tab and add VMkernel ports (our vSwitches for iSCSI)

- switch to the Dynamic Discovery tab and click the Add button
- enter the first IP address of the array in the iSCSI SAN network (one of 4 addresses, example 192.168.130.200)

- confirm with OK and click on Close
- we'll get a prompt asking if we want to perform a Rescan Host Bus Adapter, which we confirm with Yes
Note: When we return to the settings, we should see all of the array's addresses on the Static Discovery tab.
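The same discovery step can also be done from the CLI. A sketch, assuming the software adapter is named vmhba33 (check the real name under Storage Adapters) and using the portal address from the example:

```shell
# Sketch (run on the ESXi host); vmhba33 is an assumed adapter name.
# Add a Send Targets (dynamic discovery) address; 3260 is the default iSCSI port.
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.130.200:3260
# Rescan so the remaining portals appear under Static Discovery.
esxcli storage core adapter rescan --adapter=vmhba33
```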
Setting up Multipath
Now we create a LUN on the array. It's advantageous to create one large Volume/LUN, which we connect to multiple ESXi servers and place all virtual machines on it. We map the created LUN to our VMware server (using its IQN). Then we return to the ESXi configuration. Multipath works automatically; in the settings, we can work with paths.
- launch VMware vSphere Client and connect to our ESXi server (or vCenter server)
- select the server, switch to the Configuration tab, and in the Hardware section, choose Storage Adapters
- right-click on our iSCSI Software Adapter in the list and select Rescan
- after it's completed, we should see the assigned disk (LUN) in the lower part of the details (when displaying Devices), and when switching to Paths, we should see all four paths to the array

- right-click on the disk and select Manage Paths; here, under the Path Selection item, we can set the multipath load-balancing algorithm
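The current policy can also be inspected and changed per device from the CLI. A sketch using the same naa.xxxx placeholder as later in the article (replace it with your LUN's real identifier):

```shell
# Sketch (run on the ESXi host): show each device with its current
# Path Selection Policy, then switch one LUN to Round Robin.
esxcli storage nmp device list
esxcli storage nmp device set --device=naa.xxxx --psp=VMW_PSP_RR
```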

Creating Storage - Storage datastore
- launch VMware vSphere Client and connect to our ESXi server (or vCenter server)
- select the server, switch to the Configuration tab, and in the Hardware section, choose Storage
- click on the Add Storage button and select Disk/LUN as the type, click Next

- choose the available disk (LUN), click Next

- review the layout information, click Next
- enter a name for the datastore, such as NetApp_datastore1, click Next
- in the Formatting step, enter the capacity of the datastore being created; we should always choose Maximum available space, because VMware supports only one VMFS datastore per LUN, click Next
- a summary of the entered information is displayed, which we confirm with the Finish button

Increasing Storage Size (Datastore - LUN)
First, we make the change on the storage array and then adjust it on the ESXi server. If the shared storage (LUN) is connected to multiple ESXi servers (cluster nodes), we can perform the increase (expansion) on any server.
- launch VMware vSphere Client and connect to our ESXi server (or vCenter server)
- select the server, switch to the Configuration tab, and in the Hardware section, choose Storage
- right-click on the desired datastore and select Properties
- in the bottom right, in the Extent Device section, we see what capacity is available (Device); if the new value is not shown here, click on Refresh
- click on the Increase button and go through the wizard (increase to maximum capacity - Maximum available space)

Tuning iSCSI Connection Performance
This information was sent to me by reader vlho, for which I am very grateful.
Reducing Round Robin IOPS limit
A well-known method to significantly increase throughput for iSCSI storage, if we have multiple active paths to the array, is to change the iops parameter. It can probably only be done from the CLI. Of course, Round Robin must be enabled on the datastore (the Path Selection Policy for multipath).
First, we find out the Datastore identifier:
# esxcli storage nmp device list
We'll receive an output like this:
naa.6000eb3aeb9f24600000000000000075
Device Display Name: LEFTHAND iSCSI Disk (naa.6000eb3aeb9f24600000000000000075)
Storage Array Type: VMW_SATP_DEFAULT_AA
Storage Array Type Device Config: SATP VMW_SATP_DEFAULT_AA does not support device configuration.
Path Selection Policy: VMW_PSP_RR
Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0;lastPathIndex=0: NumIOsPending=0,numBytesPending=0}
On the last line, we see the default value iops=1000.
Various vendors recommend changing it to a value ranging between 1 and 10. I always set it to 1 because I couldn't observe any change with a slightly higher value through any measurement.
For one device (and one LUN) we can set it with the command:
# esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=naa.xxxx
For multiple LUNs we can set it in a loop at once:
# for i in `esxcfg-scsidevs -c |awk '{print $1}' | grep naa.xxxx`; do esxcli storage nmp psp roundrobin deviceconfig set -t iops -I=1 -d $i; done
Where we replace the value naa.xxxx with the one found above, i.e., naa.6000.
Then with the first command, we verify if the iops value has actually changed.
No restart is necessary.
According to many of my observations, performance can increase up to twofold, especially for writes, and mainly for applications that access disk storage with multiple threads, e.g., SQL servers and the like.
And what exactly is the iops parameter? It's the number of I/O operations after which the path changes, more here: VMware KB 2069356
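The effect of the parameter can be illustrated with a toy model: with an iops limit of L and P paths, the initiator stays on one path for L consecutive I/Os before moving to the next. A minimal sketch with iops=1 and two paths:

```shell
#!/bin/sh
# Toy model of Round Robin path switching: with iops=1 the path
# alternates on every I/O; with the default iops=1000, the first
# 1000 I/Os would all stay on path 0 before switching.
iops_limit=1
paths=2
for io in 0 1 2 3; do
  path=$(( (io / iops_limit) % paths ))
  echo "I/O $io -> path $path"
done
```

With iops=1, consecutive I/Os are spread across both paths immediately, which is why multi-threaded workloads benefit the most.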
Disabling Delayed ACK
Another method to increase performance (this time only for some iSCSI storage) is to disable Delayed ACK. This can be done in the GUI, in the thick client it's here:
Configuration - Storage Adapters - right-click on our iSCSI Software Adapter (vmhbaxx) and select Properties - General tab - Advanced and here uncheck the Delayed ACK option (at the very bottom)
An ESXi restart must follow.
After long-term observation, I disable Delayed ACK on all iSCSI storage because I've verified that even if there's no performance increase, it improves connection stability.
If you want to know more about Delayed ACK - VMware KB 1002598.
Hello.
I would be very careful with that binding in this case, see here:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2038869
It does work, and the catastrophic situations VMware describes at the attached link do not occur, but on several types of storage I have verified that using it causes a performance drop of at least 30%, often more.
Unfortunately, similarly imperfect instructions can also be found in professional guides from well-known vendors.
And it is a pity you did not mention how such an iSCSI configuration can be tuned, e.g., by disabling Delayed ACK or changing the iops parameter.
Otherwise good.
respond to [1]vlho: Thank you for the interesting information. I don't work with VMware myself and only wanted to complement the iSCSI series with a commonly used client, so I studied the available documentation (which strikes me as quite chaotic, and I found no details there). I also discussed it with a colleague who administers VMware.
Interestingly, we ourselves had a problem in the past with connecting to an array: when one path failed, multipath did not work correctly and servers crashed, the ESXi server start was extremely slow, and so on. And everything was resolved precisely by setting the group assignment (VMkernel port, physical adapter) on the iSCSI Software Adapter. We have now been running it this way for a long time on several dozen ESXi servers connected to several different arrays.
Personally, I don't understand why the assignment should be a problem, and I will try to find some information about it (so far I have only found further confirmations of what you write, but no explanation anywhere). A separate matter is the mapping of vmk to vmnic, which must be done in certain configurations.
As for tuning the configuration, I would be glad for any info you might have. We do various tuning according to the array vendor's recommendations, which keep changing. But if something applies generally, it puzzles me why VMware doesn't have it as the default setting :-).
Hello,
1. A very well-known method to significantly increase throughput for iSCSI storage is to change the iops parameter.
If I'm not mistaken, it can still only be done from the CLI.
Of course, Round Robin must be enabled on the datastore.
First, we find out the datastore identifier:
# esxcli storage nmp device list
We'll receive output roughly like this:
naa.6000eb3aeb9f24600000000000000075
Device Display Name: LEFTHAND iSCSI Disk (naa.6000eb3aeb9f24600000000000000075)
Storage Array Type: VMW_SATP_DEFAULT_AA
Storage Array Type Device Config: SATP VMW_SATP_DEFAULT_AA does not support device configuration.
Path Selection Policy: VMW_PSP_RR
Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0;lastPathIndex=0: NumIOsPending=0,numBytesPending=0}
On the last line, we see the default value iops=1000.
Various vendors recommend changing it to a value between 1 and 10.
I always set 1, because no measurement showed me any change at a slightly higher value.
And now we change iops in a loop for all LUNs like this:
# for i in `esxcfg-scsidevs -c |awk '{print $1}' | grep naa.xxxx` ; do esxcli storage nmp psp roundrobin deviceconfig set -t iops -I=1 -d $i; done
where we replace the value naa.xxxx with the one found above, i.e., naa.6000
With the first command, we then verify whether the iops value has actually changed.
No restart is needed.
According to many of my observations, performance can increase up to twofold, especially for writes,
and mainly for applications that access disk storage with multiple threads, e.g., SQL, etc.
And what actually is that iops parameter? It's the number of I/O operations after which the path changes; more here:
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2069356
Continued in the next post...
respond to [3]vlho: Many thanks for the information. In the meantime, I have studied the VMkernel Port Bindings topic, and it follows that you are right. I have added the description to the article.
Hello,
may I ask a related question? If I have a physical server running perhaps 1-2 VMs and I connect a NAS via iSCSI, how drastically slower will those VMs be if the server-NAS link is a 1 Gb line? What interests me is how severe the slowdown will be when I move the storage from the physical server to an external NAS.
Thanks
Tony
respond to [5]Tony: That's hard to answer; it depends on many things. A disk array has a number of advantages when it is connected to multiple servers. If I have only one physical server, it is simpler for me to fit it with several disks and keep the storage local.
It comes down to how fast the disks are, how good the RAID controller is, and which RAID level I use (in cheap servers it tends to degrade performance instead). A common SATA disk has a read speed of around 150 MB/s. If I connect a disk array over a 1 Gbps network and it can utilize it fully, I get a speed of 128 MB/s (of course, overhead takes a few percent, so in practice it will be around 110 MB/s). So there isn't a very big difference. In addition, various caches, optimizations, and the like play a role.