The described procedures and screenshots are from Windows Server 2012 R2. If an older version has to be used, Windows Server 2008 R2 SP1 is recommended; the procedures are very similar, although the 2012 version brought certain improvements. The new Windows Server 2016 does not change the basic procedures. In any case, it is always recommended to have the latest updates (hotfixes) installed; MPIO hotfixes are available for most operating system versions.
Basic Terms
SAN - Storage Area Network - a special network that provides access to consolidated data storage. We connect to this network the Target, which is primarily a storage array or some backup device, and the source (Initiator), which is typically a server. Transfers in a SAN normally take place at the block level: the server connects to a disk from the array, which behaves the same as a local disk (it looks identical to a disk connected via SATA and we can perform the same operations on it). For block-level transfer, a SAN primarily uses the Fibre Channel (FC) and Internet Small Computer Systems Interface (iSCSI) protocols (the second option is file-level transfer with protocols like Network File System - NFS or Common Internet File System - CIFS).
iSCSI - Internet Small Computer Systems Interface - a block-oriented storage protocol that uses Ethernet for transport. SCSI commands are encapsulated in the TCP/IP protocol. TCP is not as efficient as FCP, but with today's network speeds this is no longer a problem. iSCSI requires a minimum speed of 1 Gbps and is now commonly deployed on 10 Gbps copper (metallic) networks.
LUN - Logical Unit Number - represents an individually addressable logical device (unit). From the array's point of view, it is a file within a Volume. From the client's (server's) point of view, it is a disk. Simplified, a LUN is a logical disk.
iSCSI Initiator - a client (typically a server) that sends SCSI commands over the network, usually connects to a LUN, negotiates a connection with the Target.
iSCSI Target - typically the target address of the storage array, through which the Initiator connects to the LUN and sends input/output requests.
IQN - iSCSI Qualified Name - a unique iSCSI identifier for the initiator or target, example iqn.1991-05.com.microsoft:server.firma.local
Note: In this article, I describe the (for me) standard situation where the SAN contains servers and a storage array. The servers are the Initiators and use iSCSI to connect disks (LUNs) from the array, i.e. the Target, and these disks appear on the servers as local disks. But the iSCSI Target does not have to be only a specialized storage array (or backup device); it can also be a server that has local disks and provides them to other servers. Windows Server 2012 includes a role (Server Role) called iSCSI Target Server (for 2008 R2 it can be downloaded); after adding it, we can create a Target and essentially turn the server into a storage array.
Network Connections and Adapter Configuration
When building a SAN on the iSCSI protocol, the principles are all similar to Fibre Channel. The advantage is that we can use standard cabling, Ethernet switches, etc., instead of the specialized components of a Fibre Channel solution. In servers, we can use standard network cards (Network Interface Card - NIC), although it is better to choose specialized iSCSI Host Bus Adapters (HBA), which have hardware support for offloading the TCP/IP and iSCSI protocols.
Even though we can use the same network components (and consolidation is popular these days), it is always recommended to build a separate SAN network (if this is not possible, at least use VLAN). It is also important to have dedicated network cards for the SAN in the server. We still have the simplification that we manage the same technologies as for the LAN, and the equipment can be cheaper.
IP SAN design is built the same as FC SAN. This means that we create two separate networks (in the FC world, they talk about Fabric), each with a different address range and they are not interconnected (they are not routed - they are closed). This is for high availability reasons, if we do not have this requirement, we can manage with a single network. Separation from the regular network also has an impact on increased security. For further security, we can use bidirectional CHAP authentication. It is recommended to use Jumbo Frames on the SAN network (which means setting it on switches, servers and the array).
Normally there are four paths between the server (Initiator) and the storage array (Target); there can be more if the controllers have more ports, but then Link Aggregation is usually used. We have two SAN fabrics (separate networks / subnets); the array has two controllers, each connected to both fabrics (so it has a total of 4 IP addresses); the server has two network cards (2 IP addresses), one in each fabric. In each fabric, the server sees both array controllers, so through each of its IP addresses it can communicate with two array IP addresses. Communication always takes place between IP addresses in the same subnet.

Network Adapter Configuration
It is highly desirable to properly configure the network adapters on the Windows server that lead to the IP SAN network. It mainly involves turning off all unnecessary features. On IPv4, we turn off NetBIOS, DNS registration, etc. We can also turn off IPv6 (if we don't use it), file and printer sharing, MS network client, etc.
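Some of these bindings can also be turned off from PowerShell. A minimal sketch, assuming the SAN-facing adapter is named SAN1 (adjust the adapter name to your environment):

```powershell
# Unbind IPv6, File and Printer Sharing, and Client for Microsoft Networks
Disable-NetAdapterBinding -Name "SAN1" -ComponentID ms_tcpip6
Disable-NetAdapterBinding -Name "SAN1" -ComponentID ms_server
Disable-NetAdapterBinding -Name "SAN1" -ComponentID ms_msclient

# Disable NetBIOS over TCP/IP on the same adapter (2 = disabled)
$cfg = Get-CimInstance Win32_NetworkAdapterConfiguration |
    Where-Object { $_.InterfaceIndex -eq (Get-NetAdapter -Name "SAN1").ifIndex }
Invoke-CimMethod -InputObject $cfg -MethodName SetTcpipNetbios `
    -Arguments @{ TcpipNetbiosOptions = 2 }
```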
If we have decided to use Jumbo Frames in the SAN network, we need to enable them on the network adapter in Windows. First, we list the interfaces and the current MTU.
netsh interface ipv4 show subinterfaces
For the selected interface, we then change the MTU value accordingly.
netsh interface ipv4 set subinterface "Local Area Connection" mtu = 9000
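The same check and change can be done in PowerShell. Note that many NICs additionally expose a "Jumbo Packet" advanced property whose exact name and values depend on the driver, so the last command below is an example to adapt:

```powershell
# Show the current MTU per IPv4 interface (equivalent of the netsh listing)
Get-NetIPInterface -AddressFamily IPv4 | Select-Object InterfaceAlias, NlMtu

# Set the MTU on the SAN interface
Set-NetIPInterface -InterfaceAlias "Local Area Connection" -NlMtuBytes 9000

# Driver-level Jumbo Frames setting (property name/value vary by vendor)
Set-NetAdapterAdvancedProperty -Name "Local Area Connection" `
    -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"
```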
Finding the IQN on the Server
Since iSCSI runs over TCP/IP, we use MAC and IP addresses for network communication. To find them, we normally use the ipconfig /all command.
But on the array, we need to know the IQN of the client (iSCSI Initiator) in order to assign the LUN/Volume to the server. On Windows Server, we can find it in several ways (it's best to do the array connection described later first).
Using the GUI:
- open Server Manager - Tools - iSCSI Initiator
- in the iSCSI Initiator Properties window, switch to the last tab Configuration and here we see our Initiator Name (IQN)
We can also use the command line iscsicli, which immediately displays the server's IQN when run.
C:\>iscsicli
Microsoft iSCSI Initiator Version 6.3 Build 9600

[iqn.1991-05.com.microsoft:server]
Enter command or ^C to exit
On Windows Server 2012 R2, we can also use PowerShell and the Get-InitiatorPort cmdlet, which displays information for iSCSI, FC and SAS initiator ports.
PS C:\> Get-InitiatorPort | FT -AutoSize

InstanceName         NodeAddress                      PortAddress    ConnectionType
------------         -----------                      -----------    --------------
ROOT\ISCSIPRT\0000_0 iqn.1991-05.com.microsoft:server ISCSI ANY PORT iSCSI
Multipath Configuration
In SAN networks, it is common to use multiple physical paths between the client and the storage. The main reason is to achieve high availability (High Availability), but it is also possible to achieve higher performance (Load Balancing). To make multiple paths work, the Multipath I/O technique is used. If we have the same disk (storage) accessible through multiple paths, the system considers it as multiple different disks (according to the number of paths). The Multipath driver ensures that the disk is interpreted as one with multiple paths. Microsoft has its own implementation of Microsoft Multipath I/O (MPIO), which is part of the server operating systems.
MPIO allows the use of Device Specific Module (DSM), which hardware vendors can develop and optimize for their storage array. Windows includes a general Microsoft DSM for Fibre Channel, iSCSI or SAS.
Adding the MPIO Feature
MPIO is a system feature that we must add:
- open Server Manager - Add roles and features
- navigate to the Features section and check Multipath I/O
- complete the wizard
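The same feature can also be added from PowerShell (Multipath-IO is the Server Manager feature name):

```powershell
# Add the Multipath I/O feature; a restart is not required for this step
Install-WindowsFeature -Name Multipath-IO
```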
Configuring MPIO
- start the MPIO configuration (Control Panel - MPIO or find MPIO - mpiocpl.exe in the Start menu or Administrative Tools)
- switch to the Discover Multi-Paths tab
- check Add support for iSCSI devices and click Add
- get the information that a restart is required and perform it
- after the restart, reopen MPIO and on the MPIO Devices tab we should see the device MSFT2005iSCSIBusType_0x9

Note: For MPIO configuration, we can also use the command line mpclaim.
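For example, the GUI steps above (adding claim support for iSCSI devices, including the restart) correspond to this mpclaim call:

```shell
rem -r = reboot automatically, -i = install/claim, -d = device hardware ID
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"
```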
MPIO Policy - Load Balancing Methods
To be able to use multiple active paths (load balancing), the array must support Active/Active or, better, Asymmetric Logical Unit Access (ALUA). ALUA (also known as Target Port Groups Support - TPGS) is a set of SCSI concepts and commands for defining path prioritization and describing port status and access characteristics. When using ALUA, a path (to a LUN) can be in the following states:
- Active/Optimized
- Active/Unoptimized
- Unavailable
- Transitioning
MPIO Policy
- Fail Over Only - all communication (I/O) goes through one active (Primary or Active/Optimized) path (so no load balancing), the others are standby (Standby or Active/Unoptimized) and only in case of failure of the active path, one standby is switched to (in case the active path is restored, communication returns to it)
- Round Robin - the DSM can use all available paths with load balancing using Round Robin (I/O is evenly distributed, paths are taken in sequence from the first to the last and then back to the beginning - a circle)
- Round Robin with Subset - similar to Round Robin, but only Active/Optimized paths are used, if all fail, Active/Unoptimized are used
- Least Queue Depth - the path with the shortest request queue is used
- Weighted Paths - we set a weight for each path and that determines the relative distribution of requests
- Least Blocks - the path currently processing the smallest volume of data (I/O blocks) is used
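On Windows Server 2012 and later, the default policy of the Microsoft DSM can also be queried and set from PowerShell. A sketch; the value RR is just an example (the cmdlet accepts None, FOO, RR, LQD and LB):

```powershell
# Show and set the MSDSM-wide default load-balancing policy
Get-MSDSMGlobalDefaultLoadBalancePolicy
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR    # Round Robin
```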
iSCSI Configuration - Connecting to the Array
Note: The default assumption is that the server has one or more network interfaces connected to the SAN network and has the corresponding IP address configured (e.g. 192.168.10.10 and 192.168.20.10).
- open Server Manager - Tools - iSCSI Initiator
- if the Microsoft iSCSI Initiator Service is not running, we will be prompted to start it and set it to automatic startup
- in the iSCSI Initiator Properties window, on the last tab Configuration, we see our Initiator Name (IQN), which is important for connecting the LUN on the array
- creating a connection to the array (Target Portal)
- switch to the Discovery tab and click Discover Portal
- here we enter the first IP address of the iSCSI host port on the array controller (one of the 4 addresses, e.g. 192.168.10.1)
- then click on Advanced and set
- Local adapter - Microsoft iSCSI Initiator
- Initiator IP - we choose the server's IP address in the same (fabric) iSCSI network (example 192.168.10.10)
- we confirm with OK and OK

- connecting to the array (Target)
- we switch to the Targets tab and should see the Target here, for example iqn.1992-08.com.netapp:aff
- we select the Target (it should be in Inactive state) and click on Connect
- we check the option to add to Favorites and Enable multi-path, then click on Advanced
- again we set Local adapter (Microsoft iSCSI Initiator), Initiator IP (address on the server for the first path, 192.168.10.10), Target portal IP (we should see all array IPs, we choose the first one on the same path, 192.168.10.1)
- we confirm with OK and OK (the status should change to Connected)

- so far we have set up one path (Session), we need to add all the others
- we select the Target and click on Properties
- we click on Add session and proceed the same way as in the previous step (Connect To Target) for the remaining three paths
- we check the option to add to Favorites and Enable multi-path, then click on Advanced
- we set Local adapter (Microsoft iSCSI Initiator), Initiator IP (gradually the first (192.168.10.10) and second (192.168.20.10) address on the server), Target portal IP (we choose other addresses in the same network as the Initiator, so first 192.168.10.2 and then 192.168.20.1 and 192.168.20.2)
- we confirm with OK and OK and continue for the next path

Note: For various iSCSI operations, we can use the command line tool iscsicli.
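The connection sequence above can also be scripted with the iSCSI PowerShell module (Windows Server 2012+). A sketch using the example addresses from this article; the Target IQN is illustrative:

```powershell
# Register a Target Portal (discovery) in each fabric
New-IscsiTargetPortal -TargetPortalAddress "192.168.10.1" `
    -InitiatorPortalAddress "192.168.10.10"
New-IscsiTargetPortal -TargetPortalAddress "192.168.20.1" `
    -InitiatorPortalAddress "192.168.20.10"

# List the discovered targets and their IQNs
Get-IscsiTarget

# One session per path; -IsPersistent adds it to Favorite Targets
Connect-IscsiTarget -NodeAddress "iqn.1992-08.com.netapp:aff" `
    -IsMultipathEnabled $true -IsPersistent $true `
    -InitiatorPortalAddress "192.168.10.10" -TargetPortalAddress "192.168.10.1"
# ... repeat for the remaining Initiator/Target address pairs:
# 192.168.10.10 -> 192.168.10.2, 192.168.20.10 -> 192.168.20.1 and 192.168.20.2
```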
Disk Operations
Adding a new disk - mapping a LUN
On the Windows server side, this operation is very simple (the main work is done on the array). On the disk array, we create a LUN for our server (assign its IQN), and on Windows Server, we just initialize it.
- we launch Disk Management
- we click on Action - Rescan Disks
- the assigned disk (Volume - LUN) should appear as offline and uninitialized
- we right-click on it and select Online, then Initialize Disk (we choose either MBR or GPT type)
- then we can create a Volume (Partition) on the disk
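The same initialization can be scripted with the Storage module cmdlets (Windows Server 2012+); the disk number 1 and the volume label are assumptions for this example:

```powershell
Update-HostStorageCache                             # equivalent of Rescan Disks
Get-Disk | Where-Object PartitionStyle -Eq 'RAW'    # find the new, uninitialized disk
Set-Disk -Number 1 -IsOffline $false                # bring it online
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"
```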

Setting up MPIO for LUN (disk)
If we have the disk array connected via multiple paths, and thus a mapped LUN with multiple paths, we can set up the use of these paths (Load Balancing Policy) and look at the connection details.
- we launch Disk Management
- we right-click on our disk from the array (we must click directly on the disk, not on the volume, so on the left where it says Disk number) and select Properties
- we switch to the (new) MPIO tab
- in the Select the MPIO policy item, we can choose the Load Balancing method (commonly used is Round Robin With Subset)
- we can see which DSM is used and also the individual paths, which we can expand
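The same per-disk information and policy can be handled with mpclaim; a sketch, where the disk number 1 is only an example:

```shell
rem List MPIO disks and their current load-balancing policies
mpclaim -s -d
rem Show the individual paths of MPIO disk 1
mpclaim -s -d 1
rem Set the policy of MPIO disk 1 (3 = Round Robin with Subset)
mpclaim -l -d 1 3
```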

Increasing disk size (LUN)
We first make the change on the disk array and then adjust it in the OS.
- we launch Disk Management
- we click on Action - Refresh
- for the disk, we should see the expanded part as Unallocated
- we can increase our partition using Extend Volume or add a new one
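In PowerShell, the same steps look roughly like this (disk 1 and partition 2 are example values):

```powershell
Update-HostStorageCache    # equivalent of Action - Refresh
# Extend the partition to the maximum size now offered by the LUN
$max = (Get-PartitionSupportedSize -DiskNumber 1 -PartitionNumber 2).SizeMax
Resize-Partition -DiskNumber 1 -PartitionNumber 2 -Size $max
```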
