
This website is originally written in the Czech language. Only part of the content is machine (AI) translated into English. The translation may not be exact and may contain errors.

You can view this article in the original Czech version.

Storage technologies and SAN networks, or connecting servers to a disk array

| Petr Bouška - Samuraj |
The article tries to describe the field of data storage comprehensively and clearly, yet briefly, with a focus on disk arrays, and to cover the network communication and protocols used for it, primarily in the SAN area, i.e. iSCSI and the Fibre Channel Protocol (FCP). It is a general description of how a server gets a disk from a storage system, followed by a description of the implementation of, and differences between, iSCSI and FCP. The article should provide some comparison of the two protocols, whose general principle is very similar.

An important thing in this entire area is terminology. Not only do we have English terms and their Czech equivalents (we will mention both), but often different terms are used for the same thing within various technologies or manufacturers. The article is written somewhat unconventionally. In the descriptive paragraphs of the individual chapters, technical terms are mentioned, and at various points in the article, a description of these terms is provided in bullet points.

In the article, I try to put together a number of different pieces of information and technologies that I have described in older posts. Here it is more of a general and theoretical description; for more practical information, you can refer to the individual articles:

Storage

Disk arrays

SAN Fibre Channel

SAN iSCSI

LAN and networks in general

Storage - data storage and disk arrays

First, let's define the whole area that this article falls into, which I would most generally describe as computer data storage. The short term Storage is often used (even in Czech). Here we won't be discussing the actual media for data storage, memory, or direct-attached storage (DAS) technologies. Instead, we'll focus on disk arrays in general and the communication access to them.

A Disk array is an external device that centrally provides storage capacity (disk space). It typically contains disk controllers, physical disks, cache, and various interfaces. It's built with an emphasis on high availability, resilience, and easy maintenance, so components are typically redundant and hot-swappable. Data is stored across multiple disks using some technique to protect against physical disk failure.

NetApp E-Series 5600 disk array

The entire term disk array is related to the most commonly used technology for storing data on multiple disks, which is RAID (Redundant Array of Independent Disks). RAID technology combines multiple physical disks into one logical unit with the possibility of ensuring data redundancy and increasing performance. Most disk arrays support various levels (types) of RAID. We can also use RAID technology on servers, where either software RAID or more commonly hardware RAID is used, for which we need a RAID controller.
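As an illustration of the capacity trade-off between RAID levels, the usable space can be sketched as follows. This is a simplified Python sketch under common assumptions (identical disks, no spares or metadata reserved); the function name is my own:

```python
# Usable capacity for common RAID levels - an illustrative sketch.
# disks = number of identical drives, size_tb = capacity of one drive.

def usable_capacity_tb(level: str, disks: int, size_tb: float) -> float:
    if level == "RAID0":             # striping only, no redundancy
        return disks * size_tb
    if level == "RAID1":             # mirroring, capacity of one disk
        return size_tb
    if level == "RAID5":             # one disk's worth of parity
        return (disks - 1) * size_tb
    if level == "RAID6":             # two disks' worth of parity
        return (disks - 2) * size_tb
    if level == "RAID10":            # striped mirrors, half the capacity
        return disks // 2 * size_tb
    raise ValueError(f"unknown level: {level}")

# Eight 4 TB disks:
for level in ("RAID0", "RAID5", "RAID6", "RAID10"):
    print(level, usable_capacity_tb(level, 8, 4.0))
```

Real arrays reserve further capacity for hot spares and internal structures, so the usable space reported by the array will be lower.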

In Czech, the terms diskové pole, úložný systém or just úložiště or pole are used. In English, the most commonly used term now is probably Storage System, followed by Data Storage System, Storage Array, Disk Array or simply Storage or Array.

Classification of disk arrays (storage systems)

The main division of disk arrays is according to data access (storage space). It can be:

  • block access (Block Level) - direct access to the medium using protocols such as iSCSI and Fibre Channel, transferring data blocks. The space appears as a local disk on the server.
  • file access (File Level) - files are obtained using protocols such as NFS and CIFS/SMB. Arrays of this type replace a file server. We can map the space as a network drive.
Storage by access type

Arrays then typically fall into two different categories, although combinations of both are increasingly common today.

  • NAS - Network Attached Storage - uses file access (it's File-based Data Storage) and is connected to the LAN network. It works directly with files, so it can perform some extra functions and optimizations on them. It uses protocols for sharing files over the network, such as NFS and CIFS/SMB.
Network Attached Storage
  • SAN - Storage Area Network - the term SAN doesn't refer to a disk array but to a dedicated network for data transfer; still, we talk about SAN-type disk arrays. These arrays use block access (it's Block-level Data Storage) and don't connect to the LAN (only for management) but to a separate SAN network. The disk space is connected directly to the server (client), and only there is a file system created and files worked with.
Storage Area Network

In this article, we focus on SAN-type disk arrays.

Note: In the description, I mention LAN networks, which I refer to as telecommunication networks, although usually any Ethernet network (of various sizes) can be considered one. And SAN networks, which I refer to as data networks (although, of course, data is also transferred in LAN networks).

SAN-type disk arrays

From a global perspective, we have a disk array that is connected to a special dedicated SAN data network. Servers to which we want to provide disk space are also connected to this network (servers may not have any local disk and can even boot from the array), as well as other devices (for example, for backup and archiving).

Host - client for the array (server)

From the array's perspective, the English term Host is always used for the devices that connect to the array and are provided with disk space. Host translates to Czech as hostitel, which seems like a strange term to me, because I would say that the host is the array and the one connecting is more like a guest or client. There's also the confusion that the Czech word host actually means the opposite, i.e. guest in English.

Physical components of a disk array

A disk array typically consists of two (or more) controllers (Disk Array Controller), which often work in a cluster (to ensure high availability) and are then referred to as nodes. Controllers have an operating system (Operating System - Firmware) that controls the functioning of the array. They contain a processor, memory, cache, various interfaces. Sometimes the disk array controller (or part of it) is referred to as a Storage Processor or Service Processor (SP).

Interfaces are used for communication internally with disks (most often using SAS) and for connection to the SAN network and communication with Hosts (most often Ethernet for iSCSI or Fibre Channel for Fibre Channel Protocol). Often the interfaces are modular (the controller has slots where we can insert the required type of interface) and we can also combine multiple types. Controllers also have an Ethernet port for connection to the management LAN network, through which array management is performed.

Controllers are placed in a chassis, which also contains power supplies and fans. In some arrays, disks are placed directly in the chassis with controllers, but in any case, it's possible to connect additional disk shelves. Different manufacturers use different terms to refer to chassis and shelves. A common term is Enclosure, such as Controller Enclosure and Expansion Disk Enclosure. Also used are Disk Shelf, System Shelf, Cage, Tray.

NetApp AFF8040 connection 1

Disk array and virtualization

A disk array actually performs virtualization: it takes physical components, creates logical (virtual) components on top of them, and allocates these to various consumers. So we divide the disk array into virtual parts, and multiple servers connect to one array. Modern storage systems support dividing the array into independent administrative parts (Virtual Storage Controller or Virtual Domains), where we can assign certain defined HW resources to an administrative account, for which it then appears as a separate array.

Logical components of a disk array

How exactly the logical structure is created and what layers it consists of depends on the manufacturer. Each has its own method, which is, of course, always better than the competition's. Generally, we can say that we group physical disks (Physical Disk Drive) into some kind of group (Group; there can be multiple layers encapsulated in each other) and define a type of RAID (or a similar technology) on it. The next part is more or less uniform. Inside the disk group, we create volumes, logical spaces where the data itself is stored. We connect volumes to clients (Hosts) using a LUN - Logical Unit Number. Many terms are used for assigning a volume to a server (Host): Presentation, Mapping, Assign, Export. We can group volumes and clients (Hosts) into sets and perform bulk assignment.
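The generic layering described above (physical disks grouped with RAID, volumes created inside the group, volumes presented to Hosts under LUN numbers) can be sketched as a tiny data model. All names and the structure itself are illustrative; every vendor uses its own terminology and layers:

```python
# A minimal data model of the generic layering - purely illustrative.
from dataclasses import dataclass, field

@dataclass
class DiskGroup:
    raid_level: str
    physical_disks: list                           # drives grouped together
    volumes: dict = field(default_factory=dict)    # volume name -> size in GB

pool = DiskGroup("RAID6", ["disk0", "disk1", "disk2", "disk3"])
pool.volumes["vol_sql"] = 500       # a volume created inside the group

# Presenting (mapping) a volume to a Host under a LUN number:
lun_map = {("host-esx1", 0): "vol_sql"}
print(lun_map[("host-esx1", 0)])    # vol_sql
```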

Logical components of a NetApp disk array; logical components of an HP 3PAR disk array

SAN - Storage Area Network

Storage Area Network (SAN) is a special network that provides access to consolidated data storage (disk space is moved from the servers to one central location, which allows more efficient use, higher performance, greater redundancy, and the use of cluster services). We connect Targets to the network, which is primarily a disk array (Storage Array) or some backup device, and Initiators, which are typically servers. Block data transfer typically takes place in the SAN network, and the server is connected to a disk from the array that behaves the same as a local one (it looks identical to a disk connected via SATA, or more precisely SCSI, and we can perform the same operations with it), communicating using the SCSI protocol. The SAN network primarily uses the transport protocols Fibre Channel Protocol (FCP) or Internet Small Computer System Interface (iSCSI).

Diagram of a computer network using both LAN and SAN

A SAN network consists of active network elements, cabling (passive network elements), and connected devices. We generally refer to the devices as follows (these terms are widely used with iSCSI but also apply to FC; they are general SCSI terms):

  • Initiator (SCSI Initiator) - a client (in most cases a server) that sends SCSI commands over the network. It usually connects to a LUN, negotiates connection with the Target. It can be software (then it uses a regular network card - NIC) or hardware (it's a special HBA), which provides greater performance by offloading the processor. For FC, only HW Initiator exists.
  • Target (SCSI Target) - typically the target address of the disk array through which the Initiator connects to the LUN and sends input/output requests to it. Generally, it's network storage, but it can also be a computer with corresponding software.

Note: Software for running iSCSI Initiator, as well as iSCSI Target, is part of Windows Server and is commonly available for Linux distributions. Applications that provide FC Target are also available. FC Initiator is part of FC HBA, so it's handled in HW.

The primary purpose of a SAN network is for the Initiator to obtain disk space (a virtual disk) from the Target. Two terms are used in practice for this space (logical disk) from the array; although their meanings differ and should depend on context, they are often used interchangeably. LUN and Volume each have a generally accepted meaning, but different array manufacturers give them different properties and use them differently in their terminology.

  • Volume - logical space on the array that is created inside a group of physical disks and allocated to the client. The actual data is stored in it. It's like a partition from the array's perspective. From the server's perspective (to which we assign it), it's a disk and we can create partitions inside (today the term Volume is more commonly used even in the Windows world). We define basic parameters such as size (we can change it further) and type (for example Thick and Thin).
  • LUN - Logical Unit Number - represents an individually addressable logical device (unit). It's a number for identifying a volume (if there are multiple, otherwise it's 0). In simple terms, a LUN is said to be a logical disk. When we assign multiple volumes to a server, each has a different LUN number. From the SCSI perspective, a LUN is a logical (addressable) device that is part of a physical device (Target).

Note: With some manufacturers, we can create several LUNs inside a Volume, which we assign to servers as separate disks. Another solution is to create a smaller LUN inside the Volume, and the free space can be used for Snapshots (if we want to use Snapshots, free space must remain inside the Volume).

Active and passive network elements

SAN network is built today either on Fibre Channel or Ethernet technology, depending on which transmission protocol we want to use, and this corresponds to the cabling and active network elements used. iSCSI works over Ethernet and we can use metallic (Copper) or optical (Optical Fiber) cables and common Ethernet switches. FCP runs over Fibre Channel and (predominantly) uses optical (Optical Fiber) cables and special FC switches. An alternative is to use FCoE, where FC frames are encapsulated into Ethernet frames, and then we can use classic Ethernet switches (a switch with FCoE support is more efficient).

SFP cages on a switch with an SFP 1000Base-T transceiver; SFP for Fibre Channel with an LC connector

On the server (Host) side, an important physical component connects it to the SAN network. For Ethernet, this can be a common network card NIC or a specialized HBA, which is essentially a network card that performs various frequent operations in hardware. For Fibre Channel, we must use a special FC HBA (common manufacturers are QLogic, Emulex, Brocade).

  • NIC - Network Interface Card - network card. A component that connects a device to a computer network.
  • HBA - Host Bus Adapter - similar to a network card. A component that connects a device to a SAN network. It contains a hardware Initiator. It performs various frequent operations in hardware, thus relieving the server's processor (e.g., off-load of TCP/IP and iSCSI protocol). Often, the term HBA is used for FC interface card, but also for a specialized card for iSCSI.

From the comparison, it follows that iSCSI SAN can be built from relatively common components, while for FC SAN we need (special) cabling and definitely special active network elements. The situation is even more complex when the SAN network is not only built within a locality (LAN) but extends across multiple data centers (WAN). For iSCSI, we can theoretically use the Internet, although certain requirements for connection quality and security must be ensured. For FC SAN, we need special data center connections.

Network topology (connection)

Virtualization, consolidation, and convergence are increasingly pushing into the entire IT environment. So even in computer networks, possibilities of converged networks are emerging, where LAN and SAN transmission is combined. However, the traditional view still prevails, and in it, a separate SAN network from the LAN network is recommended, even when both are built on Ethernet.

The descriptions separate FC SAN or IP SAN (another term Ethernet SAN) network design, but the general principles are the same. Within SAN, we typically create two separate networks, each with a different address range and not interconnected (they are closed). The main reason is high availability and security. If we don't have this requirement, we can make do with one network. In practice, only one logical network is also used, to which multiple redundant connections are made. Especially in Fibre Channel networks, the term Fabric is used for a separate network, we can use VLAN or VSAN.

SAN network topology with separate fabrics
  • Fabric - Fibre Channel Fabric or Switched Fabric, it's a network topology used in FC networks (the most used of possible connection/topology variants). Fabric is one closed network (SAN). It consists of one or more switches, to which end devices (servers and storage) are connected. It's very similar to a classic LAN network, multiple devices can be active at once, the medium is not shared. For high availability, the SAN network is standardly built so that we have two separate fabrics (two mutually unconnected groups of switches, end devices are connected to both), i.e., two independent paths.
  • VLAN - Virtual Local Area Network - (simplified) in the basic concept, a LAN network consists of one or more physical switches, if we want to create another LAN, we must use different switches. VLAN allows us to divide a physical switch into multiple virtual switches. Individual VLANs are communicatively isolated, even if they are on the same switch, and only ports assigned to the same VLAN communicate with each other.
  • VSAN - Virtual Storage Area Network is the equivalent of VLAN from Ethernet. Using VSAN, we connect ports and create a virtual fabric. One port can belong to multiple VSANs, and ports from different switches can be assigned to the same VSAN. Within VSAN, we have separate traffic, as well as security policy, zones, membership, etc.

Communication paths and Multipath

A quite common connection is illustrated in the image above. We have two SAN fabrics (separate networks). The disk array has two controllers, each with two network ports in use, one connected to the first fabric and the other to the second. Each port uses its own address for communication, so we have 4 array addresses. The server has a network card (HBA) with two ports (or two cards) and is connected to both fabrics, so it has two addresses. This ensures high redundancy/availability (High Availability), because an array controller, a switch, or any cable or port can fail. It also increases performance (Load Balancing).

Note: In practice, we can use even more paths, either for higher availability or performance, but when we connect two devices in the same fabric with multiple paths (cables), Link Aggregation is often used (a virtual single connection is created) and it has one address (used in both Ethernet and FC).

With this connection, four paths exist between the server (Initiator) and the disk array (Target). The server sees both array controllers (two addresses) in each fabric, so a total of 4 array addresses. As a result, when we present a LUN to the server, it sees it through all paths, so four identical disks appear on the server. For everything to work properly, i.e. for the server to see only one disk, balance communication across all paths, and use only the available ones in case of failure, the Multipath I/O technique is used.

SAN network with 4 paths - Multipath
  • Multipath I/O - a technique for ensuring high availability (High Availability or Fault Tolerance) and increasing performance through load balancing (Load Balancing) if multiple paths exist between Host and Storage.
  • MPIO - Microsoft Multipath I/O - Microsoft's implementation of Multipath I/O, included in Windows Server, which uses a DSM (Device Specific Module). Manufacturers can create an optimized module for their array. Windows includes a general Microsoft DSM for Fibre Channel, iSCSI, or SAS.
  • ALUA - Asymmetric Logical Unit Access - a SCSI feature, also referred to as Target Port Groups Support (TPGS). A standardized protocol for identifying optimal paths between Host and Storage (path prioritization, describes port status and access characteristics). It is used by Multipath, it is necessary for multiple paths to be active simultaneously (Active/Active).
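The four-path topology described above can be sketched by enumerating which server ports and array ports share a fabric. The port names are hypothetical; a path exists only between ports connected to the same fabric:

```python
# Enumerate Initiator -> Target paths per fabric - an illustrative sketch.
# Each server HBA port only sees the array ports in the same fabric.

server_ports = {"hba0": "fabric-A", "hba1": "fabric-B"}
array_ports = {
    "ctrl0-p1": "fabric-A", "ctrl0-p2": "fabric-B",
    "ctrl1-p1": "fabric-A", "ctrl1-p2": "fabric-B",
}

paths = [(s, a)
         for s, s_fabric in server_ports.items()
         for a, a_fabric in array_ports.items()
         if s_fabric == a_fabric]       # paths never cross fabrics

print(len(paths))   # 4 - without Multipath I/O, 4 identical disks appear
```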

SAN technologies and protocols

In practice, we use either Ethernet SAN or Fibre Channel SAN, which determines what main network technology is used. We also need a protocol that ensures data transfer over this network technology. The most used is iSCSI for Ethernet SAN and Fibre Channel Protocol (FCP) for FC SAN. iSCSI and FCP only ensure communication over the network, the actual work with data (input/output operations with the disk) is performed by the standard SCSI protocol (which every OS knows), which is encapsulated in iSCSI or FCP. The iSCSI and FCP protocols provide an interface for the SCSI protocol (just like the Parallel SCSI or SAS interface). The operating system uses standard SCSI commands and sees the storage as a local device and works with it normally as with a local disk. The transport layer ensures the packing and unpacking of commands so they can travel over the network.

Comparison of the iSCSI and FCP protocols (layers and encapsulation)
  • SCSI - Small Computer System Interface - a standard that defines commands, protocols, and electrical and optical interfaces for data exchange. The parallel SCSI interface was widely used in the past (especially for servers) for connecting disks and other internal or external devices. Nowadays, it is replaced by SAS (Serial Attached SCSI) and SATA. SCSI commands ensure work with data blocks, which is used within SAN.
  • Ethernet - a family of network technologies, a standard for telecommunication networks IEEE 802.3. It defines the cabling used, transmission speeds, network access, signals, L2 communication, format of physical addresses (MAC) - generally properties on the 1st and 2nd layer according to the OSI model.
  • TCP/IP - Transmission Control Protocol / Internet Protocol - a set of communication protocols Internet Protocol suite for communication between endpoints, defines how data is packed (packet), addressed, transmitted, routed, and received. The IP protocol is the 3rd layer of the OSI model, uses logical IP addresses for addressing, describes the determination of the packet path (routing). The TCP protocol is the 4th layer of the OSI model, ensures reliable delivery of data without errors and in a given order (uses confirmation, segmentation), uses port numbers to determine the application (on one server, IP address, multiple services with different port numbers can run, the combination of source IP and port and destination IP and port forms a TCP Session). If we don't need reliability, we can use the connectionless UDP protocol.
  • iSCSI - Internet Small Computer System Interface - a network transport protocol that runs over TCP/IP and thus uses Ethernet technology. It allows block access to storage devices by transmitting SCSI commands over the network (LAN, WAN). While TCP is not as efficient as FCP, with today's Ethernet network speeds, this is no longer a problem. iSCSI requires a minimum speed of 1 Gbps, today it is standardly deployed on metallic networks with a speed of 10 Gbps.
  • FC - Fibre Channel - a network technology that was created as a standard for SAN networks. It's a full-duplex, serial, block-oriented, point-to-point communication interface designed for high-speed data transfer (operates at gigabit speeds). FC transmission is lossless. It's commonly used for connecting disk arrays to servers in SAN networks. Various physical media can be used for transmission, typically optical fibers or metallic cables (TP). It uses the Fibre Channel Protocol (FCP) or Fibre Channel over Ethernet (FCoE), which is the encapsulation of FC frames into Ethernet networks.
  • FCP - Fibre Channel Protocol - a transmission protocol that transmits SCSI commands over an FC network. FCP works over FC, like TCP/IP over Ethernet. It's similar to iSCSI, specially designed for one purpose, so it's more optimal and leaner (less overhead) and doesn't need TCP/IP layers.

Used addresses

SCSI addressing

For addressing SCSI devices, a combination of several IDs is used: Adapter ID, Target ID, and LUN, possibly extended with a Channel ID (when one adapter has multiple buses), giving Adapter ID, Channel ID, Target ID, and LUN, for example 1-0-1-0. When we encapsulate SCSI into a SAN protocol, the protocol handles the Initiator - Target communication, and when using Multipath we get the same disk through different adapters, so only the LUN remains important. More precisely, we need a unique LUN identifier (the LUN Network Address Authority (NAA) number), referred to variously as Serial Number, GUID, or LUN/Volume WWN.
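The address format from the example above can be illustrated with a small parsing sketch (the helper name is my own):

```python
# Parse a SCSI address like "1-0-1-0" (Adapter-Channel-Target-LUN).
# The format follows the example in the text; purely illustrative.

def parse_scsi_address(addr: str) -> dict:
    adapter, channel, target, lun = (int(part) for part in addr.split("-"))
    return {"adapter": adapter, "channel": channel,
            "target": target, "lun": lun}

print(parse_scsi_address("1-0-1-0"))
# {'adapter': 1, 'channel': 0, 'target': 1, 'lun': 0}
```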

iSCSI protocol

The iSCSI protocol uses TCP/IP, so MAC addresses and IP addresses are used for network communication. The iSCSI Target listens on a specific TCP port, which is usually port 3260. The iSCSI protocol itself uses IQN names for identification and connection between Initiator and Target.

  • MAC address (Media Access Control address) - physical address of a network interface, a unique identifier used for communication within a network segment (says nothing about its location in the network). It's typically assigned by the device manufacturer (in practice, it can often be changed). Example 01:23:45:67:89:ab
  • IP address (Internet Protocol address) - logical address for identifying a node within a computer network. IP addresses allow division of networks into subnets (subnetting) and routing between networks. Example 193.100.10.1
  • IQN - iSCSI Qualified Name - unique iSCSI identifier of an Initiator or Target, example iqn.1991-05.com.microsoft:server.company.local
iSCSI protocol (SCSI encapsulation)
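The IQN format from the example above (iqn.&lt;year-month&gt;.&lt;reversed domain&gt;:&lt;identifier&gt;) can be illustrated with a small parsing sketch (the helper name is my own, and only the iqn. form is handled):

```python
# Split an IQN (iSCSI Qualified Name) into its parts - illustrative only.

def parse_iqn(iqn: str) -> dict:
    assert iqn.startswith("iqn."), "only the iqn. form is handled here"
    head, _, name = iqn.partition(":")       # name part after the colon
    _, date, authority = head.split(".", 2)  # "iqn", year-month, domain
    return {"date": date, "authority": authority, "name": name}

print(parse_iqn("iqn.1991-05.com.microsoft:server.company.local"))
# {'date': '1991-05', 'authority': 'com.microsoft', 'name': 'server.company.local'}
```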

Fibre Channel Protocol

The FCP protocol uses Fibre Channel, where WWPN addresses (generally WWN addresses) are used for communication, which is the equivalent of a MAC address in Ethernet, i.e., a unique identifier assigned to a network interface. A dual-port Ethernet NIC (network card) has two MAC addresses (one for each port). A dual-port FC HBA has three WWN addresses (one Node WWN and two Port WWN). There's no equivalent to IP addresses, so communication is more flat.

There are several topologies for Fibre Channel, but today Switched Fabric is probably exclusively used, where all devices are connected to a switch (the same principle as in today's Ethernet) and communicate directly with each other. In this case, Fibre Channel ID (or Port ID) is also used, which is assigned to each FC interface in the up state and is used for communication in FC frames.

  • WWN - World Wide Name - a general term for a globally unique identifier used in the world of Fibre Channel, equivalent to a MAC address in Ethernet, 8 bytes (64 bits) long, thus longer than MAC, example 01:23:45:67:89:ab:cd:ef
  • WWPN - World Wide Port Name - WWN assigned to a port in a Fibre Channel Fabric, Cisco sometimes uses the term Port WWN
  • WWNN - World Wide Node Name - WWN assigned to a node (end device, disk array, switch) in a Fibre Channel Fabric, the same WWNN can be seen on all ports of a device, but each time with a different WWPN
  • FCID - Fibre Channel ID - more precisely Fibre Channel Node Port Identifier, also abbreviated as N Port ID, a 24-bit number assigned to an end device (N Port) during the FLOGI process. The switch uses FCID to route frames from a given source (initiator) to a specific target within the SAN fabric. It's a unique identifier within the fabric. It consists of Domain_ID, Area_ID and Port_ID. Example 0x5e0000
  • Name Server - a service that runs on the switch and collects a list of WWNs across the entire Fabric (attributes of all devices in the VSAN). It creates a database called FCNS, which is shared by all switches in the network.

Note: Just as in an Ethernet network, frames are used in an FC network. For addressing in FC frames, WWN addresses are not used, but shorter FCIDs. The assignment of FCID and WWN can be learned at the Name Server.
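The FCID structure can be illustrated by splitting the example value 0x5e0000 into its three one-byte fields (the helper name is my own):

```python
# Decompose a 24-bit Fibre Channel ID into Domain_ID, Area_ID and Port_ID
# (one byte each), using the example FCID 0x5e0000 from the text.

def split_fcid(fcid: int) -> dict:
    return {"domain": (fcid >> 16) & 0xFF,   # switch the device is on
            "area":   (fcid >> 8) & 0xFF,
            "port":   fcid & 0xFF}

print(split_fcid(0x5e0000))   # {'domain': 94, 'area': 0, 'port': 0}
```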

Fibre Channel frame

Host address

On the disk array, we always define a Host object, which is a client that connects to it and to which we then assign disk space (Volume or LUN). The Host is defined either by Fibre Channel WWN (World Wide Name) or iSCSI IQN (iSCSI Qualified Name). We need to find this information on the client to be able to perform the configuration on the disk array. If network communication is properly enabled, the array typically detects available addresses in the network and offers them during configuration. Publishing a virtual disk for a client is also a security measure and is also referred to as LUN Masking.

  • LUN Masking - LUN is provided (provisioned) to a specific Host according to its (Initiator) address through certain (Target) ports of the array. The controller allows commands to the given LUN only coming from the given Initiator. Of course, the Initiator address can be spoofed (more easily with iSCSI, overall depends on network security).
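LUN Masking can be thought of as a simple lookup on the controller: a command for a LUN is accepted only from an Initiator that the LUN was presented to. A minimal sketch, with made-up initiator WWPNs:

```python
# LUN Masking as a lookup table - an illustrative sketch.
# The WWPNs below are hypothetical examples.

masking_table = {
    # initiator WWPN            -> set of LUNs presented to it
    "10:00:00:90:fa:11:22:33": {0, 1},
    "10:00:00:90:fa:44:55:66": {0},
}

def is_allowed(initiator_wwpn: str, lun: int) -> bool:
    return lun in masking_table.get(initiator_wwpn, set())

print(is_allowed("10:00:00:90:fa:44:55:66", 1))   # False - not presented
```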

Communication control (security), or who can communicate with whom

iSCSI protocol

Ethernet SAN uses common network technologies as in LAN networks, so all devices in the same subnet can communicate with each other, and when using routing, we can also communicate between networks. To limit communication, we can use, for example, ACL (Access Control List).

Security in an iSCSI network is handled through authentication (using the not very secure CHAP protocol). General security is addressed by using a separate SAN network, to which only servers that can communicate with the disk array are connected, and no one from the regular LAN network should have access.

Additional security is on the array side, where we use LUN Masking, mapping volumes (Volume - Target LUN) and servers (Initiator), so no one else has access to the volume (without using authentication, this cannot be considered extra security).

Fibre Channel Protocol

In a Fibre Channel SAN network, all devices in the same Fabric (VSAN) can normally communicate directly. In practice, this communication is limited at the network level (FC switches) to ensure greater security. The method used is referred to as zoning. We create zones within a specific VSAN and only devices in a given zone can communicate with each other (we typically create a zone for each server and always add the array to it). A device that is not included in any zone uses the default zone policy. We can include devices in a zone using WWPN, FC ID, the interface it's connected to, etc.

  • Zoning (masking) - zoning of VSAN/SAN, we create zones and only devices in a given zone can communicate with each other. It's set on the switch. It's not mandatory to use, but for security reasons, it's always used in practice.

Created zones are grouped into zone sets, which are shared by all switches in the given fabric. Other information is also exchanged and shared by all FC switches, such as a list of all devices logged into the Fabric topology. So on each switch, we can obtain information about the entire network (fabric).

Example of zoning configuration (using aliases) on Cisco MDS; the zone set must also be activated for the configuration to take effect:

SWITCH(config)#zone name Server1 vsan 10
SWITCH(config-zone)#member fcalias Server1-port-A
SWITCH(config-zone)#member fcalias Array-port-1
SWITCH(config)#zoneset name SAN-VSAN10 vsan 10
SWITCH(config-zoneset)#member Server1
SWITCH(config)#zoneset activate name SAN-VSAN10 vsan 10

Since FC is a less common technology than Ethernet, this also contributes to security. A separate SAN network is always used. LUN Masking is also used on the array side.

Connecting the server to the array

iSCSI protocol

The iSCSI protocol uses TCP/IP, so we need to know the IP address (or all of them) and TCP port of the disk array. The server must be connected to the SAN network and have IP addresses correctly set to be able to communicate with the array. On the server, we then configure the iSCSI Initiator, where we enter the address and port of the array, usually in the Discovery section, where other addresses are automatically detected. If the iSCSI communication is successful, we will see the detected array (Target), its IQN name and detected paths. Next, we need to enable/set up Multipath and its path selection policy (most often Round Robin). Optionally, we can set up security using authentication.

iSCSI Initiator configuration - Discover Portal

In an iSCSI SAN network, it's recommended to use Jumbo Frames. If we decide to do so, we must make the same setting across the entire network on all switches, servers (Initiator) and disk array (Target).

  • Jumbo Frames - Ethernet frames that have a larger size (MTU) than 1500B, often 9000B
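On a Linux server, enabling and verifying Jumbo Frames might look like this (the interface name eth0 and the target address are placeholders). The ping payload of 8972 B corresponds to the 9000 B MTU minus the 20 B IP header and 8 B ICMP header:

```shell
# Set MTU 9000 on the interface (must match the switches and the array)
ip link set dev eth0 mtu 9000

# Verify end-to-end: send an unfragmented 8972 B payload to the target
# (-M do forbids fragmentation, so the ping fails if any hop has a smaller MTU)
ping -M do -s 8972 10.0.0.10
```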

Fibre Channel Protocol

Fibre Channel Protocol uses World Wide Names (WWN) for addressing, but we don't need to know the WWN of the disk array in advance. If the server is correctly connected to the SAN network and is allowed to communicate with the array (zoning), it connects to it automatically. We then only need to enable/configure Multipath, where our disk array should appear.
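On a Linux server with an FC HBA, the port WWNs (WWPN), which the SAN administrator uses in zoning aliases, can be read from sysfs. The host numbers are placeholders and depend on the system:

```shell
# Show the WWPN of each FC HBA port (the value used as a zoning alias member)
cat /sys/class/fc_host/host*/port_name

# Rescan the SCSI bus so newly assigned LUNs appear without a reboot
echo "- - -" > /sys/class/scsi_host/host0/scan
```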

Difference between iSCSI and FC

One difference between iSCSI and FC communication is that with iSCSI we must first establish the network connection and then configure the iSCSI Initiator, where we enter the addresses of the disk array; only then does the disk (Volume) assigned to the server appear on it (automatically). With FCP, a working network connection is enough: the disk array is found automatically and the assigned disks are connected (provided, of course, that the driver for the FC HBA is installed).

Author:

Related articles:

Computer Storage

Data storage is a vast and complex area of the computer world. Here you will find articles dedicated to Storage Area Networks (SAN), iSCSI technologies, Fibre Channel, disk arrays (Storage System, Disk Array), and data storage in general.

If you want to write something about this article, use the comments.

Comments
  1. [1] Pavel

    Hi, a question has been bugging me: I have a cluster, and when node A fails, node B takes over. Both node A and node B logically have their data on a disk array like the one in this article. But how do you protect against the failure of the array itself, where my data lives? (e.g. someone steals it :))

    Monday, 11.09.2017 12:59 | answer
  2. [2] Zagro

    I have to compliment you on a nice article. I worked in this field for 6 years, and I have hardly ever found it written up and described this nicely anywhere. Absolutely great for a beginner, although of course reading a several-hundred-page redbook, or something similar, is still unavoidable. I almost got nostalgic, even though I prefer a "NAS from NetApp" :)

    Wednesday, 25.10.2017 15:09 | answer
  3. [3] Václav

    respond to [1]Pavel: Three solutions come to mind. Either you have a second array in a cluster or, more often, a second array in a geographically separate location with replication set up between them - an expensive solution. Or you simply back up frequently to a backup server, which for safety is also in a cluster or in a geographically separate location, and to be sure you also push it to tape - a cheaper solution with lower availability (of course, you also back up your data in the first case with two storage systems). Or you have everything in the cloud and a well-written contract - the cheapest solution, but you practically don't know where all your data is :-)

    Monday, 19.03.2018 11:51 | answer
  4. [4] Honza

    Excellently written, as always. :-) I just want to ask: so software, e.g. from NetApp, is installed somewhere on a server, and through it the SAN storage is managed, virtual Volumes are created, etc.?

    Or am I getting it wrong, and that is exactly the mentioned controller?

    Thanks in advance, and sorry for the silly question. :-)

    Friday, 23.03.2018 16:36 | answer
  5. [5] Samuraj

    respond to [4]Honza: That depends on the array :-).

    For example, the NetApp AFF8040 (all All Flash FAS models) has OnCommand System Manager installed as part of the system. That is a web interface for management, so I connect with a browser directly to the array (or directly via SSH).

    On the other hand, the NetApp E2760 (all E-Series Storage Systems) offers (probably) some API for configuration, and to make it user-friendly, you need to install the SANtricity Storage Manager Client application on a server or workstation and perform the management through it.

    It is similar elsewhere: some systems are managed via a web interface (e.g. Exchange 2016), and some need an application - a rich client (e.g. Exchange 2010).

    Friday, 23.03.2018 18:24 | answer
  6. [6] Honza

    respond to [5]Samuraj: Great, thanks for the explanation. ;-)

    Friday, 23.03.2018 20:14 | answer
  7. [7] Honza

    Absolutely perfectly written. It must have taken a lot of work. I occasionally move around SANs, and as a newcomer I am completely lost when colleagues talk about this technology. This is where I get my wisdom :-)

    On terminology: it annoys me enormously too, especially the constant swapping of terms for the same thing and vice versa, and above all the use of the word "host" for a device connecting to the array. It drives me up the wall...

    Friday, 03.08.2018 17:08 | answer
  8. [8] Karel

    Thanks for the great article. I am just deciding whether to go with FC or iSCSI. Which one has a future?

    Friday, 28.09.2018 15:16 | answer
  9. [9] Druid

    A theoretical question: is it possible to connect two servers to each other via SAN, with one providing storage space to the other - simply turning a server into storage? Something like a direct cable connection over LAN, only it would be over fiber. How would it work with setting the WWN?

    Friday, 03.06.2022 22:59 | answer