Flash Storage


FC vs iSCSI Storage Area Networks (SAN)

By Stephen Rice posted 08-28-2023 12:50


In this blog I am not attempting to choose the best vehicle for your business application. I personally own multiple vehicles, and each has a purpose for which it is best suited: a sporty luxury sedan and a four-door pickup truck. Each vehicle has areas where it excels and areas where it is deficient. Just as there is no one perfect vehicle, there is no one perfect network protocol for all workloads in a Storage Area Network. You will have to choose the one that best fits your business application.

Fibre Channel

Fibre Channel (FC) was designed to be a high-speed, low-latency network; I would equate FC to the sporty luxury sedan. FC networks are usually designed with capacity and link sharing in mind, and the simple message flow (buffer-to-buffer credits) plays to these strengths and delivers on the low-latency requirement. FC works extremely well with the small block sizes that demand low latency. We have tested several protocols in the lab, including SCSI-FCP and FC-NVMe. Testing with FC-NVMe we achieved 800 KIOPS at a 39 µs response time (39 microseconds (µs) = 0.039 milliseconds (ms)). See Blog…
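To show why buffer-to-buffer credit flow keeps FC latency predictable, here is a minimal Python sketch of the mechanism (the class, method names, and credit count are illustrative, not any real HBA or switch API): a port may transmit only while it still holds credits granted by the receiver, and each R_RDY returned frees one buffer and restores one credit.

```python
# Minimal sketch of FC buffer-to-buffer (BB) credit flow control.
# All names and values here are illustrative, not a vendor API.

class BBCreditPort:
    """Sender-side view of a Fibre Channel link using BB credits."""

    def __init__(self, bb_credit: int):
        self.credits = bb_credit  # credits granted by the receiving port

    def can_send(self) -> bool:
        # A frame may be transmitted only while at least one credit remains.
        return self.credits > 0

    def send_frame(self) -> None:
        if not self.can_send():
            raise RuntimeError("link paused: no BB credits available")
        self.credits -= 1  # one credit consumed per frame sent

    def receive_r_rdy(self) -> None:
        self.credits += 1  # receiver freed a buffer and returned a credit


port = BBCreditPort(bb_credit=8)
sent = 0
while port.can_send():      # burst until the granted credits run out
    port.send_frame()
    sent += 1
print(sent)                 # 8 frames before the link must pause
port.receive_r_rdy()        # one R_RDY restores one credit
print(port.can_send())      # True: transmission can resume
```

Because the sender never transmits into a full receive buffer, frames are not dropped and retransmitted the way TCP segments can be, which is one reason FC latency stays so consistent.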

With SCSI-FCP, many lab tests have measured close to 0.1 ms (100 µs) response times for cache read workloads. FC is typically a more secure network because it is not a public network. However, FC is considerably more expensive to implement than an IP network.

iSCSI

iSCSI was designed to use Transmission Control Protocol/Internet Protocol (TCP/IP) as a transport. TCP/IP is a suite of transmission protocols used to interconnect devices over the internet; as such, it is a general-purpose transport with many complex tuning knobs and switches. I would equate TCP/IP to the pickup truck. TCP flow control was designed to share capacity fairly, often under unpredictable demand. TCP congestion algorithms monitor acknowledgements and calculate the demand they should place on the network based on previous performance, ramping up or down as needed. This can produce single-session bandwidth inconsistency.
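That ramp-up-and-back-off behavior can be sketched in a few lines. The model below is a deliberately simplified additive-increase / multiplicative-decrease (AIMD) loop, where each event stands for one round trip; real stacks such as Reno or CUBIC are far more elaborate, so treat this only as an illustration of why single-session bandwidth fluctuates.

```python
# Toy sketch of TCP's additive-increase / multiplicative-decrease (AIMD)
# behavior. Each event represents one round trip: 'a' = all segments were
# acknowledged, 'l' = a loss was detected. Real congestion control
# (Reno, CUBIC, BBR) is considerably more sophisticated.

def aimd(events, cwnd=1.0, ss_thresh=8.0):
    """Yield the congestion window (in segments) after each round trip."""
    for ev in events:
        if ev == "a":                  # successful round trip
            if cwnd < ss_thresh:
                cwnd *= 2              # slow start: double per round trip
            else:
                cwnd += 1              # congestion avoidance: +1 per round trip
        else:                          # loss detected
            ss_thresh = cwnd / 2
            cwnd = max(1.0, cwnd / 2)  # multiplicative decrease: halve
        yield cwnd

# Window ramps up, halves on a loss, then climbs again:
print(list(aimd("aaaa" + "l" + "aa")))
# [2.0, 4.0, 8.0, 9.0, 4.5, 5.5, 6.5]
```

The sawtooth in that output is exactly the single-session inconsistency described above: throughput is probed upward until the network pushes back, then cut sharply.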

iSCSI tends to show higher latency with small block sizes: where FC is in the tenth-of-a-millisecond range, iSCSI is in the half-millisecond range in our lab testing. There are several reasons for this. One is that FC switches are typically cut-through, while IP switches tend to be store-and-forward. IP uses store-and-forward so that corrupted frames do not propagate through the entire network, and with most vendors the entire switch must be set to one switching methodology. TCP/IP jumbo frames on a store-and-forward switch may add latency, because the switch must wait for the complete frame before it can be validated. Some installations therefore allow jumbo frames only locally and use a standard MTU of 1500 on all WAN connections to reduce latency.

A second reason for iSCSI's higher latency is that not every connected host has a TCP/IP offload engine, so TCP/IP processing must interrupt the host CPU. Adding an offload engine for iSCSI increases the cost of each host, much as an FC HBA does. IP networks are typically more public, so they tend to have less predictable traffic, less predictable latency, and more security issues; they are, however, less expensive to implement than FC.

In this analogy, iSCSI is the pickup truck: probably not as fast as the sporty luxury sedan, but more versatile. The pickup can handle multiple tasks, hauling more adult passengers while carrying cargo in the back, but being larger makes it harder to find public parking or to maneuver in heavy traffic.
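A quick back-of-the-envelope calculation shows why jumbo frames hurt on a store-and-forward switch. Such a switch must clock the entire frame in before forwarding it, so its added delay grows with frame size, while a cut-through switch forwards after reading only the header. The link speed and frame sizes below are just example numbers:

```python
# Back-of-the-envelope per-hop delay for store-and-forward switching.
# Link speed and frame sizes are example values, not lab measurements.

def serialization_delay_us(frame_bytes: int, link_gbps: float) -> float:
    """Time to receive a full frame off the wire, in microseconds."""
    # bytes * 8 bits / (Gb/s * 1e9 bits/s) seconds, converted to µs
    return frame_bytes * 8 / (link_gbps * 1_000)

# Store-and-forward must buffer the whole frame before forwarding:
standard = serialization_delay_us(1500, 10)   # standard MTU on 10G: ~1.2 µs/hop
jumbo = serialization_delay_us(9000, 10)      # jumbo frame on 10G: ~7.2 µs/hop

# A cut-through switch begins forwarding after roughly the header,
# so its delay stays near-constant regardless of frame size:
cut_through = serialization_delay_us(64, 10)  # ~0.05 µs/hop

print(standard, jumbo, cut_through)
```

Multiply the store-and-forward figure by the number of hops in the path and the jumbo-frame penalty compounds, which is consistent with the practice of restricting jumbo frames to local segments.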

With large block sizes, FC and iSCSI performance is similar; both fill the available bandwidth.

Not every strategic business unit needs low-latency performance. However, small block workloads like Online Transaction Processing (OLTP), where response time is a critical component (credit card, banking, and stock market transactions), lean toward FC. CRM/ERP applications such as SAP® would also probably lean toward FC because of its predictable low latency, even at a higher cost.

Other strategic business units may have workloads where response time is not as mission critical. iSCSI works at a much lower cost and is a good fit for general computing, research projects, VDI, and office productivity applications.

In the Hitachi Vantara solution, the physical cards that support the FC option on the storage subsystems have twice as many ports as the iSCSI physical cards. Lab testing on the VSP E1090 with the VDBench I/O driver achieved nearly the same results with iSCSI 10G, iSCSI 25G, and 32G SCSI-FCP for the following workloads: Random Write Cache Miss 8K, OLTP 30% Cache Hit, and VMware workloads were all able to exhaust the Parity Group SSD resources of the VSP E1090. OLTP 70% Cache Hit on the VSP E1090 also achieved very similar results across iSCSI 10G, iSCSI 25G, and 32G SCSI-FCP, with the MPUs approximately 80% busy and the Parity Groups approximately 35% busy. In other words, the frontend bandwidth (FC or iSCSI) was able to exhaust the storage subsystem.

In conclusion, an FC SAN may be cost prohibitive for some businesses, while iSCSI SANs are a more affordable option. Setting up a SAN, whether FC or iSCSI, takes a tremendous design effort to be successful. And keeping all components in the storage subsystem well balanced behind the SAN infrastructure is just as important as choosing the network topology.

If your budget only allows one SAN network for your business, pick carefully. Many businesses run multiple SAN networks, each chosen to fulfill a unique business requirement and serving a specific purpose.