A Q&A from the FCoE vs. iSCSI vs. iSER Debate

It’s become quite clear to those of us in the SNIA Ethernet Storage Forum (ESF) that everyone loves a great debate. We’ve proved that with our “Great Storage Debates” webcast series, which has had over 3,500 views in just a few months! Last month we had another friendly debate on FCoE vs. iSCSI vs. iSER. If you missed the live event, you can watch it now on-demand and download a PDF of the webcast slides. Our live audience asked a lot of interesting questions. As promised, here are answers to them all.

Q. How often are iSCSI offload adapters used in customer environments as compared to software initiators? Can these adapters be used for all IP traffic, or do they only run iSCSI?

A. iSCSI offload adapters are ideally suited for enabling high-performance storage access at up to 100Gbps data rates for business-critical applications, for example, latency-sensitive transactional applications and large-file business intelligence applications. iSCSI offload adapters typically also support offload of other storage protocols such as NVMe-oF, iSER, and FCoE, as well as regular Ethernet traffic using offload or non-offload means.

Q. What you’ve missed with iSCSI is Jumbo Frames. That payload size is one of the biggest advantages over Fibre Channel. The biggest problem with both FCoE and iSCSI is they build the networks too complex, with too many hops, without true redundant isolation. Best Practices with block based FC is to keep the host and storage as close to each other as possible. And to have separate isolated redundant networks/fabric.

A. The Jumbo Frame (JF) argument is quite contentious among iSCSI storage and network administrators, even beyond anything to do with Fibre Channel.

The performance advantages of JFs are minimal – only a 3%-5% performance boost over the default MTU size of 1500 bytes. In mixed-workload environments (which dominate data center application deployments), JFs simply do not provide the kind of benefits that people expect in real-world scenarios. The only time JFs can “move the needle,” so to speak, is when you have massively scaled systems with 100s or 1000s of devices, but this raises other issues.

One of those issues is that every device in the system needs to have JFs enabled. This can be something of a problem when systems get as large as they need to be in order to take advantage of JFs. Ensuring that every device is configured properly – especially over time, and especially when considering how iSCSI devices are added to networked environments – is a job that requires the coordination of the server/virtualization teams, the networking teams, and the storage teams. By and large, many people find QoS to be a more productive means of performance improvement for iSCSI systems than JFs.
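
For those weighing the 3%-5% figure quoted above, here is a back-of-the-envelope wire-efficiency comparison. It is a minimal sketch: the overhead constants are the usual Ethernet preamble/inter-frame gap, header/FCS, and IPv4/TCP header sizes, and real workloads will see somewhat different numbers.

```python
# Back-of-the-envelope wire efficiency: standard 1500-byte MTU vs. 9000-byte
# jumbo frames. Per-frame overhead on the wire: preamble + inter-frame gap
# (20 bytes) and Ethernet header + FCS (18 bytes); inside the frame, IPv4 and
# TCP headers consume another 40 bytes of the MTU.
WIRE_OVERHEAD = 38        # preamble/IFG + Ethernet header + FCS
IP_TCP_HEADERS = 40       # IPv4 + TCP headers, no options

def payload_efficiency(mtu: int) -> float:
    """Fraction of bytes on the wire that carry application data."""
    return (mtu - IP_TCP_HEADERS) / (mtu + WIRE_OVERHEAD)

std, jumbo = payload_efficiency(1500), payload_efficiency(9000)
print(f"1500 MTU: {std:.1%}  9000 MTU: {jumbo:.1%}  gain: {jumbo - std:.1%}")
# Prints roughly: 1500 MTU: 94.9%  9000 MTU: 99.1%  gain: 4.2%
```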

Fibre Channel, on the other hand, has a maximum frame size of 2112 bytes. FCoE, then, only requires “baby jumbo” frames (~2.5k), for which the configuration is pushed from the switch to the end devices. What FC has that iSCSI does not is the concept of “sequences” and “exchanges,” which ensures that a long flow of frames (regardless of their size) is sent as an entity. So, regardless of the frame size (2.5k or 9k), the data flow is sent with consistency and low jitter because of the way the sequences and exchanges are handled.

The concern about “too complex” and “too many hops” is an interesting one, as Fibre Channel (and, correspondingly, FCoE) is deliberately kept as simple and straightforward as possible. An FC network, for instance, rarely goes beyond 2 hops (“hops” in FC are measured as the links between switches, whereas in Ethernet “hops” are measured as the switches themselves).

Logically, then, there is usually, at most, an edge-core-edge topology, with a deterministic path determined by Fibre Channel’s FSPF (Fabric Shortest Path First) routing protocol.

iSCSI topologies, on the other hand, can be complex (as Ethernet topologies sometimes can be). For larger iSCSI environments, it is often recommended to isolate the storage traffic out into its own, simplified topology. iSCSI SANs that have grown organically, however, can sometimes struggle to be reined in over time.

Best practice for all storage – not just block – is to keep it as close to the host/source as is reasonably possible. In backup scenarios, for example, you want the storage far enough away to be safe from any catastrophe, but close enough to ensure recovery objectives. Keeping storage close to the host is a common design principle, and, as mentioned in the webinar, it is important that the architecture ensure high availability (HA) to compensate for the rigidity that block storage systems require to make up for their weaker ULP (upper-layer protocol) recovery mechanisms.

Q. Most servers today have enough compute power to not need offload adapters.

A. This statement might be true in some situations, but definitely not most. With more and more virtual machines being deployed on physical systems, and with new storage technologies such as SSDs and NVMe devices greatly lowering latencies, servers are often CPU-bound when moving or retrieving data from storage. Offloading storage-related activities to an adapter frees the CPU and increases overall server performance.

Q. In which industries is each protocol (i.e., FCoE, iSCSI, and iSER) widely used, and where?

A. iSCSI is the most widely supported Ethernet SAN protocol, with native initiator support integrated into all the major operating systems and hypervisors, built-in RDMA for high-performance offloaded implementations supporting up to 100Gbps, and support across major storage platforms. It is thus ideally suited for deployment across cloud and enterprise data center environments.

Q. Do iSCSI offload adapters provide the IPSec encryption, or is this done in software only solutions? Please answer from both initiator and target perspective.

A. Yes, iSCSI protocol offload adapters can optionally provide offload of IPSec encryption for both iSCSI (as well as NVMe-oF) initiator and target operation at data rates of up to 100 Gigabits-per-second. This results in overall higher server and target efficiency including power, cooling, memory, and CPU savings.

Q. Does iSER support direct connection between initiator and target, or is a switch required between them?

A. A switch is not required.

Q. J, you left out the centralized management that Fibre Channel provides for FCoE as a positive.

A. I got there eventually! But you are correct: the Fibre Channel tools for a centralized management plane with the name server – regardless of the number of switches in the fabric – are a tremendous positive for FCoE/FC solutions at scale.

Q. Is multipath possible on the initiator with iSER, and will it scale with high IOPS?

A. Yes. Multipath is possible on the initiator with iSER and scales with high IOPS.

Q. FCoE has been around for a while, but I noticed that some storage vendors are dropping support for it. Do you still see a big future for FCoE?

A. As a protocol, FCoE has always been able to be used wherever and whenever needed. Almost all converged infrastructure systems use FCoE, for instance. Given that the key advantage of FCoE has been traffic/protocol consolidation, there is an extremely strong use case for FCoE at “the first hop” – that is, from the server to the first network switch.

Q. What is the MTU for iSER?

A. iSER is a protocol that sits above the Layer 2 Data Link Layer, which is where the MTU is set. As a result, iSER will accept/accommodate any MTU setting that is configured at that layer. Please see the earlier answer about Jumbo Frames for more information.
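
If you want to confirm what MTU an iSER (or iSCSI) session will inherit on a Linux host, the per-interface value exposed by the kernel is the one that matters. Below is a minimal sketch; the interface names are hypothetical placeholders.

```python
# Print the Layer 2 MTU that an iSER/iSCSI session on each interface would
# inherit. Minimal sketch for a Linux host; the interface names are examples.
from pathlib import Path

for iface in ("eth2", "eth3"):  # hypothetical RDMA/iSCSI-facing NICs
    mtu_path = Path(f"/sys/class/net/{iface}/mtu")
    if mtu_path.exists():
        print(f"{iface}: MTU {mtu_path.read_text().strip()}")
    else:
        print(f"{iface}: interface not found")
```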

Ready for more great storage debates? Our next one will be RoCE vs. iWARP on August 22, 2018. Save your place by registering here.

And you can check out our previous debates “File vs. Block vs. Object Storage” and “Fibre Channel vs. iSCSI” on-demand at your convenience too. Happy debating!

FCoE vs. iSCSI vs. iSER: Get Ready for Another Great Storage Debate

As a follow-up to our first two hugely successful “Great Storage Debate” webcasts, Fibre Channel vs. iSCSI and File vs. Block vs. Object Storage, the SNIA Ethernet Storage Forum will be presenting another great storage debate on June 21, 2018. This time we’ll take on FCoE vs. iSCSI vs. iSER.

For those of you who’ve seen these webcasts, you know that the goal of these debates is not to have a winner emerge, but rather to provide unbiased education on the capabilities and use cases of these technologies so that attendees can become more informed and make educated decisions.

Here’s what you can expect from this session: One of the features of modern data centers is the ubiquitous use of Ethernet. Although many data centers run multiple separate networks (Ethernet and Fibre Channel (FC)), these parallel infrastructures require separate switches, network adapters, management utilities and staff, which may not be cost effective.

Multiple options for Ethernet-based SANs enable network convergence, including FCoE (Fibre Channel over Ethernet) which allows FC protocols over Ethernet and Internet Small Computer System Interface (iSCSI) for transport of SCSI commands over TCP/IP-Ethernet networks. There are also new Ethernet technologies that reduce the amount of CPU overhead in transferring data from server to client by using Remote Direct Memory Access (RDMA), which is leveraged by iSER (iSCSI Extensions for RDMA) to avoid unnecessary data copying.

That leads to several questions about FCoE, iSCSI and iSER:

  • If we can run various network storage protocols over Ethernet, what differentiates them?
  • What are the advantages and disadvantages of FCoE, iSCSI and iSER?
  • How are they structured?
  • What software and hardware do they require?
  • How are they implemented, configured and managed?
  • Do they perform differently?
  • What do you need to do to take advantage of them in the data center?
  • What are the best use cases for each?

Register today to join our SNIA experts as they answer all these questions and more on the next Great Storage Debate: FCoE vs. iSCSI vs. iSER. We look forward to seeing you on June 21st.

Fibre Channel vs. iSCSI – The Great Debate Generates Questions Galore

The SNIA Ethernet Storage Forum recently hosted the first of our “Great Debates” webcasts on Fibre Channel vs. iSCSI. The goal of this series is not to have a winner emerge, but rather provide vendor-neutral education on the capabilities and use cases of these technologies so that attendees can become more informed and make educated decisions. And it worked! Over 1,200 people have viewed the webcast in the first three weeks! And the comments from attendees were exactly what we had hoped for:

“A good and frank discussion about the two technologies that don’t always need to compete!”

“Really nice and fair comparison, guys. Always well moderated, you hit a lot of material in an hour. Thanks for your work!”

“Very fair and balanced overview of the two protocols.”

“Excellent coverage of the topic. I will have to watch it again.”

If you missed the webcast, you can watch it on-demand at your convenience and download a copy of the slides.

The debate generated many good questions, and our expert speakers have answered them all.

The Great Debates – Our Next Webcast Series

The SNIA ESF is announcing a new series of webcasts, following our hugely successful “Everything You Wanted To Know About Storage But Were Too Proud To Ask” webcasts. Those focused on explaining storage technology from the ground up, and while they were pretty all-encompassing in their storage technology coverage, they didn’t compare or contrast similar technologies that perform broadly similar functions.

That’s what we’re going to do in our new “Great Debates” series, the first of which was “FC vs. iSCSI.” It’s now available on-demand. I encourage you to check it out. It’s a great debate with experts who really know their stuff.

But wait… FC vs. iSCSI? That “versus” sounds more like an argument than a discussion. Was there a winner? Was this a technology fight, with a clear-cut winner and a loser? The answer is an emphatic “No!”

FC vs. iSCSI – The Debate Continues

It’s one of the great IT debates: Fibre Channel (FC) or iSCSI. We at the SNIA Ethernet Storage Forum thought this was the perfect way to kick off the New Year, so we’re hosting a live webcast “FC vs. iSCSI” on January 31st with experts who will not be afraid to highlight differences and compare and contrast these two storage protocols.

In the enterprise, block storage typically handles the most critical applications such as database, ERP, product development, and tier-1 virtualization. The dominant connectivity option for this has long been Fibre Channel SAN (FC-SAN), but recently many customers and block storage vendors have turned to iSCSI instead. FC-SAN is known for its reliability, lossless nature, 2x FC speed bumps, and carefully tested interoperability between vendors. iSCSI is known for running on ubiquitous Ethernet networks, 10x Ethernet speed bumps, and supporting commodity networking hardware from many vendors.

FCoE also delivers increasing performance as Ethernet speeds increase, and Fibre Channel delivers increasing performance as FC speeds increase. Historically, FC delivered speed bumps at a more rapid interval (2x bumps), while Ethernet delivered its speed bumps at a slower pace (10x bumps), but that has changed recently with Ethernet adding 2.5G, 5G, 25G, 40G, and 50G to the traditional 1G, 10G, 100G timeline.

As the storage world moves to more flash and other non-volatile memory, more cloud, and more virtualization (or more containers), this debate becomes even more interesting. Attend this webcast to learn:

  • Will Fibre Channel or iSCSI deliver faster performance? Does it depend on the workload?
  • How is the wire speed race going between FC and iSCSI? Does anyone actually run iSCSI on 100GbE? When will 128Gb Fibre Channel arrive?
  • Can any server or storage array actually support more than 32Gb/s or 40Gb/s speeds?
  • Do Linux, Windows, or hypervisors have a preference?
  • Is one really easier to install and manage, or are they just different?
  • How does the new NVMe over Fabrics protocol affect this debate?

I will be moderating this event where storage networking experts Fred Knight (NetApp) and John Kim (Mellanox) will argue in an energetic, yet friendly way about the differences and merits of each. And they will be available to answer your questions on the spot. I encourage you to register today and start off 2018 with this exciting and informative discussion.

Comparing iSCSI, iSER, and NVMe over Fabrics (NVMe-oF): Ecosystem, Interoperability, Performance, and Use Cases

iSCSI is one of the most broadly supported storage protocols, but traditionally has not been associated with the highest performance. Newer protocols like iSER and NVMe over Fabrics promise extreme performance but are still maturing and lack the broad feature and platform support of iSCSI. Storage vendors and customers face interesting tradeoffs and options when evaluating how to achieve the highest block storage performance on Ethernet networks, while preserving the major software and hardware investment in iSCSI.

Q&A on All Things iSCSI

In the recent SNIA Ethernet Storage Forum iSCSI pod webcast, from our “Everything You Wanted To Know About Storage But Were Too Proud To Ask” series, we discussed all things iSCSI. If you missed the live event, it’s now available on-demand. As promised, we’ve compiled all the webcast questions with answers from our panel of experts. If you have additional questions, please feel free to ask them in the comment field of this blog. I also encourage you to check out the other on-demand webcasts in this “Too Proud To Ask” series here and stay informed on upcoming events in this series by following us on Twitter @SNIAESF.

Q. What does SPDK stand for?

A. SPDK stands for Storage Performance Development Kit. It consists of tools and libraries for developers to write high-performance and scalable storage applications in user mode. For details, see www.spdk.io.

Q. Can you elaborate on SPDK use? A quick search seems to indicate it is a “half-baked” solution, and available only on Linux systems.

A. SPDK isn’t a solution, per se – it’s a development kit, intended to provide common building blocks (NVMe drivers, NVMe over Fabrics targets & host/initiator, etc.) for solutions developers who care about latency, license (BSD) and efficiency.

Q. Is iSCSI ever going to be able to work with object storage?

A. iSCSI is a block storage protocol, while object storage is normally accessed using a RESTful API such as Amazon’s S3 API or the Swift API. For this reason, iSCSI is unlikely to be used for direct access to object storage. However, an object storage system controller could use iSCSI – or other block protocols – to access remote storage enclosures or for data replication. There also could be storage systems that support both iSCSI/block and object storage access simultaneously.

Q. Does a high-density virtualized workload represent something better served with a full or partial offload solution?

A. The type of workload that is better served with full or partial offload will really depend more on what that workload is doing. If you are processing a lot of very large data segments, LSO or LRO might be very helpful. If you have a lot of smaller data sets, you might be able to benefit from checksum or chimney offload. Unfortunately, the best way to see is to test things out (but not on production, obviously).
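
As a starting point for that kind of testing, it helps to know which offloads are currently enabled. The sketch below shells out to ethtool on a Linux host to list them; the interface name “eth2” is a placeholder, and you would still need to compare real workload measurements with the features toggled on and off.

```python
# List which offload features a NIC currently has enabled, as a starting point
# for the kind of before/after testing suggested above. Minimal sketch assuming
# a Linux host with ethtool installed; "eth2" is a hypothetical interface name.
import subprocess

def offload_features(iface: str) -> dict:
    """Parse `ethtool -k <iface>` output into a {feature: 'on'/'off'} map."""
    out = subprocess.run(["ethtool", "-k", iface],
                         capture_output=True, text=True, check=True).stdout
    features = {}
    for line in out.splitlines()[1:]:        # first line is just a header
        if ":" not in line:
            continue
        name, _, value = line.partition(":")
        state = value.strip().split()
        features[name.strip()] = state[0] if state else ""
    return features

if __name__ == "__main__":
    for name, state in sorted(offload_features("eth2").items()):
        print(f"{name}: {state}")
```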

Q. How does one determine if TOE NIC cards are worth the cost?

A. This is a really tough question to answer without context. The best way to look at it is to do some digging into what your CPU and memory utilization and IO patterns look like on your servers and try to map that to TCP connections. If you have a lot of iSCSI IO and a large number of TCP connections on a server, that might be a candidate for TOE. That’s just a technical response, but then comes the really tricky part – the quantitative measurement of how many dollars it is worth… that’s way more challenging. For example, if I have a regular 10G NIC that costs $200 and a TOE card that costs 3x that and only saves 5% CPU, then it may not have enough value. On the other hand, if that 5% CPU can be used by your application to transact enough business to pay for the extra $400, then it’s worth it. Sorry to say that I have seen no scientific way to enumerate that value outside of specific hands-on testing of the solution with and without TOE NICs.
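
To make that trade-off concrete, here is the arithmetic from the example above as a tiny script. The server price and the way the saved CPU is valued are hypothetical; the point is simply that the answer hinges on what the freed-up CPU is worth to your application.

```python
# The arithmetic from the example above, with hypothetical numbers. Whether
# 5% of a CPU is "worth" $400 depends entirely on what the application could
# do with it, which is why there is no general formula.
standard_nic_cost = 200.0
toe_nic_cost = 3 * standard_nic_cost            # $600
extra_cost = toe_nic_cost - standard_nic_cost   # the $400 premium
cpu_saved_fraction = 0.05

server_cost = 8000.0                            # hypothetical server price
value_of_saved_cpu = cpu_saved_fraction * server_cost

if value_of_saved_cpu >= extra_cost:
    print(f"TOE premium of ${extra_cost:.0f} roughly pays for itself")
else:
    shortfall = extra_cost - value_of_saved_cpu
    print(f"TOE premium exceeds the valued CPU savings by ${shortfall:.0f}")
```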

Q. What is the difference between a stateless and stateful TCP offload? Are RSS and TSS (receive-side and transmission-side scaling) offloads a type of TCP offload or are they operating at a lower level like Layer 2?

A. Stateless offloading is basically any offload function that can be done without the NIC needing to maintain a connection state table; checksum offloads are an example. Stateful offloading is any offloading that requires the NIC to maintain a full connection state table. Receive Side Scaling distributes inbound connections across the different CPUs on a multi-CPU server. There are also some other performance enhancements that can be done, such as RPS, RFS, XPS and some others. These are more about how to get data from the network to the CPU; they are not really specifically TCP functions, as they have to do with uniform processing, not necessarily the TCP stack itself.
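
To illustrate the receive-side scaling idea, here is a toy sketch of how hashing a connection’s 4-tuple pins each flow to one receive queue (and therefore one CPU). Real NICs use a Toeplitz hash and an indirection table rather than Python’s built-in hash, so treat this purely as an illustration.

```python
# Illustration of how receive-side scaling (RSS) spreads connections across
# CPUs: the NIC hashes each flow's 4-tuple and uses the result to pick a
# receive queue (and therefore a CPU). Real hardware uses a Toeplitz hash
# keyed by an indirection table; Python's hash() just shows the idea.
NUM_QUEUES = 8  # hypothetical number of receive queues / CPUs

def rss_queue(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> int:
    """Map a TCP 4-tuple to a receive-queue index."""
    return hash((src_ip, src_port, dst_ip, dst_port)) % NUM_QUEUES

# Two iSCSI connections from different initiators likely land on different
# queues, but every packet of a given connection always hits the same queue.
print(rss_queue("10.0.0.11", 45001, "10.0.0.50", 3260))
print(rss_queue("10.0.0.12", 45002, "10.0.0.50", 3260))
```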

Q. Is using the host CPU to run iSCSI really a downside?

A. There may be applications where this is a problem, but you’re generally right; it’s not too much of an issue today. But there are iSCSI-based storage solutions coming up where a consistent 100s of nanoseconds to low microseconds of latency from the device is possible – and that’s very fast indeed. So an iSCSI stack in these circumstances needs to ensure that its consumption of CPU doesn’t increase the latency (even very efficient stacks can add hundreds of microseconds to milliseconds of latency), or cause contention for the CPU (busy CPUs mean you may queue for compute resources).

Q. Is the term “onload” for iSCSI new – never heard this before?

A. It was intended as a quick shorthand word to stand in contrast to iSCSI offload. It will probably not catch on!

Update: If you missed the live event, it’s now available on-demand. You can also download the webcast slides.

Would You Like Some Rosé with Your iSCSI?

Would you like some rosé with your iSCSI? I’m guessing that no one has ever asked you that before. But we at the SNIA Ethernet Storage Forum like to get pretty colorful in our “Everything You Wanted To Know about Storage But Were Too Proud To Ask” webcast series as we group common storage terms together by color rather than by number.

In our next live webcast, Part Rosé – The iSCSI Pod, we will focus entirely on iSCSI, one of the most used technologies in data centers today. With the increasing speeds for Ethernet, the technology is more and more appealing because of its relatively low cost to implement. However, like any other storage technology, there is more here than meets the eye.

We’ve convened a great group of experts from Cisco, Mellanox and NetApp who will start by covering the basic elements to make your life easier if you are considering using iSCSI in your architecture, diving into:

  • iSCSI definition
  • iSCSI offload
  • Host-based iSCSI
  • TCP offload

Like nearly everything else in storage, there is more here than just a protocol. I hope you’ll register today to join us on March 2nd and learn how to make the most of your iSCSI solution. And while we won’t be able to provide the rosé wine, our panel of experts will be on-hand to answer your questions.

Update: If you missed the live event, it’s now available on-demand. You can also download the webcast slides.

Q&A on Exactly How iSCSI has Evolved

Our recent SNIA ESF Webcast, “The Evolution of iSCSI” drew a big and diverse group of attendees. From beginners looking for iSCSI basics, to experts with a lot of iSCSI deployment experience, there were plenty of good questions. Our presenters, Andy Banta and Fred Knight, did a great job answering as many as they could during the live event, but we didn’t have time to get to them all. So here are answers to them all. And by the way, if you missed the Webcast, it’s now available on-demand.

Q. What are the top 3 reasons to choose iSCSI over FC SAN?

A. 1. Use of commodity equipment and protocols. It means that you don’t have to set up a completely separate network, and you don’t have to buy separate HBAs. 2. Inherent networking capability. Built on top of TCP/IP, iSCSI benefits from any networking technology to come along, including routing, tunneling, authentication, encryption, etc. 3. Ease of automation and configuration. In its simplest form, an iSCSI host only needs to know the IP address of the target system. In more complex systems, hosts and storage provide APIs to allow automation through scripting or management tools.
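
As an illustration of the “ease of automation” point, here is a minimal sketch that drives the open-iscsi iscsiadm command from Python to discover and log in to a target given only its portal IP address. The address and the returned target names shown are examples; a production script would add error handling, CHAP credentials, and multipath considerations.

```python
# Minimal sketch of the "simplest form" automation described above: discover
# and log in to an iSCSI target knowing only its IP address, by driving the
# open-iscsi iscsiadm CLI. The portal address is a hypothetical example.
import subprocess

portal = "192.168.10.20"  # hypothetical target portal IP

# SendTargets discovery: ask the portal what targets it offers.
discovery = subprocess.run(
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
    capture_output=True, text=True, check=True).stdout

for line in discovery.splitlines():
    # Each line looks like "192.168.10.20:3260,1 iqn.2003-01.example:target1"
    target_iqn = line.split()[-1]
    subprocess.run(
        ["iscsiadm", "-m", "node", "-T", target_iqn, "-p", portal, "--login"],
        check=True)
    print(f"Logged in to {target_iqn}")
```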

Q. Please comment on why SCSI went from being a widely used protocol for all sorts of devices to being focused as only essentially a storage protocol?

A. SCSI was originally designed as both a protocol and a bus (the original Parallel SCSI). Because there were no other busses, the SCSI bus did it all: disks, tapes, scanners, printers, optical (CDs), media changers, etc. As other busses came onto the market (think USB), many of those devices moved to the new bus (CDs, printers, scanners, etc.). Commodity devices used commodity busses (IDE, SATA, USB), and enterprise devices used enterprise busses (FC, SAS); and so disks, tapes, and media changers mostly stayed on SCSI.

The name SCSI can be confusing for some, as the term originally was used for both the SCSI protocol and the SCSI bus. The term for the SCSI protocol is all that remains today; the SCSI bus (the old SCSI parallel bus) is no longer in wide use. Today, the FC bus, or the SAS bus, or the SoP bus, or the SRP bus are used to carry the SCSI protocol. The SCSI Architecture Model (SAM) describes a very distinct separation between the device layer (the SCSI protocol) and the transport layer (the bus).

And the SCSI command set has become the basis for many subsequent command sets. The JEDEC group used the SCSI command set as a model (JEDEC devices are in your cell phone), the ATAPI devices used SCSI commands, and many SCSI commands and SATA commands have a common heritage. The Mt. Fuji group (a standards group in Japan) also uses SCSI as the basis for new DVD and Blu-ray devices. So, while not widely known, the SCSI command family has grown well beyond what is managed by the ANSI/INCITS T10 committee that originally defined SCSI into a broad set of capabilities that are used across the industry by a broad group of organizations. But, that all said, scanners and printers are still on USB, and SCSI is almost all about storage in one form or another.

Q. How does iSCSI support software-defined storage?

A. Answered during the talk. SDS provides more automation and knobs on the storage capabilities. But SDS still needs a way to transport the storage and iSCSI works perfectly fine for that. They are complementary technologies, not competing.

Q. With 40Gb and faster coming soon to a server near you, what kind of impact will that have on CPU utilization? Will smaller servers be able to push that much traffic?

A. More throughput simply requires more CPU. With good multithreaded drivers available, this can mean simply adding cores to keep the pipe as full as possible. As we mentioned near the end, using iSCSI with RDMA lightens the load on the CPU even more, so you’ll probably be seeing more of that.

Q. Is IPSec commonly supported on iSCSI targets?  

A. Yes, IPsec is required to be implemented on an iSCSI target for the device to be compliant. However, it is not commonly enabled by customers. And although compliant devices MUST provide IPsec, there are a lot of non-compliant initiators and targets on the market.

Q. I’m told direct connect with iSCSI is discouraged, that there should be a switch in place to handle the buffering, latency, acknowledgement, etc. Is this true, or is it just a best practice to make sure switches are part of the design?

A. If you have no need to connect to multiple targets or multiple initiators, there’s no harm in direct connections.

Q. Ethernet was not designed to support storage traffic. The TCP/IP protocol suite was not designed to support storage traffic. SCSI was not designed to be encapsulated. So TCP/IP FTW? I think not. The reason iSCSI exists is [perceived] cost savings. I get fed up with people constantly looking for ways to squeeze another penny out of something. To me it illustrates that they’re not very creative. Fibre Channel is a stupid name, but it is a purpose-built protocol that works as designed.

A. Ethernet is a general purpose network. It is capable of handling lots of different traffic (including storage). By putting iSCSI onto an existing Ethernet infrastructure, it can (as you point out) create a substantial cost savings over installing a FC network (although that infrastructure savings comes with other costs – such as the impact of a shared wire). However, installing a dedicated Ethernet network provides many of the advantages of a dedicated FC network, but at an added cost over that of a shared Ethernet infrastructure. While most consider FC a purpose-built storage network, it is worth pointing out that some also consider it a general purpose network (for example FC-Avionics is built into Fighter Jets, and it’s not for storage). And while not designed to be encapsulated, (it was designed for a parallel bus), SCSI today is encapsulated on every transport that carries it (yes, that includes FCP and SAS).
There are many kinds of storage at different price points: USB storage, SATA devices, rotating media (at different RPMs), SSD devices, SAS devices, FC devices, single spindles, arrays, cloud, drop boxes, etc., all with the corresponding transport wires. iSCSI is one of those wires. Each protocol and wire offers specific advantages and disadvantages. There can be a lot of confusion about which to use, but just as everyone does not drive the same type of car (a FORD FUSION for example), everyone does not need the same type of storage (FC devices/arrays). Yes, I drive a FORD FUSION, and I like FC storage, but I use a USB stick on my laptop, and I pray my bank never puts my financial records out in the cloud. Selecting the right storage (and wire) for the job at hand can be one of a system administrator’s most interesting problems to solve. As for the name – that is often what happens in committees…

Q. As a best practice for Windows servers, disable hardware acceleration features in NICs (TOE etc.)? Are any NIC features valuable given modern multicore CPUs?

A. Yes. Typically the only reason to disable TOE is that multiple or virtual TCP/IP stacks are going to be using the same NIC. TSO, LRO and jumbo frames will benefit any OS that can take advantage of them.

Q. What is the advantage of iSCSI when compared with NVMe?

A. NVMe and iSCSI are very different protocols. NVMe started life as a direct attach protocol to communicate to native PCIe devices (not even outside the box). iSCSI was a network protocol from day one. iSCSI has to deal with the potential for long network induced delays, and complex out of order error recovery issues. NVMe operates over an interlocked bus, and as such, does not have those issues.

But NVMe is now being extended over fabrics. NVMe over a RoCE V1 transport will be a data center network (since there is no IP routing). NVMe over a RoCE V2 transport or an iWARP transport will have the same routing capabilities that iSCSI has. When it comes to the raw command set, they are very similar (but there are some differences). SCSI is a more fully featured command set than NVMe – it has been developed over a span of more than 25 years, and has developed solutions for all the problems that have been discovered during that time span. NVMe has a more limited (or more focused) command set (for example, there are no tape commands in the NVMe command set). iSCSI is available today, as is direct-attach NVMe, but NVMe over Fabrics is still in the development phase (the specification is expected to be available the first week of June, 2016). NVMe products will take some time to mature and to develop solutions for the problems they have not discovered yet. Another example of this is the ability to support shared storage – it existed on day one in iSCSI, but did not exist in the first NVMe specification. To support shared storage in NVMe over Fabrics, that capability has since been added, and it was done using a SCSI-compatible method (to make it easier for host S/W that already performs this function).

There is a large community working to develop NVMe over Fabrics. As memory-based storage devices get cheaper and the solution space matures, NVMe will become more attractive.

Q. How often do iSCSI installations provide encryption of data in flight? How: IPsec, IKEv2-SCSI + ESP-SCSI, etc.?

A. Rarely. More often than not, if in-flight data security is needed, it will be run on an isolated network. Well under 100% of installations are 100% compliant. VMware never qualified IPsec with iSCSI and didn’t have any obvious switch to turn it on. Side note: we standards guys can be overly picky about words. Since the question is “provide,” the answer is that 100% of compliant installations PROVIDE encryption (IPsec v2 – see above); however, in practice, installations that require that type of security typically run on isolated networks rather than turn on encryption.

Q. How do multiple independent applications inside the same initiator map to iSCSI sessions to the same target? E.g., iSCSI session one-to-one with application?

A. There is no relationship between applications and sessions. When an iSCSI initiator discovers a target, the initiator logs in and establishes a session. If iSCSI MCS (multiple connections per session) is being used, multiple TCP connections may be established and used in parallel to process operations for that session.

Applications send reads and writes to the operating system. Those IO requests make their way through the file system and caching layers into the device driver. The device driver issues the IO request to the device (over the iSCSI session) and retains information about that IO. When a completion is received from a device (the WRITE command or READ command completed), it is matched up with the request. That completion status (success or error) is passed back through the operating system (file system, etc.) to the application. So it is the responsibility of the device driver to mux/demux the requests from all the applications out over the iSCSI session and track the responses as the operations are completed.

When an operating system is using MPIO (multi-pathing), the device driver may create multiple sessions between the initiator and the target. This is where operating system MPIO policies such as round-robin, shortest queue, LRU, etc. come into play. In this case, the MPIO driver will send an IO operation to the device using what it considers to be the most appropriate path (based on the selected policy). But again, there is no relationship between the application and the path used for IO (any application can have its IO sent via any path).
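
The mux/demux and path-selection behavior described above can be pictured with a small toy model. This is purely conceptual – no real MPIO stack is written this way – but it shows the driver tagging each IO, picking a session by a round-robin policy, and matching completions back to the original request.

```python
# Toy model of the driver-level mux/demux described above: IOs from multiple
# applications are tagged, spread over sessions round-robin, and completions
# are matched back to the original requester by tag.
import itertools

class MultipathDriver:
    def __init__(self, paths):
        self._paths = itertools.cycle(paths)    # round-robin path policy
        self._next_tag = itertools.count(1)
        self._outstanding = {}                  # tag -> (application, operation)

    def submit(self, app: str, op: str) -> int:
        tag = next(self._next_tag)
        path = next(self._paths)
        self._outstanding[tag] = (app, op)
        print(f"tag {tag}: {app} {op} -> {path}")
        return tag

    def complete(self, tag: int, status: str) -> None:
        app, op = self._outstanding.pop(tag)    # demux back to the requester
        print(f"tag {tag}: {op} for {app} completed with {status}")

driver = MultipathDriver(["session-A", "session-B"])
t1 = driver.submit("database", "WRITE")
t2 = driver.submit("backup", "READ")
driver.complete(t2, "SUCCESS")   # completions may arrive out of order
driver.complete(t1, "SUCCESS")
```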

Today, MPIO is used more commonly than MCS.

Q. Will Microsoft iSCSI implement iSER?

A. This is a question for Microsoft or for an iSER-capable NIC vendor that provides Microsoft drivers.

Q. Zadara has some iSER deployments using Linux and VMware clients going to the Zadara cloud storage.

A. There’s an answer, all by itself.

Q. In the case of iWARP, the TCP layer takes care of out-of-order IP packet receptions. What layer does the out-of-order management of packets in RoCE?

A. RoCE headers contain a 24 bit “Packet Sequence Number” that is used to validate the required ordering and detect lost packets. As such, ordering still occurs, just in a different way.
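
As a simple illustration of how a 24-bit PSN lets a receiver detect reordering and loss, here is a sketch of the comparison done modulo 2^24. The window rule shown (treating up to half the PSN space as “ahead”) is illustrative rather than the exact behavior defined in the RoCE/InfiniBand specifications.

```python
# Toy illustration of detecting gaps and duplicates with a 24-bit packet
# sequence number (PSN). The "half the space" rule below is illustrative,
# not the exact rule from the RoCE/InfiniBand specifications.
PSN_MODULUS = 1 << 24

def classify(expected_psn: int, received_psn: int) -> str:
    """Compare PSNs modulo 2^24, treating up to half the space as 'ahead'."""
    diff = (received_psn - expected_psn) % PSN_MODULUS
    if diff == 0:
        return "in order"
    if diff < PSN_MODULUS // 2:
        return f"gap of {diff} packet(s) - possible loss, ask for retransmission"
    return "duplicate or stale packet"

print(classify(100, 100))   # in order
print(classify(100, 103))   # gap of 3 packet(s)
print(classify(100, 99))    # duplicate or stale packet
```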

Q. Correction: RoCE is over Ethernet packets and is not routable. RoCEv2 is the one over UDP/IP and *is* routable.

A. You are correct. RoCE is not routable by IP. RoCE transmits raw Ethernet frames with just Ethernet MAC headers and no IP headers, and as such, it is not routable by IP. RoCE V2 puts the information into UDP packets (with appropriate IP headers), and therefore it is routable by IP.

Q. How prevalent is iSER today in deployment? And what are some of the typical applications that leverage iSER?

A. Not terribly prevalent today, but higher speed Ethernet might drive more adoption, due to the CPU savings demonstrated.

Update: If you missed the live event, it’s now available on-demand. You can also download the webcast slides.

Find out How iSCSI is Evolving

The next Ethernet Storage Forum webcast, “Evolution of iSCSI including iSER, iSCSI over RDMA Ethernet,” will focus on developments with iSCSI – the Internet Protocol standard for transferring SCSI commands across an Ethernet network, enabling hosts to link to storage devices wherever they may be. At this webcast on May 24th, I will be joined by Fred Knight, Standards Technologist at NetApp, and Andy Banta, Storage Janitor at SolidFire/NetApp, who will discuss the evolution of iSCSI up to iSER, which takes advantage of Ethernet RDMA fabric technologies to enhance performance. Register now to hear:

  • A brief history of iSCSI
  • How iSCSI works
  • IETF refinements to the specification
  • Enhancing iSCSI performance with iSER

The Webcast will be live, so please bring your questions for Andy and Fred. We hope to see you there!

Update: If you missed the live event, it’s now available on-demand. You can also download the webcast slides.