Introducing the Networking Storage Forum

At SNIA, we are dedicated to staying on top of storage trends and technologies to fulfill our mission as a globally recognized and trusted authority for storage leadership, standards, and technology expertise. For the last several years, the Ethernet Storage Forum has been working hard to provide high quality educational and informational material related to all kinds of storage.

From our “Everything You Wanted To Know About Storage But Were Too Proud To Ask” series, to the absolutely phenomenal (and required viewing) “Storage Performance Benchmarking” series to the “Great Storage Debates” series, we’ve produced dozens of hours of material.

Technologies have evolved and we’ve come to a point where there’s a need to understand how these systems and architectures work – beyond just the type of wire that is used. Today, there are new systems that are bringing storage to completely new audiences. From scale-up to scale-out, from disaggregated to hyperconverged, RDMA, and NVMe-oF – there is more to storage networking than just your favorite transport.

For example, when we talk about NVMe™ over Fabrics, the protocol is broader than just one way of accomplishing what you need. When we talk about virtualized environments, we need to examine the nature of the relationship between hypervisors and all kinds of networks. When we look at “Storage as a Service,” we need to understand how we can create workable systems from all the tools at our disposal.

Bigger Than Our Britches

As I said, SNIA’s Ethernet Storage Forum has been working to bring these new technologies to the forefront, so that you can see (and understand) the bigger picture. To that end, we realized that we needed to rethink the way that our charter worked, to be even more inclusive of technologies that were relevant to storage and networking.

So…

Introducing the Networking Storage Forum. In this group we’re going to continue producing top-quality, vendor-neutral material related to storage networking solutions. We’ll be talking about:

  • Storage Protocols (iSCSI, FC, FCoE, NFS, SMB, NVMe-oF, etc.)
  • Architectures (Hyperconvergence, Virtualization, Storage as a Service, etc.)
  • Storage Best Practices
  • New and developing technologies

… and more!

Generally speaking, we’ll continue to do the same great work that we’ve been doing, but now our name more accurately reflects the breadth of work that we do.

We’re excited to launch this new chapter of the Forum. Whether you work for a vendor, are a systems integrator, work at a university, or manage storage, we welcome you to join the NSF. We are an active group that honestly has a lot of fun. If you’re one of our loyal followers, we hope you will continue to keep track of what we’re doing. And if you’re new to this Forum, we encourage you to take advantage of the library of webcasts, white papers, and published articles that we have produced here. There’s a wealth of unbiased, educational information there that we don’t think you’ll find anywhere else!

If there’s something that you’d like to hear about – let us know! We are always looking to hear about headaches, concerns, and areas of confusion within the industry where we can shed some light. Stay current with all things NSF:

A Q&A from the FCoE vs. iSCSI vs. iSER Debate

It’s become quite clear to those of us in the SNIA Ethernet Storage Forum (ESF) that everyone loves a great debate. We’ve proved that with our “Great Storage Debates” webcast series which has had over 3,500 views in just a few months! Last month we had another friendly debate on FCoE vs. iSCSI vs. iSER. If you missed the live event, you can watch it now on-demand and download a pdf of the webcast slides.  Our live audience asked a lot of interesting questions. As promised, here are answers to them all.

Q. How often are iSCSI offload adapters used in customer environments as compared to software initiators?   Can these adapters be used for all IP traffic or do they only run iSCSI?

A. iSCSI offload adapters are ideally suited for enabling high-performance storage access at data rates of up to 100Gbps for business-critical applications, for example, latency-sensitive transactional applications and large-file business intelligence applications. iSCSI offload adapters typically also support offload of other storage protocols such as NVMe-oF, iSER, and FCoE, as well as regular Ethernet traffic, with or without offload.

Q. What you’ve missed with iSCSI is Jumbo Frames. That payload size is one of the biggest advantages over Fibre Channel. The biggest problem with both FCoE and iSCSI is they build the networks too complex, with too many hops, without true redundant isolation. Best Practices with block based FC is to keep the host and storage as close to each other as possible. And to have separate isolated redundant networks/fabric.

A. The Jumbo Frame (JF) argument is quite contentious among iSCSI storage and network administrators, even beyond anything to do with Fibre Channel.

The performance advantage of JFs is minimal – only a 3%-5% boost over the default MTU size of 1500. In mixed-workload environments (which dominate data center application deployments), JFs simply do not provide the kind of benefit that people expect in real-world scenarios. The only time JFs can “move the needle,” so to speak, is when you have massively scaled systems with hundreds or thousands of devices, but this raises other issues.

One of those issues is that every device in the system needs to have JFs enabled. This can be something of a problem when systems get as large as they need to be in order to take advantage of JFs. Ensuring that every device is configured properly – especially over time, and especially when considering how iSCSI devices are added to networked environments – is a job that requires the coordination of the server/virtualization teams, the networking teams, and the storage teams. By and large, many people find QoS to be a more productive means of performance improvement for iSCSI systems than JFs.
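
To make the “every device must agree” point concrete, here is a minimal sketch (Linux, C) that audits the MTU of a list of interfaces using the standard SIOCGIFMTU ioctl. The interface names you pass in and the 9000-byte jumbo threshold are assumptions for illustration, not a recommendation from the webcast.

```c
/* Minimal MTU audit sketch: print the MTU of each interface named on the
 * command line and flag whether it is jumbo-sized (>= 9000 is an assumed
 * threshold for illustration). Linux-only; build with: gcc mtu_audit.c */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <ifname> [<ifname> ...]\n", argv[0]);
        return 1;
    }
    int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* any socket works for the ioctl */
    if (fd < 0) { perror("socket"); return 1; }

    for (int i = 1; i < argc; i++) {
        struct ifreq ifr;
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, argv[i], IFNAMSIZ - 1);
        if (ioctl(fd, SIOCGIFMTU, &ifr) < 0) {   /* query the interface MTU */
            perror(argv[i]);
            continue;
        }
        printf("%-16s MTU %d %s\n", argv[i], ifr.ifr_mtu,
               ifr.ifr_mtu >= 9000 ? "(jumbo)" : "(standard)");
    }
    close(fd);
    return 0;
}
```

A check like this only covers the hosts it is run on; switches, storage ports, and any intermediate devices in the iSCSI path still have to be verified separately, which is exactly the coordination burden described above.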

Fibre Channel, on the other hand, has a maximum frame size of 2112 bytes. FCoE, then, only requires “baby jumbo” frames (~2.5k), for which the configuration is pushed from the switch to the end devices. What FC has that iSCSI does not have is the concept of “sequences” and “exchanges,” which ensure that a long flow of frames (regardless of their size) is sent as an entity. So, regardless of what the frame size is (2.5k or 9k), the data flow is sent with consistency and low jitter because of the way that the sequences and exchanges are handled.

The concern about “too complex” and “too many hops” is an interesting one, as Fibre Channel (and, correspondingly, FCoE) is deliberately kept as simple and straightforward as possible. An FC network, for instance, rarely goes beyond 2 hops (“hops” in FC are measured as the links between switches, whereas in Ethernet “hops” are measured as the switches themselves).

Logically, then, there is usually, at most, an edge-core-edge topology with a deterministic path to be followed, thanks to Fibre Channel’s FSPF routing algorithm.

iSCSI topologies, on the other hand, can be complex (as Ethernet topologies sometimes can be). For larger iSCSI environments, it is often recommended to isolate the storage traffic out into its own, simplified topology. iSCSI SANs that have grown organically, however, can sometimes struggle to be reined in over time.

The best practice for all storage, not just block, is to keep it as close to the host/source as is reasonably possible. In backup scenarios, for example, you want the storage far enough away to be safe from any catastrophe, but close enough to ensure recovery objectives. Keeping storage close to the host is a common design principle, and as mentioned in the webinar, it is important that the architecture ensures high availability (HA) to offset the rigidity that block storage systems require to compensate for weaker upper-layer protocol (ULP) recovery mechanisms.

 Q.  Most servers today have enough compute power to not need offload adapters.

A. This statement might be true in some situations, but definitely not most. With more and more virtual machines being deployed on physical systems, and with new storage technologies such as SSDs and NVMe devices greatly lowering latencies, servers are often CPU-bound when moving or retrieving data from storage. Offloading storage-related activities to an adapter frees the CPU and increases overall server performance.

Q. In which industries is each protocol (i.e., FCoE, iSCSI, and iSER) widely used, and where?

A. iSCSI is the most widely supported Ethernet SAN protocol, with native initiator support integrated into all the major operating systems and hypervisors, built-in RDMA for high-performance offloaded implementations supporting up to 100Gbps, and support across major storage platforms. It is thus ideally suited for deployment across cloud and enterprise data center environments.

Q. Do iSCSI offload adapters provide the IPSec encryption, or is this done in software only solutions? Please answer from both initiator and target perspective.

A. Yes, iSCSI protocol offload adapters can optionally provide offload of IPSec encryption for both iSCSI (as well as NVMe-oF) initiator and target operation at data rates of up to 100 Gigabits-per-second. This results in overall higher server and target efficiency including power, cooling, memory, and CPU savings.

Q. Does iSER support direct connections, or is a switch between initiator and target required?

A. A switch is not required.

Q. J, you left out the centralized management that Fibre Channel provides for FCoE as a positive.

A. I got there eventually! But you are correct, the centralized management plane that Fibre Channel provides with the name server – regardless of the number of switches in the fabric – is a tremendous positive for FCoE/FC solutions at scale.

Q. Is multipath possible on the initiator with iSER, and will it scale with high IOPS?

A. Yes. Multipath is possible on the initiator with iSER and scales with high IOPS.

Q. FCoE has been around for a while, but I noticed that some storage vendors are dropping support for it. Do you still see a big future for FCoE?

A. As a protocol, FCoE has always been able to be used wherever and whenever needed. Almost all converged infrastructure systems use FCoE, for instance. Given that the key advantage of FCoE has been traffic/protocol consolidation, there is an extremely strong use case for FCoE at “the first hop” – that is, from the server to the first network switch.

Q. What is the MTU for iSER?

A. iSER is a protocol that sits above the Layer 2 data link layer, which is where the MTU is set. As a result, iSER will accept/accommodate any MTU setting that is configured at that layer. Please see the earlier answer about Jumbo Frames for more information.

Ready for more great storage debates? Our next one will be RoCE vs. iWARP on August 22, 2018. Save your place by registering here.

And you can check out our previous debates “File vs. Block vs. Object Storage” and “Fibre Channel vs. iSCSI” on-demand at your convenience too. Happy debating!

FCoE vs. iSCSI vs. iSER: Get Ready for Another Great Storage Debate

As a follow-up to our first two hugely successful “Great Storage Debate” webcasts, Fibre Channel vs. iSCSI and File vs. Block vs. Object Storage, the SNIA Ethernet Storage Forum will be presenting another great storage debate on June 21, 2018. This time we’ll take on FCoE vs. iSCSI vs. iSER.

For those of you who’ve seen these webcasts, you know that the goal of these debates is not to have a winner emerge, but rather to provide unbiased education on the capabilities and use cases of these technologies so that attendees can become more informed and make educated decisions.

Here’s what you can expect from this session: One of the features of modern data centers is the ubiquitous use of Ethernet. Although many data centers run multiple separate networks (Ethernet and Fibre Channel (FC)), these parallel infrastructures require separate switches, network adapters, management utilities and staff, which may not be cost effective.

Multiple options for Ethernet-based SANs enable network convergence, including FCoE (Fibre Channel over Ethernet) which allows FC protocols over Ethernet and Internet Small Computer System Interface (iSCSI) for transport of SCSI commands over TCP/IP-Ethernet networks. There are also new Ethernet technologies that reduce the amount of CPU overhead in transferring data from server to client by using Remote Direct Memory Access (RDMA), which is leveraged by iSER (iSCSI Extensions for RDMA) to avoid unnecessary data copying.

That leads to several questions about FCoE, iSCSI and iSER:

  • If we can run various network storage protocols over Ethernet, what differentiates them?
  • What are the advantages and disadvantages of FCoE, iSCSI and iSER?
  • How are they structured?
  • What software and hardware do they require?
  • How are they implemented, configured and managed?
  • Do they perform differently?
  • What do you need to do to take advantage of them in the data center?
  • What are the best use cases for each?

Register today to join our SNIA experts as they answer all these questions and more on the next Great Storage Debate: FCoE vs. iSCSI vs. iSER. We look forward to seeing you on June 21st.

Clearing Up Confusion on Common Storage Networking Terms

Do you ever feel a bit confused about common storage networking terms? You’re not alone. At our recent SNIA Ethernet Storage Forum webcast “Everything You Wanted To Know About Storage But Were Too Proud To Ask – Part Mauve,” we had experts from Cisco, Mellanox and NetApp explain the differences between:

  • Channel vs. Busses
  • Control Plane vs. Data Plane
  • Fabric vs. Network

If you missed the live webcast, you can watch it on-demand. As promised, we’re also providing answers to the questions we got during the webcast. Between these questions and the presentation itself, we hope it will help you decode these common, but sometimes confusing terms.

And remember, “Everything You Wanted To Know About Storage But Were Too Proud To Ask” is a webcast series with a “colorfully-named pod” for each topic we tackle. You can register now for our next webcast: Part Teal, The Buffering Pod, on Feb. 14th.

Q. Why do we have Fibre and Fiber?

A. “Fiber optics” is the term for the optical technology used by Fibre Channel fabrics. While a common story is that the “Fibre” spelling came about to accommodate the French (FC is, after all, an international standard), in actuality it was a marketing idea to create a more unique name, and it was decided to use the British spelling – “Fibre.”

Q. Will OpenStack change all the rules of the game?

A. Yes. OpenStack is all about centralizing the control plane of many different aspects of infrastructure.

Q. The difference between control and data plane matters only when we discuss software defined storage and software defined networking, not in traditional switching and storage.

A. It matters regardless. You need to understand how much each individual control plane can handle and how many control planes you have from an overall management perspective. In cases where you have too many control planes, SDN and SDS can be a benefit to you.

Q. As I’ve heard that networks use stateless protocols, would FC do the same?

A. Fibre Channel has several different classes of service, which can be either stateful or stateless. Most applications of Fibre Channel are Class 3, as it is the preferred class for SCSI traffic. A connection between Fibre Channel endpoints is always stateful (as it involves a login process to the Fibre Channel fabric). The transport protocol is augmented by Fibre Channel exchanges, which are managed on a per-hop basis. Retransmissions are handled by devices when exchanges are incomplete or lost, meaning that each exchange is a stateful transmission, but the protocol itself is considered stateless in modern SCSI-transport Fibre Channel.

iSCSI, as a connection-oriented protocol, creates a nexus between an initiator and a target, and is considered stateful.  In addition, SMB, NFSv4, ftp, and TCP are stateful protocols, while NFSv2, NFSv3, http, and IP are stateless protocols.

Q. Where do CIFS/SMB come into the picture?

A. CIFS/SMB is part of a network stack. We need to have a separate talk about network stacks and their layers. In this presentation, we were talking primarily about the physical layer of the networks and fabrics. To oversimplify network stacks, there are multiple layers of protocols that run on top of the physical layer. In the case of FC, those protocols include the control plane protocols (such as FC-SW) and the data plane protocols. In FC, the most common data plane protocol is FCP (used by SCSI, FICON, and FC-NVMe). In the case of Ethernet, those protocols also include the control plane (such as TCP/IP) and data plane protocols. In Ethernet, there are many commonly used data plane protocols for storage (such as iSCSI, NFS, and CIFS/SMB).

Update: If you missed the live event, it’s now available  on-demand. You can also  download the webcast slides.

Ethernet Networked Storage – FAQ

At our SNIA Ethernet Storage Forum (ESF) webcast “Re-Introduction to Ethernet Networked Storage,” we provided a solid foundation on Ethernet networked storage, the move to higher speeds, challenges, use cases and benefits. Here are answers to the questions we received during the live event.

Q.  Within the iWARP protocol there is a layer called MPA (Marker PDU Aligned Framing for TCP) inserted for storage applications. What is the point of this protocol?

A. MPA is an adaptation layer between the iWARP Direct Data Placement Protocol and TCP/IP. It provides framing and CRC protection for Protocol Data Units.   MPA enables packing of multiple small RDMA messages into a single Ethernet frame.   It also enables an iWARP NIC to place frames received out-of-order (instead of dropping them), which can be beneficial on best-effort networks. More detail can be found in IETF RFC 5044 and IETF RFC 5041.

Q. What is the API for RDMA network IPC?

A. The general API for RDMA is called verbs. The OpenFabrics Verbs Working Group oversees the development of the verbs definition and functionality in the OpenFabrics Software (OFS) code. You can find the training content from the OpenFabrics Alliance here. General information about RDMA for Ethernet (RoCE) is available at the InfiniBand Trade Association website. Information about the Internet Wide Area RDMA Protocol (iWARP) can be found at the IETF: RFC 5040, RFC 5041, RFC 5042, RFC 5043, RFC 5044.
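
As a rough illustration of what the verbs API looks like to an application, here is a minimal sketch in C using libibverbs that simply enumerates the RDMA-capable devices on a host. It is only a sketch of the API’s flavor, not part of the webcast material; build details and device names will vary by system.

```c
/* Minimal sketch: enumerate RDMA-capable devices with the libibverbs "verbs" API.
 * Assumes rdma-core/libibverbs is installed; build with: gcc list_rdma.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }
    for (int i = 0; i < num_devices; i++) {
        /* Print the device name (e.g. mlx5_0) and its node GUID */
        printf("%-12s guid=%016llx\n",
               ibv_get_device_name(devs[i]),
               (unsigned long long)ibv_get_device_guid(devs[i]));
    }
    ibv_free_device_list(devs);
    return 0;
}
```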

Q. RDMA requires TCP/IP (iWARP), InfiniBand, or RoCE to operate on with respect to NVMe over Fabrics. Therefore, what are the advantages and disadvantages of iWARP vs. RoCE?

A. Both RoCE and iWARP support RDMA over Ethernet. iWARP uses TCP/IP while RoCE uses UDP/IP. Debating which one is better is beyond the scope of this webcast, but you can learn more by watching the SNIA ESF webcast, “How Ethernet RDMA Protocols iWARP and RoCE Support NVMe over Fabrics.”

Q. 100Gb Ethernet Optical Data Center solution?

A. 100Gb Ethernet optical interconnect products were first available around 2011 or 2012 in a 10x10Gb/s design (100GBASE-CR10 for copper, 100GBASE-SR10 for optical), which required thick cables and CXP or CFP MSA housings. These were generally used only for switch-to-switch links. Starting in late 2015, the more compact 4x25Gb/s design (using the QSFP28 form factor) became available in copper (DAC), optical cabling (AOC), and transceivers (100GBASE-SR4, 100GBASE-LR4, 100GBASE-PSM4, etc.). The optical transceivers allow 100GbE connectivity up to 100m, 2km, or 10km distances, depending on the type of transceiver and fiber used.

Q. Where is FCoE being used today?

A. FCoE is primarily used in blade server deployments where there could be contention for PCI slots and only one built-in NIC. These NICs typically support FCoE at 10Gb/s speeds, passing both FC and Ethernet traffic over a connection to a Top-of-Rack FCoE switch, which forwards traffic to the respective fabrics (FC and Ethernet). However, it has not gained much acceptance outside of the blade server use case.

Q. Why did iSCSI start out mostly in lower-cost SAN markets?

A. When it first debuted, iSCSI packets were processed by software initiators, which consumed CPU cycles and showed higher latency than Fibre Channel. Achieving high performance with iSCSI required expensive NICs with iSCSI hardware acceleration, and iSCSI networks were typically limited to 100Mb/s or 1Gb/s while Fibre Channel was running at 4Gb/s. Fibre Channel is also a lossless protocol, while TCP/IP is lossy, which caused concerns for storage administrators. Now, however, iSCSI can run on 25, 40, 50 or 100Gb/s Ethernet with various types of TCP/IP acceleration or RDMA offloads available on the NICs.

Q. What are some of the differences between iSCSI and FCoE?

A. iSCSI runs SCSI protocol commands over TCP/IP (except iSER which is iSCSI over RDMA) while FCoE runs Fibre Channel protocol over Ethernet. iSCSI can run over layer 2 and 3 networks while FCoE is Layer 2 only. FCoE requires a lossless network, typically implemented using DCB (Data Center Bridging) Ethernet and specialized switches.

Q. You pointed out at least twice that people incorrectly predicted the end of Fibre Channel, but it didn’t happen. What makes you say Fibre Channel is actually going to decline this time?

A. Several things are different this time. First, Ethernet is now much faster than Fibre Channel instead of the other way around. Second, Ethernet networks now support lossless and RDMA options that were not previously available. Third, several new solutions–like big data, hyper-converged infrastructure, object storage, most scale-out storage, and most clustered file systems–do not support Fibre Channel. Fourth, none of the hyper-scale cloud implementations use Fibre Channel and most private and public cloud architects do not want a separate Fibre Channel network–they want one converged network, which is usually Ethernet.

Q. Which storage protocols support RDMA over Ethernet?

A. The Ethernet RDMA options for storage protocols are iSER (iSCSI Extensions for RDMA), SMB Direct, NVMe over Fabrics, and NFS over RDMA. There are also storage solutions that use proprietary protocols supporting RDMA over Ethernet.

Update: If you missed the live event, it’s now available  on-demand. You can also  download the webcast slides.

Benefits of RDMA in Accelerating Ethernet Storage Q&A

At our recent live Webcast “Benefits of RDMA in Accelerating Ethernet Storage Connectivity” experts from Emulex, Intel and Microsoft had an insightful discussion on the ways RDMA is having an impact on Ethernet storage. The live event was attended by nearly 200 people and feedback was overwhelmingly positive, with several attendees thanking us for our vendor-neutral presentation and one attendee commenting that it was, “Probably the most clearly comprehensible yet comprehensive webinar I’ve attended in some time.” If you missed the Webcast, it’s now available on demand. We did not have time to get to everyone’s questions, so as promised, below are answers to all of them. If you have additional questions, please ask them in the comments section in this blog and we’ll get back to you as soon as possible.

Q.  Is RDMA over RoCEv2 in production?

A. The IBTA released the RoCEv2 specification in September 2014. In order to support that specification, changes may be required across the RDMA stack, including firmware, drivers and operating systems. Schedules for implementation of that specification will vary by operating system. For example, the OpenFabrics Alliance (OFA) has not yet released an OpenFabrics Enterprise Distribution (OFED) version that implements that standard, although it is in process now. Once OFA completes their OFED stack implementation, the Linux distribution vendors will then incorporate and support the updated OFED stack. Implementations provided prior to full OFA and distro vendor support would be preliminary, potentially incompatible with the OFED release, and require confirmation by the distro vendor with regard to the nature/level of support they would be providing.

Q. I would have liked a list of Windows applications that take advantage of SMB Direct – both in a Hyper-V host or bare metal.

A.  In Windows, any file-based application can make use of SMB3 and SMB Direct due to the native file-based programming interface support. No application changes are required. For certain enterprise applications such as Hyper-V and SQL Server, SMB3 is officially supported, and more information can be found in the product catalog at www.microsoft.com.
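
To illustrate the “no application changes required” point, here is a minimal sketch in C (Windows) that writes a file to an SMB share using ordinary file APIs; when both endpoints support SMB3 and RDMA-capable NICs, SMB Direct is negotiated transparently underneath. The UNC path is a hypothetical placeholder, not a reference to any real server.

```c
/* Minimal sketch: ordinary Win32 file I/O to a (hypothetical) SMB3 share.
 * Nothing RDMA-specific appears here -- the SMB client negotiates SMB Direct
 * underneath when both endpoints support it. Build with a Windows toolchain. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* \\fileserver\share is a placeholder UNC path used for illustration only */
    HANDLE h = CreateFileA("\\\\fileserver\\share\\demo.txt",
                           GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFileA failed: %lu\n", GetLastError());
        return 1;
    }

    const char msg[] = "written over SMB3 (SMB Direct if negotiated)\r\n";
    DWORD written = 0;
    if (!WriteFile(h, msg, (DWORD)(sizeof(msg) - 1), &written, NULL)) {
        fprintf(stderr, "WriteFile failed: %lu\n", GetLastError());
    }

    CloseHandle(h);
    return 0;
}
```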

Q. Are there any particular benefits in using one network protocol over another for SMB Direct/RDMA (iWARP vs. RoCE vs. IB)?

A. There are no hard and fast rules; any adapter or protocol can be suitable for many scenarios. Of the Ethernet-based protocols we considered in today’s webcast:

  • iWARP offers the benefit of operation over TCP with its reliability and routability, well-suited to a broad range of installed infrastructure.
  • RoCE offers a lightweight, efficient protocol when a DCB-enabled switched fabric is available. RoCE, however, is not routable.
  • RoCEv2 offers similar properties to RoCE, with the possibility to scale to larger routed and DCB-enabled fabrics.

Q. Who are the vendors offering iWARP capable RNICs?

A. Chelsio Communications has production iWARP adapters today, and both Intel and Qlogic have publicly committed to future iWARP controllers.

Q. How much testing has been done with SMB3, and in particular SMB direct, over WAN connections?

A. The SMB2 protocol was originally designed to adapt to WAN scenarios, and supports a credit-based management of large amounts of data to be outstanding, to make best use of WAN-type long pipes. The SMB3 protocol retains these design attributes, and the SMB Direct protocol also supports similar deep pipelining. The iWARP protocol, being layered on standard TCP, is well suited to such deployments, and RoCE WAN adapters are potentially available. Please contact the respective technology vendors for information on any available testing results.

Q. I’d love a future webcast on RDMA-enabled distributed filesystems.

A. Thanks for the suggestion! We’re always looking for ideas for future webcasts and SNIA-ESF will consider this as a potential follow-on.

Q.  Is Live Migration the scenario where “packet size” is 1MB?

A. All SMB Direct scenarios have workloads that range anywhere up to 8MB. For large file copies, most SMB3 clients request from 1MB to 8MB per operation; for Hyper-V Live Migration, transfers are typically similar during the bulk transfer phase.

Q. SMB3 is being compared to FC for enterprise. If Ethernet based protocols are of interest, wouldn’t FCoE give the same performance as FC (same stack) vs. SMB3?

A. SMB3 with SMB Direct enables many workloads not possible with Fibre Channel over Ethernet, and performance comparisons are therefore difficult. Perhaps another SNIA webcast could investigate this!

Q.  Regarding your SMB direct example with lots of small operations, how do you deal with the overhead of registering and unregistering buffers for the RDMA operations?

A. As answered later in the session, the registration and unregistration is not a protocol matter, but in the case of the Windows implementation, it is strictly performed for the specific buffers of each operation, which is critical for security, data integrity, and system protection. The standard “Fast Register Work Request” method is used, and careful implementation has shown that the overhead does not negatively impact performance, even for small I/O (4KB/operation). Check out Jose Barreto’s blog, which contains many benchmark results.
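
For readers unfamiliar with what “registering buffers” means here, the following is a minimal sketch in C of the per-I/O register/deregister pair using libibverbs. It is illustrative only and not the Windows SMB Direct code path; device selection, error handling, and the 4KB buffer size are simplified assumptions.

```c
/* Minimal sketch: register and deregister a 4KB buffer with the verbs API.
 * This register/deregister pair is the per-operation overhead discussed above.
 * Assumes rdma-core/libibverbs; build with: gcc reg_mr.c -libverbs */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int n = 0;
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs || n == 0) { fprintf(stderr, "no RDMA devices found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    if (!pd) { fprintf(stderr, "failed to open device / allocate PD\n"); return 1; }

    /* Register a 4KB buffer; the returned memory region (MR) carries the
     * lkey/rkey that RDMA work requests reference. */
    void *buf = malloc(4096);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }
    printf("registered 4KB buffer: lkey=0x%x rkey=0x%x\n", mr->lkey, mr->rkey);

    /* Deregister when the I/O completes */
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```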

Q. But isn’t Live Migration done in 1MB “chunks”? So not “small” I/Os?

A. As answered later in the session, Hyper-V Live Migration is done in several phases, the first phase is the initial bulk copy of memory, done in large chunks, but immediately after it a second phase of copying individual pages which were dirtied by the live-running VM is performed. These operations are typically 4KB. Note: The faster the initial phase goes, the less work there is in this second phase, but in both phases, the faster, the better, and RDMA accelerates both.

Q. Are iSER and iWARP alternatives to one another?

A.  iWARP is an RDMA protocol, and iSER is a mapping of iSCSI to iWARP, as well as RoCE/InfiniBand.

Q. What’s Intel’s roadmap for RoCE and/or iWARP?

A.  Intel is committed to iWARP and plans to incorporate it in future server chipsets and SOCs. See http://www.intel.com/content/www/us/en/ethernet-products/accelerating-ethernet-iwarp-video.html for more information.

Q. Is any other transport being used, other than IB, to create a reliable transport for RoCEv2? Is it theoretically possible?

A. RoCE was developed to leverage InfiniBand as much as possible. For that reason, the InfiniBand transport was chosen when the RoCE standard was developed. As the RoCEv2 standard was developed, the underlying InfiniBand network protocol was replaced with IPv4/IPv6 in order to provide Layer 3 routability, and UDP was used to provide stateless encapsulation (and indication) of the InfiniBand transport header that was retained. While it may be possible to develop a reliable transport to replace InfiniBand, the RoCE standards body has elected not to go that route as of this writing.

Real-World FCoE Best Practices Q&A

At our recent live Webcast “Real-World FCoE Designs and Best Practices,” IT leaders from Thermo Fisher Scientific and Gannett Co. shared their experiences from their FCoE deployments – one single-hop, one multi-hop. It was a candid discussion on the lessons they learned. If you missed the Webcast, it’s now available on demand. We polled the audience to see what stage of FCoE deployment they’re in (see the poll results at the end of this blog). Just over half said they’re still in learning mode. To that end, here are answers to the questions we got during the Webcast. As you will see, many of these questions were directed to our guest end users regarding their experiences. I hope that it will help you in your journey. If you have additional questions, please ask them in the comments section in this blog and we’ll get back to you as soon as possible.

Q. Have any issues come up where the storage team needed to upgrade SAN switch firmware to solve a problem, but the network team objected to upgrading the FCFs?  This assumes a shared firmware release on both network and SAN switch products (i.e. Cisco NX-OS)

A. No. We need to work together as a team, so as long as it is planned out in advance, this has not been an issue.

Q. Is there any overhead at the host CPU level when using FCOE/CNA vs. using FC/HBA? Has anyone done any benchmarking on this?

A. To the host OS, the CNA presents an HBA and a 10G Ethernet adapter, so there is no difference from what is normally presented for Ethernet and FC adapters. In a software FCoE implementation there might be some overhead, but you should check with the particular OS vendor’s implementation for this information.

Q. Are there any high-level performance considerations when compared to typical FC SAN? Any obvious impact to IO latency as hosts are moved to FCoE compared to FC?

A. There is a performance increase in comparison to 8Gb Fibre Channel, since FCoE uses Ethernet’s 64b/66b encoding vs. the 8b/10b encoding that native 8Gb FC uses. On dedicated links it could be around a 50% increase in throughput for 10Gb FCoE vs. 8Gb FC.
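
As a rough back-of-the-envelope check of that claim, the short C sketch below compares usable bit rates, assuming the nominal line rates of 10.3125 GBaud for 10GbE (64b/66b) and 8.5 GBaud for 8GFC (8b/10b); these figures are our assumptions for illustration, not numbers quoted by the presenters.

```c
/* Back-of-the-envelope comparison of usable bit rates after line encoding.
 * Assumed nominal line rates: 10GbE = 10.3125 GBaud (64b/66b),
 * 8GFC = 8.5 GBaud (8b/10b). */
#include <stdio.h>

int main(void)
{
    double fcoe_10g = 10.3125 * 64.0 / 66.0; /* ~10.0 Gb/s usable */
    double fc_8g    = 8.5     *  8.0 / 10.0; /* ~6.8 Gb/s usable  */

    printf("10G FCoE usable rate: %.2f Gb/s\n", fcoe_10g);
    printf("8G FC usable rate:    %.2f Gb/s\n", fc_8g);
    printf("difference:           about %.0f%%\n",
           (fcoe_10g / fc_8g - 1.0) * 100.0);
    return 0;
}
```

Under these assumptions the raw difference works out to roughly 45-50%, consistent with the “around 50%” figure above for dedicated links.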

Q. Have you planned to use 40G FCoE in your edge-core design?

A. We have purchased the hardware to go 40G if we choose to.

 Q. Was DCB used to isolate the network traffic with FC traffic at the CNA?

A. DCB is a set of technologies – including DCBX, PFC, and ETS – that are used with FCoE.

 Q. Was FCoE implemented on existing hosts or just on new ones being added to the SAN?

A. Only on new hosts.

Q. Can you expand on Domain_ID sprawl?

A. In FC or FCoE fabrics each storage vendor supports only a certain amount of switches per fabric. Each full FC or FCoE switch will consume a Domain ID, so it is important to consider how many switches or domain IDs are allowed in a supported fabric based on the storage vendor’s fabric recommendations. There are technologies such as NPIV and vendor specific technologies that can be helpful to limit domain ID sprawl in your fabrics.

Remember the poll I mentioned during the Webcast? Here are the results. Let us know where you are in your FCoE deployment plans.

[Poll results image: attendees’ current stage of FCoE deployment]

Webcast Preview: End Users Share their FCoE Stories

Fibre Channel over Ethernet (FCoE) has been growing in popularity year after year. From access layer, to multi-hop and beyond, FCoE has established itself as a true solution in the data center.

Are you interested in learning how customers are using FCoE? Join us on December 10th, at 3:00 pm ET, 1:00 pm PT for our live Webcast, “Real World FCoE Designs and Best Practices.” This live SNIA Webcast examines the most used FCoE designs and looks at how they are being used in real-world customer implementations. You will hear from two IT leaders who have implemented FCoE and why they did so. We will cover:

  • Real-world Use Cases and Customer Implementations of:
    • Single-Hop FCoE
    • Multi-Hop FCoE
    • Use of FCoE for Inter-Switch Links (ISLs)

This will be a vendor-neutral live presentation. Please join us on December 10th and bring your questions for our panel.

Expanding Your Data Center with FCoE – Q&A

At our recent live ESF Webcast, “Expert Insights: Expanding the Data Center with FCoE,” we examined the current state of FCoE and looked at how this protocol can expand the agility of the data center. If you missed it, it’s now available on-demand. We did not have time to address all the questions, so here are answers to them all. If you think of additional questions, please feel free to comment on this blog.

Q. You mentioned using 40 and 100G for inter-switch links.   Are there use cases for end point (FCoE target and initiator) 40 and 100G connectivity?

A. Today most end points are only supporting 10G, but we are starting to see 40G server offerings enter the market, and activity among the storage vendors designing these 40G products into their arrays.

Q. What about interoperability between FCoE switch vendors?

A. Each switch vendor has its own support matrix, which would need to be examined independently.

Q. Is FCoE supported on copper cable?

A. Yes, FCoE supports “Twinax” copper and is widely used for server to top-of-rack switch connections of up to seven meters. In fact, Converged Network Adapters are now available that support 10GBASE-T copper cables with the familiar RJ-45 jack. At least one major switch vendor has qualified FCoE running over 10GBASE-T to 30 meters.

Q. What distance does FCoE support?

A. Distance limits are dependent on the hardware in use and the buffering available for Priority Flow Control. The lengths can vary from 3m up to over 80km. Top of rack switches would fall into the 3m range while larger class switch/directors would support longer lengths.

Q. Can FCoE take part in management/orchestration by OpenStack Neutron?

A. As of this writing there are no OpenStack extensions in Neutron for FCoE-specific plugins.

Q. So how is this FC-BB-6 different than FIP snooping?

A. FIP Snooping is a part of FC-BB-5 (Appendix D), which allows switch devices to identify an FCoE Frame format and create a forwarding ACL to a known FCF. FC-BB-6 creates additional architectural elements for deployments, including a “switch-less” environment (VN2VN), and a distributed switch architecture with a controlling FCF. Each of these cases is independent from the other, and you would choose one instead of the others. You can learn more about VN2VN from our SNIA-ESF Webcast, “How VN2VN Will Help Accelerate Adoption of FCoE.”

Q. You mentioned DCB at the beginning of the presentation. Are there other purposes for DCB? Seems like a lot of change in the network to create a DCB environment for just FCoE. What are some of the other technologies that can take advantage of DCB?

A. First, DCB is becoming very ubiquitous. Unlike the early days of the standard, when only a few switches supported it, today most enterprise switches support DCB protocols. As far as other use cases for DCB, iSCSI benefits from DCB, since it eliminates dropped packets and the TCP/IP protocol’s backoff algorithm when packets are dropped, smoothing out response time for iSCSI traffic. There is also a protocol known as RoCE, or RDMA over Converged Ethernet. RoCE requires the lossless fabric that DCB creates to achieve consistent low latency and high bandwidth. This is basically the InfiniBand API running over Ethernet. Microsoft’s latest file serving protocol, SMB Direct, and Hyper-V Live Migration can utilize RoCE, and there is an extension to iSCSI known as iSER, which replaces TCP/IP with RDMA for the iSCSI datamover, enabling all iSCSI reads and writes to be done as RDMA operations using RoCE.

Q. Great point about RoCE. iSCSI RDMA (iSER) requires DCB if the adapters support RoCE, right?

A. Agreed. Please see the answer above to the DCB question.

Q. Did that Boeing Aerospace diagram still have traditional FC links, and if yes, where?

A. There was no Fibre Channel storage attached in that environment. Having the green line in the legend was simply to show that Fibre Channel would have its own color should there be any links.

Q. What is the price of a 10Gbps CNA compared to a 10Gbps NIC?

A. Price is dependent on vendor and economics. But, there are several approaches to delivering the value of FCoE which can influence pricing:

  • Purpose-built silicon that offloads the FC and Ethernet protocol functions offers a number of advantages, including high performance, low CPU overhead, advanced features, etc., though even this depends on the vendor’s implementation. These added features come with the expectation of additional cost. But the processing of the protocols has to be done somewhere, and if you need your server CPUs to process applications instead of network protocols, then the value is justified.
  • With the introduction of Open FCoE drivers with DCB-supported NICs, new options are available for customers to deploy the value of FCoE at the host. Open FCoE performs the FC processing on the host CPU, and standard 10GbE NICs with DCB support can be used to handle the Ethernet transport functions. Where you have excess CPU capacity on your server, you might be in a position to reduce costs and deploy a software driver with a 10GbE or faster NIC enhanced with the limited set of hardware offloads necessary to achieve full performance with Open FCoE. However, Open FCoE isn’t available with every OS or every NIC, so you need to consider OS support and availability.
  • A third consideration is that most enterprise servers include some form of advanced 10GbE networking on the motherboard that either supports purpose built silicon or DCB enabled silicon. So, depending upon which server and OS you deploy, you may have several options via embedded silicon.


Upcoming Webcast: Is FCoE the Answer to Data Center Agility?

Fibre Channel over Ethernet (FCoE) has been growing in popularity year after year. From access layer, to multi-hop and beyond, FCoE has established itself as a true solution in the data center.

Interested in learning how the data center is expanding with FCoE? Join us on August 20th, at 4:00 pm ET, 1:00 pm PT for our live Webcast, “Expanding the Data Center with FCoE,” continuing our conversation from our February Webcast, “Use Cases for iSCSI and FCoE,” which is now available on demand. This live SNIA Webcast examines the current state of FCoE and looks at how this protocol can expand the agility of the data center.

We’ll take an unbiased look at the data center using FCoE, covering:
  • The history and evolution of convergence
  • Using FCoE as a storage overlay
  • Single-hop, multi-hop and beyond
  • 40G/100G – Where does it fit?
  • Futures:
    • OpenStack
    • Defining Network Functions Virtualization (NFV)
    • Mapping NFV to FCoE
  • Real-world Use Cases

This will be a vendor-neutral live presentation. Please join us on August 20th and bring your questions for our expert panel. Register now.