The Evolution of Congestion Management in Fibre Channel

The Fibre Channel (FC) industry introduced Fabric Notifications as a key resiliency mechanism for storage networks in 2021 to combat congestion, link integrity issues, and delivery errors. Since then, numerous manufacturers of FC SAN solutions have implemented Fabric Notifications and enhanced the overall user experience when deploying FC SANs.

On August 27, 2024, the SNIA Data, Networking & Storage Forum is hosting a live webinar, “The Evolution of Congestion Management in Fibre Channel,” for a deep dive into Fibre Channel congestion management. We’ve convened a stellar, multi-vendor group of Fibre Channel experts with extensive Fibre Channel knowledge and different technology viewpoints to explore the evolution of Fabric Notifications and the solutions available for this exciting new technology. You’ll learn: Read More

Fibre Channel SAN Hosts and Targets Q&A

At our recent SNIA Networking Storage Forum (NSF) webcast “How Fibre Channel Hosts and Targets Really Communicate” our Fibre Channel (FC) experts explained exactly how Fibre Channel works, starting with the basics on the FC networking stack, link initialization, port types, and flow control, and then dove into the details on host/target logins and host/target IO. It was a great tutorial on Fibre Channel. If you missed it, you can view it on-demand. The audience asked several questions during the live event. Here are answers to them all:

Q. What is the most common problem that we face in the FC protocol?

A. Much the same as in any other network protocol, congestion is the most common problem found in FC SANs. It can take several forms, including (but not limited to) host oversubscription and poorly sized “fan-in/fan-out” ratios of host ports to storage ports, and it is probably the single largest generator of support cases. Another common issue is the ‘host cannot see target’ class of problem.
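For a rough sense of what a fan-in (oversubscription) ratio means in practice, here is a minimal Python sketch. The port counts and link speeds are hypothetical examples chosen for illustration, not recommendations from the Q&A.

```python
# Illustrative sketch: estimating the fan-in (oversubscription) ratio on storage ports.
# All port counts and speeds below are hypothetical examples.

def oversubscription_ratio(host_ports, host_speed_gbps, storage_ports, storage_speed_gbps):
    """Ratio of aggregate host bandwidth to aggregate storage bandwidth."""
    host_bw = host_ports * host_speed_gbps
    storage_bw = storage_ports * storage_speed_gbps
    return host_bw / storage_bw

# Example: 24 host ports at 16G FC sharing 4 storage ports at 32G FC.
ratio = oversubscription_ratio(24, 16, 4, 32)
print(f"Fan-in ratio: {ratio:.1f}:1")  # 3.0:1 -- hosts can offer 3x what the storage ports can absorb
```

A ratio above 1:1 is normal in practice, but the higher it climbs, the more likely that simultaneous bursts from many hosts will congest the shared storage ports.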

Read More

Demystifying the Fibre Channel SAN Protocol

Ever wonder how Fibre Channel (FC) hosts and targets really communicate? Join the SNIA Networking Storage Forum (NSF) on September 23, 2021 for a live webcast, “How Fibre Channel Hosts and Targets Really Communicate.” This SAN overview will dive into details on how initiators (hosts) and targets (storage arrays) communicate and will address key questions, like:

  • How do FC links activate?
  • Is FC routable?
  • What kind of flow control is present in FC?
  • How do initiators find targets and set up their communication?
  • Finally, how does actual data get transferred between initiators and targets, since that is the ultimate goal?
Read More

Beyond NVMe-oF Performance Hero Numbers

When it comes to selecting the right NVMe over Fabrics™ (NVMe-oF™) solution, one should look beyond test results that demonstrate NVMe-oF’s dramatic reduction in latency and consider the other, more important, questions such as “How does the transport really impact application performance?” and “How does the transport holistically fit into my environment?”

To date, the focus has been on specialized fabrics like RDMA (e.g., RoCE), because they provide the lowest possible latency, and Fibre Channel, because it is generally considered to be the most reliable. However, with the introduction of NVMe-oF/TCP this conversation must be expanded to also include considerations regarding scale, cost, and operations. That’s why the SNIA Networking Storage Forum (NSF) is hosting a webcast series that will dive into answering these questions beyond the standard answer “it depends.”

Read More

Notable Questions on NVMe-oF 1.1

At our recent SNIA Networking Storage Forum (NSF) webcast, Notable Updates in NVMe-oF™ 1.1, we explored the latest features of NVMe over Fabrics (NVMe-oF), discussing what’s new in the NVMe-oF 1.1 release, support for CMB and PMR, managing and provisioning NVMe-oF devices with SNIA Swordfish™, and FC-NVMe-2. If you missed the live event, you can watch it here. Our presenters received many interesting questions on NVMe-oF and here are answers to them all:

Q. Is there an implementation of NVMe-oF with direct CMB access?

A. The Controller Memory Buffer (CMB) was introduced in NVMe 1.2 and first supported in the NVMe-oF 1.0 specification. It’s supported if the storage vendor has implemented this within the hardware and the network supports it. We recommend that you ask your favorite vendor if they support the feature.

Q. What is the difference between PMR in an NVMe device and persistent memory in general?

Read More

Storage Congestion on the Network Q&A

As more storage traffic traverses the network, the risk of congestion leading to higher-than-expected latencies and lower-than-expected throughput has become common. That’s why the SNIA Networking Storage Forum (NSF) hosted a live webcast earlier this month, Introduction to Incast, Head of Line Blocking, and Congestion Management. In this webcast (which is now available on-demand), our SNIA experts discussed how Ethernet, Fibre Channel and InfiniBand each handle increased traffic.

The audience at the live event asked some great questions and, as promised, here are answers to them all.

Q. How many IP switch vendors today support Data Center TCP (DCTCP)? Read More

Network Speeds Questions Answered

Last month, the SNIA Networking Storage Forum (NSF) hosted a webcast on how increases in networking speeds are impacting storage. If you missed the live webcast, New Landscape of Network Speeds, it’s now available on-demand. We received several interesting questions on this topic. Here are our experts’ answers:

Q. What are the cable distances for 2.5 and 5G Ethernet?

A. 2.5GBASE-T and 5GBASE-T Ethernet are designed to run on existing UTP cabling, so they should reach 100 meters on both Cat5e and Cat6 cabling. The reach of 5GBASE-T on Cat5e may be less under some conditions, for example if many cables are bundled tightly together. Cabling guidelines and field test equipment are available to aid in the transition.

Q. Any comments on why U.2 drives are so rare/uncommon in desktop PC usage? M.2 are very common in laptops, and some desktops, but U.2’s large capacity seems a better fit for desktop.

A. M.2 SSDs are more popular for laptops and tablets due to their small form factor and sufficient capacity.  U.2 SSDs are used more often in servers, though some desktops and larger laptops also use a U.2 SSD for the larger capacity.


Q. What about using Active Copper cables to get a bit more reach over Passive Copper cables before switching to Active Optical cables?

A. Yes, active copper cables can provide longer reach than passive copper cables, but you have to look at the expense and power consumption. There may be many cases where using an active optical cable (AOC) will cost the same or less than an active copper cable.

Q. For 100Gb/s signaling (future standard) is it expected to work over copper cable (passive or active) or only optical?

A. Yes, though the maximum distances will be shorter. With 25Gb/s signaling the maximum copper cable length is 5m. With 50Gb/s signaling the longest copper cables are 3m long. With 100Gb/s we expect the longest copper cables will be about 2m long.
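To keep those reach figures in one place, here is a small lookup sketch in Python using only the numbers quoted in the answer above (the 100Gb/s value is an expectation, not a finalized standard at the time of the answer):

```python
# Approximate maximum passive copper (DAC) reach per lane signaling rate,
# using the figures quoted above; the 100Gb/s value is an expectation.
max_copper_reach_m = {
    25: 5,   # 25Gb/s signaling -> ~5m passive copper
    50: 3,   # 50Gb/s signaling -> ~3m passive copper
    100: 2,  # 100Gb/s signaling -> ~2m expected
}

for rate_gbps, reach_m in max_copper_reach_m.items():
    print(f"{rate_gbps}Gb/s signaling: up to ~{reach_m}m of passive copper before optics are needed")
```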

Q. So what do you see as the most prevalent LAN speed today and what do you see in next year or two?

A. For Ethernet, we see desktops mostly on 1Gb with some moving to 2.5G, 5Gb or 10Gb. Older servers are largely 10Gb but new servers are mostly using 25GbE or 50GbE, while the most demanding servers and fastest flash storage arrays have 100GbE connections. 200GbE will show up in a few servers starting in late 2019, but most 200GbE and 400GbE usage will be for switch-to-switch links during the next few years. In the world of Fibre Channel, most servers today are on 16G FC with a few running 32G and a few of the most demanding servers or fastest flash storage arrays using 64G. 128G FC for now will likely be just for switch-to-switch links. Finally for InfiniBand deployments, older servers are running FDR (56Gb/s) and newer servers are using EDR (100Gb/s). The very newest, fastest HPC and ML/AI servers are starting to use HDR (200Gb/s) InfiniBand.

If you’re new to SNIA NSF, we encourage you to check out the SNIA NSF webcast library. There you’ll find more than 60 educational, vendor-neutral on-demand webcasts produced by SNIA experts.


Intro to Incast, Head of Line Blocking, and Congestion Management

For a long time, the architecture and best practices of storage networks have been relatively well-understood. Recently, however, advanced capabilities have been added to storage that could have broader impacts on networks than we think.

The three main storage network transports – Fibre Channel, Ethernet, and InfiniBand – all have mechanisms to handle increased traffic, but they are not all affected or implemented the same way. For instance, a protocol such as NVMe over Fabrics will use very different methodologies for congestion avoidance, burst handling, and queue management depending on which network transport it runs over.

Unfortunately, many network administrators may not understand how different storage solutions place burdens upon their networks. As more storage traffic traverses the network, customers face the risk of congestion leading to higher-than-expected latencies and lower-than-expected throughput.
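As a rough, purely illustrative example of one such effect, the Python sketch below models incast: many storage nodes answer a request at nearly the same moment and their combined burst exceeds a single egress port buffer. All buffer and message sizes are invented for the illustration and do not describe any particular switch.

```python
# Toy incast model: N senders reply simultaneously toward one receiver port.
# When the combined burst exceeds the egress buffer, the excess is dropped
# (or paused/retransmitted, depending on the transport).
# All numbers below are made up for illustration.

EGRESS_BUFFER_KB = 512   # hypothetical per-port buffer
RESPONSE_SIZE_KB = 64    # hypothetical size of each node's reply

def incast_overflow(num_senders):
    offered_kb = num_senders * RESPONSE_SIZE_KB
    overflow_kb = max(0, offered_kb - EGRESS_BUFFER_KB)
    return offered_kb, overflow_kb

for n in (4, 8, 16, 32):
    offered, overflow = incast_overflow(n)
    print(f"{n:>2} senders: {offered:>4} KB offered, {overflow:>4} KB exceeds the port buffer")
```

How each transport reacts to that overflow (dropping and retransmitting, credit-based backpressure, or pausing upstream senders) is exactly what the webcast compares.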

That’s why the SNIA Networking Storage Forum (NSF) is hosting a live webcast on June 18, 2019, Introduction to Incast, Head of Line Blocking, and Congestion Management where our NSF experts will cover:

  • Typical storage traffic patterns
  • What is Incast, what is head of line blocking, what is congestion, what is a slow drain, and when do these become problems on a network?
  • How Ethernet, Fibre Channel, InfiniBand handle these effects
  • The proper role of buffers in handling storage network traffic
  • Potential new ways to handle increasing storage traffic loads on the network

Register today to save your spot for June 18th. As always, our experts will be available to answer your questions. We hope to see you there.

The Impact of New Network Speeds on Storage

In the last few years, Ethernet equipment vendors have announced big increases in line speeds, shipping 25, 50, and 100 Gigabits-per-second (Gb/s) speeds and announcing 200/400 Gb/s. At the same time, Fibre Channel vendors have launched 32GFC, 64GFC and 128GFC technology, while InfiniBand has reached 200Gb/s (called HDR) speed.

But who exactly is asking for these faster new networking speeds, and how will they use them? Are there servers, storage, and applications that can make good use of them? How are these new speeds achieved? Are new types of signaling, cables and transceivers required? How will changes in PCIe standards and bandwidth keep up? And do the faster speeds come with different distance limitations?

These are among the questions our panel of experts will answer at the next live SNIA Networking Storage Forum (NSF) webcast on May 21, 2019, “New Landscape of Network Speeds.” Join us to learn:

  • How these new speeds are achieved
  • Where they are likely to be deployed for storage
  • What infrastructure changes are needed to support them

Register today to save your spot. And don’t forget to bring your questions. Our experts will be available to answer them on the spot.

Introducing the Networking Storage Forum

At SNIA, we are dedicated to staying on top of storage trends and technologies to fulfill our mission as a globally recognized and trusted authority for storage leadership, standards, and technology expertise. For the last several years, the Ethernet Storage Forum has been working hard to provide high quality educational and informational material related to all kinds of storage.

From our “Everything You Wanted To Know About Storage But Were Too Proud To Ask” series, to the absolutely phenomenal (and required viewing) “Storage Performance Benchmarking” series, to the “Great Storage Debates” series, we’ve produced dozens of hours of material.

Technologies have evolved and we’ve come to a point where there’s a need to understand how these systems and architectures work – beyond just the type of wire that is used. Today, there are new systems that are bringing storage to completely new audiences. From scale-up to scale-out, from disaggregated to hyperconverged, RDMA, and NVMe-oF – there is more to storage networking than just your favorite transport.

For example, when we talk about NVMe™ over Fabrics, the protocol is broader than just one way of accomplishing what you need. When we talk about virtualized environments, we need to examine the nature of the relationship between hypervisors and all kinds of networks. When we look at “Storage as a Service,” we need to understand how we can create workable systems from all the tools at our disposal.

Bigger Than Our Britches

As I said, SNIA’s Ethernet Storage Forum has been working to bring these new technologies to the forefront, so that you can see (and understand) the bigger picture. To that end, we realized that we needed to rethink the way that our charter worked, to be even more inclusive of technologies that were relevant to storage and networking.

So…

Introducing the Networking Storage Forum. In this group we’re going to continue producing top-quality, vendor-neutral material related to storage networking solutions. We’ll be talking about:

  • Storage Protocols (iSCSI, FC, FCoE, NFS, SMB, NVMe-oF, etc.)
  • Architectures (Hyperconvergence, Virtualization, Storage as a Service, etc.)
  • Storage Best Practices
  • New and developing technologies

… and more!

Generally speaking, we’ll continue to do the same great work that we’ve been doing, but now our name more accurately reflects the breadth of work that we do.

We’re excited to launch this new chapter of the Forum. If you work for a vendor, a systems integrator, or a university, or you manage storage, we welcome you to join the NSF. We are an active group that honestly has a lot of fun. If you’re one of our loyal followers, we hope you will continue to keep track of what we’re doing. And if you’re new to this Forum, we encourage you to take advantage of the library of webcasts, white papers, and published articles that we have produced here. There’s a wealth of unbiased, educational information there that we don’t think you’ll find anywhere else!

If there’s something that you’d like to hear about – let us know! We are always looking to hear about headaches, concerns, and areas of confusion within the industry where we can shed some light. Stay current with all things NSF: