Storage Congestion on the Network Q&A

As more storage traffic traverses the network, congestion that leads to higher-than-expected latencies and lower-than-expected throughput has become a common risk. That’s why the SNIA Networking Storage Forum (NSF) hosted a live webcast earlier this month, Introduction to Incast, Head of Line Blocking, and Congestion Management. In this webcast (which is now available on-demand), our SNIA experts discussed how Ethernet, Fibre Channel and InfiniBand each handle increased traffic.

The audience at the live event asked some great questions. As promised, here are answers to them all.

Q. How many IP switch vendors today support Data Center TCP (DCTCP)?
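
For readers unfamiliar with DCTCP itself, here is a minimal sketch of its congestion-window update rule as described in RFC 8257: the sender tracks what fraction of its bytes were ECN-marked each round trip and backs off in proportion to that fraction, rather than halving its window on any sign of congestion. The class and variable names below are illustrative, not taken from any switch or operating-system implementation.

```python
# Minimal sketch of the DCTCP congestion-window update (RFC 8257).
# Names and structure are illustrative; real implementations live in
# the OS TCP stack, not in application code.

G = 1.0 / 16.0  # EWMA gain recommended by RFC 8257

class DctcpSender:
    def __init__(self, cwnd_bytes: float):
        self.cwnd = cwnd_bytes   # congestion window in bytes
        self.alpha = 0.0         # running estimate of congestion extent

    def on_window_acked(self, acked_bytes: int, ecn_marked_bytes: int) -> None:
        """Called once per window of data (roughly once per RTT)."""
        frac_marked = ecn_marked_bytes / acked_bytes if acked_bytes else 0.0
        # Update the moving average of the fraction of ECN-marked bytes.
        self.alpha = (1 - G) * self.alpha + G * frac_marked
        if ecn_marked_bytes > 0:
            # Back off in proportion to congestion, instead of halving
            # the window as classic TCP does on any loss or mark.
            self.cwnd *= (1 - self.alpha / 2)

sender = DctcpSender(cwnd_bytes=64_000)
sender.on_window_acked(acked_bytes=64_000, ecn_marked_bytes=8_000)
print(round(sender.alpha, 4), round(sender.cwnd))
```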

Network Speeds Questions Answered

Last month, the SNIA Networking Storage Forum (NSF) hosted a webcast on how increases in networking speeds are impacting storage. If you missed the live webcast, New Landscape of Network Speeds, it’s now available on-demand. We received several interesting questions on this topic. Here are our experts’ answers:

Q. What are the cable distances for 2.5 and 5G Ethernet?

A. 2.5GBASE-T and 5GBASE-T Ethernet are designed to run on existing UTP cabling, so they should reach 100 meters on both Cat5e and Cat6 cabling. The reach of 5GBASE-T on Cat5e may be less under some conditions, for example if many cables are bundled tightly together. Cabling guidelines and field test equipment are available to aid in the transition.

Q. Any comments on why U.2 drives are so rare/uncommon in desktop PC usage? M.2 is very common in laptops and some desktops, but U.2’s larger capacity seems a better fit for desktops.

A. M.2 SSDs are more popular for laptops and tablets due to their small form factor and sufficient capacity. U.2 SSDs are used more often in servers, though some desktops and larger laptops also use a U.2 SSD for the larger capacity.


Q. What about using Active Copper cables to get a bit more reach over Passive Copper cables before switching to Active Optical cables?

A. Yes, active copper cables can provide longer reach than passive copper cables, but you have to consider the expense and power consumption. There may be many cases where using an active optical cable (AOC) will cost the same or less than an active copper cable.

Q. For 100Gb/s signaling (a future standard), is it expected to work over copper cables (passive or active) or only optical?

A. Yes, though the maximum distances will be shorter. With 25Gb/s signaling the maximum copper cable length is 5m. With 50Gb/s signaling the longest copper cables are 3m long. With 100Gb/s we expect the longest copper cables will be about 2m long.
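
As a rough rule of thumb, those reach figures can be turned into a simple cable-selection check. The sketch below merely restates the distances quoted in this answer (5m at 25Gb/s, 3m at 50Gb/s, roughly 2m at 100Gb/s per-lane signaling); actual reach depends on the specific standard, cable gauge, and vendor qualification.

```python
# Rough cable-selection helper based only on the approximate passive-copper
# reach figures quoted above. Real reach depends on the standard, cable
# gauge, and vendor qualification.

PASSIVE_COPPER_REACH_M = {25: 5.0, 50: 3.0, 100: 2.0}

def suggest_cable(signaling_gbps: int, distance_m: float) -> str:
    reach = PASSIVE_COPPER_REACH_M.get(signaling_gbps)
    if reach is None:
        return "unknown signaling rate"
    if distance_m <= reach:
        return "passive copper (DAC) likely sufficient"
    return "consider active copper, AOC, or optical transceivers"

print(suggest_cable(50, 2.5))   # within 3m -> passive DAC
print(suggest_cable(100, 3.0))  # beyond ~2m -> AOC or optics
```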

Q. So what do you see as the most prevalent LAN speed today, and what do you see in the next year or two?

A. For Ethernet, we see desktops mostly on 1GbE, with some moving to 2.5GbE, 5GbE or 10GbE. Older servers are largely 10GbE, but new servers are mostly using 25GbE or 50GbE, while the most demanding servers and fastest flash storage arrays have 100GbE connections. 200GbE will show up in a few servers starting in late 2019, but most 200GbE and 400GbE usage will be for switch-to-switch links during the next few years. In the world of Fibre Channel, most servers today are on 16G FC, with a few running 32G FC and a few of the most demanding servers or fastest flash storage arrays using 64G FC. 128G FC for now will likely be just for switch-to-switch links. Finally, for InfiniBand deployments, older servers are running FDR (56Gb/s) and newer servers are using EDR (100Gb/s). The very newest, fastest HPC and ML/AI servers are starting to use HDR (200Gb/s) InfiniBand.

If you’re new to SNIA NSF, we encourage you to check out the SNIA NSF webcast library. There you’ll find more than 60 educational, vendor-neutral on-demand webcasts produced by SNIA experts.


Intro to Incast, Head of Line Blocking, and Congestion Management

For a long time, the architecture and best practices of storage networks have been relatively well-understood. Recently, however, advanced capabilities have been added to storage that could have broader impacts on networks than we think.

The three main storage network transports – Fibre Channel, Ethernet, and InfiniBand – all have mechanisms to handle increased traffic, but they are not all affected or implemented the same way. For instance, a protocol such as NVMe over Fabrics handles congestion avoidance, burst handling, and queue management very differently depending on which network transport it runs over.

Unfortunately, many network administrators may not understand how different storage solutions place burdens upon their networks. As more storage traffic traverses the network, customers face the risk of congestion leading to higher-than-expected latencies and lower-than-expected throughput.
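
To see why congestion shows up as "higher-than-expected" latency, a standard queueing-theory illustration (not a figure from the webcast) helps: in a simple M/M/1 model, mean response time is 1 / (service rate − arrival rate), so latency grows slowly at moderate load and then explodes as a link or port approaches saturation. The service rate below is a hypothetical example value.

```python
# Rough illustration of why congestion produces "higher-than-expected"
# latency: in an M/M/1 queue, mean response time is 1 / (mu - lambda),
# which blows up as offered load approaches capacity.

def mean_response_time(service_rate: float, arrival_rate: float) -> float:
    """Mean time in system for an M/M/1 queue (requires arrival < service)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable at or above 100% utilization")
    return 1.0 / (service_rate - arrival_rate)

service_rate = 1_000_000  # hypothetical: 1M I/Os per second a port can serve
for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):
    t = mean_response_time(service_rate, utilization * service_rate)
    print(f"{utilization:>5.0%} load -> mean latency {t * 1e6:7.2f} us")
```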

That’s why the SNIA Networking Storage Forum (NSF) is hosting a live webcast on June 18, 2019, Introduction to Incast, Head of Line Blocking, and Congestion Management, where our NSF experts will cover:

  • Typical storage traffic patterns
  • What incast, head of line blocking, congestion, and slow drain are, and when they become problems on a network (see the sketch after this list for a simple incast example)
  • How Ethernet, Fibre Channel, and InfiniBand handle these effects
  • The proper role of buffers in handling storage network traffic
  • Potential new ways to handle increasing storage traffic loads on the network
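
As a simple illustration of incast ahead of the webcast: incast occurs when many senders answer a request at nearly the same time and their combined burst exceeds what the egress buffer of the switch port facing the requester can hold. The numbers below are hypothetical example values, not from SNIA material.

```python
# Back-of-the-envelope incast calculation with hypothetical example values:
# many senders burst simultaneously toward one receiver, and the combined
# burst must fit in the egress buffer of the switch port facing that receiver.

def incast_overflow(senders: int, burst_bytes: int, port_buffer_bytes: int) -> int:
    """Return how many bytes exceed the egress buffer when all senders
    burst at once (0 means the burst fits and no drops or pauses occur)."""
    total_burst = senders * burst_bytes
    return max(0, total_burst - port_buffer_bytes)

# Example: 32 storage nodes each return a 256 KiB chunk to one client,
# through a switch port with a 1 MiB egress buffer.
overflow = incast_overflow(senders=32, burst_bytes=256 * 1024,
                           port_buffer_bytes=1 * 1024 * 1024)
print(f"Bytes that cannot be buffered: {overflow}")  # 7 MiB -> drops or pauses
```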

Register today to save your spot for June 18th. As always, our experts will be available to answer your questions. We hope to see you there.