Cabling, Connectors and Transceivers Questions Answered

Our recent live SNIA Networking Storage Forum webcast, “Next-generation Interconnects: The Critical Importance of Connectors and Cables,” provided an outstanding tutorial on the latest in the impressive array of data center infrastructure components designed to address expanding requirements for higher bandwidth and lower power. The presenters covered common pluggable connectors and media types, copper cabling and transceivers, and real-world use cases. If you missed the live event, it is available on-demand.

We ran out of time to answer all the questions from the live audience. As promised, here are answers to them all.

Q. For 25GbE, is the industry consolidating on one of the three options?

Read More

Next-generation Interconnects: The Critical Importance of Connectors and Cables

Modern data centers consist of hundreds of subsystems connected with optical transceivers, copper cables, and industry standards-based connectors. As data demands escalate, the throughput of these interconnects must increase rapidly, which shrinks the maximum reach of copper cabling. At the same time, data centers are expanding in size, with nodes stretching further apart, making longer-reach optical technologies much more popular. However, optical interconnect technologies are more costly and complex than copper, and they bring many new buzzwords and technology concepts.

The vast uptick in data demand is accelerating new product development at an incredible pace. While much of the enterprise is still running at 10/40/100GbE and 128GFC speeds, the optical standards bodies are beginning to deliver 800G, with 1.6Tb transceivers already in discussion! The introduction of new technologies creates a paradigm shift that requires changes and adjustments throughout the network.

Read More

Notable Questions on NVMe-oF 1.1

At our recent SNIA Networking Storage Forum (NSF) webcast, Notable Updates in NVMe-oF™ 1.1, we explored the latest features of NVMe over Fabrics (NVMe-oF), discussing what’s new in the NVMe-oF 1.1 release, support for CMB and PMR, managing and provisioning NVMe-oF devices with SNIA Swordfish™, and FC-NVMe-2. If you missed the live event, you can watch it here. Our presenters received many interesting questions on NVMe-oF and here are answers to them all:

Q. Is there an implementation of NVMe-oF with direct CMB access?

A. The Controller Memory Buffer (CMB) was introduced in NVMe 1.2 and first supported in the NVMe-oF 1.0 specification. It’s supported if the storage vendor has implemented this within the hardware and the network supports it. We recommend that you ask your favorite vendor if they support the feature.
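For readers who want to check whether a local NVMe controller advertises a CMB at all, here is a minimal sketch (our illustration, not from the webcast) that shells out to nvme-cli’s `show-regs` command and looks for a non-zero CMBSZ register. It assumes nvme-cli is installed, the device path is `/dev/nvme0`, and that you have sufficient privileges to read the controller registers; whether your fabric transport can actually use the CMB is a separate question for your vendor.

```python
import re
import subprocess

def cmb_advertised(dev="/dev/nvme0"):
    """Best-effort check for a Controller Memory Buffer (CMB).

    Reads the NVMe controller registers via nvme-cli's `show-regs` and
    looks for a non-zero CMBSZ (CMB Size) register, which is how the
    NVMe specification advertises a CMB. Returns None if the registers
    cannot be read (missing tool, permissions, unsupported device).
    """
    try:
        out = subprocess.run(
            ["nvme", "show-regs", dev],
            capture_output=True, text=True, check=True,
        ).stdout
    except (OSError, subprocess.CalledProcessError):
        return None
    match = re.search(r"cmbsz\s*:\s*(0x[0-9a-fA-F]+|\d+)", out, re.IGNORECASE)
    if match is None:
        return None
    return int(match.group(1), 0) != 0

if __name__ == "__main__":
    print("CMB advertised:", cmb_advertised())
```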

Q. What is the difference between the PMR in an NVMe device and persistent memory in general?

Read More

The Latest on NVMe-oF 1.1

Since its introduction, NVMe over Fabrics (NVMe-oF™) has not been resting on any laurels. Work has been ongoing, and several updates are worth mentioning. And that’s exactly what the SNIA Networking Storage Forum will be doing on June 30th, 2020 at our live webcast, Notable Updates in NVMe-oF 1.1.

There is more to a technology than its core standard, of course, and many different groups have been hard at work improving upon, and fleshing out, many of the capabilities related to NVMe-oF. In this webcast, we will explore a few of these projects and how they relate to implementing the technology. In particular, this webcast will cover:

  • A summary of new items introduced in NVMe-oF 1.1
  • Updates regarding enhancements to FC-NVMe-2
  • How SNIA’s provisioning model helps NVMe-oF Ethernet Bunch of Flash (EBOF) devices
  • Managing and provisioning NVMe-oF devices with SNIA Swordfish

Register today for a look at what’s new in NVMe-oF. We hope to see you on June 30th.

An FAQ on QUIC

The SNIA Networking Storage Forum‘s recent live webcast “QUIC – Will It Replace TCP/IP?” was a fascinating presentation that was both highly rated and well attended. Lars Eggert, technical director of networking at NetApp and current chair of the IETF working group that is delivering this new Internet protocol, explained the history of the protocol, how it is being adopted today, and what the future of QUIC deployment is likely to be. The session generated numerous questions. Here are answers to the ones Lars had time to address during the live event as well as those we didn’t get to.

Q. Is QUIC appropriate/targeted to non-HTTP uses like NFS, SMB, iSCSI, etc.?

Read More

The Potential Impact of QUIC – Will it Replace TCP/IP?

Have you heard about QUIC? Although initially proposed as the acronym for “Quick UDP Internet Connections,” IETF’s use of the word QUIC is not an acronym; it is merely the name of the protocol. QUIC is a new UDP-based transport protocol for the Internet, and specifically, the web. Originally designed and deployed by Google, it already makes up 35% of Google’s egress traffic, which corresponds to about 7% of all Internet traffic. Due to its ability to improve connection-oriented web application performance, it is gaining enthusiastic interest from many other large Internet players in the ongoing IETF standardization process, which is likely to lead to even greater deployment.
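To make the performance argument concrete, the back-of-the-envelope sketch below (our own illustration, not part of the webcast) simply counts the network round trips each protocol stack needs before the first byte of application data can arrive; the 50 ms round-trip time is an arbitrary assumption.

```python
def time_to_first_byte_ms(rtt_ms, handshake_rtts):
    """Rough time before the first application data arrives, counting only
    handshake round trips plus one request/response trip (ignores loss,
    congestion, and server think time)."""
    return handshake_rtts * rtt_ms + rtt_ms

RTT_MS = 50.0  # assumed wide-area round-trip time

scenarios = {
    "TCP + TLS 1.3, new connection": 2,  # 1 RTT TCP handshake + 1 RTT TLS handshake
    "QUIC, new connection":          1,  # transport and TLS 1.3 handshakes combined
    "QUIC 0-RTT, resumed session":   0,  # request rides in the first flight
}

for name, rtts in scenarios.items():
    print(f"{name}: ~{time_to_first_byte_ms(RTT_MS, rtts):.0f} ms to first response byte")
```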

Read More

Will NVMe-oF™ Mean the End of iSCSI?

iSCSI is a block storage protocol for storage networking. It’s been around since 1998, is supported by multiple operating systems, and became an IETF standard in the early 2000s.

iSCSI has been used mostly for so-called “secondary” block storage, meaning storage for applications that are important but not mission-critical, and storage that must deliver good—but not great—performance.

Read More

Got SPDK Questions?

We kicked off our 2020 webcast program by diving into how the Storage Performance Development Kit (SPDK) fits in the NVMe-oF landscape. Our SPDK experts, Jim Harris and Ben Walker, did an outstanding job presenting on this topic. In fact, their webcast, “Where Does SPDK Fit in the NVMe-oF Landscape,” received a 4.9 rating on a scale of 1-5 from the live audience. If you missed the webcast, I highly encourage you to watch it on-demand. We had some great questions from the attendees and here are answers to them all:

Q. Which CPU architectures does SPDK support?

Read More

SPDK in the NVMe-oF™ Landscape

The Storage Performance Development Kit (SPDK) has gained industry-wide recognition as a framework for building highly performant and efficient storage software with a focus on NVMe™. This includes software drivers and libraries for building NVMe over Fabrics (NVMe-oF) host and target solutions. On January 9, 2020, the SNIA Networking Storage Forum is going to kick off its 2020 webcast program by diving into this topic with a live webcast “Where Does SPDK Fit in the NVMe-oF Landscape.”

Read More

Software Defined Storage Q&A

The SNIA Networking Storage Forum (NSF) recently hosted a live webcast, What Software Defined Storage Means for Storage Networking, where our experts, Ted Vojnovich and Fred Bower, explained what makes software defined storage (SDS) different from traditional storage. If you missed the live event, you can watch it on-demand at your convenience. We had several questions at the live event and here are our experts’ answers to them all:

Q. Are there cases where SDS can still work with legacy storage, so that high-priority flows such as online transaction processing (OLTP) use the SAN on legacy storage, while lower-priority and backup data flows utilize the SDS infrastructure?

A. The simple answer is yes. Like anything else, companies use different methods and architectures to meet their compute and storage requirements, just as public cloud may be used for some less sensitive data while an in-house cloud or traditional storage holds sensitive data. Of course, this adds cost, so the benefits need to be weighed against the additional expense.

Q. What is the best way to mitigate unpredictable network latency that can go out of the bounds of a storage required service level agreement (SLA)?

A. There are several ways to mitigate latency. Generally speaking, increased bandwidth contributes to better network speed because the “pipe” is essentially larger and more data can travel through it. There are other means to reduce latency as well, such as the use of offloads and accelerators. Remote Direct Memory Access (RDMA) is one of these and is being used by many storage companies to help handle the increased capacity and bandwidth needed in flash storage environments. Edge computing should also be added to this list, as it relocates key data processing and access points from the center of the network to the edge, where data can be gathered and delivered more efficiently.
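As a rough illustration of why more bandwidth and offloads both help, the sketch below (our own illustrative model, with assumed host-overhead numbers) splits one-way transfer time into serialization, propagation, and host/stack processing. Faster links shrink the serialization term, while RDMA-style offloads shrink the processing term; neither changes propagation delay.

```python
def transfer_time_us(payload_bytes, link_gbps, distance_km,
                     host_overhead_us, propagation_us_per_km=5.0):
    """Very rough one-way transfer time in microseconds:
    serialization + propagation (~5 us/km in fiber) + host/stack overhead.
    Queuing delay, the usual wildcard behind SLA violations, is ignored here."""
    serialization_us = payload_bytes * 8 / (link_gbps * 1_000)  # bits / (Gb/s) -> us
    propagation_us = distance_km * propagation_us_per_km
    return serialization_us + propagation_us + host_overhead_us

PAYLOAD = 64 * 1024  # a 64 KiB I/O

for gbps, overhead_us, label in [
    (10,  30.0, "10GbE, kernel TCP stack (assumed ~30 us host overhead)"),
    (100, 30.0, "100GbE, kernel TCP stack"),
    (100,  3.0, "100GbE with RDMA offload (assumed ~3 us host overhead)"),
]:
    t = transfer_time_us(PAYLOAD, gbps, distance_km=0.1, host_overhead_us=overhead_us)
    print(f"{label}: ~{t:.1f} us one way")
```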

Q. Can you please elaborate on SDS scaling in comparison with traditional storage?

A.  Most SDS solutions are designed to scale-out both performance and capacity to avoid bottlenecks whereas most traditional storage has always had limited scalability, scaling up in capacity only. This is because as a scale-up storage system begins to reach capacity, the controller becomes saturated and performance suffers. The workaround for this problem with traditional storage is to upgrade the storage controller or purchase more arrays, which can often lead to unproductive and hard to manage silos.
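The toy model below (illustrative numbers only, not from the webcast) shows the difference: a scale-up array keeps adding capacity shelf by shelf, but its throughput saturates at the controller pair’s limit, while a scale-out SDS cluster adds both capacity and throughput with every node.

```python
def scale_up_gbps(shelves, per_shelf_gbps=10, controller_limit_gbps=40):
    """Scale-up array: capacity grows per shelf, throughput caps at the controllers."""
    return min(shelves * per_shelf_gbps, controller_limit_gbps)

def scale_out_gbps(nodes, per_node_gbps=10):
    """Scale-out SDS cluster: each node contributes capacity and throughput."""
    return nodes * per_node_gbps

for n in (2, 4, 8, 16):
    print(f"{n:>2} units: scale-up ~{scale_up_gbps(n)} Gb/s, "
          f"scale-out ~{scale_out_gbps(n)} Gb/s")
```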

Q. You didn’t talk much about distributed storage management and namespaces (e.g. NFS or AFS)?

A. Storage management consists of monitoring and maintaining storage health, platform health, and drive health. It also includes storage provisioning, such as creating each LUN/share/etc., or binding LUNs to controllers and servers. On top of that, storage management involves storage services like disk groups, snapshots, dedupe, replication, etc. This is true for both SDS and traditional storage (Converged Infrastructure and Hyper-Converged Infrastructure will leverage this ability in storage). NFS is predominantly a non-Windows (Linux, Unix, VMware) file storage protocol, while AFS is no longer popular in the data center and has been replaced as a file storage protocol by either NFS or SMB (in fact, it’s been a long time since anybody mentioned “AFS”).

Q. How does SDS affect storage networking? Are SAN vendors going to lose customers?

A. SAN vendors aren’t going anywhere because of the large existing installed base, which isn’t going quietly into the night. Most SDS solutions focus on Ethernet connectivity (as the diagrams in the presentation show), while traditional storage is split between Fibre Channel and Ethernet; InfiniBand is more of a niche storage play for HPC and some AI or machine learning customers.

Q. Storage costs for SDS are highly dependent on scale and on replication vs. erasure coding. An erasure-coded multi-petabyte solution can be significantly less expensive than a traditional storage solution.

A. It’s a tradeoff between processing complexity and the cost of additional capacity. Erasure coding is processing-intensive but requires less storage capacity, while making full copies uses less processing power but consumes more capacity. Replicating copies also uses more network bandwidth. Erasure coding tends to be used more often for storage of large objects or files, and less often for latency-sensitive block storage.
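The short sketch below (assumed example schemes, not figures from the webcast) puts numbers on the capacity side of that tradeoff, comparing the raw capacity needed to protect 1 PB of usable data with 3-way replication versus an 8+4 erasure code.

```python
def raw_capacity_tb(usable_tb, scheme):
    """Raw capacity required for a protection scheme.
    scheme is ("replication", copies) or ("erasure", data_chunks, parity_chunks)."""
    if scheme[0] == "replication":
        return usable_tb * scheme[1]
    _, k, m = scheme
    return usable_tb * (k + m) / k

USABLE_TB = 1000  # 1 PB of usable data

print("3-way replication:", raw_capacity_tb(USABLE_TB, ("replication", 3)), "TB raw (3.0x)")
print("EC 8+4:           ", raw_capacity_tb(USABLE_TB, ("erasure", 8, 4)), "TB raw (1.5x)")
```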

If you have more questions on SDS, let us know in the comment box.