Cabling, Connectors and Transceivers Questions Answered

Our recent live SNIA Networking Storage Forum webcast, “Next-generation Interconnects: The Critical Importance of Connectors and Cables,” provided an outstanding tutorial on the latest in the impressive array of data center infrastructure components designed to address expanding requirements for higher bandwidth and lower power. Our presenters covered common pluggable connectors and media types, copper cabling and transceivers, and real-world use cases. If you missed the live event, it is available on-demand.

We ran out of time to answer all the questions from the live audience. As promised, here are answers to them all.

Q. For 25GbE, is the industry consolidating on one of the three options?

Read More

Why Object Storage is Important

Object storage is a secure, simple, scalable, and cost-effective means of embracing the explosive growth of unstructured data enterprises generate every day. Object storage adoption is on the rise. That’s why the SNIA Networking Storage Forum (NSF) is hosting “Object Storage: What, How and Why.” This webcast, with experts Chris Evans of Bookend LTD, Rick Vanover of Veeam, and Alex McDonald, Vice Chair of SNIA NSF and NetApp, will explain how object storage works, its benefits, and why it’s important.

Like other storage technologies, object storage brings its own set of unique characteristics to the market. Join us on February 19th at 10:00 am PT/1:00 pm ET to learn:

Read More

We’ve Been Thinking…What Does Hyperconverged Mean to Storage?

Here at the SNIA Ethernet Storage Forum (ESF), we’ve been discussing how hyperconverged adoption will impact storage. Converged Infrastructure (CI), Hyperconverged Infrastructure (HCI), along with Cluster or Cloud In a Box (CIB) are popular trend topics that have gained both industry and customer adoption. As part of data infrastructures, CI, HCI, and CIB enable simplified deployment of resources (servers, storage, I/O networking, hypervisor, application software) across different environments.

But what do these approaches mean for the storage environment? What are the key concerns and considerations related specifically to storage? How will the storage be connected to (or included in) the platform? Who will protect and back up the data? And most importantly, how do you know that you’re asking the right questions in order to get to the right answers?

Find out on March 15th in a live SNIA-ESF webcast, “What Does Hyperconverged Mean to Storage.” We’ve invited expert Greg Schulz, founder and analyst of Server StorageIO, to answer the questions we’ve been debating. Join us as Greg moves beyond the hype (pun intended) to discuss:

  • What are the storage considerations for CI, CIB and HCI
  • Why fast applications and fast servers need fast I/O
  • Networking and server-storage I/O considerations
  • How to avoid aggravation-causing aggregation (bottlenecks)
  • Aggregated vs. disaggregated vs. hybrid converged
  • Planning, comparing, benchmarking and decision-making
  • Data protection, management and east-west I/O traffic
  • Application and server north-south I/O traffic

Register today and please bring your questions. We’ll be on-hand to answer them during this event. We hope to see you there!

Update: If you missed the live event, it’s now available on-demand. You can also download the webcast slides.

Buffers, Queues, and Caches, Oh My!

Buffers and queues are part of every data center architecture and a critical factor in performance: they can improve it just as easily as they can hinder it. A well-implemented buffer can mean the difference between a finely tuned system and a confusing troubleshooting nightmare. Knowing how buffers and queues work in storage can help make your storage system shine.

However, there is something of a mystique surrounding these different data center components, as many people don’t realize just how they’re used and why. Join our team of carefully selected experts on February 14th for the next live webcast in our “Too Proud to Ask” series, “Everything You Wanted to Know About Storage But Were Too Proud To Ask – Part Teal: The Buffering Pod,” where we’ll demystify this very important aspect of data center storage. You’ll learn:

  • What are buffers, caches, and queues, and why should you care about the differences?
  • What’s the difference between a read cache and a write cache?
  • What does “queue depth” mean?
  • What’s a buffer, a ring buffer, and a host memory buffer, and why do the differences matter? (A simple ring buffer is sketched after this list.)
  • What happens when things go wrong?
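
To give you a taste before the webcast, here is a minimal, purely illustrative ring buffer in Python. It is not taken from any real storage stack; the `capacity` parameter simply plays the role that queue depth plays for an I/O queue, capping how many entries can be outstanding before the producer has to back off.

```python
class RingBuffer:
    """Minimal fixed-size FIFO ring buffer (illustrative only).

    `capacity` is analogous to queue depth: the maximum number of
    entries that can be outstanding at once.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._slots = [None] * capacity
        self._head = 0   # next slot to read
        self._tail = 0   # next slot to write
        self._count = 0  # entries currently buffered

    def put(self, item) -> bool:
        """Enqueue an item; return False if the buffer is full."""
        if self._count == self.capacity:
            return False  # producer must back off (back-pressure)
        self._slots[self._tail] = item
        self._tail = (self._tail + 1) % self.capacity
        self._count += 1
        return True

    def get(self):
        """Dequeue the oldest item, or None if the buffer is empty."""
        if self._count == 0:
            return None
        item = self._slots[self._head]
        self._head = (self._head + 1) % self.capacity
        self._count -= 1
        return item


# Example: a "queue depth" of 4 means at most 4 buffered entries at a time.
rb = RingBuffer(capacity=4)
for i in range(6):
    print(f"io-{i} accepted: {rb.put(f'io-{i}')}")
print(rb.get(), rb.get())  # drains the two oldest entries
```

Real systems implement this in hardware registers or kernel memory rather than Python, of course, but the full/empty bookkeeping and the back-pressure decision are the same ideas.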

These are just some of the topics we’ll be covering, and while it won’t be an exhaustive look at buffers, caches and queues, you can be sure that you’ll get insight into this very important, and yet often overlooked, part of storage design.

Register today and spend Valentine’s Day with our experts who will be on-hand to answer your questions on the spot!

Update: If you missed the live event, it’s now available on-demand. You can also download the webcast slides.

What’s Happening with 25GbE

In July 2014, IEEE 802.3 voted to form a Study Group for 25Gb/s Ethernet. There has been a lot of attention in the networking press lately about 25Gb/s Ethernet, but many people are asking what it is and how we got here. After all, 802.3 has already completed standards for 40Gb/s and 100Gb/s and is currently working on 400Gb/s, so from a pure speed perspective, starting a 25Gb/s project now does look like a step backwards.

(Warning: the following discussion contains excessive physical layer jargon.)

The Sweet Spot

25GbE as a port speed is attractive because it makes use of 25Gb/s per lane signaling technology that has been in development for years in the industry, culminating in the recent completion of 802.3bj, the standard for 100GbE over backplane or twinax copper that utilizes four parallel lanes of 25Gb/s signaling to achieve the 100Gb/s port speed.  Products implementing 25Gb/s signaling in CMOS technology are just starting to come to market, and the rate will likely be a sweet spot for many years, as higher rate signaling of 40Gb/s or 50Gb/s is still in early technology development phases.  The ability to implement this high speed I/O in CMOS is important because it allows combining high-speed I/O with many millions of logic gates needed to implement Ethernet switches, controllers, FPGAs, and microprocessors.  Thus specifying a MAC rate of 25Gb/s to utilize 25Gb/s serdes technology can enable product developers to optimize for both the lowest cost/bit and the highest overall bandwidth utilization of the switching fabric.
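
To make the lane arithmetic concrete, here is a small back-of-the-envelope calculation. The signaling rates below are the commonly quoted 64b/66b-encoded line rates for these lane types; the snippet is purely illustrative and not a statement about any particular product.

```python
# Back-of-the-envelope lane math for 64b/66b-encoded Ethernet lanes.
ENCODING = 64 / 66  # 64b/66b encoding efficiency

def port_data_rate(lanes: int, line_rate_gbd: float) -> float:
    """Usable data rate (Gb/s) for `lanes` lanes at `line_rate_gbd` GBd each."""
    return lanes * line_rate_gbd * ENCODING

# One 25G-class lane (as used by 802.3bj) carries ~25 Gb/s of data...
print(port_data_rate(1, 25.78125))   # ~25.0  -> a single-lane 25GbE port
# ...and four of them make a 100GbE port.
print(port_data_rate(4, 25.78125))   # ~100.0 -> 100GbE over 4 lanes
# Compare with the previous generation's 10G-class lanes.
print(port_data_rate(4, 10.3125))    # ~40.0  -> 40GbE over 4 lanes
```

The point is simply that one 25G-class lane delivers a quarter of a 100GbE port’s bandwidth, which is why a single-lane 25GbE port falls out of the 802.3bj serdes work almost for free.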

4-Lane to 1-Lane Evolution

To see how we got here and why 25Gb/s is interesting, it is useful to back up a couple of generations and look at 10Gb/s and 40Gb/s Ethernet. The earliest implementations of 10GbE relied on rather wide parallel electrical interfaces: XGMII and the 16-bit interface. Very soon after, however, 4-lane serdes-based interfaces became the norm, starting with XAUI (for chip-to-chip and chip-to-optical-module use), which was then adapted to longer reaches on twinax and backplane (10GBASE-CX4 and 10GBASE-KX4). Preceding 10GbE’s higher-volume ramp (~2009) was the specification and technical feasibility of 10Gb/s on a single electrical serial lane. XFI came first, followed by 10GBASE-KR (backplane) and SFI (as an optical module interface and for direct attach twinax cable using the SFP+ pluggable form factor). KR and SFI started to ramp around 2009 and still represent the highest-volume share of 10GbE ports in datacenter applications. The takeaway, in my opinion, is that single-lane interfaces helped the 10GbE volume ramp by reducing interconnect cost.

Now look forward to 40GbE and 100GbE. The initial standard, 802.3ba, was completed in 2010. While that specification was being developed, 10Gb/s serial interfaces were gaining traction, and consensus formed around the use of multiple 10Gb/s lanes in parallel to make the 40GbE and 100GbE electrical interfaces. For example, there is a great similarity between 10GBASE-KR and one lane of the four-lane 40GBASE-KR4 interface. Likewise, 10Gb/s SFI for twinax and optics in the SFP+ form factor is similar to a lane of the 40GbE equivalent interfaces for twinax and optics in the QSFP+ form factor.

But how does this get to 25Gb/s?

Due to the similarity in the technology needed to make 10GbE and 40GbE, it has become a common feature in Ethernet switch and NIC chips to implement a four-lane 40GbE port that can be configured to use each lane separately, yielding four 10GbE ports.

From there, it is a natural extension that 100GbE ports implemented using 802.3bj technology (4x25Gb/s) can also be configured to support four independent ports operating at 25Gb/s. This is such a natural conclusion that multiple companies are implementing 25GbE even though it is not yet a standard.
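
As a sketch of what that configurability looks like from a software point of view, consider the hypothetical `PortMode` description below. The mode names and the helper itself are illustrative inventions, not taken from any vendor’s SDK, but they capture the idea of one four-lane cage being carved up in different ways.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PortMode:
    """Hypothetical description of how a 4-lane cage is configured."""
    name: str
    lanes_per_port: int
    lane_rate_gbps: int

    @property
    def ports(self) -> int:
        # A 4-lane cage either forms one aggregate port or splits per lane.
        return 4 // self.lanes_per_port

    @property
    def port_speed_gbps(self) -> int:
        return self.lanes_per_port * self.lane_rate_gbps

# The same 4-lane cage, configured four different ways:
MODES = [
    PortMode("1x40GbE",  lanes_per_port=4, lane_rate_gbps=10),
    PortMode("4x10GbE",  lanes_per_port=1, lane_rate_gbps=10),
    PortMode("1x100GbE", lanes_per_port=4, lane_rate_gbps=25),
    PortMode("4x25GbE",  lanes_per_port=1, lane_rate_gbps=25),
]

for m in MODES:
    print(f"{m.name}: {m.ports} port(s) at {m.port_speed_gbps} Gb/s each")
```

Note that the 1x100GbE and 4x25GbE modes deliver the same total bandwidth per cage, which is exactly why reusing the 25Gb/s serdes for single-lane ports is so attractive.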

In some environments, the existence of a standard is not a priority. For example, when a large-scale datacenter of compute, storage and networking is architected, owned and operated by one entity, that entity validates the necessary configuration to meet its requirements. For the broader market, however, there is typically a requirement for multi-vendor interoperability across a diverse set of configurations and uses. This is where Ethernet and IEEE 802.3 have provided value to the industry for over 30 years.

Where’s the Application?

Given the nature of their environment, cloud datacenter operators are poised to be the early adopters of 25GbE. Will it also find a home in more traditional enterprise and storage markets? Time will tell, but in many environments ease of use, long shelf life, and multi-vendor interoperability are the priorities. For any environment, having the 25GbE specification maintained by IEEE 802.3 will help meet those needs.