NVMe®/TCP Q&A

The SNIA Networking Storage Forum (NSF) had an outstanding response to our live webinar, “NVMe/TCP: Performance, Deployment, and Automation.” If you missed the session, you can watch it on-demand and download a copy of the presentation slides at the SNIA Educational Library. Our live audience gave the presentation a 4.9 rating on a scale of 1-5, and they asked a lot of detailed questions, which our presenter, Erik Smith, Vice Chair of SNIA NSF, has answered here.

Q: Does the Centralized Discovery Controller (CDC) layer also provide drive access control or is it simply for discovery of drives visible on the network?

A: As defined in TP8010, the CDC only provides transport layer discovery. In other words, the CDC allows a host to discover transport layer information (IP address, port, NQN) about the subsystem ports (on the array) that the host has been allowed to communicate with. Provisioning storage volumes to a particular host is additional functionality that could be added to an implementation of the CDC. For example, Dell has a CDC implementation referred to as SmartFabric Storage Software (SFSS).
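In practice, a host retrieves exactly this transport-layer information by querying the discovery controller. As an illustrative sketch using the Linux nvme-cli (the address below is a placeholder; 8009 is the well-known NVMe/TCP discovery port):

```shell
# Query a (centralized) discovery controller over TCP for the
# subsystem ports this host is permitted to reach.
# 192.0.2.10 is a placeholder CDC address.
nvme discover -t tcp -a 192.0.2.10 -s 8009

# Each returned discovery log entry carries the transport-layer
# details the CDC provides: transport type, address (traddr),
# TCP port (trsvcid), and subsystem NQN (subnqn). Mapping of
# storage volumes to the host is handled on the array, not by
# the discovery log itself.
```

Note that the output is purely connectivity information; which namespaces the host actually sees after connecting is governed by the array's access control.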

Q: Can you provide some examples of companies that provide CDC and drive access control functionalities? Read More

Automating Discovery for NVMe IP-based SANs

NVMe® IP-based SANs (including transports such as TCP, RoCE, and iWARP) have the potential to provide significant benefits in application environments ranging from the Edge to the Data Center. However, before we can fully unlock the potential of the NVMe IP-based SAN, we first need to address the manual and error-prone process that is currently used to establish connectivity between NVMe Hosts and NVM subsystems. This process requires administrators to explicitly configure each Host to access the appropriate NVM subsystems in their environment. In addition, any time an NVM Subsystem interface is added or removed, a Host administrator may need to explicitly update the configuration of impacted hosts to reflect this change.
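To make the manual process concrete, here is a hedged sketch of what this per-host configuration looks like today with the Linux nvme-cli (all addresses and NQNs are placeholders):

```shell
# On EVERY host, explicitly connect to EACH NVM subsystem
# interface it should use (placeholder address and NQN):
nvme connect -t tcp -a 192.0.2.21 -s 4420 \
    -n nqn.2014-08.org.example:subsys1

# Alternatively, persist discovery entries so `nvme connect-all`
# can replay them at boot (file used by nvme-cli on most Linux
# distributions):
cat >> /etc/nvme/discovery.conf <<'EOF'
--transport=tcp --traddr=192.0.2.21 --trsvcid=8009
EOF

# If a subsystem interface is added, removed, or re-addressed,
# an administrator must revisit this configuration on every
# impacted host -- the decentralized step that does not scale.
```

The point of the sketch is the repetition: the same hand-maintained entries must exist, and stay correct, on every host in the SAN.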

Due to the decentralized nature of this configuration process, using it to manage connectivity for more than a few Host and NVM subsystem interfaces is impractical and adds complexity when deploying an NVMe IP-based SAN in environments that require a high degree of automation.

Read More

Beyond NVMe-oF Performance Hero Numbers

When it comes to selecting the right NVMe over Fabrics™ (NVMe-oF™) solution, one should look beyond test results that demonstrate NVMe-oF’s dramatic reduction in latency and consider the other, more important, questions such as “How does the transport really impact application performance?” and “How does the transport holistically fit into my environment?”

To date, the focus has been on specialized fabrics: RDMA (e.g., RoCE) because it provides the lowest possible latency, and Fibre Channel because it is generally considered to be the most reliable. However, with the introduction of NVMe-oF/TCP, this conversation must be expanded to also include considerations regarding scale, cost, and operations. That’s why the SNIA Networking Storage Forum (NSF) is hosting a webcast series that will dive into answering these questions beyond the standard answer “it depends.”

Read More

Author of NVMe™/TCP Spec Answers Your Questions

900 people have already watched our SNIA Networking Storage Forum webcast, What NVMe™/TCP Means for Networked Storage, where Sagi Grimberg, lead author of the NVMe/TCP specification, and J Metz, Board Member for SNIA, explained what NVMe/TCP is all about. If you haven’t seen the webcast yet, check it out on-demand.

Like any new technology, there’s no shortage of areas for potential confusion or questions. In this FAQ blog, we try to clear up the confusion and answer those questions.

Q. Who is responsible for updating the NVMe Host Driver?

A. We assume you are referring to the Linux host driver (independent OS software vendors are responsible for developing their own drivers). As with any device driver and/or subsystem in Linux, responsibility for maintenance lies with the maintainer(s) listed in the MAINTAINERS file. Responsibility for contributing is shared by all the community members.
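For the Linux driver specifically, the kernel's own tooling is the authoritative way to find those maintainers. As a sketch, run from the root of a Linux kernel source tree:

```shell
# List the maintainers responsible for the NVMe/TCP host driver:
./scripts/get_maintainer.pl drivers/nvme/host/tcp.c

# Or read the relevant section of the MAINTAINERS file directly:
grep -A 10 'NVM EXPRESS DRIVER' MAINTAINERS
```

Patches and questions about the driver go to the addresses and mailing list those commands report.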

Q. What is the realistic timeframe to see a commercially available NVMe over TCP driver for targets? Is one year from now (2020) fair? Read More

When NVMe™ over Fabrics Meets TCP

In the storage world, NVMe™ is arguably the hottest thing going right now. Go to any storage conference, whether vendor-related or vendor-neutral, and you’ll see NVMe as the latest and greatest innovation. It stands to reason, then, that when you want to run NVMe over a network, you must understand NVMe over Fabrics (NVMe-oF). Meanwhile, TCP is by far the most popular networking transport protocol both for storage and non-storage traffic.

TCP – the long-standing mainstay of networking – is the newest transport technology to be approved by the NVM Express® organization, enabling NVMe/TCP. This can mean really good things for storage and storage networking – but what are the tradeoffs?

With any new technology, though, there can still be a bit of confusion. No technology is a panacea, and with any new development there will always be a need to gauge where it is best used (like a tool in a toolbox).

Learn more on January 22nd when the SNIA Networking Storage Forum hosts a live webcast, What NVMe™/TCP Means for Networked Storage. In this webcast, we’ve brought together the lead author of the NVMe/TCP specification, Sagi Grimberg, and J. Metz, member of the SNIA and NVMe Boards of Directors, to discuss:

  • What is NVMe/TCP?
  • How does NVMe/TCP work?
  • What are the trade-offs?
  • What should network administrators know?
  • What kind of expectations are realistic?
  • What technologies can make NVMe/TCP work better?
  • And more…

Obviously, we can’t cover the entire world of NVMe and TCP networking in an hour, but we can start to raise the questions – and approach the answers – that must be addressed in order to make informed decisions. Speaking of questions, bring yours. Sagi and J. will be answering them on the 22nd. Register today to save your spot.