A Q&A on Discovery Automation for NVMe-oF IP-Based SANs

To fully unlock the potential of NVMe® IP-based SANs, we first need to address the manual, error-prone process that is currently used to establish connectivity between NVMe Hosts and NVM subsystems. Several leading companies in the industry have joined together through NVM Express to collaborate on innovations to simplify and automate this discovery process.

This was the topic of discussion at our recent SNIA Networking Storage Forum webcast “NVMe-oF: Discovery Automation for IP-based SANs” where our experts, Erik Smith and Curtis Ballard, took a deep dive into the work that is being done to address these issues. If you missed the live event, you can watch it on demand here and get a copy of the slides. Erik and Curtis did not have time to answer all the questions during the live presentation. As promised, here are answers to them all.

Q. Is the Centralized Discovery Controller (CDC) highly available, and is this visible to the hosts?  Do they see a pair of CDCs on the network and retry requests to a secondary if a primary is not available?

Read More

Automating Discovery for NVMe IP-based SANs

NVMe® IP-based SANs (including transports such as TCP, RoCE, and iWARP) have the potential to provide significant benefits in application environments ranging from the Edge to the Data Center. However, before we can fully unlock the potential of the NVMe IP-based SAN, we first need to address the manual, error-prone process that is currently used to establish connectivity between NVMe Hosts and NVM subsystems. This process includes administrators explicitly configuring each Host to access the appropriate NVM subsystems in their environment. In addition, any time an NVM Subsystem interface is added or removed, a Host administrator may need to explicitly update the configuration of impacted hosts to reflect this change.

Due to the decentralized nature of this configuration process, using it to manage connectivity for more than a few Host and NVM subsystem interfaces is impractical and adds complexity when deploying an NVMe IP-based SAN in environments that require a high degree of automation.
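The scale problem can be made concrete with a quick back-of-the-envelope sketch (the host and change counts below are hypothetical, chosen purely for illustration): with explicit per-host configuration, a single subsystem interface change can require touching every impacted host, while a centralized discovery service needs the change registered only once.

```python
# Illustrative sketch with hypothetical numbers: contrasting per-host
# manual configuration with a centralized discovery service.

def manual_config_updates(hosts: int, interface_changes: int) -> int:
    """Each subsystem interface add/remove may require an explicit
    configuration update on every impacted host."""
    return hosts * interface_changes

def centralized_updates(interface_changes: int) -> int:
    """With centralized discovery, each change is registered once and
    hosts learn about it through the discovery service."""
    return interface_changes

# A modest SAN: 100 hosts, 10 interface changes in a maintenance window.
print(manual_config_updates(100, 10))  # 1000 per-host touch points
print(centralized_updates(10))         # 10 central registrations
```

The multiplicative growth on the manual side is exactly why the decentralized process becomes impractical beyond a few interfaces.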

Read More

Q&A: Security of Data on NVMe-oF

Ensuring the security of data on NVMe® over Fabrics was the topic of our SNIA Networking Storage Forum (NSF) webcast “Security of Data on NVMe over Fabrics, the Armored Truck Way.” During the webcast our experts outlined industry trends, potential threats, security best practices and much more. The live audience asked several interesting questions and here are answers to them.

Q. Does use of strong authentication and network encryption ensure I will be compliant with regulations such as HIPAA, GDPR, PCI, CCPA, etc.?

A. Not by themselves. Proper use of strong authentication and network encryption will reduce the risk of data theft or improper data access, which can help achieve compliance with data privacy regulations. But full compliance also requires establishment of proper processes, employee training, system testing and monitoring. Compliance may also require regular reviews and audits of systems and processes plus the involvement of lawyers and compliance consultants.

Q. Does using encryption on the wire such as IPsec, FC_ESP, or TLS protect against ransomware, man-in-the-middle attacks, or physical theft of the storage system?

Read More

Protecting NVMe over Fabrics Data from Day One, The Armored Truck Way

With ever increasing threat vectors both inside and outside the data center, a compromised customer dataset can quickly result in a torrent of lost business data, eroded trust, significant penalties, and potential lawsuits. Potential vulnerabilities exist at every point when scaling out NVMe® storage, which requires data to be secured every time it leaves a server or the storage media, not just when leaving the data center. NVMe over Fabrics is poised to be one of the most dominant storage transports of the future, and securing and validating the vast amounts of data that will traverse this fabric is not just prudent, but paramount.

Read More

A Q&A on NVMe-oF Performance Hero Numbers

Last month, the SNIA Networking Storage Forum (NSF) hosted a live webcast “NVMe-oF: Looking Beyond Performance Hero Numbers.” It was extremely popular; in fact, it has been viewed almost 800 times in just two weeks! If you missed it, it’s available on-demand, along with the presentation slides, at the SNIA Educational Library. Our audience asked several great questions during the live event and our expert presenters, Erik Smith, Rob Davis and Nishant Lodha, have kindly answered them all here.

Q. There are initiators for Linux but not for Windows? What are my options to connect NVMe-oF to Windows Server?

A. Correct. For many of the benchmarks, a standard Linux based initiator was used as it provided a consistent platform on which several of the compared NVMe Fabrics are supported/available.

Regarding what Fabrics are available on Microsoft Windows, it is best to check with your Microsoft representative for the most current information. As far as we are aware, Microsoft does not natively support NVMe-oF, but there are third-party Windows drivers available from other vendors; just search the web for “Windows NVMe-oF Initiator.”

In addition, some SmartNICs/DPUs/programmable NICs have the ability to terminate NVMe-oF and present remote NVMe devices as local; such options can be considered for your Windows deployment.

Read More

Beyond NVMe-oF Performance Hero Numbers

When it comes to selecting the right NVMe over Fabrics™ (NVMe-oF™) solution, one should look beyond test results that demonstrate NVMe-oF’s dramatic reduction in latency and consider the other, more important, questions such as “How does the transport really impact application performance?” and “How does the transport holistically fit into my environment?”

To date, the focus has been on specialized fabrics like RDMA (e.g., RoCE) because they provide the lowest possible latency, as well as Fibre Channel because it is generally considered to be the most reliable. However, with the introduction of NVMe-oF/TCP this conversation must be expanded to also include considerations regarding scale, cost, and operations. That’s why the SNIA Networking Storage Forum (NSF) is hosting a webcast series that will dive into answering these questions beyond the standard answer “it depends.”

Read More

Optimizing NVMe over Fabrics Performance Q&A

Almost 800 people have already watched our webcast “Optimizing NVMe over Fabrics Performance with Different Ethernet Transports: Host Factors” where SNIA experts covered the factors impacting different Ethernet transport performance for NVMe over Fabrics (NVMe-oF) and provided data comparisons of NVMe over Fabrics tests with iWARP, RoCEv2 and TCP. If you missed the live event, watch it on-demand at your convenience.

The session generated a lot of questions, all answered here in this blog. In fact, many of the questions have prompted us to continue this discussion with future webcasts on NVMe-oF performance. Please follow us on Twitter @SNIANSF for upcoming dates.

Q. What factors will affect the performance of NVMe over RoCEv2 and TCP when the network between host and target is longer than typical Data Center environment? i.e., RTT > 100ms
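The full answer is in the webcast Q&A, but the dominant effect at long round-trip times is easy to sketch: the number of commands that can be outstanding on a queue, and the bandwidth-delay product of the link, bound what a single connection can deliver. A back-of-the-envelope illustration (the queue depth, link speed, and RTT below are hypothetical, not measured results):

```python
# Back-of-the-envelope effect of a long RTT on a single NVMe-oF queue
# (hypothetical numbers for illustration, not measured results).

def iops_ceiling(queue_depth: int, rtt_seconds: float) -> float:
    """At most queue_depth commands can complete per round trip."""
    return queue_depth / rtt_seconds

def bdp_bytes(link_gbps: float, rtt_seconds: float) -> float:
    """Bandwidth-delay product: bytes that must be in flight
    to keep the link full."""
    return link_gbps * 1e9 / 8 * rtt_seconds

# A queue depth of 128 at a 100 ms RTT caps that queue near 1,280 IOPS...
print(iops_ceiling(128, 0.1))
# ...and a 25 Gb/s link at 100 ms RTT needs ~312 MB in flight to stay full.
print(bdp_bytes(25, 0.1) / 1e6)
```

Whichever transport is used, overcoming a 100 ms RTT means running many deep queues in parallel, which is why long-distance behavior differs so much from in-data-center benchmarks.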

Read More

Optimizing NVMe over Fabrics Performance with Different Ethernet Transports: Host Factors

NVMe over Fabrics technology is gaining momentum and getting more traction in data centers, but there are three kinds of Ethernet-based NVMe over Fabrics transports: iWARP, RoCEv2 and TCP.

How do we optimize NVMe over Fabrics performance with different Ethernet transports? That will be the discussion topic at our SNIA Networking Storage Forum Webcast, “Optimizing NVMe over Fabrics Performance with Different Ethernet Transports: Host Factors” on September 16, 2020.

Setting aside the considerations of network infrastructure, scalability, security requirements and complete solution stack, this webcast will explore the performance of different Ethernet-based transports for NVMe over Fabrics at the detailed benchmark level. We will show three key performance indicators: IOPS, Throughput, and Latency with different workloads including: Sequential Read/Write, Random Read/Write, and 70% Read/30% Write, all with different data sizes. We will compare the results of three Ethernet-based transports: iWARP, RoCEv2 and TCP.
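As a reminder of how these indicators relate (simple arithmetic, not results from the webcast; the IOPS figures below are made up for illustration): throughput is IOPS multiplied by I/O size, and a mixed read/write workload blends the pure-read and pure-write rates harmonically, since it is the time per I/O that adds.

```python
# How the three KPIs relate (illustrative arithmetic, not webcast data).

def throughput_mb_s(iops: float, io_size_bytes: int) -> float:
    """Throughput (MB/s) = IOPS * I/O size in bytes / 1e6."""
    return iops * io_size_bytes / 1e6

def mixed_iops(read_iops: float, write_iops: float,
               read_frac: float = 0.7) -> float:
    """Blend a mixed workload (default 70% read / 30% write) from
    pure-read and pure-write rates; time per I/O adds, so the blend
    is harmonic rather than a simple average."""
    return 1.0 / (read_frac / read_iops + (1.0 - read_frac) / write_iops)

# 4 KB I/Os at 200k IOPS correspond to about 819 MB/s of throughput.
print(throughput_mb_s(200_000, 4096))
# A 70/30 mix of 300k read IOPS and 100k write IOPS lands at 187,500 IOPS.
print(round(mixed_iops(300_000, 100_000)))
```

This is also why small-block results are usually quoted in IOPS and large-block results in throughput: the same transport behavior shows up in different units.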

Read More

Notable Questions on NVMe-oF 1.1

At our recent SNIA Networking Storage Forum (NSF) webcast, “Notable Updates in NVMe-oF™ 1.1,” we explored the latest features of NVMe over Fabrics (NVMe-oF), discussing what’s new in the NVMe-oF 1.1 release, support for CMB and PMR, managing and provisioning NVMe-oF devices with SNIA Swordfish™, and FC-NVMe-2. If you missed the live event, you can watch it here. Our presenters received many interesting questions on NVMe-oF and here are answers to them all:

Q. Is there an implementation of NVMe-oF with direct CMB access?

A. The Controller Memory Buffer (CMB) was introduced in NVMe 1.2 and first supported in the NVMe-oF 1.0 specification. It’s supported if the storage vendor has implemented this within the hardware and the network supports it. We recommend that you ask your favorite vendor if they support the feature.

Q. What is the difference between the PMR in an NVMe device and persistent memory in general?

Read More

The Latest on NVMe-oF 1.1

Since its introduction, NVMe over Fabrics (NVMe-oF™) has not been resting on any laurels. Work has been ongoing, and several updates are worth mentioning. And that’s exactly what the SNIA Networking Storage Forum will be doing on June 30th, 2020 at our live webcast, Notable Updates in NVMe-oF 1.1.

There is more to a technology than its core standard, of course, and many different groups have been hard at work at improving upon, and fleshing out, many of the capabilities related to NVMe-oF.  In this webcast, we will explore a few of these projects and how they relate to implementing the technology. In particular, this webcast will be covering:

  • A summary of new items introduced in NVMe-oF 1.1
  • Updates regarding enhancements to FC-NVMe-2
  • How SNIA’s provisioning model helps NVMe-oF Ethernet Bunch of Flash (EBOF) devices
  • Managing and provisioning NVMe-oF devices with SNIA Swordfish

Register today for a look at what’s new in NVMe-oF. We hope to see you on June 30th.