The Life of a Storage Packet

Keeping storage as close to the application as possible (and reasonable) is important, but the type of storage can make a big difference for performance as well as for the workloads it can support. Starting with the basics and working up in complexity, find out how storage really works in this first "Packet Walk" installment of the "Napkin Dialogues" series. Warning: You're on your own when tipping the pizza delivery person!

Download (PDF, 1.71MB)

Block Storage in OpenStack Q&A

The team at SNIA-ESF and I were very pleased with how many people attended our live Webcast, "Block Storage in the Open Source Cloud called OpenStack." If you missed it, please check it out on demand. We had several great questions during the live event, and as promised, here are answers to all of them. If you have additional questions, please feel free to comment on this blog.

Q. How is the support for OpenStack, if we hit a roadblock or need some features?

A. The OpenStack community has many avenues for contacting developers for support. The official place to report issues, file bugs or ask for new features is Launchpad: https://launchpad.net/openstack. It is the central place for all of the many OpenStack projects to file bugs or feature requests. This is also where every OpenStack project tracks its current release cycle and all of its features, called blueprints. Another good source of information is the public mailing lists; a good place to start is here: https://wiki.openstack.org/wiki/Mailing_Lists. Finally, developers are also on the public Internet Relay Chat channels associated with their projects. The developers are live and interactive on each of the channels. You can find information about the IRC system that OpenStack developers use here: https://wiki.openstack.org/wiki/IRC.

Q. Why was Python chosen as the programming language? Which version of Python is used as there are incompatibilities between versions?

A. The short answer is that Python is a mature language that lends itself to rapid development and deployment and has a wide variety of publicly available libraries. The current released version of OpenStack uses Python 2.7. The OpenStack community is making efforts to ensure that we can eventually migrate to Python 3.x, and new libraries that are being developed have to be Python 3.x compatible.
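
To ease that migration, OpenStack projects generally write code that runs under both interpreters, typically with the help of the six compatibility library. Below is a minimal, hedged sketch of that style; the helper function is illustrative only and is not code from an OpenStack project.

```python
# A minimal sketch of Python 2/3-compatible code in the style OpenStack
# projects commonly use via the "six" library; the helper below is
# illustrative only, not code from an OpenStack project.
import six
from six.moves.urllib import parse as urlparse


def build_volume_query(params):
    """Build a URL query string that works on Python 2.7 and Python 3.x."""
    # six.iteritems() papers over the dict.iteritems()/items() split
    # between Python 2 and Python 3.
    filtered = dict((k, v) for k, v in six.iteritems(params) if v is not None)
    return urlparse.urlencode(filtered)


print(build_volume_query({'status': 'available', 'limit': 10, 'marker': None}))
```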

Q. Is it possible to replicate the backed up volumes at the OpenStack layer or do you defer to the back end array for data replication?

A. Currently, there is no built-in support for volume replication in Cinder. The Cinder community is actively working on how to implement volume replication for the next release, Liberty, which will ship in the fall of 2015. As with any major new feature in Cinder, the community has to design the core of the feature so that it works consistently across the 40+ vendor arrays. As array support grows, that up-front design becomes more important and more difficult at the same time. We have a specification in progress that will get us closer to implementing replication.

Q. Who, or what, creates the FC zones?

A. In Cinder, the block storage project, the component that creates and manages Fibre Channel zones is called the Fibre Channel Zone Manager. A good document to read up on the zone manager is here: http://www.brocade.com/downloads/documents/at_a_glance/fc-zone-manager-ag.pdf. The official OpenStack documentation on the zone manager is here: http://docs.openstack.org/kilo/config-reference/content/section_fc-zoning.html. The zone manager is automatically called after a Cinder Fibre Channel volume driver exports its volume from the array. The zone manager then adds the zones requested by the driver to make the volume available to the virtual machine.

Q. Does the Cinder and Nova attachment process work over VLANs?

A. Yes. It’s entirely dependent on how the OpenStack admin deploys the Nova and Cinder services. As long as the Nova hosts can see the Cinder services and arrays behind the Cinder volume drivers, then it should just work.

Q. Is the FCZM a native component of the Cinder project? Or is it an add-on?

A. As I mentioned earlier, the Fibre Channel zone manager is part of the Cinder project. There have been some discussions within the Cinder community about possibly breaking the zone manager out into its own Python library, in which case it would be available to any Python project. Currently, it's built into Cinder itself.

Q. Does Cinder involve itself in the I/O path as well or is it only the control path responsible for allocating storage?

A. Cinder is almost entirely a control-plane provisioning mechanism. There are a few operations where the Cinder services actually do I/O. When a user wants to create an image from a volume, Cinder attaches the volume to itself and then copies the bytes from the volume into an image. Cinder also has a backup service that allows a user to back up a volume to an external service; in that case, the Cinder backup service directs the copying of the bytes into the configured backup storage. When Cinder attaches a volume to a Nova VM or a bare metal node, Cinder is not involved in any I/O. Cinder's job is simply to ensure that the volume is exported from the back-end array and made available for Nova to see. After that, it's entirely up to the transport protocol (iSCSI, FC, NFS, etc.) to carry the I/O for the volume.
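
As a concrete, hedged illustration of that division of labor, here is roughly what the control-plane calls look like from the Python client libraries. The credentials, instance UUID and device name are placeholders, and exact client constructor signatures vary between releases of python-cinderclient and python-novaclient.

```python
# A hedged sketch of the control-plane flow described above, using the
# python-cinderclient and python-novaclient libraries. Credentials and IDs
# are placeholders; Cinder only provisions and exports the volume, while the
# data path is handled by the transport (iSCSI, FC, NFS, etc.).
from cinderclient import client as cinder_client
from novaclient import client as nova_client

AUTH = dict(username='demo', api_key='secret', project_id='demo',
            auth_url='http://keystone.example.com:5000/v2.0')

cinder = cinder_client.Client('2', **AUTH)
nova = nova_client.Client('2', **AUTH)

# Control plane: ask Cinder to provision a 10 GB volume on a back-end array.
volume = cinder.volumes.create(size=10, name='demo-volume')

# Control plane: ask Nova to attach it to an instance. Nova calls back into
# Cinder to export the volume; the actual I/O never flows through Cinder.
nova.volumes.create_server_volume('my-instance-uuid', volume.id, '/dev/vdb')
```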

Q. Is Nova aware of the LUN usage %?

A. Nova doesn’t track statistics against the volumes that it has attached to its virtual machines.

Q. Where do the vendor specific parts of Cinder fit in? Are there vendor specific “volume managers”?

A. The vendor-specific components of Cinder live in what are called Cinder volume drivers. Those drivers are really nothing more than a Python module that conforms to a volume driver API defined by the Cinder volume manager. You can get an idea of the features the drivers can support from the Cinder Support Matrix here:

https://wiki.openstack.org/wiki/CinderSupportMatrix
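
For a sense of what that driver API looks like, below is a minimal, illustrative skeleton. Real drivers subclass the base classes in cinder.volume.driver and implement many more methods; the _array_* helpers here are hypothetical stand-ins for vendor-specific communication with the back end.

```python
# An illustrative skeleton of a Cinder volume driver. The class name and the
# _array_* helper calls are hypothetical; real drivers implement many more
# methods defined by the volume driver API.
from cinder.volume import driver


class ExampleISCSIDriver(driver.ISCSIDriver):
    """Hypothetical driver for an 'Example' iSCSI array."""

    def create_volume(self, volume):
        # Ask the back-end array (via REST, SSH, etc.) to carve out a LUN.
        self._array_create_lun(volume['name'], volume['size'])

    def delete_volume(self, volume):
        self._array_delete_lun(volume['name'])

    def initialize_connection(self, volume, connector):
        # Export the LUN to the host described by 'connector' and return the
        # details Nova needs to build the session (target IQN, portal, LUN).
        target = self._array_export_lun(volume['name'], connector['initiator'])
        return {'driver_volume_type': 'iscsi', 'data': target}

    def terminate_connection(self, volume, connector, **kwargs):
        self._array_unexport_lun(volume['name'], connector['initiator'])
```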

Q. If Cinder is only for control plane, which project in OpenStack is for data path?

A. There isn’t a project in OpenStack that manages the data path for volumes.

Q. Is there a volume detachment process as well and when does that come into play?

A. My presentation focused primarily on one aspect of the interaction between Nova and Cinder: volume attachment. I briefly discussed the volume detachment process, which works in basically the same way, in reverse. An end user asks Nova to detach the volume. Nova removes the volume from the VM, removes the SCSI device from the compute host itself, and then tells Cinder to terminate the connection from the array to the compute host.

Q. If a virtual machine is moved to a different physical machine, how’s that handled in Cinder?

A. This process in OpenStack is called live migration. Nova does all of the work of moving the VM's data from one host to another. One facet of that is migrating any Cinder volumes that may be attached to the VM. Nova understands which volumes are attached to the VM and knows which of those volumes are Cinder volumes. When the VM is migrated, Nova coordinates with Cinder to ensure that all volumes are attached to the destination host and VM, and that they are detached from the originating compute host.
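
From an operator's point of view the whole coordination is triggered by a single Nova call, for example through python-novaclient as sketched below. The credentials, instance UUID and destination host are placeholders, and argument names can differ between client releases.

```python
# A hedged sketch: live migration is requested through Nova, and Nova/Cinder
# coordinate the re-attachment of any Cinder volumes behind the scenes.
# Credentials, the instance UUID and the host name are placeholders.
from novaclient import client as nova_client

nova = nova_client.Client('2', 'demo', 'secret', 'demo',
                          auth_url='http://keystone.example.com:5000/v2.0')

# Live-migrate the VM; attached Cinder volumes are exported to the destination
# compute host and detached from the source host as part of the operation.
nova.servers.live_migrate('my-instance-uuid', 'destination-host',
                          block_migration=False, disk_over_commit=False)
```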

Q. Why doesn't Cinder use the SNIA SMI-S API to manage/consume SAN, NAS or switch fabrics instead of each storage vendor building Cinder drivers? SMI-S already covers all of the Cinder scenarios for FC, iSCSI, SAS, etc.

A. Cinder itself doesn't really manage the storage array communication. It's entirely up to the individual vendor drivers to decide how best to communicate with their storage arrays. The HP 3PAR volume driver uses REST to communicate with the array, as do several other vendor drivers in Cinder. Other drivers use SSH. There are no strict rules on how a Cinder volume driver communicates with its back end, which allows vendors to make the best use of their array interfaces as they see fit.
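
To make that concrete, here is a hedged sketch of the two common communication styles. The management endpoint, URL path, credentials and CLI command are hypothetical and do not correspond to any specific vendor's interface.

```python
# A hedged sketch of back-end communication styles a volume driver might use.
# The endpoint, path, credentials and CLI command below are hypothetical.
import json

import paramiko   # SSH-style management CLI
import requests   # REST-style management API

ARRAY = 'array.example.com'

# REST: create a 10 GB LUN through a (hypothetical) management endpoint.
resp = requests.post('https://%s/api/luns' % ARRAY,
                     data=json.dumps({'name': 'demo-vol', 'size_gb': 10}),
                     auth=('admin', 'secret'), verify=False)
resp.raise_for_status()

# SSH: the same operation through a (hypothetical) array CLI.
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(ARRAY, username='admin', password='secret')
stdin, stdout, stderr = ssh.exec_command('lun create demo-vol 10g')
print(stdout.read())
ssh.close()
```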

Q. Are there Horizon extensions or extension points for showing what physical resources your storage is coming from? Or is that something a storage vendor would need to implement?

A. Horizon doesn't really know much about where storage comes from other than that it's a Cinder volume. Horizon uses the available Cinder APIs to talk to Cinder to do work and fetch information about Cinder's resources. I know of a few vendors that are writing Horizon plugins that add extra capabilities to view more detailed information about their specific arrays. As of today, though, there is no API in Cinder to describe the internals of a volume on the vendor's array.

NFS 4.2 Q&A

We received several great questions at our What’s New in NFS 4.2 Webcast. We did not have time to answer them all, so here is a complete Q&A from the live event. If you missed it, it’s now available on demand.

Q. Are there commercial Linux or windows distributions available which have adopted pNFS?

A. Yes. Red Hat RHEL 6.2, SUSE SLES 11.3 and Ubuntu 14.10 all support a pNFS-capable client. There aren't any pNFS servers on Linux so far, but commercial systems such as NetApp (file pNFS), EMC (block pNFS), Panasas (object pNFS) and perhaps others provide pNFS servers. Microsoft Windows has no client or server support for pNFS.

Q. Are we able to prevent it from going back to NFS v3 if we want to ensure file lock management?

A. An NFSv4 mount (mount -t nfs4) won't fall back to an NFSv3 mount. See man mount for details.

Q. Can pNFS metadata servers forward clients to other metadata servers?

A. No, not currently.

Q. Can pNFS provide something similar to synchronous writes, so that data is instantly safe in at least two locations?

A. No; that kind of replication is a feature of the data servers. It’s not covered by the NFSv4.1 or pNFS specification.

Q. Does hole punching depend on underlying file system in server?

A. If the underlying file system on the server supports it, then hole punching will be supported. The client and server do this silently; a user of the mount isn't aware that it's happening.
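
For illustration, this is roughly what hole punching looks like from an application on a Linux client. It is a hedged sketch assuming glibc and a file system/server pair that supports the operation; the file path is a placeholder, and ctypes is used because older Python versions don't expose the punch-hole flag directly.

```python
# A hedged sketch of hole punching on Linux via fallocate(2). The path is a
# placeholder; if the underlying file system or NFS server doesn't support
# the operation, the call simply fails with an errno.
import ctypes
import ctypes.util
import os

FALLOC_FL_KEEP_SIZE = 0x01
FALLOC_FL_PUNCH_HOLE = 0x02

libc = ctypes.CDLL(ctypes.util.find_library('c'), use_errno=True)
libc.fallocate.argtypes = [ctypes.c_int, ctypes.c_int,
                           ctypes.c_longlong, ctypes.c_longlong]

fd = os.open('/mnt/nfs/bigfile', os.O_RDWR)
# Deallocate 1 MiB starting at offset 4 MiB; the file size is unchanged.
ret = libc.fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                     4 * 1024 * 1024, 1024 * 1024)
if ret != 0:
    raise OSError(ctypes.get_errno(), 'fallocate failed; the server or file '
                                      'system may not support hole punching')
os.close(fd)
```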

Q. How are Ethernet Trunks formed? By the OS or by the NFS client or NFS Server or other?

A. Currently, they’re not! Although trunking is specified and is optional, there are no servers that support it.

Q. How do you think vVols could impact NFS and VMware’s use of NFS?

A. VMware has committed to supporting NFSv4.1 and there is currently support in vSphere 6. vVols adds another opportunity for clients to inform the server with IO hints; it is an area of active development.

Q. In pNFS, must the callback to the client come from the metadata server that was originally contacted?

A. Yes, the callback originates from the MDS.

Q. Is the hole punched in block units?

A. That depends on the server.

Q. Is there any functionality like SMB continuous availability?

A. Since it’s a function of the server, and much of the server’s capabilities are unspecified in NFSv4, the answer is – it depends. It’s a question for the vendor of your server.

Q. NFS has historically not been used in large HPC cluster environments for cluster-wide storage, for performance reasons. Do you see these changes as potentially improving this situation?

A. Yes. There’s much work being done on the performance side, and the cluster parallelism that pNFS brings will have it outperform NFSv3 once clients employ more of its capabilities.

Q. Speaking of Amazon's adoption of NFSv4.0: do you have any insight into, or a guess about, why Amazon did not select NFSv4.1, which has many more performance and scalability advantages over NFSv4.0?

A. No, none at all.

New Webcast: Block Storage in the Open Source Cloud called OpenStack

On June 3rd at 10:00 a.m., SNIA-ESF will present its next live Webcast, "Block Storage in the Open Source Cloud called OpenStack." Storage is a major component of any cloud computing platform. OpenStack is one of the largest and most widely supported open source cloud computing platforms in the market today. The OpenStack block storage service (Cinder) provides persistent block storage resources that OpenStack Nova compute instances can consume.

I will be moderating this Webcast, presented by a core member of the OpenStack Cinder team, Walt Boring. Join us, as we’ll dive into:

  • Relevant components of OpenStack Cinder
  • How block storage is managed by OpenStack
  • What storage protocols are currently supported
  • How it all works together with compute instances

I encourage you to register now to block your calendar. This will be a live and interactive Webcast, so please bring your questions. I look forward to "seeing" you on June 3rd.

Next Webcast: The 2015 Ethernet Roadmap for Networked Storage

The ESF is excited to announce our next live Webcast, “The 2015 Ethernet Roadmap for Networked Storage.”

For over three decades, Ethernet has advanced through simple "powers-of-ten" speed increases, and this model has served the industry well. Ethernet is now changing in big ways, and the Ethernet Alliance has captured the latest changes in the 2015 Ethernet Roadmap.

On June 30th at 10:00 a.m. PT, an expert panel consisting of Scott Kipp, President of the Ethernet Alliance; David Chalupsky, Chair of the IEEE P802.3bq/bz Task Forces and the Ethernet Alliance BASE-T Subcommittee; and myself will present the Ethernet Alliance's 2015 Ethernet Roadmap for the networking technology that underlies most future networked storage.

SNIA has focused on protocols and usage models and has more or less taken Ethernet for granted. The biggest technology disruption in the storage space is the emergence of Non-Volatile Memory (NVM), flash in particular, into the mainstream. NVM increasingly moves system bottlenecks from the storage subsystem to the network. Developments in NVM, most recently 3D flash, assure that cost per GB will continue its aggressive decline and that demand for bandwidth will go up. As NVM becomes more prevalent, the roadmap for Ethernet becomes increasingly important to the storage networking community.

This will be a live and interactive session. I encourage you to register now and bring your questions for our experts. I hope to see you on June 30th.

Next Webcast: What’s New in NFS 4.2

We're excited to announce our next ESF Webcast on NFSv4.2. With NFSv4.1 implemented on several commercial NFS systems, an established Linux client and a new pNFS Linux server, NFS usage continues to grow in the IT industry. NFSv4.1, first introduced in 2010, meets many needs in the modern datacenter, but there are still technologies and advanced techniques that NFS developers want to deliver.

Join me and J Metz on April 28th at 10:00 a.m. PT as we cover a brief update of where we are with NFSv4.1 and go into more detail on the proposed features for NFSv4.2 that are currently being ratified at the IETF. This will be a live, interactive session. Register now and please bring your questions.

If you need a primer on NFS before this event, I encourage you to check out our 4-part Webcast mini-series available on demand:

Register today. I hope to see you on April 28th.

Ethernet Connected Drives Webcast Q&A

At our recent SNIA ESF Webcast, "Visions for Ethernet Connected Drives," Chris DePuy of the Dell'Oro Group discussed potential benefits, use cases, and challenges of Ethernet connected drives. It's not surprising that we had a lot of questions given that this market is in its infancy. As promised during the live event, here are answers to questions from the audience. If you think of additional questions, please feel free to comment on this blog.

Q. Will this also mandate new protocols to be used for storage like RDMA?

A. We did not receive any feedback from the technology companies we surveyed about RDMA specifically, but new protocols very well may be required to make effective and cost-effective use of eDrives. Storage systems offer many capabilities beyond just standard Ethernet networking and new protocols may be required to deliver those as well as new services in this new storage system architecture.

Q. Is White Box bought primarily by cloud customers?

A. Yes. In our research, substantially all White Box storage devices are purchased by cloud service providers.

Q. I may have missed it but aren’t we really talking about the HGST Open Ethernet Drive Architecture and the Seagate Kinetic Open Storage Platform? Both use Ethernet interfaces but HGST puts Debian on each HDD and Seagate has a key-value API for applications to directly write to the HDD. The actual deployment of these Ethernet HDDs would be in Ethernet Layer 2 switched backplanes in a 4U chassis being built by Supermicro, Xyratex (Seagate) and several others.

A. Given this was a presentation made to a neutral industry association, we chose not to discuss specific vendors. To answer your questions: yes, we are talking about Ethernet Connected Drives from HGST and Seagate, but we also integrated feedback from other suppliers of related technology, including Toshiba. To your other question, yes, we have seen enclosures with embedded Ethernet switch technology connecting to Ethernet drives from various other vendors. In our research for this webinar, we have also seen Ethernet switch technology embedded into enclosures that don't use Ethernet connected drives; those systems convert traditional HDD interfaces internally, so the network still sees Ethernet as the outward-facing interface.

Q. Doesn’t that take space on the drive when you put CPU and more memory?

A. We asked this question, too, but learned that there is sufficient space to maintain the HDD and all the parts in the same form factors we historically have known.

Q. What can one implement in these internal processors used in Ethernet drives? For instance can we run erasure codes such as Jerasure or XOR based codes yet do the basic tasks needed for the Ethernet drives?

A. We did not receive specific feedback during the surveys for this webinar about where one would run erasure coding. Generally, though, the decision will drive design considerations for the CPU and memory choices made for each drive, which in turn would change the economics of whether the overall system is affordable and feasible. Note that doing erasure coding on the drives increases the amount of intelligence required on the drive: for the arithmetic, for the requisite peer-to-peer networking, and for maintaining state information about the other drives involved in completing the erasure codes. New software to manage all of this would be required as well.
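
As a toy illustration of the simplest XOR-based scheme the question mentions (RAID-4/5-style parity rather than a full Reed-Solomon/Jerasure code), the sketch below shows the arithmetic involved. Where it runs, on the drive's processor or on a server, is exactly the design trade-off described above.

```python
# A toy XOR-parity sketch: one parity block protects several data blocks, and
# any single lost block can be rebuilt from the survivors plus the parity.
def xor_blocks(blocks):
    """XOR equal-sized byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(bytearray(block)):
            out[i] ^= b
    return bytes(out)

data = [b'AAAA', b'BBBB', b'CCCC']   # data blocks, e.g. on three drives
parity = xor_blocks(data)            # parity block, e.g. on a fourth drive

# If one data block is lost, rebuild it from the surviving blocks + parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```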

Q. Can I ran Ceph OSD plus Erasure code based on open source Jerasure in the Ethernet connected drive internal ARM processor?

A. We did not receive specific feedback during the surveys for this webinar about where one would run erasure coding. Generally, though, the decision will lead to design considerations for which CPU and memory choices would be made for each drive, which in turn would change economics as to whether the overall system is affordable/feasible.

Q. Erasure coding is more complex compared to RAID, how do I implement erasure coding with Ethernet drives?

A. We did not receive specific feedback during the surveys for this webinar about where or how one would run erasure coding.

Q. Does the economics assume including the cost of the Ethernet Ports? If so are you assuming unmanaged or managed Ethernet ports?

A. In the slides, we portrayed a simplistic capital spending model that considered just servers and hard drives. In reality, there are many other factors that play into both CAPEX and OPEX comparisons between conventional and Ethernet Connected Drive architectures. Examples include the cost differential between using Ethernet switching versus traditional HDD interfaces and how much memory and CPU is needed to support a particular use case.
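
Purely for illustration, that kind of comparison can be written as a small parameterized sketch like the one below. Every number is a hypothetical placeholder rather than data from the webcast, and a real model would add switch ports, per-drive CPU/memory, power, software and operating costs.

```python
# Illustrative only: a parameterized version of a simplistic CAPEX comparison
# (servers + hard drives). All inputs are hypothetical placeholders.
import math


def capex_conventional(n_drives, drive_cost, drives_per_server, server_cost):
    """Storage servers front a pool of conventional drives."""
    n_servers = int(math.ceil(n_drives / float(drives_per_server)))
    return n_servers * server_cost + n_drives * drive_cost


def capex_ethernet_drives(n_drives, edrive_cost, switch_port_cost):
    """Ethernet connected drives hang directly off a switched fabric."""
    return n_drives * (edrive_cost + switch_port_cost)


# Hypothetical inputs, only to show the shape of the comparison.
print(capex_conventional(1000, drive_cost=100, drives_per_server=60,
                         server_cost=5000))
print(capex_ethernet_drives(1000, edrive_cost=110, switch_port_cost=20))
```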

Q. How does the increased number of network ports needed influence this price equation?

A. In the slides, we portrayed a simplistic capital spending model that considered just servers and hard drives. In reality, there are many other factors that play into both CAPEX and OPEX comparisons between conventional and Ethernet Connected Drive architectures. Examples include the cost differential between using Ethernet switching versus traditional HDD interfaces, how much memory and CPU is needed to support a particular use case.

Q. I’m confused how Power and Cooling could be saved. If you need X number of drives to store data then you would need the same number of drives in the connected drive model wouldn’t you? Perhaps more if the e-drives lack efficiency features?

A. The general point is that proponents of Ethernet Connected Drives argue there won’t be a need for storage-oriented servers, and so the savings would result from there being fewer of them consuming power.

Q. I guess the protocol for commanding the drives would change?

A. There is no single approach that has been agreed upon. During the presentation, we said there are multiple technical approaches, one of which includes using Key Value APIs, and the other is to install an Operating System onto each drive that could run whatever you want on it.
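
As a hedged sketch of what the key-value approach could look like from an application's point of view, the class below is purely hypothetical (it is not the Kinetic API or any vendor SDK). It only illustrates the idea that the drive is addressed over the network and stores opaque key/value pairs rather than blocks.

```python
# A purely hypothetical key-value drive interface, for illustration only.
class EthernetDrive(object):
    def __init__(self, host, port=8123):
        self.address = (host, port)   # the drive itself has a network address
        self.store = {}               # stand-in for the drive's media

    def put(self, key, value):
        self.store[key] = value       # a real drive would persist this

    def get(self, key):
        return self.store.get(key)


drive = EthernetDrive('10.0.0.42')
drive.put(b'objects/photo-0001', b'...bytes of the object...')
print(drive.get(b'objects/photo-0001'))
```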

Q. Are Ethernet connected drives JBODS on Ethernet?

A. Yes, that is the way we view it, too. Sometimes they are even called "eBODs," where the traditional JBOD controller is replaced with an Ethernet switch.

Q. How is data protected, i.e., with RAID or some other mechanism?

A. In our surveys, we learned that the most common method would be to leverage erasure coding that is commonly associated with object oriented storage systems.

Q. How will photonics impact this concept?

A. Photonics is involved in data center Ethernet for higher speed communications. In our surveys, we did not encounter a single instance of a vendor discussing photonics at the Ethernet Connected Drive. For HDDs, 1GbE provides more than enough bandwidth for the drive.

Q. Are the servers today connecting the storage just dumb boxes that expose storage? Don’t they do processing as well? With Ethernet drives we’re removing that computational node it seems.

A. This is a very good point. Today’s conventional storage systems have significant computing capabilities – we think these could be used to do computing as well as performing storage-oriented tasks as they do primarily today. We expect that in the future, the servers that are packaged in external storage systems will be organized in a way that allows customers to run storage functions as well as more traditional purposes that would allow us to just call them ‘servers.’ In fact, there are several startups that are popularizing this idea.

Q. When it comes to HDD manufacturers there are only three left…WD (HGST), Seagate (Samsung) and Toshiba. When it comes to SSD or flash drives there are more manufacturers. Seagate is using a dual Serial Gigabit Media Independent Interface (SGMII) on its Kinetic HDDs. What other ways are there to do Ethernet on an HDD?

A. We did not receive any feedback from the technology companies we surveyed about this topic. Note, that SNIA recently started an “Object Drive Technical Work Group” to help drive standards for Ethernet-connected drives. If this topic is of interest, we encourage you to join that TWG.

Q. Have you seen any indication of a ratio between CPU power and memory vs. the size of the storage? What is the typical White Box, e.g., Intel CPU (which version?), memory (in GB?), storage (in TB?)?

A. The use cases we presented are based on vendor-supplied viewpoints that implicitly incorporate the answers to your question, but don't specifically address it. What we learned is that in these use cases there is an assumed positive TCO savings, but not every vendor agrees with these calculations, again without providing the specifics you are asking about.

Q. How can you eliminate the object servers? You still need that functionality somewhere if you ever hope to find the data again, or protect it… You may move away from dedicated Object servers but that code has to run somewhere thus saying they are eliminated is wrong…

A. This is a very good point. The use cases offered to us suggest that this code would either reside in the Ethernet Connected Drive, or on the server running an application itself, or both. This is why we made the point that the applications would have to be re-written to take advantage of the proposed new architecture.

Q. Is the cost of Ethernet HDDs expected to be the same as current HDDs and why?
Ethernet HDDs have more processing capabilities so shouldn’t they cost more (is that 10% more?)

A. Correct. If more components were added to an otherwise identical HDD, then the cost would be greater. This is essentially one of the main dissenting views we heard during the survey process. It does raise the question of whether it makes sense to deliver underlying HDDs that are NOT identical to traditional HDDs to offset costs somehow, perhaps with lower speeds, or whether these Ethernet Connected Drives would be sold at lower margins by the HDD vendors.

Q. Do Server power TCO numbers take account of lower power consumption of next generation servers as indicated by Intel?

A. We do not know what version of servers was used in these vendor-supplied TCO calculations.

Q. If you are planning to offload processing to the processor on the HDD then you are assuming that the HDD vendors will expose those drives for user access – is there any evidence of this?

A. There is no single approach that has been agreed upon, and therefore no single answer to this question. During the presentation, we said there are multiple technical approaches, one of which includes using Key Value APIs, and the other is to install an Operating System onto each drive that could run whatever you want on it.

Q. How is redundancy handled on an eHDD-based appliance… e.g., when a drive fails?

A. The custom-built software would presumably be developed to handle this. And obviously, the eHDD has to add enough CPU and memory to manage all this — which of course adds cost.

Q. It seems that with the CPUs on each drive, the archive, object or whatever the application would need to be rewritten to support this specific method of parallel processing. Is anyone doing this now?

A. During the survey process, we learned that many applications were being ported to this environment, some of which apparently do take advantage of parallel computing. Given we were planning to immediately divulge information to the public, we were not presented with details.

Q. What is nearline storage?

A. This is the way it was described to us by some of the technology companies we surveyed. The meaning is that it represents a more traditional storage system you might see in an enterprise, where many drives are stopped (not rotating) and are spun up when a request comes in.

Q. Why are analytics specifically optimized for Ethernet attached storage devices – the presenter seems to anticipate that processing can be pushed onto the drive, and if this is the case why can’t other drive interfaces do this – PCIe attached storage should be even more amenable for this.

A. The presenter was sharing views compiled by the responses of various technology companies during a series of interviews conducted before this webcast. Analytics is a large, growing industry today and exists without Ethernet Connected Drives. Some of the companies surveyed offered the view that putting processing capabilities into each HDD may enhance the overall system’s performance.

Q. Can the presenter comment on the value of scale-out for E-Drives, versus legacy SAN scale out?

A. Some of the technology companies interviewed by the presenter suggested that systems based on Ethernet Connected Drives may scale to larger capacities than traditional architectures on the basis that the storage-oriented servers no longer present an impediment to scaling.

Q. Just as object storage addresses RAID, smart drives could provide the metadata needed by the Swift controllers to do deduplication, or the controller may do deduplication as a pre-process or post-process, as we have seen NetApp and Data Domain evolve over the years.
If we use optic connections, the port density issue is resolved and this ends up looking like something from 2001 (the movie), correct?

A. Photonics is involved in data center Ethernet for higher speed communications. In our surveys, we did not encounter a single instance of a vendor discussing photonics at the Ethernet Connected Drive. As noted above, 1GbE is more than sufficient for eHDDs.

Q. FYI…48TB Capacity Kinetic Storage Appliance $5000.00 street price
White Box 2U Dual Xeon storage server with 48TB RAW…$8000 street price

A. Thank you for sharing! You may have noticed we did not mention specific vendors during the presentation – perhaps others viewing your question will take note of your viewpoint.

Q. To the extent that hyperscale cloud environments have servers with open sockets or slots for direct attach storage of drives, how are there financial savings to connect through Ethernet instead of direct attach? Will servers of the future remove these slots and sockets? Are there other cluster wide benefits with regards to performance for data accessed directly through the network instead of through the server with the local storage, when the data is accessed by a large number of servers?

A. Hyperscalers are buying storage-related hardware at a fraction of the price that systems OEMs are selling them for mainly because they do not demand software that enterprises value so much – they leverage open source and make their own for their very specific needs. If you look at the slide about the ‘White Box Effect’ in the presentation, you get a sense for just how much less they pay – or anyone else who buys a White Box pays – but make no mistake about it, these devices don’t do much unless you integrate them into a working system intended to store and safely retain data. To answer your question, we observe that these hyperscalers are such large customers of components and systems that they could choose to request custom hardware designs with customized specifications – more of this kind of interface, fewer of that kind, etc. As an analogy, in the networking industry, one of the largest buyers of the underlying network technology like processors, Ethernet interfaces and optics are the handful of hyperscalers – and in fact these customers are larger than most vendors.

Q. Why would each drive not know about other drives storage? How does this differ from existing storage servers?

A. In the traditional storage architecture, a central system is involved. The dissenting viewpoint we received from some of the technology companies we interviewed was a counterpoint that may exist only under certain design scenarios. Our view is that if a system is designed with the goal in mind to make each drive aware of each other’s contents, then that is technically possible of course. But at a cost, as you add CPU, memory, and software to do this.

Q. I can see flash and Wi-Fi Ethernet connected drives providing Internet of Things storage for values that can be harvested independent of when the value was stored, giving a low-power system that could live off of USB-type power or Power over Ethernet; that is why corporations would look at this.

A. I think the point you are making is that flash consumes very little power, right? This revolutionary technology (let's just say non-volatile memory, to keep it general) is causing all kinds of disruptive changes in the storage industry, and as costs come down for NVM, all kinds of different scenarios become possible.

Q. The cost model might need to include a simpler, lower-cost local server alongside the Ethernet drive clusters by adding a cost item to the left side of the equation. Comments?

A. Agreed – the equation we provided was simplistic and could be expanded to include many other terms and other simultaneous equations as well. We just thought that providing it would frame the discussion on the slide instead of just saying it verbally.

Q. Obviously, it will be higher, but how do you envision this changing Ethernet bandwidth requirements? Will Ethernet connected drives only become a reality once 40, 25, 100 Gb becomes the mainstream for Ethernet networks?

A. Network bandwidth needs will be a function of how the servers interact with the drives. I can see scenarios where traffic might be kept more local, or where asking each drive for "the answer" instead of "all of its data" to be processed in a server might actually temper the traffic increase your premise assumes. The point I'm getting to is that it depends on what applications these Ethernet Connected Drives are used for. Nevertheless, the old rule that all available installed bandwidth will eventually be consumed has not yet been repealed publicly, as far as I'm aware.

Q. With Ethernet connected drives, are we still stuck with the fundamental issue that HDDs are transactionally inefficient, and thus, novel concept or not, the basic drive remains the bottleneck unless its transactional efficiency improves?

A. We think HDDs will co-exist with Flash/NVM for a very long time. Some very smart engineers are working to make this co-existence increasingly efficient, taking into account the strengths and weaknesses of both storage media.

SNIA ESF Leadership Welcomes Chad Hintz

The ESF continues our busy schedule hosting informative Webcasts, writing and publishing articles and participating at industry conferences. 2015 also brings a change in ESF leadership. I’d like to welcome Chad Hintz of Cisco as our newest ESF board member. Chad has been elected as chair of our Storage over Ethernet Special Interest Group (SIG).

The Storage over Ethernet SIG is focused on a growing trend among modern data centers to deploy consolidated Ethernet networks as the primary network infrastructure for all LAN and storage traffic. Technologies such as Data Center Bridging (DCB) and Fibre Channel over Ethernet (FCoE), among others, offer organizations a robust environment to support mixed workloads, with each workload using the most appropriate protocol (including NFS, SMB and iSCSI) over a shared Ethernet physical transport. The Storage over Ethernet SIG offers educational and thought leadership materials related to these technologies and the business value they offer to organizations of all sizes. Especially appreciated by our audience is that this information comes from SNIA and thus is vendor-neutral.

Chad brings a wealth of expertise to ESF. He is a Technical Solutions Architect focusing on designing enterprise solutions for customers around data center technologies. He holds 3 CCIEs in routing and switching, security and storage and has held certifications from Novell, VMware, and Cisco. We’re confident his expertise and passion for Ethernet Storage will be a big asset to our group.

We are looking forward to having Chad on our team to help guide the many activities we have planned this year. Other members of the 2015 board include myself as ESF chair, Alex McDonald (NetApp), and Mike Jochimsen (Emulex).

As I mentioned, the ESF is busy creating vendor-neutral educational content on Ethernet-connected storage networking technologies. I encourage you to check out some of our recent and upcoming content:

Upcoming Webcast – Visions for Ethernet Connected Drives

On-demand Webcast – Benefits of RDMA in Accelerating Ethernet Storage Connectivity

Article – Cloud File Services

Article – Weave Your Cloud with a Data Fabric

Benefits of RDMA in Accelerating Ethernet Storage Q&A

At our recent live Webcast "Benefits of RDMA in Accelerating Ethernet Storage Connectivity," experts from Emulex, Intel and Microsoft had an insightful discussion on the ways RDMA is having an impact on Ethernet storage. The live event was attended by nearly 200 people and feedback was overwhelmingly positive, with several attendees thanking us for our vendor-neutral presentation and one attendee commenting that it was, "Probably the most clearly comprehensible yet comprehensive webinar I've attended in some time." If you missed the Webcast, it's now available on demand. We did not have time to get to everyone's questions, so as promised, below are answers to all of them. If you have additional questions, please ask them in the comments section of this blog and we'll get back to you as soon as possible.

Q.  Is RDMA over RoCEv2 in production?

A. The IBTA released the RoCEv2 specification in September 2014. In order to support that specification, changes may be required across the RDMA stack, including firmware, drivers and operating systems. Schedules for implementation of that specification will vary by operating system. For example, the OpenFabrics Alliance (OFA) has not yet released an OpenFabrics Enterprise Distribution (OFED) version that implements the standard, although one is in process now. Once OFA completes its OFED stack implementation, the Linux distribution vendors will then incorporate and support the updated OFED stack. Implementations provided prior to full OFA and distro vendor support would be preliminary, potentially incompatible with the OFED release, and would require confirmation from the distro vendor with regard to the nature and level of support they would be providing.

Q. I would have liked a list of Windows applications that take advantage of SMB Direct – both in a Hyper-V host or bare metal.

A.  In Windows, any file-based application can make use of SMB3 and SMB Direct due to the native file-based programming interface support. No application changes are required. For certain enterprise applications such as Hyper-V and SQL Server, SMB3 is officially supported, and more information can be found in the product catalog at www.microsoft.com.

Q. Are there any particular benefits in using one network protocol over another for SMB Direct/RDMA (iWARP vs. RoCE vs. IB)?

A. There are no hard and fast rules; any adapter or protocol can be suitable for many scenarios. Of the Ethernet-based protocols we considered in today's webcast:

  • iWARP offers the benefit of operation over TCP with its reliability and routability, well-suited to a broad range of installed infrastructure.
  • RoCE offers a lightweight, efficient protocol when a DCB-enabled switched fabric is available. RoCE, however, is not routable.
  • RoCEv2 offers similar properties to RoCE, with the possibility to scale to larger routed and DCB-enabled fabrics.

Q. Who are the vendors offering iWARP capable RNICs?

A. Chelsio Communications has production iWARP adapters today, and both Intel and QLogic have publicly committed to future iWARP controllers.

Q. How much testing has been done with SMB3, and in particular SMB direct, over WAN connections?

A. The SMB2 protocol was originally designed to adapt to WAN scenarios, and supports a credit-based management of large amounts of data to be outstanding, to make best use of WAN-type long pipes. The SMB3 protocol retains these design attributes, and the SMB Direct protocol also supports similar deep pipelining. The iWARP protocol, being layered on standard TCP, is well suited to such deployments, and RoCE WAN adapters are potentially available. Please contact the respective technology vendors for information on any available testing results.

Q. I'd love a future webcast on RDMA-enabled distributed file systems.

A. Thanks for the suggestion! We’re always looking for ideas for future webcasts and SNIA-ESF will consider this as a potential follow-on.

Q.  Is Live Migration the scenario where “packet size” is 1MB?

A. All SMB Direct scenarios have workloads that range anywhere up to 8MB. For large file copies, most SMB3 clients request from 1MB to 8MB per operation; for Hyper-V Live Migration, transfers are typically similar during the bulk transfer phase.

Q. SMB3 is being compared to FC for enterprise. If Ethernet based protocols are of interest, wouldn’t FCoE give the same performance as FC (same stack) vs. SMB3?

A. SMB3 with SMB Direct enables many workloads not possible with Fibre Channel over Ethernet, and performance comparisons are therefore difficult. Perhaps another SNIA webcast could investigate this!

Q.  Regarding your SMB direct example with lots of small operations, how do you deal with the overhead of registering and unregistering buffers for the RDMA operations?

A. As answered later in the session, the registration and unregistration is not a protocol matter, but in the case of the Windows implementation, it is strictly performed for the specific buffers of each operation, which is critical for security, data integrity, and system protection. The standard “Fast Register Work Request” method is used, and careful implementation has shown that the overhead does not negatively impact performance, even for small I/O (4KB/operation). Check out Jose Barreto’s blog, which contains many benchmark results.

Q. But isn’t Live Migration done in 1MB “chunks”? So not “small” I/Os?

A. As answered later in the session, Hyper-V Live Migration is done in several phases: the first phase is the initial bulk copy of memory, done in large chunks, and immediately after it comes a second phase that copies the individual pages dirtied by the live-running VM. These operations are typically 4KB. Note: the faster the initial phase goes, the less work there is in the second phase, but in both phases the faster the better, and RDMA accelerates both.

Q. Are iSER and iWARP alternatives to one another?

A.  iWARP is an RDMA protocol, and iSER is a mapping of iSCSI to iWARP, as well as RoCE/InfiniBand.

Q. What’s Intel’s roadmap for RoCE and/or iWARP?

A.  Intel is committed to iWARP and plans to incorporate it in future server chipsets and SOCs. See http://www.intel.com/content/www/us/en/ethernet-products/accelerating-ethernet-iwarp-video.html for more information.

Q. Is any transport other than IB being used to create a reliable transport for RoCEv2? Theoretically, is it possible?

A. RoCE was developed to leverage InfiniBand as much as possible. For that reason, the InfiniBand transport was chosen when the RoCE standard was developed. As the RoCEv2 standard was developed, the underlying InfiniBand network protocol was replaced with IPv4/IPv6 in order to provide Layer 3 routability, and UDP was used to provide stateless encapsulation (and indication) of the retained InfiniBand transport header. While it may be possible to develop a reliable transport to replace InfiniBand, the RoCE standards body has elected not to go that route as of this writing.

New Webcast: Visions For Ethernet Connected Drives

Mark your calendar for March 25th as SNIA-ESF, together with the Dell’Oro Group, will be hosting a live Webcast, “Visions for Ethernet Connected Drives.” The arrival of mass-storage services, the emergence of analytics applications and the adoption of object storage by the cloud-services industry have provided an impetus for new storage hardware architectures. One such underlying hardware technology is the Ethernet connected hard drive, which is in early stages of availability.

Please join us on March 25th to hear Chris DePuy, Vice President of the Dell'Oro Group, share findings from interviews with storage-related companies, including those selling hard drives, semiconductors, peripherals and systems. He will present some of the common themes uncovered, including:

  • What system-level architectural changes may be needed to support Ethernet connected drives
  • What capabilities may emerge as a result of the availability of these new drives
  • What part of the value chain spends the time and money to package working solutions

We will also present some revenue and unit statistics about the storage systems and hard drive markets and will discuss potential market scenarios that may unfold as a result of the object storage and Ethernet connected drive trends.

I'll be hosting the event and, together with Chris, taking your questions. I hope you'll join us.