RDMA for Persistent Memory over Fabrics – FAQ

In our most recent SNIA Networking Storage Forum (NSF) webcast, Extending RDMA for Persistent Memory over Fabrics, our expert speakers, Tony Hurson and Rob Davis, outlined extensions to RDMA protocols that confirm persistence and, additionally, can order successive writes to different memories within the target system. Hundreds of people have seen the webcast and have given it a 4.8 rating on a scale of 1-5! If you missed it, you can watch it on-demand at your convenience. The webcast slides are also available for download.

We had several interesting questions during the live event. Here are answers from our presenters:

Q. For the RDMA Message Extensions, does the client have to qualify a WRITE completion with only Atomic Write Response and not with Commit Response?
A. If an Atomic Write must be confirmed persistent, it must be followed by an additional Commit Request. Built-in confirmation of persistence was dropped from the Atomic Request because it adds latency and is not needed for some application streams.

Q. Why do you need confirmation for writes? From my point of view, the only thing required is ordering.

A. Agreed, but only if the entire target system is non-volatile! Explicit confirmation of persistence is required to cover the “gap” between the Write completing in the network and the data reaching persistence at the target.

Q. Where are these messages being generated? Does the NIC know when the data is flushed or committed?

A. They are generated by the application that has reserved the memory window on the remote node. It can write using RDMA writes to that window all it wants, but to guarantee persistence it must send a flush.
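
To make the write-then-commit sequence concrete, here is a deliberately hypothetical sketch in Python-style pseudocode. None of these calls exist in today's verbs API; the names (post_write, post_commit, poll_completion) are placeholders for the proposed extensions, not a real library.

```python
# Hypothetical pseudocode only: these verb names are placeholders for
# the proposed RDMA extensions and do not exist in any current API.
def write_persistently(qp, local_buf, remote_window):
    qp.post_write(local_buf, remote_window)  # ordinary RDMA Write: data
                                             # lands in the target's
                                             # reserved memory window
    qp.post_commit(remote_window)            # proposed Commit: flush all
                                             # prior writes to persistence
    wc = qp.poll_completion()                # the Commit Response confirms
                                             # persistence, closing the gap
                                             # between network completion
                                             # and durable data
    return wc.is_success()
```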

Q. How is RPM presented on the client host?

A. The application using it sees it as memory it can read and write.

Q. Does this RDMA commit response implicitly ACK any previous RDMA sends/writes to same or different MR?

A. Yes, the new Commit (and Verify and Atomic Write) Responses have the same acknowledgement coalescing properties as the existing Read Response. That is, a Commit Response is explicit (non-coalesced); but it coalesces/implies acknowledgement of prior Write and/or Send Requests.

Q. Does this one still have the current RDMA Write ACK?

A. See previous general answer. Yes. A Commit Response implicitly acknowledges prior Writes.

Q. With respect to the race hazard explained to show the need for an explicit completion response, wouldn’t the same hazard exist even with volatile memory? Why is this completion status required only in the non-volatile case?

A. Most networked applications that write over the network to volatile memory do not require explicit confirmation at the writer endpoint that the data has actually arrived. When they do, additional handshake messages are usually exchanged between the endpoint applications. A writer to PERSISTENT memory across a network, on the other hand, almost always needs assurance that the data has reached persistence, hence the new extension.

Q. What if you are using multiple RNIC with multiple ports to multiple ports on a 100Gb fabric for server-to-server RDMA? How is order kept there…by CPU software or ‘NIC teaming plus’?

A. This would depend on the RNIC vendor and their implementation.

Q. What is the time frame for these new RDMA messages to be available in verbs API?

A. This depends on the IBTA standards approval process, which is not completely predictable, but roughly sometime in the first half of 2019.

Q. Where could I find more details about the three new verbs (what are the arguments)?

A. Please poll/contact/Google the IBTA and IETF organizations towards the end of calendar year 2018, when first drafts of the extension documents are expected to be available.

Q. Do you see this technology being used in a way similar to how hyperconverged systems now use storage, or could you see it used as a large shared memory subsystem in the network?

A. High-speed persistent memory, in either NVDIMM or SSD form factor, has enormous potential for speeding up hyperconverged write replication. It will, however, require a substantial rewrite of such storage stacks, moving, for example, from traditional three-phase block storage protocols (command/data/response) to an RDMA write/confirm model. More generally, the RDMA extensions are useful for distributed shared PERSISTENT memory applications.

Q. What would be the most useful performance metrics to debug performance issues in such environments?

A. Within the RNIC, basic counts of the new message types would be a baseline. These, plus total stall times encountered by the RNIC while awaiting Commit Responses from the local CPU subsystem, would be useful. Within the CPU platform, basic counts of device write and read requests targeting persistent memory would be useful.

Q. Do all RDMA NICs have to update their firmware to support these new verbs? What is the expected performance improvement with the new Commit message?

A. Both answers would depend on the RNIC vendor and their implementation.

Q. Will the three new verbs be implemented in the RNIC alone, or will they require changes in other places (processor, memory controllers, etc.)?

A. The new Commit request requires the CPU platform and its memory controllers to confirm that prior write data has reached persistence. The new Atomic Write and Verify messages, however, may be executed entirely within the RNIC.

Q. What about the future of NVMe over TCP – this would be much simpler for people to implement. Is this a good option?

A. Again, this would depend on the NIC vendor and their implementation. Different vendors have implemented various tests for performance. It is recommended that readers do their own due diligence.

 

File vs. Block vs. Object Storage – Are Worlds Colliding?

When it comes to storage, a byte is a byte is a byte, isn’t it?

One of the enduring truths about simplicity is that scale makes everything hard, and with that comes complexity. And when we’re not processing the data, how do we store it and access it?

The only way to manage large quantities of data is to make it addressable in larger pieces, above the byte level. For that, we’ve designed sets of data management protocols that help us do several things: address large lumps of data by some kind of name or handle, organize it for storage on external storage devices with different characteristics, and provide protocols that allow us to programmatically write, find, and read it.

On April 17th, the SNIA Ethernet Storage Forum will host another of its “Great Debates” webcasts. This time, it’s “File vs. Block vs. Object Storage.” In this live webcast, our experts, Mark Carlson, Alex McDonald and Saqib Jang will compare three types of data organization: file, block and object storage, and the access methods that support them. Each has its own set of use cases, advantages and disadvantages. Each provides data management ranging from simple to sophisticated, and each makes different demands on storage devices and programming technologies.

Perhaps you’re comfortable with block and file, but are interested in investigating the more recent class of object storage and access. Perhaps you’re happy with your understanding of objects, but would really like to understand files a bit better. Or perhaps you want to understand how file, block and object are implemented on the underlying storage systems – and how one can be made to look like the other, depending on how the storage is accessed. Join us as we discuss and debate:

  • Storage devices
    • How different types of storage drive different management & access solutions
    • Which use cases tend to favor block, file or object
  • Block
    • Where everything is in fixed-size chunks
    • SCSI and SCSI-based protocols, and how FC and iSCSI fit in
  • Files
    • When everything is a stream of bytes
    • NFS and SMB
  • Objects
    • When everything is a BLOB
    • HTTP, key value and RESTful interfaces
  • Altogether…
    • When files, blocks and objects collide, it will rock your world!

I will be moderating this “friendly debate” where there won’t be winners or losers, just more information on these three popular data storage technologies. We hope you will register today to come join the debate on April 17th.

And if you missed our first hugely popular “Great Debate” – Fibre Channel vs. iSCSI, it’s now available on-demand.

Why is Blockchain Storage Different?

The SNIA Ethernet Storage Forum (ESF), specifically ESF Vice Chair, Alex McDonald, spent Halloween explaining storage requirements for modern transactions in our webcast, “Transactional Models & Their Storage Requirements.” Starting with the fascinating history of the first transactional system in a bakery in 1951 (really!), to a discussion on Bitcoin, it was an insightful look at the changing role of storage amid modern transactions. If you missed it, you can watch it on-demand at your convenience. We received some great questions during the live event. Here are answers to them all:

Q. How many nodes are typical in the blockchain ledger?

Storage for Transactional Systems: From Banking to Facebook

We’re all accustomed to transferring money from one bank account to another; a credit to the payer becomes a debit to the payee. But that model uses a specific set of sophisticated techniques to accomplish what appears to be a simple transaction. Today, we’re also well acquainted with ordering goods online, reserving an airline seat over the Internet, or simply updating a photograph on Facebook. Can these applications use the same banking models, or are new techniques required? It’s a question we’ll tackle at our next Ethernet Storage Forum webcast on October 31st “Transactional Models & Their Storage Requirements.”

One of the more important concepts in storage is the notion of transactions, which are used in databases, financials, and other mission critical workloads. However, in the age of cloud and distributed systems, we need to update our thinking about what constitutes a transaction. We need to understand how new theories and techniques allow us to undertake transactional work in the face of unreliable and physically dispersed systems. It’s a topic full of interesting concepts (and lots of acronyms!). In this webcast, we’ll provide a brief tour of traditional transactional systems and their use of storage, we’ll explain new application techniques and transaction models, and we’ll discuss what storage systems need to look like to support these new advances. And yes, we’ll decode all the acronyms and nomenclature too.

You will learn:

  • A brief history of transactional systems from banking to Facebook
  • How the Internet and distributed systems have changed how we view transactions
  • An explanation of the terminology, from ACID to CAP and beyond
  • How applications, networks & particularly storage have changed to meet these demands

You may have noticed this webcast is on Halloween, October 31st. We promise it will be a treat not a trick! I encourage you to register today.

Q&A – When Compute, Networking and Storage Intersect

In Part Vermillion of our SNIA Ethernet Storage Forum (ESF) “Everything You Wanted To Know About Storage But Were Too Proud To Ask” webcast series, we examined the terms and concepts that are at the heart of where compute, networking and storage intersect. That’s why we called it “What if Programming and Networking Had a Storage Baby.” If you missed the live webcast, you can watch it on-demand.

The discussion from our panel of experts generated a lot of good questions. As promised, here are answers to them all.

An FAQ to Make Your Storage System Hum

In our most recent “Everything You Wanted To Know About Storage But Were Too Proud To Ask” webcast series – Part Sepia – Getting from Here to There, we discussed terms and concepts that have a profound impact on storage design and performance. If you missed the live event, I encourage you to check it out on-demand. We had many great questions on encapsulation, tunneling, IOPS, latency, jitter and quality of service (QoS). As promised, our experts have gotten together to answer them all.

Q. Is there a way to measure jitter?

A. Jitter can be measured directly as a statistical function of the latency, typically as the variance or standard deviation of the latency. For example, a storage device might show an average latency of 5ms with a standard deviation of 1.5ms. This means roughly 95% of the transactions have a latency between 2ms and 8ms (average latency plus or minus two standard deviations). However, many storage customers measure jitter indirectly by reporting the 99.9%, 99.99%, or 99.999% latency. For example, if my storage system has a 99.99% latency of 8ms, it means 99.99% of transactions have latency <=8ms and 1 in 10,000 transactions has latency >8ms. Percentile latency is an indirect measure of jitter but is often easier to calculate or understand than the actual jitter.
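
As a rough illustration, here is a small Python sketch (the latency samples are made up) that computes both the direct measure, standard deviation, and the indirect percentile measure described above:

```python
import statistics

# Hypothetical per-I/O latencies in milliseconds.
latencies = [4.1, 5.3, 4.8, 6.9, 5.0, 4.4, 7.8, 5.2, 4.9, 5.6]

mean = statistics.mean(latencies)
stdev = statistics.stdev(latencies)  # direct jitter measure
print(f"mean={mean:.2f} ms, stdev={stdev:.2f} ms")

# Indirect measure: percentile ("tail") latency.
ordered = sorted(latencies)
p99_index = min(len(ordered) - 1, int(len(ordered) * 0.99))
print(f"99th percentile latency ~ {ordered[p99_index]:.2f} ms")
```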

Q. Can jitter be easily characterized for storage, media, and networks? How, and what tools are available for doing this?

A. Jitter is usually easy to measure on a network using standard network monitoring and reporting tools. It may or may not be easy to measure on storage systems or storage media, depending on the tools available (either built into the storage OS or using an external management or monitoring tool). If you can record the latency of each transaction or packet, then it’s easy to calculate and show the jitter using standard statistical measures such as variance or standard deviation of the latency. What most customers do is just measure the 99.9%, 99.99%, or 99.999% latency. This is an indirect measure of jitter but is often much easier to report and understand than the actual jitter.

Q. Generally, IOPS numbers are published for a particular block size, like an 8K read/write size, but in reality I/O requests come in mixed sizes. What is your perspective on this?

A. Most IOPS benchmarks test only one I/O size at a time. Most individual real workloads (for example databases) also use only one I/O size.  It is true that a storage controller or HDD/SSD might need to support multiple workloads simultaneously, each with a different I/O size.  While it is possible to run benchmarks with a mix of different I/O sizes, it’s rarely done because then there are too many workload combinations to test and publish. Some storage devices do not perform well if they must handle both small random and large sequential workloads simultaneously, so a smart storage controller might assign different workload types to different disk groups.
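
If you do want to benchmark a mixed-size stream, fio’s bssplit option can weight several block sizes within a single job. Here is a sketch of such a job file; the device path, runtime, and size mix are illustrative assumptions, not a recommendation:

```ini
[mixed-sizes]
ioengine=libaio
direct=1
rw=randrw
rwmixread=70
; 60% of I/Os at 4k, 30% at 16k, 10% at 64k
bssplit=4k/60:16k/30:64k/10
iodepth=16
time_based=1
runtime=60
filename=/dev/nvme0n1
```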

Q. One often misconfigured parameter is queue depth. Can you talk about how this relates to IOPS, latency and jitter?

A. Queue depth indicates how many tasks or I/Os can be lined up for a particular resource, such as a storage controller, network interface, or CPU. Having a higher queue depth ensures the resource stays highly utilized because it always has a new task to do as soon as it finishes its current task(s). This can result in higher IOPS because the CPU is less likely to have idle time waiting for new tasks to be put into its queue. But it can also increase latency because longer queues mean each task spends more time waiting in a queue. It’s easy to misconfigure queue depth because it needs to be deep enough to keep the resource (CPU/controller/interface) busy but not so deep that each transaction spends a long time in the queue.
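
One way to reason about this trade-off (not mentioned in the webcast itself, but a standard queueing result) is Little’s Law: outstanding I/Os = IOPS × average latency. A small Python sketch, with assumed device numbers, shows how deeper queues buy throughput at the cost of per-I/O latency:

```python
# Little's Law: outstanding I/Os = IOPS x average latency (seconds).
iops = 100_000          # assumed device throughput at saturation
service_time = 0.0001   # assumed 100 microseconds per I/O

# Queue depth needed to keep the device fully busy:
qd_needed = iops * service_time
print(f"queue depth to saturate: ~{qd_needed:.0f}")

# Average latency implied by deeper queues at the same IOPS:
for qd in (8, 32, 128):
    latency = qd / iops  # seconds, by Little's Law
    print(f"QD={qd}: average latency ~ {latency * 1e6:.0f} us")
```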

Q. Can you please repeat all your examples of tunneling? GRE, MPLS, what others? How can it be IPv4 via IPv6?

A. VXLAN, LISP, GRE, MPLS, and IPsec. Any time you encapsulate one protocol inside another, send it, and decapsulate it at the other end to recover the original frame, that process is tunneling. In the IPv6-over-IPv4 case we showed, you take an original IPv6 frame, with its all-IPv6 header of source and destination addresses, and send it over an IPv4-enabled network by encapsulating it with an IPv4 header, “tunneling” IPv6 over the IPv4 network.
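
As a small illustration of that encapsulation, here is a sketch using the Scapy packet library (assuming it is installed; all addresses are documentation-range examples). Scapy should set the IPv4 protocol number to 41 (IPv6 encapsulation) automatically when an IPv6 layer follows an IPv4 header:

```python
# Minimal 6in4 encapsulation sketch; addresses are illustrative.
from scapy.all import IP, IPv6, TCP

# The original IPv6 frame: all-IPv6 source and destination addresses.
inner = IPv6(src="2001:db8::1", dst="2001:db8::2") / TCP(dport=80)

# Encapsulate it in an IPv4 header to cross an IPv4-only network;
# decapsulation at the far end recovers the original IPv6 packet.
outer = IP(src="192.0.2.1", dst="198.51.100.1") / inner
outer.show()
```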

Q. I think it’d be possible to configure QoS to a point that exceeds the system capacity. Are there any safeguards on avoiding this scenario?

A. Some types of QoS allow over-provisioning and others do not. For example a QoS that imposes only maximum limits (and no minimum guarantees) on workloads might not prevent many workloads from exceeding system capacity. If the QoS allows over-provisioning, then you should use system monitoring and alerts to warn you when system capacity has been exceeded, or when any workloads are not getting their minimum guaranteed performance.
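
As a toy example of such a safeguard, a provisioning tool might simply refuse a configuration whose minimum guarantees add up to more than the system can deliver. All names and numbers below are illustrative:

```python
# Toy safeguard: reject a QoS configuration whose minimum guarantees
# exceed system capacity (workload names and numbers are made up).
capacity_iops = 500_000

workloads = {
    "oltp":   {"min": 200_000, "max": 300_000},
    "vdi":    {"min": 150_000, "max": 250_000},
    "backup": {"min": 100_000, "max": 400_000},
}

total_min = sum(w["min"] for w in workloads.values())
if total_min > capacity_iops:
    raise ValueError(
        f"over-provisioned: guaranteed minimums ({total_min} IOPS) "
        f"exceed capacity ({capacity_iops} IOPS)")
print("QoS minimums fit within capacity")
```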

Q. Is there any research being done on using storage analytics along with artificial intelligence (AI) to assist with QoS?  

A. There are a number of storage analytics products, both third party and storage vendor specific that help with QoS. Whether any of these tools may be described as using AI is debatable, since we’re in the early days of using AI to do much in the storage arena. There are many QoS research projects, and no doubt they will eventually make their way into commercially available products if they prove useful.

Q. Are there any methods (measurements) to calculate IOPS/MBps in tier-capable storage? Would it be the wrong metric if we estimate based on a middle tier, for example tier 2 (between tiers 1 and 3)?

A. This question needs refinement, since tiering is sometimes a cache model rather than a data movement model. And knowing the answer may not actually help! Vendors do have tools (normally internal, since they are quite complex) that can help with the planning of tiered storage.

By now, we hope you’re not “too proud” to ask some of these storage networking questions. We’ve produced four other webcasts in this “Everything You Wanted To Know About Storage” series to date. They are all available on-demand. And you can register here for our next one on July 6th where we’ll bring in experts to discuss:

  • Storage APIs and POSIX
  • Block, File, and Object storage
  • Byte Addressable and Logical Block Addressing
  • Log Structures and Journaling Systems

The Ethernet Storage Forum team and I hope to see you there!

Update: If you missed the live event, it’s now available on-demand. You can also download the webcast slides.

 

 

Too Proud to Ask Webcast Series Continues – Getting from Here to There Pod

As part of the SNIA Ethernet Storage Forum’s successful “Everything You Wanted To Know About Storage But Were Too Proud To Ask” series, we’ve discussed numerous topics about storage devices, protocols, and networks. As we examine some of these topics further, we begin to tease out some subtle nuances; subtle, yet important nevertheless.

On May 9th we’ll take on the terms and concepts that affect Storage Architectures as a whole in “Everything You Wanted To Know About Storage But Were Too Proud To Ask – Part Sepia – Getting from Here to There.” In particular, we’ll be looking at those aspects that can help or hinder storage systems inside the network:

  • Encapsulation vs. Tunneling
  • IOPS vs. Latency vs. Jitter
  • Quality of Service (QoS)

Each of these topics has a profound impact on storage designs and performance, but they are often misunderstood. We’re going to help you become clear on all of these very important storage concepts so that you can grok storage just a little bit more.

We hope you will join us on May 9th at 10:00 am PT and that you won’t be “too proud” to ask our experts your questions! Register today.

Think there may be other storage topics you feel you should understand better? Check out the rest of the webcasts in this series here.

Update: If you missed the live event, it’s now available on-demand. You can also download the webcast slides.

Storage Expert Takes on Hyperconverged Questions

Last month, we were fortunate enough to have Greg Schulz, analyst and founder of Server Storage IO, as a guest speaker at our SNIA Ethernet Storage Forum webcast, “What Does Hyperconverged Mean to Storage.” If you missed it, it’s now available on-demand. Greg fielded many great questions during the live event, but we didn’t have time to get to them all. So here they are:

Q. What is the difference between Converged Infrastructure (CI) and Hyperconverged Infrastructure (HCI)?

A. HCI is aggregated. You scale compute and storage in lock step. Converged is disaggregated. You can scale the compute independently of the storage. There are some software solutions that can support both hyper-converged (aggregated) and converged (disaggregated) deployments.

Q. What is your definition of “Little Data”?

A. Little Data is anything that’s not Big Data. It encompasses traditional databases, traditional structured, semi-structured and even some unstructured data.

Q. With convergence, what is the impact on the IT organization?

A. There is an opportunity for organizations to converge how they manage data infrastructure resources and services delivery. In other words, the technology can be leveraged to help the organization itself converge. Another impact is on how converged solutions are protected and backed up, and how BC/BR/DR and related management are done. Traditionally there are separate IT teams for compute, storage, and networking, especially in a large organization. New technology solutions may allow an organization to converge those teams.

Q. Is there a hybrid strategy? Where a complete information system is composed of HCI/CI building blocks? If yes, what management tools would span these components?

A. Sure, why not? Certainly you can converge your environment into a particular CI/HCI solution or approach, likewise, different CI/HCI solutions can co-exist along with other solutions in a given environment in hybrid ways. Have a hybrid strategy that looks at how technologies and solutions adapt to your needs and environment. Focus on how it’s going to work for you, vs. you having to work for them.

Q. What does FUZE stand for?

A. FUZE is not an acronym. It refers to actual fuzing, as in melding and bringing things together – literally fuzing things together.

Q. Do HCI vendors re-balance (compute, I/O, storage) automatically as more nodes are added?

A. Solutions vary in how they rebalance workloads. Some are dynamic, while others rebalance on intervals; it varies how, when and what they rebalance. So, as you add capacity and make changes, you need to make sure resources are properly allocated to address performance.

Q. Can’t you offload those CPU cycles caused by I/O to another CPU?

A. That’s an interesting question. Yes, move the application to another CPU. There is software that will leverage the resources on another CPU. Most HCI and CI solutions are running on a stack that requires hardware somewhere.

Q. This discussion has touched on compute and storage scaling. What about network between compute in the CI/HCI infrastructure and external to other compute, databases, or end-users?

A. Both CI and HCI need to connect to other resources, but in most cases the highest levels of network traffic are inside the CI or HCI stack because the compute and storage resources are contained within it. Their connections to outside clients or servers for data exchange, application integration, or client access are important but usually not very demanding on network bandwidth. (External connections for storage remote replication or backup could be bandwidth-intensive.)

Q. How can current enterprise storage products blend with either CI or HCI? Enterprise storage is basically a centralized storage architecture; HCI, however, is built mostly on a distributed storage architecture. So how can enterprise storage vendors show customers use cases for their products, either as part of an HCI solution or existing alongside HCI?

A. Generally enterprise storage products can be included in CI but are not blended with HCI. For example Dell EMC, Cisco (with NetApp and other storage vendors), IBM and Oracle offer CI solutions that include enterprise storage arrays in the rack. Most HCI platforms do not interoperate with enterprise storage arrays because the HCI platforms include their own storage. They can co-exist with enterprise storage arrays and that’s how most customers deploy them—some workloads run on the HCI infrastructure while others continue to use enterprise storage arrays.

Q. One of the HCI selling points is simplicity and cost reductions from a la carte. It seems that from what is being presented, that may not be the case. Can you elaborate on where HCI may become more complex, costly?

A. It comes down to value. You can buy all the components yourself and glue them all together and may come up with a lower total cost, but what is the value of your time? What is the cost of staff time to evaluate, test, deploy and maintain it? The total value must be considered. It’s possible that HCI will be more costly than a disaggregated deployment that separates compute and storage, but this depends heavily on the workload and the specific vendor’s product implementation.

Q. Current HCI “full stack” solutions claim compute and storage convergence, but what about the network? Given the east/west traffic introduced by HCI solutions, what networking solutions should customers be looking at?

A. Most of the common HCI solutions are packaged with server, storage, compute and most have networking included as well—typically the network adapters and sometimes also the switches. Some even have a backend software defined networking (SDN) capability as part of their stack.

Q. Related to HCI answer, what about vendors who allow for storage growth and/or server (compute) and storage additions. This allows for aggregated and dis-aggregated…yes?

A. Most HCI vendors require compute and storage to be added simultaneously, though many support different node types with different ratios of compute and storage. This allows customers to change the ratio of compute and storage by adding different node types. And yes, some HCI vendors also support both a hyper-converged and a disaggregated model, with the disaggregated model allowing compute and storage to be added separately.

Q. What are the tools available to make HCI work in a hybrid load environment, with different workload requirements, e.g.: VDI and Databases?

A. There are tools for moving and migrating applications, workloads, systems and VMs into CI/HCI environments, likewise for tuning, optimizing, gaining insight, analytics and reporting. Most of the CI/HCI solutions have tools built into them for optimizing PACE (Performance, Availability, Capacity, Economics) attributes along with server compute, memory, storage, and I/O resources. Some CI/HCI solutions are optimized for VDI/workspaces, while others are able to support general workloads including databases, and some even support HPC/SC or other specialized workloads.

Q. Does network performance affect HCI or CI performance?

A. Sometimes. Most hybrid HCI nodes are happy with the bandwidth of 10GbE, but if the nodes are all-flash or have many disks, then a faster speed may be required to avoid a network bottleneck. Network latency could affect HCI or CI performance in some cases, especially with all-flash storage. Of course a reliable network helps ensure reliable CI/HCI operations.

Update: If you missed the live event, it’s now available on-demand. You can also download the webcast slides.

Clearing Up Confusion on Common Storage Networking Terms

Do you ever feel a bit confused about common storage networking terms? You’re not alone. At our recent SNIA Ethernet Storage Forum webcast “Everything You Wanted To Know About Storage But Were Too Proud To Ask – Part Mauve,” we had experts from Cisco, Mellanox and NetApp explain the differences between:

  • Channel vs. Busses
  • Control Plane vs. Data Plane
  • Fabric vs. Network

If you missed the live webcast, you can watch it on-demand. As promised, we’re also providing answers to the questions we got during the webcast. Between these questions and the presentation itself, we hope it will help you decode these common, but sometimes confusing terms.

And remember, the “Everything You Wanted To Know About Storage But Were Too Proud To Ask” is a webcast series with a “colorfully-named pod” for each topic we tackle. You can register now for our next webcast: Part Teal, The Buffering Pod, on Feb. 14th.

Q. Why do we have Fibre and Fiber?

A. Fiber Optics is the term used for the optical technology used by Fibre Channel Fabrics. While a common story is that the “Fibre” spelling came about to accommodate the French (FC is, after all, an international standard), in actuality it was a marketing idea to create a more unique name, and in fact it was decided to use the British spelling – “Fibre”.

Q. Will OpenStack change all the rules of the game?

A. Yes. OpenStack is all about centralizing the control plane of many different aspects of infrastructure.

Q. The difference between control and data plane matters only when we discuss software defined storage and software defined networking, not in traditional switching and storage.

A. It matters regardless. You need to understand how much each individual control plane can handle and how many control planes you have from an overall management perspective. In cases where you have too many control planes, SDN and SDS can be a benefit to you.

Q. As I’ve heard that networks use stateless protocols, would FC do the same?

A. Fibre Channel has several different classes, which can be either stateful or stateless. Most applications of Fibre Channel are Class 3, as it is the preferred class for SCSI traffic. A connection between Fibre Channel endpoints is always stateful (as it involves a login process to the Fibre Channel fabric). The transport protocol is augmented by Fibre Channel exchanges, which are managed on a per-hop basis. Retransmissions are handled by devices when exchanges are incomplete or lost, meaning that each exchange is a stateful transmission, but the protocol itself is considered stateless in modern SCSI-transport Fibre Channel.

iSCSI, as a connection-oriented protocol, creates a nexus between an initiator and a target, and is considered stateful. In addition, SMB, NFSv4, FTP, and TCP are stateful protocols, while NFSv2, NFSv3, HTTP, and IP are stateless protocols.

Q. Where do CIFS/SMB come into the picture?

A. CIFS/SMB is part of a network stack. We need to have a separate talk about network stacks and their layers. In this presentation, we were talking primarily about the physical layer of the networks and fabrics. To overly simplify network stacks, there are multiple layers of protocols that run on top of the physical layer. In the case of FC, those protocols include the control plane protocols (such as FC-SW) and the data plane protocols. In FC, the most common data plane protocol is FCP (used by SCSI, FICON, and FC-NVMe). In the case of Ethernet, those protocols also include control plane protocols (such as TCP/IP) and data plane protocols. In Ethernet, there are many commonly used data plane protocols for storage (such as iSCSI, NFS, and CIFS/SMB).

Update: If you missed the live event, it’s now available on-demand. You can also download the webcast slides.

No Shortage of Container Storage Questions

We covered a lot of ground in our recent SNIA Ethernet Storage Forum webcast, “Current State of Storage in the Container World.” We had a technical discussion on why containers are so compelling, how Docker containers work, persistent shared storage, and future considerations for container storage. We received some great questions during the live event, and as promised, here are answers to them all.

Q. Docker cannot be installed on bare metal and requires a base OS to operate upon, right?

A. That is correct.

Q. Does the application code need to be changed so that it can “fit and operate” in a container?

A. No, the application code does not need to change. The challenge most people face when migrating an application to a container is how to maintain the application’s state. One of the motivations for this webcast was to explain how to allow applications within containers to persist data. Hopefully the Docker Volume construct will meet your needs.

Q. Seems like containers share one OS/kernel… That suggests that there is just one OS in the “containerized” server… And yet there is still mention of hypervisor (or at least Hyper-V)… Can you clarify? If the containers share an OS, is a hypervisor needed?

A. You are correct, containers are designed to share a single kernel; therefore a hypervisor is not required to run containers. Having said that, VMware and Microsoft both offer options that run a single container in its own virtual machine (running a minimal operating system).

Q. Can the Docker Hub be compared to something like the GitHub?

A. Yes, that is a great analogy. Docker Hub (hub.docker.com) is to container images as GitHub (github.com) is to source code.
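
The analogy even holds at the command line; pulling an image from Docker Hub looks much like cloning a repository from GitHub (the image and repository names below are just examples):

```
docker pull nginx:latest                        # fetch an image from Docker Hub
git clone https://github.com/nginx/nginx.git    # fetch source code from GitHub
```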

Q. What are the differences between the base and the host image?

A. If you’re referring to the webcast slides: the box labeled “Base Image” is the first layer in an image. The box labeled “Host OS” is not a layer, but represents the hosting operating system (kernel) that is shared by the containers.
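
In Dockerfile terms, the base image is whatever the FROM instruction names, and each subsequent instruction adds a read-only layer on top of it (the image and packages below are illustrative):

```dockerfile
# The base image: the first (bottom) layer of the resulting image.
FROM alpine:3.18
# Each instruction below adds a new read-only layer on top.
RUN apk add --no-cache curl
COPY app.sh /usr/local/bin/app.sh
CMD ["app.sh"]
```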

Q. So there is a separate root per container?

A. In most cases the image will provide a root, so each container will have a separate root. This is made possible by a kernel feature called namespaces. Alternatively, Docker does allow you to share a directory between the host operating system and any number of containers, as shown below.
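
For example, a host directory can be bind-mounted into a container at run time; the container keeps its own root filesystem, and only the mounted path is shared (the paths are illustrative):

```
docker run --rm -v /host/shared:/shared alpine ls /shared
```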

Q. If Deduplication is enabled on the storage LUNs, won’t that affect the performance of the containers?

A. Well implemented data reduction features (compression and deduplication) should have little to no effect on performance and should provide significant benefit by reducing the space required to store containers.

Q. Can you please quickly review the concept of copy-on-write with one or two sentences to boil it down?

A. How copy-on-write works depends on whether the driver is file or block based. For the sake of simplicity, let’s assume a file-based implementation. Since the image layers are read-only, we need an area to store the changes that the container has made. This area is the copy-on-write layer. When a process reads a file that has not been modified, the file is read from one of the read-only layers. When that file is modified and needs to be written back to disk, the new file is written to the copy-on-write layer, as is the metadata that describes the file. The next time this file is read, it is read from the copy-on-write layer. The graph driver is responsible for this functionality, and it varies by implementation.
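
To boil it down further, here is a toy Python model of file-based copy-on-write layering. It is illustrative only; real graph drivers differ considerably in the details:

```python
# A toy model of file-based copy-on-write layering (illustrative only).
class Container:
    def __init__(self, image_layers):
        self.image_layers = image_layers  # read-only layers, topmost first
        self.cow_layer = {}               # per-container writable layer

    def read(self, path):
        # Reads check the copy-on-write layer first, then fall through
        # to the read-only image layers.
        if path in self.cow_layer:
            return self.cow_layer[path]
        for layer in self.image_layers:
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

    def write(self, path, data):
        # Writes never touch the image layers; the modified file lands
        # in the container's copy-on-write layer.
        self.cow_layer[path] = data

base = {"/etc/hostname": "image-default"}
c = Container([base])
print(c.read("/etc/hostname"))   # "image-default" (from the image layer)
c.write("/etc/hostname", "web01")
print(c.read("/etc/hostname"))   # "web01" (from the CoW layer)
print(base["/etc/hostname"])     # image layer is unchanged
```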

Q. Can network locations be used for /data? If yes, how does the Docker Engine manage network authentication for the driver?

A. Yes, network locations can be used. The best practice is to use the Local Volume Driver, where you can pass in the required authentication via the options (see slide 15). Alternatively, the network location can be mounted on the host operating system and exposed to containers (see slides 21 & 22).
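
As a sketch of the first approach, the local volume driver can mount an NFS export with the addressing and mount options passed in at volume-creation time (the address and export path below are illustrative):

```
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=10.0.0.5,rw \
  --opt device=:/exports/data \
  nfsdata

docker run --rm -v nfsdata:/data alpine ls /data
```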

Q. Is this where VAAI like primitives would get implemented?

A. VAAI defines several in-band primitives. The Docker Volume plug-in framework is completely out-of-band. There can be some overlap in features though. For example, the XCOPY primitive can be used to offload ‘copy jobs’ to an array. If the vendor chooses to do so, a ‘copy job’ can be offloaded through the Docker Volume plug-in as well. For example, a plug-in might implement a “clone” option that provides this service.

Q. Could you share some details about Kubernetes storage ? Persistent volumes and the difference from Docker volumes? Also, what is your perspective of Flocker?

A. Kubernetes has the concept of persistent storage. This abstraction is also called a volume. In addition, Kubernetes provides a plug-in option as well. The Kubernetes implementation predates the Docker Volume and is currently not compatible.
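
As a minimal sketch, a Kubernetes PersistentVolumeClaim requests persistent storage declaratively (the name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi   # illustrative capacity request
```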

Q. Comment on mainframe: IBM runs Linux on zSeries, therefore can run Linux Docker containers.

A. Thanks, that’s good to know.

Q. How many operating system changes have there been on the x86 platform? How many on the mainframe platform? Can the x86 architecture run the same code/OS from 40 years ago? Docker on mainframe?

A. The mainframe architecture has been very solid and consistent for many years.

Q. What is a big challenge for storage in a container environment?

A. I don’t think storage has a challenge in the container environment. I think, with a properly implemented Docker Volume Plug-in, storage provides a solution to the persistent shared storage need in a container environment.

Q. Do you ever look into RexRay or VMDK storage drivers?

A. Yes, these are both examples of Docker Volume plug-in implementations.

Update: If you missed the live event, it’s now available on-demand. You can also download the webcast slides.