Software Defined Storage Q&A

The SNIA Networking Storage Forum (NSF) recently hosted a live webcast, "What Software Defined Storage Means for Storage Networking," where our experts, Ted Vojnovich and Fred Bower, explained what makes software defined storage (SDS) different from traditional storage. If you missed the live event, you can watch it on-demand at your convenience. We had several questions at the live event, and here are our experts' answers to them all:

Q. Are there cases where SDS can still work with legacy storage, so that high-priority flows such as online transaction processing (OLTP) use the SAN on legacy storage, while lower-priority and backup data flows utilize the SDS infrastructure?

A.  The simple answer is yes. Like anything else, companies are using different methods and architectures to resolve their compute and storage requirements, just as public cloud may be used for some non-sensitive data while an in-house cloud or traditional storage holds sensitive data. Of course, this adds cost, so the benefits need to be weighed against the additional expense.

Q. What is the best way to mitigate unpredictable network latency that can go out of the bounds of a storage required service level agreement (SLA)?

A.  There are several ways to mitigate latency. Generally speaking, increased bandwidth contributes to better network speed because the "pipe" is essentially larger and more data can travel through it. There are other means as well to reduce latency, such as the use of offloads and accelerators. Remote Direct Memory Access (RDMA) is one of these and is being used by many storage companies to help handle the increased capacity and bandwidth needed in Flash storage environments. Edge computing should also be added to this list, as it relocates key data processing and access points from the center of the network to the edge, where data can be gathered and delivered more efficiently.

Q. Can you please elaborate on SDS scaling in comparison with traditional storage?

A.  Most SDS solutions are designed to scale-out both performance and capacity to avoid bottlenecks whereas most traditional storage has always had limited scalability, scaling up in capacity only. This is because as a scale-up storage system begins to reach capacity, the controller becomes saturated and performance suffers. The workaround for this problem with traditional storage is to upgrade the storage controller or purchase more arrays, which can often lead to unproductive and hard to manage silos.
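As a rough sketch of that difference, consider the two growth models side by side. All numbers below are hypothetical, chosen only to show the shape of the tradeoff, not taken from any real product:

```python
# Illustrative scale-up vs. scale-out comparison (hypothetical figures).

def scale_up(shelves, controller_gbps=10, tb_per_shelf=50):
    """Scale-up: adding shelves grows capacity, but every I/O still
    funnels through the same controller, which caps throughput."""
    return {"capacity_tb": shelves * tb_per_shelf,
            "max_throughput_gbps": controller_gbps}

def scale_out(nodes, gbps_per_node=5, tb_per_node=50):
    """Scale-out: each node contributes both capacity and throughput."""
    return {"capacity_tb": nodes * tb_per_node,
            "max_throughput_gbps": nodes * gbps_per_node}

print(scale_up(8))   # capacity grows 8x, throughput stays at 10 Gbps
print(scale_out(8))  # capacity and throughput both grow 8x
```

Growing a scale-up system from one shelf to eight multiplies capacity but leaves peak throughput flat, which is exactly the controller-saturation effect described above.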

Q. You didn’t talk much about distributed storage management and namespaces (i.e. NFS or AFS)?

A.  Storage management consists of monitoring and maintaining storage health, platform health, and drive health. It also includes storage provisioning, such as creating each LUN/share/etc., or binding LUNs to controllers and servers. On top of that, storage management involves storage services like disk groups, snapshots, dedupe, replication, etc. This is true for both SDS and traditional storage (Converged Infrastructure and Hyper-Converged Infrastructure will leverage this ability in storage). NFS is predominantly a non-Windows (Linux, Unix, VMware) file storage protocol, while AFS is no longer popular in the data center and has been replaced as a file storage protocol by either NFS or SMB (in fact, it's been a long time since somebody mentioned "AFS").

Q. How does SDS affect storage networking? Are SAN vendors going to lose customers?

A. SAN vendors aren't going anywhere because of the large existing installed base, which isn't going quietly into the night. Most SDS solutions focus on Ethernet connectivity (as the webcast diagrams showed), while traditional storage is split between Fibre Channel and Ethernet; InfiniBand is more of a niche storage play for HPC and some AI or machine learning customers.

Q. Storage costs for SDS are highly dependent on scale and replication or erasure code. An erasure coded multi-petabyte solution can be significantly less than a traditional storage solution.

A.  It's a tradeoff between processing complexity and the cost of additional space. Erasure coding is processing intensive but requires less storage capacity. Making copies uses less processing power but consumes more capacity; replicating copies also uses more network bandwidth. Erasure coding tends to be used more often for storage of large objects or files, and less often for latency-sensitive block storage.
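A quick back-of-the-envelope sketch of that tradeoff. The specific configurations (3-way replication and an 8+2 erasure code) are illustrative choices, but both survive the loss of any two drives or nodes:

```python
# Raw storage consumed per usable byte under each protection scheme.

def replication_raw_per_usable(copies):
    """n full copies: raw/usable ratio is simply n."""
    return float(copies)

def erasure_raw_per_usable(data_shards, parity_shards):
    """k data + m parity shards: raw/usable ratio is (k + m) / k."""
    return (data_shards + parity_shards) / data_shards

print(replication_raw_per_usable(3))  # 3.0  -> 200% overhead
print(erasure_raw_per_usable(8, 2))   # 1.25 ->  25% overhead
```

At multi-petabyte scale, the gap between 200% and 25% overhead is exactly why the erasure-coded solution in the question can come out significantly cheaper, provided the workload can tolerate the extra encode/decode processing.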

If you have more questions on SDS, let us know in the comment box.

What Does Software Defined Storage Mean for Storage Networking?

Software defined storage (SDS) is growing in popularity in both cloud and enterprise accounts. But why is it appealing to some customers and what is the impact on storage networking? Find out at our SNIA Networking Storage Forum webcast on October 22, 2019 “What Software Defined Storage Means for Storage Networking” where our experts will discuss:

  • What makes SDS different from traditional storage arrays?
  • Does SDS have different networking requirements than traditional storage appliances?
  • Does SDS really save money?
  • Does SDS support block, file and object storage access?
  • How is data availability managed in SDS vs. traditional storage?
  • What are potential issues when deploying SDS?

Register today to save your spot on Oct. 22nd. This event is live, so as always, our SNIA experts will be on hand to answer your questions.

Too Proud to Ask Webcast Series Opens Pandora’s Box – Storage Management

Storage can be something of a “black box,” a monolithic entity that is at once mysterious and scary. That’s why we created “The Everything You Wanted To Know About Storage But Were Too Proud to Ask” webcast series. So far, we’ve explored various and sundry aspects of storage, focusing on “the naming of the parts.” Our goal has been to break down some of the components of storage and explain how they fit into the greater whole. Read More

Software Defined Networks for SANs?

Previously, I’ve blogged about the VN2VN (virtual node to virtual node) technology coming with the new T11-FC-BB6 specification. In a nutshell, VN2VN enables an “all Ethernet” FCoE network, eliminating the requirement for an expensive Fibre Channel Forwarding (FCF) enabled switch. VN2VN dramatically lowers the barrier of entry for deploying FCoE. Host software is available to support VN2VN, but so far only one major SAN vendor supports VN2VN today. The ecosystem is coming, but are there more immediate alternatives for deploying FCoE without an FCF-enabled switch or VN2VN-enabled target SANs? The answer is that full FC-BB5 FCF services could be provided today by using Software Defined Networking (SDN) in conjunction with standard DCB-enabled switches, essentially implementing those services in host-based software running in a virtual machine on the network. This would be an alternative “all Ethernet” storage network supporting Fibre Channel protocols. Just such an approach was presented at SNIA’s Storage Developers Conference 2013 by Stuart Berman, Chief Executive Officer of Jeda Networks, in a presentation entitled “Software-Defined Network Technology and the Future of Storage.” (Note that neither approach is relevant to SAN networks using Fibre Channel HBAs, cables, and switches.)

Interest in SDN is spreading like wildfire. Several pioneering companies have released solutions for at least parts of the SDN puzzle, but kerosene hit the wildfire with the $1B acquisition of Nicira by VMware. Now a flood of companies are pursuing an SDN approach to everything from wide area networks to firewalls to wireless networks. Applying SDN technology to storage, or more specifically to Storage Area Networks, is an interesting next step. See Jason Blosil’s blog below, “Ethernet is the right fit for the Software Defined Data Center.”

To review, an SDN abstracts the network switch control plane from the physical hardware. This abstraction is implemented by a software controller, which can be a virtual appliance or virtual machine hosted in a virtualized environment, e.g., a VMware ESXi host. The benefits are many: the abstraction is often behaviorally consistent with the network being virtualized but simpler to manipulate and manage for a user. The SDN controller can automate the numerous configuration steps needed to set up a network, reducing the number of touch points required by a network engineer. The SDN controller is also network speed agnostic, i.e., it can operate over a 10Gbps Ethernet fabric and seamlessly transition to operate over a 100Gbps Ethernet fabric. And finally, the SDN controller can be given far greater CPU and memory resources in the host virtual server, scaling to a much greater magnitude than the control planes in switches, which are powered by relatively low-powered processors.

So why would you apply SDN to a SAN? One reason is SSD technology; storage arrays based on SSDs move the bandwidth bottleneck for the first time in recent memory into the network. An SSD array can load several 10Gbps links, overwhelming many 10G Ethernet fabrics. Applying a Storage SDN to an Ethernet fabric and removing the tight coupling of speed of the switch with the storage control plane will accelerate adoption of higher speed Ethernet fabrics. This will in turn move the network bandwidth bottleneck back into the storage array, where it belongs.
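To put some rough numbers behind the claim that an SSD array can load several 10Gbps links, here is a small arithmetic sketch. The drive count and per-SSD throughput are assumed figures for illustration, not measurements of any particular array:

```python
# Hypothetical: how quickly an SSD array saturates 10GbE links.
ssd_mb_s = 500    # assumed sequential throughput of one SATA SSD, MB/s
ssd_count = 24    # assumed number of drives in the array
link_gbps = 10    # one 10GbE link

# Aggregate array throughput, converted from MB/s to Gbps.
array_gbps = ssd_count * ssd_mb_s * 8 / 1000   # 96.0 Gbps
links_needed = array_gbps / link_gbps          # ~10 links

print(f"{array_gbps} Gbps aggregate, ~{links_needed:.0f} x 10GbE links")
```

Even with these modest per-drive assumptions, one shelf of SSDs can swamp an entire 10GbE fabric, which is the bottleneck shift described above.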

Another reason to apply SDN to Storage Networks is to help move certain application workloads into the Cloud. As compute resources increase in speed and consolidate, workloads require deterministic bandwidth, IOPS and/or resiliency metrics which have not been well served by Cloud infrastructures. Storage SDNs would apply enterprise-level SAN best practices to the Cloud, enabling the migration of some of these applications and thereby increasing the revenue opportunities of the Cloud providers. The ability to provide a highly resilient, high performance, SLA-capable Cloud service is a large market opportunity that is not cost-effectively realizable with today’s technologies.

So how can SDN technology be applied to the SAN? The most viable candidate would be to leverage a Fibre Channel over Ethernet (FCoE) network. An FCoE network already converges a high performance SAN with the Ethernet LAN. FCoE is a lightweight and efficient protocol that implements flow control in the switch hardware, as long as the switch supports Data Center Bridging (DCB). There are plenty of standard “physical” DCB-enabled Ethernet switches to choose from, so a Storage SDN would give the network engineer freedom of choice. An FCoE-based SDN would create a single unified, converged, and abstracted SAN fabric. To create this Storage SDN you would need to extract and abstract the FCoE control plane from the switch, removing any dependency on a physical FCF. This would include the critical global SAN services such as the Name Server table, the Zoning table, and State Change Notification. In addition to hosting the global SAN services, the Storage SDN would also have to communicate with initiators and targets, something a generic SDN controller does not do. Since FCoE is a network-centric technology, i.e., configuration is performed from the network, a Storage SDN can automate large SANs from a single appliance. The Storage SDN should be able to create deterministic, end-to-end Ethernet fabric paths thanks to the global view of the network that an SDN controller typically has.

A Storage SDN would also be network speed agnostic; since Ethernet switches already support 10Gbps, 40Gbps, and 100Gbps, this would enable extremely fast SANs not currently attainable. Imagine the workloads, applications, and consolidation of physical infrastructure possible with a 100Gbps Storage SDN SAN, all controlled by a software FCoE virtual server connecting thousands of servers with terabytes of SSD storage. SDN technology is bursting with solutions around LAN traffic; now we need to tie in the SAN and keep it as non-proprietary to the hardware as possible.

Ethernet is the right fit for the Software Defined Data Center

“Software Defined” is a label being used to describe advances in network and storage virtualization that promise to greatly improve infrastructure management and accelerate business agility. Network virtualization itself isn’t a new concept and has been around in various forms for some time (think VLANs). But the commercialization of server virtualization seems to have paved the path to extend virtualization throughout the data center infrastructure, making the data center an IT environment delivering dynamic and even self-deployed services. The networking stack has been getting most of the recent buzz and I’ll focus on that portion of the infrastructure here.

What is driving this trend in data networking? As I mentioned, server virtualization has a lot to do with the new trend. Virtualizing applications makes a lot of things better, and makes some things more complicated. Server virtualization enables you to achieve much higher application density in your data center. Instead of a one-to-one relationship between the application and server, you can host tens of applications on the same physical server. This is great news for data centers that run into space limitations or for businesses looking for greater efficiency out of their existing hardware.

The challenge, however, is that these applications aren’t stationary. They can move from one physical server to another. And this mobility can add complications for the networking guys. Networks must be aware of virtual machines in ways that they don’t have to be aware of physical servers. For network admins of yesteryear, their domain was a black box of “innies” and “outies”. Gone are the days of “set it and forget it” in terms of networking devices. Or is it?

Software defined networks (aka SDN) promise to greatly simplify the network environment. By decoupling the control plane from the data plane, SDN allows administrators to treat a collection of networking devices as a single entity and can then use policies to configure and deploy networking resources more dynamically. Additionally, moving to a software defined infrastructure means that you can move control and management of physical devices to different applications within the infrastructure, which give you flexibility to launch and deploy virtual infrastructures in a more agile way.

Software defined networks aren’t limited to a specific physical transport. The theory, and I believe the implementation, will be universal in concept. However, the more that hardware can be deployed in a consistent manner, the greater the flexibility for the enterprise. As server virtualization becomes the norm, servers hosting applications with mixed protocol needs (block and file) will be more common. In this scenario, Ethernet networks offer advantages, especially as software defined networks come into play. Following is a list of some of the benefits of Ethernet in a software defined network environment.

Ubiquitous

Ethernet is a very familiar technology and is present in almost every compute and mobile device in an enterprise. From IP telephony to mobile devices, Ethernet is a networking standard commonly deployed and as a result, is very cost effective. The number of devices and engineering resources focused on Ethernet drives the economics in favor of Ethernet.

Compatibility

Ethernet has been around for so long and has proven to “just work.” Interoperability is really a non-issue, and this extends to inter-vendor interoperability. Some other networking technologies require same-vendor components throughout the data path. Not the case with Ethernet. With rare exceptions, you can mix and match switch and adapter devices within the same infrastructure. Obviously, best practices would suggest that a single vendor within the switch infrastructure would simplify the environment with a common set of management tools, features, and support plans. But that might change with advances in SDN.

Highly Scalable

Ethernet is massively scalable. The use of routing technology allows for broad geographic networks. The recent adoption of IPv6 extends IP addressing way beyond what is conceivable at this point in time. As we enter the “internet of things” period in IT history, we will not lack for network scale. At least, in theory.

Overlay Networks

Overlay networks combine L2 and L3 technologies to extend an L2 network beyond traditional geographic boundaries, as with hybrid clouds. Two proposed standards are under review by the Internet Engineering Task Force (IETF): Virtual eXtensible Local Area Networks (VXLAN) from VMware and Network Virtualization using Generic Routing Encapsulation (NVGRE) from Microsoft. You can think of overlay networks as essentially a generalization of a VLAN. Unlike with routing, overlay networks permit you to retain visibility and accessibility of your L2 network across larger geographies.
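One practical detail worth knowing: tunneling adds a fixed per-frame cost on the underlay. For VXLAN with IPv4 outer headers and no outer VLAN tag, the commonly cited figure is 50 bytes, which this small sketch tallies:

```python
# VXLAN encapsulation overhead per frame (IPv4 outer headers, no VLAN tag).
outer_eth = 14    # outer Ethernet header
outer_ip  = 20    # outer IPv4 header
outer_udp = 8     # outer UDP header
vxlan_hdr = 8     # VXLAN header, including the 24-bit VNI
overhead  = outer_eth + outer_ip + outer_udp + vxlan_hdr   # 50 bytes

# To carry a standard 1500-byte inner frame without fragmentation,
# the underlay MTU must be at least:
print(1500 + overhead)  # 1550
```

This is why deployments that run overlays across a data center fabric typically raise the underlay MTU (e.g., with jumbo frames) rather than shrink the MTU seen by the virtual machines.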

Unified Protocol Access

Ethernet has the ability to support mixed storage protocols, including iSCSI, FCoE, NFS, and CIFS/SMB. Support for mixed or unified environments can be more efficiently deployed using 10 Gigabit Ethernet (10GbE) and Data Center Bridging (required for FCoE traffic) as IP and FCoE traffic can share the same ports. 10GbE simplifies network deployment as the data center can be wired once and protocols can be reconfigured with software, rather than hardware changes.

Virtualization

Ethernet does very well in virtualized environments. IP addresses can easily be abstracted from physical ports to facilitate port mobility. As a result, networks built on an Ethernet infrastructure leveraging network virtualization can benefit from increased flexibility and uptime, as hardware can be serviced or upgraded while applications remain online.

Roadmap

For years, Ethernet has steadily increased in performance, but the transition from Gigabit Ethernet to 10 Gigabit Ethernet was a slow one. Delays in connector standards complicated matters. But those days are over: the roadmap remains robust and product advances are accelerating. We are starting to see 40GbE devices on the market today, and will see 100GbE devices in the near future. As more and more data traffic is consolidated onto a shared infrastructure, these performance increases will provide the headroom for more efficient infrastructure deployments.

Some of the benefits listed above can be found with other networking technologies. But, Ethernet technology offers a unique combination of technology and economic value across a broad ecosystem of vendors that make it an ideal infrastructure for next generation data centers. And as these data centers are designed more and more around application services, software will be the lead conversation. To enable the vision of a software defined infrastructure, there is no better network technology than Ethernet.