Processing and Managing Edge Data Q&A

The SNIA Networking Storage Forum (NSF) kicked off our “Storage Life on the Edge” webcast series with a session on managing data from the edge to the cloud and back. We were fortunate to have a panel of experts, Dan Cummins, John Kim, and David McIntyre, explain key considerations when managing and processing data generated at the edge. If you missed this introductory session, it’s available on-demand, along with the presentation slides, in the SNIA Educational Library.

Our presenters spent a good portion of the session answering questions from our live audience. Here are answers to all of them.

Q. Could an application be deployed simultaneously at the near edge, far edge, and functional edge?


Next Live Webcast: NFS 101

Need a primer on NFS? On March 23, 2016, the Ethernet Storage Forum (ESF) will present a live Webcast, “What is NFS? An NFS Primer.” The popular and ubiquitous Network File System (NFS) is a standard protocol that allows applications to store and manage data on a remote computer or server. NFS provides two services: a network part that connects users or clients to a remote system or server, and a file-based view of the data. Together these provide a seamless environment that masks the differences between local files and remote files.
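To make that “seamless” point concrete, here is a tiny illustrative Python snippet (my own sketch, not webcast material; the NFS mount point below is hypothetical). Once an NFS export is mounted, applications use exactly the same file APIs for remote data as for local data:

```python
import os

local_path = "/tmp/example.txt"              # an ordinary local file
remote_path = "/mnt/nfs_share/example.txt"   # a file on a hypothetical NFS mount

for path in (local_path, remote_path):
    with open(path, "w") as f:   # identical open/write calls either way
        f.write("hello\n")
    # identical metadata interface, too; the NFS client does the remote work
    print(path, os.stat(path).st_size)
```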

At this Webcast, Alex McDonald, SNIA ESF Vice Chair, will provide an introductory overview of NFS, geared toward technologists and tech managers interested in understanding:

  • NFS history and development
  • The facilities and services NFS provides
  • Why NFS rose in popularity to dominate file-based services
  • Why NFS continues to be important in the cloud

As always, the Webcast will be live, and Alex and I will be on hand to answer your questions. Register today. We look forward to hearing from you on March 23rd.

Update: If you missed the live event, it’s now available on-demand. You can also download the webcast slides.

New Webcast: Block Storage in the Open Source Cloud called OpenStack

On June 3rd at 10:00 a.m. SNIA-ESF will present its next live Webcast, “Block Storage in the Open Source Cloud called OpenStack.” Storage is a major component of any cloud computing platform, and OpenStack is one of the largest and most widely supported open source cloud computing platforms in the market today. The OpenStack block storage service (Cinder) provides persistent block storage resources that OpenStack Nova compute instances can consume.

I will be moderating this Webcast, which will be presented by a core member of the OpenStack Cinder team, Walt Boring. Join us as we dive into:

  • Relevant components of OpenStack Cinder
  • How block storage is managed by OpenStack
  • What storage protocols are currently supported
  • How it all works together with compute instances
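For readers who want a feel for what “consuming” Cinder storage looks like in practice before the webcast, here is a minimal sketch using the openstacksdk Python library. The cloud name, volume size, and server name are hypothetical, and the exact attach call varies somewhat across SDK releases, so treat this as an outline rather than Walt’s material:

```python
# Sketch: create a Cinder volume and attach it to a Nova instance via
# openstacksdk. "mycloud" is a hypothetical clouds.yaml entry; names and
# sizes are illustrative only.
import openstack

conn = openstack.connect(cloud="mycloud")

# Ask Cinder (the block storage service) for a 10 GiB persistent volume.
volume = conn.block_storage.create_volume(name="demo-vol", size=10)
conn.block_storage.wait_for_status(volume, status="available")

# Attach the volume to an existing Nova compute instance; the guest then
# sees it as a block device it can format and mount. (Attachment call
# signatures differ between SDK versions.)
server = conn.compute.find_server("demo-server")  # hypothetical instance
conn.compute.create_volume_attachment(server, volume_id=volume.id)
```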

I encourage you to register now and block out time on your calendar. This will be a live and interactive Webcast, so please bring your questions. I look forward to “seeing” you on June 3rd.

What’s Happening with 25GbE

In July 2014, IEEE 802.3 voted to form a Study Group for 25Gb/s Ethernet. There has been a lot of attention in the networking press lately about 25Gb/s Ethernet, but many people are asking what it is and how we got here. After all, 802.3 has already completed standards for 40Gb/s and 100Gb/s and is currently working on 400Gb/s, so from a pure speed perspective, starting a 25Gb/s project now does look like a step backwards.

(Warning: the following discussion contains excessive physical layer jargon.)

The Sweet Spot

25GbE as a port speed is attractive because it makes use of 25Gb/s-per-lane signaling technology that has been in development for years in the industry, culminating in the recent completion of 802.3bj, the standard for 100GbE over backplane or twinax copper, which utilizes four parallel lanes of 25Gb/s signaling to achieve the 100Gb/s port speed. Products implementing 25Gb/s signaling in CMOS technology are just starting to come to market, and the rate will likely be a sweet spot for many years, as higher-rate signaling of 40Gb/s or 50Gb/s is still in the early technology development phases. The ability to implement this high-speed I/O in CMOS is important because it allows combining high-speed I/O with the many millions of logic gates needed to implement Ethernet switches, controllers, FPGAs, and microprocessors. Thus, specifying a MAC rate of 25Gb/s to utilize 25Gb/s serdes technology enables product developers to optimize for both the lowest cost per bit and the highest overall bandwidth utilization of the switching fabric.

4-Lane to 1-Lane Evolution

To see how we got here and why 25Gb/s is interesting, it is useful to back up a couple of generations and look at 10Gb/s and 40Gb/s Ethernet. The earliest implementations of 10GbE relied on rather wide parallel electrical interfaces: XGMII and the 16-bit interface. Very soon after, however, 4-lane serdes-based interfaces became the norm, starting with XAUI (for chip-to-chip and chip-to-optical-module use), which was then adapted to longer reaches on twinax and backplane (10GBASE-CX4 and 10GBASE-KX4). Before 10GbE reached higher volumes (~2009), 10Gb/s on a single electrical serial lane had been specified and shown technically feasible: XFI came first, followed by 10GBASE-KR (backplane) and SFI (as an optical module interface and for direct-attach twinax cable using the SFP+ pluggable form factor). KR and SFI started to ramp around 2009 and still account for the highest volume share of 10GbE ports in datacenter applications. The takeaway, in my opinion, is that single-lane interfaces helped the 10GbE volume ramp by reducing interconnect cost.

Now look forward to 40GbE and 100GbE. The initial standard, 802.3ba, was completed in 2010. During the time that this specification was being developed, 10Gb/s serial interfaces were gaining traction, and consensus formed around the use of multiple 10Gb/s lanes in parallel to make the 40GbE and 100GbE electrical interfaces. For example, there is a great similarity between 10GBASE-KR and one lane of the 40GBASE-KR4 four-lane interface. In a similar fashion, 10Gb/s SFI for twinax and optics in the SFP+ form factor is similar to a lane of the 40GbE equivalent interfaces for twinax and optics in the QSFP+ form factor.

But how does this get to 25Gb/s?

Due to the similarity in the technology needed for 10GbE and 40GbE, it has become a common feature in Ethernet switch and NIC chips to implement a four-lane port for 40GbE that can be configured to use each lane separately, yielding four 10GbE ports.

From there it is a natural extension that 100GbE ports implemented using 802.3bj technology (4x25Gb/s) can also be configured to support four independent ports operating at 25Gb/s. This is such a natural conclusion that multiple companies are implementing 25GbE even though it is not yet a standard.
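To make the lane arithmetic concrete, here is a small illustrative Python model (my own sketch, not any vendor’s configuration API) of a four-lane port group running either as one aggregate port or broken out into four independent ports:

```python
from dataclasses import dataclass

@dataclass
class PortGroup:
    """A group of serdes lanes that can form one port or be broken out."""
    lanes: int           # number of electrical lanes (typically 4)
    lane_rate_gbps: int  # signaling rate per lane

    def aggregate_port(self) -> int:
        # All lanes bonded into a single MAC: 4 x 10 = 40GbE, 4 x 25 = 100GbE
        return self.lanes * self.lane_rate_gbps

    def breakout_ports(self) -> list:
        # Each lane driven as an independent port: 4 x 10GbE or 4 x 25GbE
        return [self.lane_rate_gbps] * self.lanes

print(PortGroup(4, 10).aggregate_port(), PortGroup(4, 10).breakout_ports())
# 40 [10, 10, 10, 10]
print(PortGroup(4, 25).aggregate_port(), PortGroup(4, 25).breakout_ports())
# 100 [25, 25, 25, 25]
```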

In some environments, the existence of a standard is not a priority. For example, when a large-scale datacenter of compute, storage and networking is architected, owned, and operated by one entity, that entity validates the necessary configurations to meet its requirements. For the broader market, however, there is typically a requirement for multi-vendor interoperability across a diverse set of configurations and uses. This is where Ethernet and IEEE 802.3 have provided value to the industry for over 30 years.

Where’s the Application?

Given the nature of their environment, cloud datacenter operators are poised to be the early adopters of 25GbE. Will it also find a home in more traditional enterprise and storage markets? Time will tell, but in many environments ease of use, long shelf life, and multi-vendor interoperability are the priorities. For any environment, having the 25GbE specification maintained by IEEE 802.3 will facilitate those needs.

Ethernet is the right fit for the Software Defined Data Center

“Software Defined” is a label used to describe advances in network and storage virtualization that promise to greatly improve infrastructure management and accelerate business agility. Network virtualization itself isn’t a new concept and has been around in various forms for some time (think VLANs). But the commercialization of server virtualization seems to have paved the path to extend virtualization throughout the data center infrastructure, making the data center an IT environment that delivers dynamic and even self-deployed services. The networking stack has been getting most of the recent buzz, and I’ll focus on that portion of the infrastructure here.

What is driving this trend in data networking? As I mentioned, server virtualization has a lot to do with it. Virtualizing applications makes a lot of things better, and makes some things more complicated. Server virtualization enables you to achieve much higher application density in your data center. Instead of a one-to-one relationship between the application and the server, you can host tens of applications on the same physical server. This is great news for data centers that run into space limitations or for businesses looking for greater efficiency from their existing hardware.

The challenge, however, is that these applications aren’t stationary. They can move from one physical server to another, and this mobility can add complications for the networking team. Networks must be aware of virtual machines in ways that they never had to be aware of physical servers. For network admins of yesteryear, their domain was a black box of “innies” and “outies.” Gone are the days of “set it and forget it” in terms of networking devices. Or are they?

Software defined networks (aka SDN) promise to greatly simplify the network environment. By decoupling the control plane from the data plane, SDN allows administrators to treat a collection of networking devices as a single entity and then use policies to configure and deploy networking resources more dynamically. Additionally, moving to a software defined infrastructure means that you can move control and management of physical devices to different applications within the infrastructure, which gives you the flexibility to launch and deploy virtual infrastructures in a more agile way.
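To give a flavor of what that decoupled control plane looks like in code, below is a minimal sketch using the open-source Ryu SDN framework (my choice for illustration; the post doesn’t endorse any particular controller). One central application installs the same policy, here a table-miss rule that punts unmatched packets to the controller, on every OpenFlow 1.3 switch that connects:

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class SimplePolicy(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath          # the switch that just connected
        ofp = dp.ofproto
        parser = dp.ofproto_parser
        # Table-miss entry: send any unmatched packet to the controller,
        # where centralized policy decides what to do with it.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```

Run under ryu-manager, these few lines configure every switch that attaches, which is exactly the “collection of devices treated as a single entity” idea.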

Software defined networks aren’t limited to a specific physical transport. The theory, and I believe the implementations, will be universal in concept. However, the more consistently hardware can be deployed, the greater the flexibility for the enterprise. As server virtualization becomes the norm, servers hosting applications with mixed protocol needs (block and file) will be more common. In this scenario, Ethernet networks offer advantages, especially as software defined networks come into play. Following is a list of some of the benefits of Ethernet in a software defined network environment.

Ubiquitous

Ethernet is a very familiar technology and is present in almost every compute and mobile device in an enterprise. From IP telephony to mobile devices, Ethernet is a commonly deployed networking standard and, as a result, is very cost-effective. The number of devices and engineering resources focused on Ethernet drives the economics in its favor.

Compatibility

Ethernet has been around for a long time and has proven to “just work.” Interoperability is really a non-issue, and this extends to inter-vendor interoperability. Some other networking technologies require same-vendor components throughout the data path; that is not the case with Ethernet. With rare exceptions, you can mix and match switch and adapter devices within the same infrastructure. Obviously, best practices suggest that a single vendor within the switch infrastructure simplifies the environment with a common set of management tools, features, and support plans. But that might change with advances in SDN.

Highly Scalable

Ethernet is massively scalable. The use of routing technology allows for broad geographic networks. The recent adoption of IPv6 extends IP addressing well beyond what is conceivable at this point in time. As we enter the “Internet of Things” period in IT history, we will not lack for network scale. At least in theory.

Overlay Networks

Overlay networks allow you to extend L2 networks beyond traditional geographic boundaries, as with hybrid clouds. Two proposed standards are under review by the Internet Engineering Task Force (IETF): Virtual eXtensible Local Area Networks (VXLAN) from VMware and Network Virtualization using Generic Routing Encapsulation (NVGRE) from Microsoft. Both combine L2 and L3 technologies to carry L2 traffic across an L3 network. You can think of overlay networks as essentially a generalization of a VLAN. Unlike with routing, overlay networks permit you to retain visibility and accessibility of your L2 network across larger geographies.
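As a concrete illustration of the encapsulation involved, here is a short Python sketch that builds the 8-byte VXLAN header (the format later published as RFC 7348); the VNI value and inner frame are placeholders:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: a flags byte with the I bit set,
    24 reserved bits, the 24-bit VNI, and 8 more reserved bits."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # 'I' flag: the VNI field is valid
    return struct.pack("!II", flags << 24, vni << 8)

inner_frame = b"..."  # placeholder for the original L2 (Ethernet) frame
vxlan_packet = vxlan_header(5001) + inner_frame
# The result rides inside an ordinary UDP/IP packet across the L3 underlay,
# which is how the L2 segment crosses geographic boundaries.
```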

Unified Protocol Access

Ethernet has the ability to support mixed storage protocols, including iSCSI, FCoE, NFS, and CIFS/SMB. Mixed or unified environments can be deployed more efficiently using 10 Gigabit Ethernet (10GbE) and Data Center Bridging (required for FCoE traffic), as IP and FCoE traffic can share the same ports. 10GbE simplifies network deployment: the data center can be wired once, and protocols can be reconfigured in software rather than through hardware changes.

Virtualization

Ethernet does very well in virtualized environments. IP addresses can easily be abstracted from physical ports to facilitate port mobility. As a result, networks built on an Ethernet infrastructure that leverage network virtualization can benefit from increased flexibility and uptime, as hardware can be serviced or upgraded while applications remain online.

Roadmap

For years, Ethernet has steadily increased in performance, though the transition from Gigabit Ethernet to 10 Gigabit Ethernet was a slow one, and delays in connector standards complicated matters. But those days are over: the roadmap remains robust and product advances are accelerating. We are starting to see 40GbE devices on the market today and will see 100GbE devices in the near future. As more and more data traffic is consolidated onto a shared infrastructure, these performance increases will provide the headroom for more efficient infrastructure deployments.

Some of the benefits listed above can be found in other networking technologies. But Ethernet offers a unique combination of technology and economic value across a broad ecosystem of vendors, making it an ideal infrastructure for next-generation data centers. And as these data centers are designed more and more around application services, software will lead the conversation. To enable the vision of a software defined infrastructure, there is no better network technology than Ethernet.

Ethernet and IP Storage – Today’s Technology Enabling Next Generation Data Centers

I continue to believe that IP based storage protocols will be preferred for future data center deployments. The future of IT is pointing to cloud based architectures, whether internal or external. At the core of the cloud is virtualization. And I believe that Ethernet and IP storage protocols offer the greatest overall value to unlock the potential of virtualization and clouds. Will other storage network technologies work? Of course. But, I’m not talking about whether a network “works”. I’m suggesting that a converged network environment with Ethernet and IP storage offers the best combined value for virtual environments and cloud deployments. I’ve written and spoken about this topic before. And I will likely continue to do so. So, let me mention a few reasons to choose IP storage, iSCSI or NAS, for use in cloud environments.

Mobility. One of the many benefits of server virtualization is the ability to non-disruptively migrate applications from one physical server to another to support load balancing, failover or redundancy, and servicing or updating of hardware. The ability to migrate applications is best achieved with networked storage, since the data doesn’t have to move when a virtual machine (VM) moves. But the network needs to maintain connectivity to the fabric when a VM moves. Ethernet offers a network technology capable of migrating or reassigning network addresses, in this case IP addresses, from one physical device to another. When a VM moves to another physical server, the IP addresses move with it. IP based storage, such as iSCSI, leverages the built-in capabilities of TCP/IP over Ethernet to migrate network port addresses without interruption to applications.
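As an illustration of the mechanism, a host taking over an IP address typically announces the move with a gratuitous ARP so that switches and peers update their tables. Here is a hedged sketch using the Python scapy library (the addresses and interface are hypothetical, and sending raw frames requires elevated privileges):

```python
from scapy.all import ARP, Ether, sendp

def announce_ip(ip: str, mac: str, iface: str = "eth0") -> None:
    """Broadcast a gratuitous ARP so the network learns the IP's new home."""
    garp = Ether(dst="ff:ff:ff:ff:ff:ff", src=mac) / ARP(
        op=2,               # ARP reply ("is-at")
        psrc=ip, pdst=ip,   # sender and target IP both set to the moved address
        hwsrc=mac, hwdst="ff:ff:ff:ff:ff:ff",
    )
    sendp(garp, iface=iface, verbose=False)

announce_ip("192.0.2.10", "02:42:ac:11:00:02")  # example values only
```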

Flexibility. Most data centers require a mixture of applications that access either file or block data. With server virtualization, it is likely that you’ll require access to file and block data types on the same physical server for either the guest or parent OS. The ability to use a common network infrastructure for both the guest and parent can reduce cost and simplify management. Ethernet offers support for multiple storage protocols. In addition to iSCSI, Ethernet supports NFS and CIFS/SMB resulting in greater choice to optimize application performance within your budget. FCoE is also supported on an enhanced 10Gb Ethernet network to offer access to an existing FC infrastructure. The added flexibility to interface with existing SAN resources enhances the value of 10Gb as a long-term networking solution.

Performance. Cost. Ubiquity. Other factors that enhance Ethernet storage, and therefore IP storage adoption, include a robust roadmap, favorable economics, and near-universal adoption. The Ethernet roadmap includes 40Gb and 100Gb speeds, which will support storage traffic and be capable of addressing any foreseeable application requirements. Ethernet today offers considerable economic value as port prices continue to drop. Although Gb speeds offer sufficient bandwidth for most business applications, the cost per Gb of bandwidth with 10 Gigabit Ethernet (10GbE) is now lower than with GbE, and 10GbE therefore offers upside in cost and efficiency. Finally, nearly all new digital devices, including mobile phones, cameras, laptops, servers, and even some home appliances, are being offered with WiFi or wired Ethernet connectivity. Consolidating onto a single network technology means that the networking infrastructure to the rest of the world is essentially already deployed. How good is that?

Some may view moving to a shared network as kind of scary. The concerns are real. But Ethernet has been a shared networking platform for decades and continues to offer enhanced features, performance, and security to address its expanding range of applications. And just because it can carry other traffic doesn’t mean that it must. Physical isolation of Ethernet networks is just as feasible as with any other networking technology, and some may choose this option. Regardless, selecting a single network technology, even if not shared across all applications, can reduce not only capital expense but also operational expense. Your IT personnel can be trained on a single networking technology rather than on multiple specialized, single-purpose networks. You may even be able to reduce maintenance and inventory costs to boot.

Customers looking to architect their network and storage infrastructure for today and the future would do well to consider Ethernet and IP storage protocols. The advantages are pretty compelling.

Five Reasons to Deploy iSCSI Storage in the Cloud

IT as a Service (ITaaS), or cloud services (“clouds”), was one of the buzz topics at SNW this spring. Industry groups, such as SNIA’s Cloud Storage Initiative, are beginning to address the standards, policies, and marketing messages that help define what ITaaS and clouds are.

Whatever the definition, “cloud” technologies appear to be ahead of attempts to describe them. In fact, many customers are deploying cloud solutions today; a few obvious examples include the online email and CRM solutions that have been available for several years. Enabling these cloud offerings are technologies that are very complementary, namely virtualization in its various forms, and Ethernet or IP networks. For this article, I’ll put in a specific plug for iSCSI.

iSCSI plays well in the Cloud

I described what I think are the top five requirements for cloud deployments, and how well iSCSI addresses them, in an article published in SNS Europe magazine in February 2010. You can read the article online here. I’ll describe these five requirements briefly: cost, performance, security, scalability, and data mobility.

Cost: As a cloud service provider, the cost of goods for your services is essentially your IT infrastructure, and keeping these costs low represents a competitive advantage. One of the ways to reduce cost is to move to higher-volume, and therefore lower-cost, components. Ethernet’s economies of scale deliver the lowest-cost networking infrastructure in terms of both capital and operating expense, thanks to its extensive deployment across all industries and its simplified management.

Performance: Your cloud environment needs to scale in performance to meet the demands of a growing customer base. Ethernet offers a variety of means and price points to address performance. Gigabit Ethernet, with the addition of port bonding or teaming, offers simple and cost-effective scalability sufficient for most business applications. 10 Gigabit Ethernet is now being deployed more readily since price points have dropped below the $500-per-port range. We’ll soon see 10 Gigabit ports standard on server motherboards, which will offer significant increases in network bandwidth with fewer ports and cables to contend with.

Security: Because Ethernet was developed as a general-purpose network, efforts were made to support data security in mixed-traffic environments. The TCP/IP stack includes security protocols, such as CHAP and IPsec, to address these requirements, and these protocols extend to storage traffic as well.
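For the curious, the CHAP handshake that iSCSI borrows is simple to sketch: the initiator proves knowledge of a shared secret by hashing it together with a server-supplied challenge (RFC 1994), so the secret itself never crosses the wire. The identifier, secret, and challenge values below are illustrative:

```python
import hashlib

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response per RFC 1994: MD5 over identifier || secret || challenge."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# The target sends an identifier and a random challenge; the initiator
# answers with the hash, which the target verifies against its own copy.
resp = chap_response(0x01, b"shared-secret", bytes.fromhex("0011aabbccdd"))
print(resp.hex())
```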

Scalability: Scalability can be described in many ways. I have already referred to performance scalability, but scalability also refers to geography. IP networks span the globe and offer the capabilities needed to deliver IT services to customers in diverse geographies, which is at the heart of cloud services. The inherent ability to route data traffic offers some additional advantages for storage.

Data Mobility: One of the features of IP networks that I believe is particularly well suited to clouds is virtual IP addressing. IP addresses can move from physical port to physical port, allowing you to migrate network connectivity easily as you migrate other virtual objects, such as virtual servers. As a result, IP based storage protocols, such as iSCSI, are particularly well suited to highly virtualized cloud environments.

IP Networks for the Data Center

As the evolution of the data center continues to deliver dynamic and highly virtualized services, we will see that Ethernet storage networks, including iSCSI, will deliver the value required to make cloud service providers successful. IP networks offer the economics, performance, security, scalability, and mobility required for the current generation and next generation data center.

And for more on this topic, check out this webinar http://www.brighttalk.com/webcast/23778.