Ethernet Storage Market Momentum Continues in First Half of 2010

The inexorable growth of the market for Ethernet storage continued in the first half of 2010 – in fact we’re getting very close to Ethernet storage being the majority of networked storage in the Enterprise.

According to IDC’s recent Q2 2010 Worldwide Storage Systems Hardware Tracker, Ethernet Storage (NAS plus iSCSI) revenue market share climbed to 45%, up from 39% in 2009, 32% in 2008 and 28% in 2007, as shown below.

            2007    2008    2009    Q2 2010
FC SAN       72%     68%     61%      55%
iSCSI SAN     6%     10%     13%      15%
NAS          22%     29%     26%      30%

In terms of capacity market share, we have already seen the crossover point, with Ethernet Storage at 52% of the total PB shipped, up from 47% in 2009, 42% in 2008 and 37% in 2007, as shown in the following table.

            2007    2008    2009    Q2 2010
FC SAN       62%     58%     53%      48%
iSCSI SAN     8%     13%     15%      18%
NAS          29%     29%     32%      34%

iSCSI over DCB: RELIABILITY AND PREDICTABLE PERFORMANCE

by Gary Gumanow

Welcome back. Following up on the previous blogpost on iSCSI over DCB, this post highlights some of the benefits that DCB can deliver.

DCB extends Ethernet by providing a network infrastructure that virtually eliminates packet loss, enabling improved data networking and management within the DCB network environment through features for Priority Flow Control (P802.1Qbb), Enhanced Transmission Selection (P802.1Qaz), Congestion Notification (P802.1Qau), and capability discovery. The result is more deterministic network behavior. DCB is enabled through enhanced switches, server network adapters, and storage targets.
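Below is a minimal, purely illustrative Python sketch of how the two key pieces just mentioned, PFC no-drop priorities and ETS bandwidth shares, might be represented in a port configuration. The class and field names are invented for illustration and are not taken from any real DCB API.

```python
# Illustrative sketch only: a hypothetical representation of a DCB port
# configuration. PFC (802.1Qbb) marks selected 802.1p priorities as
# "lossless" (the switch pauses the sender instead of dropping frames),
# while ETS (802.1Qaz) assigns bandwidth shares to traffic classes.

from dataclasses import dataclass, field

@dataclass
class DcbPortConfig:
    # 802.1p priority values 0-7; PFC pauses only the priorities listed here
    pfc_lossless_priorities: set = field(default_factory=set)
    # ETS bandwidth shares per traffic class, in percent
    ets_bandwidth_pct: dict = field(default_factory=dict)

    def enable_lossless(self, priority: int) -> None:
        """Mark a priority as no-drop; the peer is paused rather than dropping frames."""
        self.pfc_lossless_priorities.add(priority)

# Example: priority 4 carries iSCSI and is made lossless; LAN traffic stays lossy.
port = DcbPortConfig()
port.enable_lossless(4)
port.ets_bandwidth_pct = {"iscsi": 60, "lan": 40}
print(port)
```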

DCB delivers a “lossless” network and makes network performance extremely predictable. While standard Ethernet performs very well, its performance varies slightly (see graphic). With DCB, the maximum performance is the same, but performance varies very little. This is extremely beneficial for data center managers, enabling them to better predict performance levels and deliver smooth traffic flows. In fact, under test, DCB eliminated packet retransmissions caused by dropped packets, making not only the network more efficient, but the host servers as well.

SEGREGATING AND PRIORITIZING TRAFFIC

In the past, storage networking best practice recommendations included physically separating data network traffic from storage network traffic. Today’s servers commonly have quad-port GbE adapters to ensure sufficient bandwidth, so segregating traffic has been easy – for example, two ports can be assigned to storage networks and two to data networks. In some cases these Ethernet adapters aggregate ports together to deliver even greater throughput for servers.

With the onslaught of virtualization in the data center, consolidated server environments face a different situation. Virtualization software can simplify connectivity with multiple 10 GbE server adapters – consolidating bandwidth instead of distributing it among many ports and a tangle of wires. These 10 GbE adapters handle all the traffic – database, web, management, and storage – improving infrastructure utilization rates. But with traffic consolidated onto fewer, larger connections, how does IT segregate the storage and data networks, prioritize traffic, and guarantee service levels?

Data Center Bridging includes prioritization functionality, which improves management of traffic flowing over fewer, larger pipes. In addition to setting priority queues, DCB can allocate portions of bandwidth. For example, storage traffic can be configured at a higher priority than web traffic, and the administrator can allocate 60% of the bandwidth to storage traffic and 40% to web traffic, ensuring smooth operations and predictable performance for both.
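As a rough illustration of the arithmetic behind that example, the following sketch assumes a 10 Gb/s link and the 60/40 split described above; with ETS each class is guaranteed its minimum share, while bandwidth left idle by one class can generally be used by the other.

```python
# A minimal sketch of the bandwidth-allocation arithmetic described above.
# Shares are the 60/40 example from the text; the link rate is assumed.

LINK_GBPS = 10.0
allocation = {"storage": 0.60, "web": 0.40}

for traffic_class, share in allocation.items():
    print(f"{traffic_class}: guaranteed {share * LINK_GBPS:.1f} Gb/s minimum")
# storage: guaranteed 6.0 Gb/s minimum
# web: guaranteed 4.0 Gb/s minimum
```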

Thanks again for checking back here at this blog. Hopefully, you find this information useful and a good use of your last four minutes, and if you have any questions or comments, please let me know.

Regards,

Gary

Ethernet Storage Market Momentum Continues

Earlier this month IDC released their Q1 2010 Worldwide Storage Systems Hardware Tracker, a well-established analysis of revenue and capacity shipments for the quarter. For the purposes of classification, IDC calls networked storage (as opposed to direct-attached storage) “Fabric Attached Storage” – which consists of Fibre Channel SAN, iSCSI SAN and NAS.

In Q1, Ethernet Storage (NAS plus iSCSI) revenue market share climbed to 43%, up from 39% in 2009, 32% in 2008 and 28% in 2007 – demonstrating continued market momentum. A more detailed breakdown is:

            2007    2008    2009    Q1 2010
FC SAN       72%     68%     61%      57%
iSCSI SAN     6%     10%     13%      14%
NAS          22%     29%     26%      29%

In terms of capacity market share, Ethernet Storage was 51% of the total PB shipped, up from 48% in 2009, 42% in 2008 and 37% in 2007, as shown in the following table.

            2007    2008    2009    Q1 2010
FC SAN       62%     58%     53%      49%
iSCSI SAN     8%     13%     15%      17%
NAS          29%     29%     32%      34%

So, the evidence is that the gains seen in the trough of the recession in 2008 and 2009 are continuing into the recovery. There seem to be three major factors driving this:

· Continuing maturity and acceptance of the technology for enterprise applications

· Companies’ willingness to try something new to reduce costs

· The continued rapid growth of unstructured data driving NAS capacity.

But that’s just my opinion. What’s your take?

Enhancements for Ethernet: Data Center Bridging


by: Gary Gumanow

EXTENDING NETWORK STANDARDS

10 Gigabit Ethernet (10GbE), standardized by the IEEE in 2004, delivers ten times the bandwidth of Gigabit Ethernet, and its availability has prompted the development of new standards for the data center, delivering on the promise of a single networking fabric to connect storage, the LAN, and inter-process communication, or IPC.

These new IEEE standards go by several names: “Converged Enhanced Ethernet,” “Data Center Ethernet,” and the industry-standard term “Data Center Bridging,” or DCB. In this series of blogposts we provide an overview of why DCB is being developed and how it can improve networking in the data center for applications, servers, and storage.

ETHERNET RELIES ON TCP FOR “LOSSLESS” COMMUNICATIONS

Many IT managers deploy their networks to be “lossless” by over-provisioning bandwidth so that all network applications can operate at their desired rates. However, networks are dynamic organisms: they change over time, and most have changed considerably since they were first rolled out. IT managers don’t plan for their networks to drop packets, so inherent mechanisms to recover from dropped packets are a necessity.

While Ethernet is generally a reliable technology, data packets can be dropped in transmission due to network congestion, traffic load balancing, over-subscription, and heavy loads on servers and switches. Dropped packets don’t usually cause serious problems, but they can result in performance variations for applications.

Protocols such as TCP (TCP/IP) run on top of Ethernet to ensure seamless communications. TCP takes note of packets dropped during a communication and re-sends them; once everything has arrived, confirmation of a completed transmission is returned. With this “handshake,” TCP delivers “lossless” communications. TCP also provides a dynamic flow control mechanism, called windowing, which essentially elasticizes the two ends of a buffered connection based on the capabilities of those endpoints. Finally, the TCP/IP stack provides routing, so that communications work seamlessly across routed and dissimilar networks. In a converged Ethernet environment, storage network technologies that do not have a built-in recovery mechanism like TCP require safeguards against dropped packets. Enter Data Center Bridging, the most recent set of IEEE standards that provide this enhancement to Ethernet.
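To make the retransmission idea concrete, here is a toy Python model (deliberately simplified, not real TCP) of re-sending unacknowledged segments, which is the mechanism that lets TCP present a “lossless” service over a network that may drop packets.

```python
# Toy model only: segments are tracked by sequence number; anything not
# acknowledged is re-sent. Real TCP adds timers, windowing, and ordering,
# but the recovery principle sketched here is the same.

def deliver(segments, drop=frozenset()):
    """Simulate one pass over the wire; segments in `drop` are lost."""
    return {seq for seq in segments if seq not in drop}

unacked = {1, 2, 3, 4}                 # segments in flight
acked = deliver(unacked, drop={3})     # segment 3 is dropped by the network
unacked -= acked                       # ACKs clear segments from the retransmit queue
acked |= deliver(unacked)              # the sender re-sends what was not acknowledged
print("retransmitted:", unacked, "all delivered:", acked == {1, 2, 3, 4})
```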

The next blogpost in this series will focus on some of the benefits of DCB and what makes it so compelling for storage in today’s data centers.

If DCB provides the benefit to your SAN as promised, what percentage improvement in performance would you need to see in order to implement a change to the infrastructure for DCB?

Are you seeing issues today with performance and TCP retransmits?

Five Reasons to Deploy iSCSI Storage in the Cloud

IT as a Service (ITaaS), or cloud services (“clouds”), was one of the buzz topics at SNW this spring. Industry groups, such as SNIA’s Cloud Storage Initiative, are beginning to address the standards, policies, and marketing messages that help define what ITaaS and clouds are.

Whatever the definition, “cloud” technologies appear to be running ahead of attempts to describe them. In fact, many customers are deploying cloud solutions today; a few obvious examples include the online email and CRM solutions that have been available for several years. Enabling these cloud offerings are very complementary technologies, namely virtualization in its various forms and Ethernet or IP networks. For this article, I’ll put in a specific plug for iSCSI.

iSCSI plays well in the Cloud

I described what I think are the top five requirements for cloud deployments, and how well iSCSI addresses them, in an article published in SNS Europe magazine in February 2010. You can read the article online here. Briefly, these five requirements are Cost, Performance, Security, Scalability, and Data Mobility.

Cost: As a cloud service provider, the cost of goods for your services is essentially your IT infrastructure, and keeping those costs low is a competitive advantage. One way to reduce cost is to move to higher-volume, and therefore lower-cost, components. Ethernet’s economies of scale deliver the lowest-cost networking infrastructure in terms of both capital and operating expense, thanks to its extensive deployment across all industries and its simplified management.

Performance: Your cloud environment needs to scale in performance to meet the demands of a growing customer base. Ethernet offers a variety of options and price points to address performance. Gigabit Ethernet, with the addition of port bonding or teaming, offers simple and cost-effective scalability that is sufficient for most business applications. 10 Gigabit Ethernet is now being deployed more readily since price points have dropped below the $500-per-port range. We’ll soon see 10 Gigabit ports standard on server motherboards, offering significant increases in network bandwidth with fewer ports and cables to contend with.
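For a back-of-the-envelope feel for those options, the sketch below compares the raw line rate of teamed GbE ports against a single 10 GbE port; it ignores protocol overhead and assumes ideal load balancing across the team.

```python
# Rough line-rate comparison only; real teaming rarely balances perfectly
# and per-flow throughput is limited by the speed of a single member port.

def aggregate_gbps(ports: int, gbps_per_port: float) -> float:
    return ports * gbps_per_port

print("4 x 1 GbE teamed :", aggregate_gbps(4, 1.0), "Gb/s")
print("1 x 10 GbE       :", aggregate_gbps(1, 10.0), "Gb/s")
```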

Security: Because Ethernet was developed as a general-purpose network, efforts were made to support data security in mixed-traffic environments. Security protocols such as CHAP and IPsec are available in the iSCSI and TCP/IP stacks to address these requirements, and they extend to storage traffic as well.
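As an illustration of the CHAP mechanism iSCSI can use at login, the sketch below computes a challenge response as an MD5 digest over the identifier, shared secret, and challenge (per RFC 1994). The values are made up, and real initiators and targets handle this exchange internally during the iSCSI login phase.

```python
# Illustrative CHAP computation: response = MD5(identifier || secret || challenge).
# Secrets and challenges here are placeholders for demonstration only.

import hashlib
import os

secret = b"shared-chap-secret"   # configured on both initiator and target
challenge = os.urandom(16)       # sent by the target
identifier = bytes([1])          # one-byte CHAP identifier

response = hashlib.md5(identifier + secret + challenge).digest()
print("CHAP response:", response.hex())
```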

Scalability: Scalability can be described in many ways. I have already referred to performance scalability. But, scalability also refers to geography. IP networks span the globe and offer the capabilities needed to address IT services of customers in diverse geographies, which is at the heart of Cloud services. Inherent abilities to route data traffic offer some additional advantages for storage.

Data Mobility: One of the features of IP networks that I believe is particularly well suited for clouds is virtual IP addressing. IP addresses can move from physical port to physical port, allowing you to migrate the network connectivity easily as you migrate other virtual objects, such as virtual servers. As a result, IP based storage protocols, such as iSCSI, are particularly well suited in highly virtualized cloud environments.
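The sketch below is a hypothetical illustration (the class and port names are invented) of that floating-address idea: the portal IP that initiators are configured with stays the same while the physical port serving it changes.

```python
# Hypothetical sketch of a "floating" iSCSI portal address. Initiators keep
# connecting to the same IP; only the physical port behind it moves.

class VirtualPortal:
    def __init__(self, ip: str, physical_port: str):
        self.ip = ip                      # address initiators are configured with
        self.physical_port = physical_port

    def migrate(self, new_port: str) -> None:
        """Move the virtual IP to another NIC/port; initiators reconnect unchanged."""
        self.physical_port = new_port

portal = VirtualPortal("192.0.2.10", "eth0")   # 192.0.2.x is a documentation address range
portal.migrate("eth1")                          # e.g. after a VM or array failover
print(portal.ip, "now served by", portal.physical_port)
```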

IP Networks for the Data Center

As the evolution of the data center continues to deliver dynamic and highly virtualized services, we will see that Ethernet storage networks, including iSCSI, will deliver the value required to make cloud service providers successful. IP networks offer the economics, performance, security, scalability, and mobility required for the current generation and next generation data center.

And for more on this topic, check out this webinar http://www.brighttalk.com/webcast/23778.

Ethernet Storage at Spring SNW

Well we’re already into Day 1 of Spring SNW, and the SNIA tutorials educational program is well under way.

I presented to a packed room at 9:20am on the subject of Server and Storage Consolidation with iSCSI Arrays – a great audience for the first session of the day.

Tuesday afternoon’s agenda includes pNFS, Parallel Storage for Grid, Virtualization and Database Computing by Pranoop Erasani (NetApp), and Thursday’s agenda includes two tutorials by Gary Gumanow (Dell) & Jason Blosil (NetApp) — iSCSI SANs: Ideal Applications, Large and Small, and iSCSI: A Lossless Ethernet Fabric with DCB. If you’re at SNW check them out.

If you are not at SNW, you can access the tutorials at http://www.snia.org/education/tutorials/.

Another opportunity to find out about Ethernet Storage is in the IP Storage Hands-on Lab. This program is in its 7th year and continues to go from strength to strength.

Finally, I’m scheduled to do a podcast with Skip Jones (of the Fibre Channel Industry Association) on FCoE, iSCSI and Network Convergence. That should be posted on the Infosmack Podcasts section of Storage Monkeys by Wednesday.

Wow – the Ethernet Storage Forum and the FCIA co-operating! What is the world coming to?


Ethernet Storage Market Momentum

Despite the recession of the past couple of years, the market for Ethernet Storage has grown significantly, both in terms of equipment shipped and in terms of market share. Last week’s release by IDC of their Q4 2009 Worldwide Storage Systems Hardware Tracker gave us an opportunity to see the latest numbers.

For the purposes of classification, IDC calls networked storage (as opposed to direct-attached storage) “Fabric Attached Storage” – which consists of Fibre Channel SAN, iSCSI SAN and NAS.

During 2009, Ethernet Storage revenues (NAS plus iSCSI) grew 51% year-over-year to $5.372 billion, following a 5% decline in 2008. This compares with 25% revenue growth for the Fabric Attached Storage market as a whole, following a 17% decline in 2008. The revenue market share of Ethernet Storage climbed to 39% in 2009, up from 32% in 2008 and 28% in 2007.

In terms of capacity shipped, the story is even better. During 2009, vendors shipped 2,829 PB of Ethernet Storage, up 128% year-over-year, following 22% growth in 2008. This compares with 105% capacity growth for the Fabric Attached Storage market as a whole, following 9% growth in 2008. The capacity market share of Ethernet Storage climbed to 47%, up from 42% in 2008 and 37% in 2007.

The highest growth category of the whole sector was iSCSI, as has been the case for the past few years. In 2007 iSCSI SAN revenues grew by 62%; in 2008 we saw 26% growth (despite a recession which caused a 22% decline in FC SAN revenues); and in 2009 we saw 68% growth to $1.807 billion.
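As a quick sanity check on those growth figures (a sketch using only the numbers quoted above, not additional IDC data), dividing a year’s revenue by one plus the growth rate back-computes the approximate prior-year figure.

```python
# Back-of-the-envelope arithmetic from the figures quoted in this post.

ethernet_2009_rev_b = 5.372   # $B, NAS plus iSCSI, 51% growth over 2008
iscsi_2009_rev_b = 1.807      # $B, 68% growth over 2008

print(f"Implied 2008 Ethernet storage revenue: ${ethernet_2009_rev_b / 1.51:.2f}B")
print(f"Implied 2008 iSCSI revenue:            ${iscsi_2009_rev_b / 1.68:.2f}B")
```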

So, what’s contributing to the steady growth in tough economic times? My take is that there are probably three factors:
• Continuing maturity and acceptance of the technology
• Companies’ willingness to try something new to reduce costs in tough economic times
• The continued rapid growth of unstructured data driving NAS capacity.

What’s your take?

Wire Speed 10Gb/s iSCSI

I spent the past few days wondering what my first post for SNIA on Ethernet Storage should be about, and finally arrived at this – despite the fact that it’s old news…

A couple of months ago Ben Hacker posted a great blog on The Server Room section of Intel’s communities site, talking about Intel and Microsoft’s performance benchmarking work on iSCSI over 10Gb Ethernet. The post is entitled “1,000,000 IOPS with iSCSI – That’s Not a Typo…” – check it out, it’s very enlightening.

Ben’s blog in turn pointed to a joint Microsoft/Intel webcast going into detail about the results. Again, it’s good information.

The thing that struck me here was not so much that you can get wire speed 10Gb/s iSCSI performance today, or that large virtual server environments can require those performance levels – the thing that struck me was that conventional wisdom is continually challenged by commodity Ethernet advances.

Eight years ago, most people believed that special NICs would be required to overcome the overhead imposed by TCP/IP and iSCSI over Gigabit Ethernet. That turned out to be wrong. Standard on-board Ethernet ports and software iSCSI initiators worked just great. The conventional wisdom today is that special NICs and drivers are required to overcome the overhead imposed by TCP/IP and iSCSI over 10 Gigabit Ethernet. That’s also wrong. The Intel/Microsoft benchmark used standard on-board ports and software iSCSI initiators.
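To put wire speed in perspective, here is a back-of-the-envelope sketch (ignoring TCP/IP and iSCSI header overhead) of how many IOPS it takes to fill a 10 Gb/s link at a few common I/O sizes.

```python
# Rough arithmetic only: payload-level line rate divided by I/O size.

LINE_RATE_BYTES_PER_SEC = 10e9 / 8   # 10 Gb/s expressed in bytes per second

for io_size in (512, 4096, 8192):    # bytes per I/O
    iops = LINE_RATE_BYTES_PER_SEC / io_size
    print(f"{io_size:5d} B I/Os -> ~{iops:,.0f} IOPS at wire speed")
```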

We seem to keep forgetting that Moore’s Law continually delivers more CPU cycles than we expect (exponentials are funny like that), and the huge commodity R&D infrastructure around Ethernet continually finds ways to improve network performance.

Any bets on whether we’ll need special NICs to support iSCSI over 40Gb Ethernet on commodity servers when that emerges as a mainstream host port? I think the answer is obvious…
