Ethernet Storage Forum – 2012 Year in Review and What to Expect in 2013

As we come to the close of 2012, I want to share some of our successes and briefly highlight some changes for 2013. Calendar year 2012 has been eventful, and the SNIA-ESF has been busy. Here are some of our accomplishments:

  • 10GbE – With virtualization and network convergence, as well as the general availability of LOM and 10GBASE-T cabling, this was a “breakout year” for 10GbE. In July, we published a comprehensive white paper titled “10GbE Comes of Age.” We then followed up with a Webcast, “10GbE – Key Trends, Predictions and Drivers.” We ran it live once in the U.S. and once in the U.K., and combined, the Webcast has been viewed by over 400 people!
  • NFS – has also been a hot topic. In June we published a white paper, “An Overview of NFSv4,” highlighting the many improved features NFSv4 has over NFSv3. A Webcast to help users upgrade, “NFSv4 – Plan for a Smooth Migration,” has also been well received, with over 150 viewers to date. A 4-part Webcast series on NFS is now planned. We kicked the series off last month with “Reasons to Start Working with NFSv4 Now” and will continue on this topic during the early part of 2013. Our next NFS Webcast will be “Advances in NFS – NFSv4.1 and pNFS.” You can register for that here.
  • Flash – The availability of solid state devices based on NAND flash is changing the performance efficiencies of storage. Our September Webcast “Flash – Plan for the Disruption” discusses how Flash is driving the need for 10GbE and has already been viewed by more than 150 people.

We have also expanded our membership, welcoming Tonian and LSI to the ESF. With our new charter, we expect to see an increase in membership participation as we drive incremental value and establish ourselves as a leadership voice for Ethernet storage.

As we move into 2013, we expect two hot trends to continue – the broader use of file protocols in datacenter applications, and the continued push toward datacenter consolidation with the use of Ethernet as a storage network. In order to better address these two trends, we have modified our charter for 2013. Our NFS SIG will be renamed the File Protocol SIG and will focus on promoting not only NFS, but also SMB / CIFS solutions and protocols. The iSCSI SIG will be renamed to the Storage over Ethernet SIG and will focus on promoting data center convergence topics with Ethernet networks, including the use of block and file protocols, such as NFS, SMB, FCoE, and iSCSI, over the same wire. This modified charter will allow us to have a richer conversation around storage trends relevant to your IT environment.

So, here's to a successful 2012 and an exciting year to come.

Two Storage Trails on the 10GbE Convergence Path

As the migration to 10Gb Ethernet moves forward, many data centers are looking to converge network and storage I/O to fully utilize a ten-fold increase in bandwidth. Industry discussions continue regarding the merits of 10GbE iSCSI and FCoE. Some of the key benefits of both protocols were presented in a July 19th iSCSI SIG webcast featuring Maziar Tamadon and Jason Blosil: Two Storage Trails on the 10Gb Convergence Path

It’s a win-win solution as both technologies offer significant performance improvements and cost savings. The discussion is sure to continue.

Since there wasn’t enough time to respond to all of the questions during the webcast, the presentation team has consolidated answers to all of them in this blog post. Feel free to comment and provide your input.

Question: How is multipathing changed or affected with FCoE?

One of the benefits of FCoE is that it uses Fibre Channel in the upper layers of the software stack where multipathing is implemented. As a result, multipathing is the same for Fibre Channel and FCoE.

Question: Is the use of CNAs with FCoE offload getting any traction? Are they economically viable?

The adoption of FCoE has been slower than expected, but is gaining momentum. Fibre Channel is typically used for mission-critical applications, so data centers have been cautious about moving to new technologies. FCoE and network convergence provide significant cost savings, so FCoE is economically viable.

Question: If you run the software FCoE solution, would this not prevent boot from SAN?

Boot from SAN is not currently supported when using FCoE with a software initiator and NIC. Today, boot from SAN is only supported using FCoE with a hardware converged network adapter (CNA).

Question: How do you assign priority for FCoE vs. other network traffic? Doesn’t it still make sense to have a dedicated network for data-intensive use?

Data Center Bridging (DCB) standards that enable FCoE allow priority and bandwidth to be assigned to each priority queue or link. Each link may support one or more data traffic types. Support for this functionality is required between two end points in the fabric, for example between an initiator at the host and the first network connection at the top-of-rack switch. The DCBX standard facilitates negotiation between devices to enable the DCB capabilities supported at each end of the wire.
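
To make that concrete, here is a minimal, purely illustrative Python sketch (not a real switch or adapter API) of per-priority traffic classes carrying PFC and ETS settings, with a simplified DCBX-style negotiation that keeps a priority lossless only if both ends of the wire advertise support for it:

    # Illustrative only: a toy model of DCB traffic classes, not a real switch or adapter API.
    from dataclasses import dataclass

    @dataclass
    class TrafficClass:
        name: str           # e.g. "FCoE", "iSCSI", "LAN"
        priority: int       # 802.1p priority value (0-7)
        pfc_enabled: bool   # request lossless behavior (priority flow control) for this priority
        ets_bandwidth: int  # enhanced transmission selection share, percent of the link

    local = [
        TrafficClass("FCoE", priority=3, pfc_enabled=True,  ets_bandwidth=50),
        TrafficClass("LAN",  priority=0, pfc_enabled=False, ets_bandwidth=50),
    ]

    # ETS bandwidth shares on a link should add up to 100%.
    assert sum(tc.ets_bandwidth for tc in local) == 100

    # Simplified DCBX-style negotiation: a priority stays lossless only if the
    # device at the other end of the wire also advertises PFC support for it.
    peer_pfc_priorities = {3}
    for tc in local:
        tc.pfc_enabled = tc.pfc_enabled and tc.priority in peer_pfc_priorities

    print(local)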

Question: Category 6A uses more power than twin-ax or OM3 cable infrastructures, which in large build-outs is significant.

Category 6A does use more power than twin-ax or OM3 cables. That is one of the trade-offs data centers should consider when evaluating 10GbE network options.

Question: Don’t most enterprise storage arrays support both iSCSI and FC/FCoE ports? That seems to make the “either/or” approach to measuring uptake moot.

Many storage arrays today support either the iSCSI or FC storage network protocol. Some arrays support both at the same time. Very few support FCoE. And some others support a mixture of file and block storage protocols, often called Unified Storage. But, concurrent support for FC/FCoE and iSCSI on the same array is not universal.

Regardless, storage administrators will typically favor a specific storage protocol based upon their acquired skill sets and application requirements. This is especially true with block storage protocols since the underlying hardware is unique (FC, Ethernet, or even Infiniband). With the introduction of data center bridging and FCoE, storage administrators can deploy a single physical infrastructure to support the variety of application requirements of their organization. Protocol attach rates will likely prove less interesting as more vendors begin to offer solutions supporting full network convergence.

Question: I am wondering what is the sample size of your poll results, how many people voted?

We had over 60 live viewers of the webcast and over 50% of them participated in the online questions. So, the sample size was about 30+ individuals.

Question: Tape? Isn’t tape dead?

Tape as a backup methodology is definitely further along the downward slope of its life than it was 5 or 10 years ago, but it still has a pulse. Expectations are that disk-based backup, DR, and archive solutions will be common practice in the near future. But many companies still use tape for archival storage. Like most declining technologies, tape will likely have a long tail as companies continue to modify their IT infrastructure and business practices to take advantage of newer methods of data retention.

Question: Do you not think 10 Gbps will fall off after 2015 as the adoption of 40 Gbps to blade enclosures will start to take off in 2012?

10GbE was expected to ramp much faster than what we have witnessed. Early applications of 10GbE in storage were introduced as early as 2006. Yet, we are only now beginning to see more broad adoption of 10GbE. The use of LOM and 10GBaseT will accelerate the use of 10GbE.

Early server adoption of 40GbE will likely be with blades. However, recognize that rack servers still outsell blades by a pretty large margin. As a result, 10GbE will continue to grow in adoption through 2015 and perhaps 2016. 40GbE will become very useful to reduce port count, especially at bandwidth aggregation points, such as inter-switch links. 40Gb ports may also be used to save on port count with the use of fanout cables (4x10Gb). However, server performance must continue to increase in order to be able to drive 40Gb pipes.

Question: Will you be making these slides available for download?

These slides are available for download at www.snia.org/?

Question: What is your impression of how convergence will change data center expertise? That is, who manages the converged network? Your storage experts, your network experts, someone new?

Network convergence will indeed bring multiple teams together across the IT organization: the server team, network team, and storage team, to name a few. There is no preset answer, and the outcome will be decided on a case-by-case basis, but ultimately IT organizations will need to figure out how a common, shared resource (the network/fabric) ought to be managed and where the new ownership boundaries need to be drawn.

Question: Will there be or is there currently a NDMP equivalent for iSCSI or 10GbE?

There is no equivalent to NDMP for iSCSI. NDMP is a management protocol used to backup server data to network storage devices using NFS or CIFS. SNIA oversees the development of this protocol today.

Question: How does the presenter justify the statement of “no need for specialized” knowledge or tools?   Given how iSCSI uses new protocols and concepts not found in traditional LAN, how could he say that?

While it’s true that iSCSI comes with its own concepts and subtleties, the point being made centered around how pervasive and widespread the underlying Ethernet know-how and expertise is.

Question: FC vs. IP storage. If an array has both FC and IP storage, which group does IDC count it in? If a customer buys an array but does not use one of the two protocols, will that show up in IDC's numbers? This info conflicts with SNIA's numbers.

We can’t speak to the exact methods used to generate the analyst data. Each analyst firm has their own method for collecting and analyzing industry data. The reason for including the data was to discuss the overall industry trends.

Question: I noticed in the high-level overview that FCoE appeared not to be a ‘mesh’ network. How will this deal w/multipathing and/or failover?

The diagrams only showed a single path for FCoE to simplify the discussion on network convergence. In a real-world, best-practices deployment there would be multiple paths with failover. FCoE uses the same multipathing and failover capabilities that are available for Fibre Channel.

Question: Why are you including FCoE in IP-based storage?

The graph should indeed have read Ethernet storage rather than IP storage. This was fixed after the webinar and before the presentation got posted on SNIA’s website.

Everything You Need to Know About iSCSI

Are you considering deploying an iSCSI storage network and would like to learn some of the best practices for configuring the environment, from host to storage? Well, now you can learn from an expert. The SNIA Ethernet Storage Forum will be sponsoring a live webcast with our guest speaker, Dennis Martin from Demartek. Dennis will share first-hand expertise and actionable best practices to effectively deploy iSCSI storage networks. A live Q&A will also be included. It doesn’t matter if you have a large, medium, or small environment; Dennis will provide application-specific recommendations that you won’t want to miss.

When: April 21st

Time: 8:00 am PT / 11:00 am ET

Free registration: http://www.brighttalk.com/webcast/26785

The SNIA ESF has several other web events planned for the rest of this calendar year. Let us know what topics are important to you. We want to make these events highly educational.

Deploying SQL Server with iSCSI – Answers to your questions

by: Gary Gumanow

Last Wednesday (2/24/11), I hosted an Ethernet Storage Forum iSCSI SIG webinar with representatives from Emulex and NetApp to discuss the benefits of iSCSI storage networks in SQL application environments. You can catch a recording of the webcast on BrightTalk here.

The webinar was well attended, and we received so many great questions during the webinar that we just didn’t have time to answer all of them, which brings us to this blog post. We have included answers to those unanswered questions below.

We’ll be hosting another webinar real soon, so please check back for upcoming ESF iSCSI SIG topics. You’ll be able to register for this event shortly on BrightTalk.com.

Let’s get to the questions. We took the liberty of editing the questions for clarity. Please feel free to comment if we misinterpreted the question.

Question: Is TRILL needed in the data center to avoid pausing of traffic while extending the number of links that can be used?

Answer: The Internet Engineering Task Force (IETF) has developed a new shortest-path frame routing protocol for multi-hop Layer 2 (L2) environments. The new protocol is called Transparent Interconnection of Lots of Links, or TRILL. TRILL will enable multipathing for L2 networks and remove the restrictions placed on data center environments by single-path STP networks.

Although TRILL may serve as an alternative to STP, it doesn’t require that STP be removed from an Ethernet infrastructure. Hybrid solutions that use both STP and TRILL are not only possible but also will be the norm for at least the near-term future. TRILL will also not automatically eliminate the risk of a single point of failure, especially in hybrid environments.

Another area where TRILL is not expected to play a role is the routing of traffic across L3 routers. TRILL is expected to operate within a single subnet. While the IETF draft standard document mentions the potential for tunneling data, it is unlikely that TRILL will evolve in a way that will expand its role to cover cross-L3 router traffic. Existing and well-established protocols such as Multiprotocol Label Switching (MPLS) and Virtual Private LAN Service (VPLS) cover these areas and are expected to continue to do so.

In summary, TRILL will help multipathing for L2 networks.

Question: How do you calculate bandwidth when you only have IOPS?

Answer: Bandwidth is a function of IOPS and I/O size; the formula is simply IOPS x I/O size. Example: 10,000 IOPS x a 4KB I/O size (4,096 bytes) ≈ 41 MB/sec.
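
As a quick way to sanity-check that arithmetic, here is a small Python helper (using 1 MB = 10^6 bytes, as in the example above):

    def bandwidth_mb_per_sec(iops: int, io_size_bytes: int) -> float:
        """Bandwidth = IOPS x I/O size (reported in MB/sec, with 1 MB = 10^6 bytes)."""
        return iops * io_size_bytes / 1_000_000

    # 10,000 IOPS at a 4KB (4,096-byte) I/O size:
    print(bandwidth_mb_per_sec(10_000, 4096))  # 40.96, i.e. roughly 41 MB/sec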

Question: When deploying FCoE, must all 10GbE switches support Data Center Bridging (DCB) and FCoE? Or can some pass through FCoE?

Answer: Today, in order to deploy FCoE, all switches in the data path must support both FCoE forwarding and DCB. Future standards include proposals to allow pass-through of FCoE traffic without having to support Fibre Channel services. This will allow for more cost-effective networks where not all switch layers are needed to support the FCoE storage protocol.
Question: iSCSI performance is comparable to FC and FCoE. Do you expect to see iSCSI overtake FC in the near future?

Answer: FCoE deployments are still very small compared to traditional Fibre Channel and iSCSI. However, industry projections from several analyst firms indicate that Ethernet storage protocols, such as iSCSI and FCoE, will overtake traditional Fibre Channel due to the increased focus on shared data center infrastructures to address applications such as private and public clouds. But even the most aggressive forecasts don’t show this crossover happening for several years.

Customers looking to deploy new data centers are more likely today to consider iSCSI than in the past. Customers with existing Fibre Channel investments are likely to transition to FCoE in order to extend their investment in existing FC storage assets. In either case, transitioning to 10Gb Ethernet with DCB capability offers the flexibility to do both.

Question: With 16Gb/s FC ratified, what product considerations will disk manufacturers take into account?

Answer: We can’t speak to what disk manufacturers will or won’t do regarding 16Gb/s disks. But the current trend is to move away from Fibre Channel disk drives in favor of Serial Attached SCSI (SAS) and SATA disks, as well as SSDs. 16Gb Fibre Channel will be a reality and will play in the data center, but some vendors predict that its adoption rate will be much slower than that of previous generations.
Question: Why move to 10GbE if you have 8Gb Fibre Channel? The price is about the same, right?

Answer: If your only network requirement is block storage, then Fibre Channel provides a high-performance network to address that requirement. However, if you have a mixture of networking needs, such as NAS, block storage, and LAN, then moving to 10GbE provides sufficient bandwidth and flexibility to support multiple traffic types with fewer resources and lower overall cost.
Question: Is the representation of the number of links accurate when comparing Ethernet to Fibre Channel? Your overall bandwidth of the wire may be close, but once protocol overheads are included, the real bandwidth isn’t an accurate comparison. Example: FC protocol overhead is only 5% vs. TCP at 25%, and iSCSI framing adds another 4%. So your math on how many FC cables equal 10Gbps cables is not a fair comparison.

Answer: As pointed out in the question, comparing protocol performance requires more than just a comparison of the wire rates of the physical transports. Based upon protocol efficiency, one could conclude that the comparison between FC and TCP/IP is unfair as designed, because Fibre Channel should have produced greater data throughput from a comparable wire rate. However, the data in this case shows that iSCSI offers comparable performance in a real-world application environment, rather than just a benchmark test. The focus of the presentation was iSCSI; FCoE and FC were only meant to provide reference points, and the comparisons were not intended to be exact or precise. 10GbE and iSCSI offer the performance to satisfy business-critical requirements. Customers looking to deploy a storage network should consider a proof of concept to ensure that a new solution can satisfy their specific application requirements.
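
For readers who want to see the questioner's arithmetic spelled out, here is a rough Python sketch that simply applies the overhead percentages quoted in the question to nominal line rates; it is illustrative only, not a measured result, and it ignores encoding overhead and other real-world factors:

    # Illustrative only: applies the overhead figures quoted in the question to
    # nominal line rates; real efficiency depends on workload, encoding, and tuning.
    def usable_gbps(line_rate_gbps, overhead_fractions):
        usable = line_rate_gbps
        for overhead in overhead_fractions:
            usable *= (1.0 - overhead)
        return usable

    fc_8g     = usable_gbps(8.0,  [0.05])        # 8Gb FC with ~5% protocol overhead
    iscsi_10g = usable_gbps(10.0, [0.25, 0.04])  # 10GbE with ~25% TCP plus ~4% iSCSI framing
    print(round(fc_8g, 2), round(iscsi_10g, 2))  # ~7.6 vs ~7.2 usable Gb/s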

Question: Two FC switches were used during this testing. Was it to address the operational risk of a single point of failure?

Answer: The use of two switches was due to a hardware limitation: each switch had 8 ports, and the test required 8 ports at the target and 8 at the host. Since this was a lab setup, we weren’t configuring for HA. However, the recommendation for any production environment would be to use redundant switches. This applies to iSCSI storage networks as well.
Question: How can iSCSI match all the distributed management and security capabilities of Fibre Channel / FCoE, such as FLOGI, integrated name server, zoning, etc.?

Answer: The feature lists of the two protocols don’t match exactly. The point of this presentation was that iSCSI is closing the performance gap and has enough high-end features to make it enterprise-ready.
Question: How strong is the possibility that 40G Ethernet will be bypassed, with a move directly from 10G to 100G?

Answer: Vendors are shipping products today that support 40Gb Ethernet, so it seems clear that there will be a 40GbE market. Time will tell whether customers bypass 40GbE and wait for 100GbE.

Thanks again for checking out our blog. We hope to have you on our next webinar live, but if not, we’ll be updating this blog frequently.

Gary Gumanow – iSCSI SIG Co-chairman, ESF Marketing Chair

SQL Server “rocks” with iSCSI – Emulex and NetApp tell why

The leading storage network technology for mission-critical applications today is Fibre Channel (FC). Fibre Channel is a highly reliable and high-performing network technology for block storage applications. But for organizations that can’t afford single-purpose networks or the added complexity of managing more than one network technology, FC may not be ideal. With the introduction of Fibre Channel over Ethernet (FCoE), the ability to deploy your FC storage resources over a shared Ethernet network is now possible. But FCoE isn’t the only available option for block storage over Ethernet.

Initially used primarily by small and medium sized businesses or for less demanding applications, iSCSI is now finding broad application by larger enterprises for mission critical applications. Some of the drivers for increased iSCSI adoption in the enterprise include lower cost for 10Gb Ethernet components as well as the drive toward cloud based infrastructures which benefit from increased flexibility and scalability associated with IP network protocols.

On February 24th, the SNIA Ethernet Storage Forum will present a live webcast to discuss the advantages of iSCSI storage for business applications and will show test results demonstrating the performance of SQL Server deployed with 10GbE iSCSI. Hosted by Gary Gumanow, co-chair of the iSCSI SIG and ESF board member, this presentation will include content experts from Emulex and NetApp along with a live Q&A.

Guest Speakers

Steve Abbott – Sr. Product Marketing Manager, Emulex

Wei Liu – Microsoft Alliance Engineer, NetApp

Date & Time: February 24th, 11am PT

Register today at http://www.brighttalk.com/webcast/25316

SNIA ESF

The SNIA Ethernet Storage Forum is dedicated to educating the IT community on the advantages and best use of Ethernet storage. This presentation is the first in a series of marketing activities that will primarily focus on data center applications during the calendar year 2011.

10GbE and iSCSI – Delivering Affordable Options

Last week a friend and I were talking about our children, and the conversation turned to affordable options for college. It turns out my friend’s daughter is doing very well in school and is eager to study agriculture, but she is applying to only a couple of state universities. I suggested that she keep her options open and also apply to top agriculture schools like Cornell, but at $50,000 a year, cost is a major issue, and sadly the reality is that colleges are becoming less and less affordable each year.

So it wasn’t enough for my friend’s daughter to have options, she needed affordable ones. I found myself comparing the college admissions world with my world, that of Information Technology, and thought how fortunate we were that technology breakthroughs like 10GbE not only provide improvements in performance and quality of service, but do so while delivering affordable options.

Let me explain.

First, it’s important to recognize the transition from 1GbE to 10GbE is happening now. Several factors are driving this transition, and with server virtualization, there is strong market demand for 10GbE. In fact, according to a May 2010 IT Brand Pulse survey, nearly two-thirds of the questioned IT managers indicated they had or were in the process of deploying 10GbE and half said server virtualization was driving the adoption of 10GbE.

The reality is that 10GbE is already being integrated and deployed onto motherboards, with server manufacturers launching 10GbE as LAN on Motherboard (LOM) solutions on blade, rack, and tower servers. Furthermore, in their July 2010 report, the Dell’Oro Group estimated that 13.4M servers will ship with a 10GbE LOM by 2014, up from 0.5M in 2009, and predicted that sometime in the back half of 2012, servers shipping with 10GbE will outnumber those shipping with 1GbE. Additionally, overall 10GbE per-port pricing is already less than $400 and expected to drop further as 10GBaseT solutions are introduced.

But what does this mean for storage?

It comes down to affordable options. With 10GbE, there are multiple IP storage protocols for accessing both file and block data, and 10GbE iSCSI is very compelling because there are even more options from which to choose. The standard baseline today is the iSCSI software initiator that is essentially included with most standard operating systems and runs on most 10GbE NICs – sort of the basic converged 10GbE model.
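
As a rough illustration of how little is involved with the software-initiator model, here is a hedged sketch for a Linux host with the open-iscsi tools installed; the portal address and target IQN are made-up placeholders:

    import subprocess

    # Assumes a Linux host with the open-iscsi tools (iscsiadm) installed.
    # The portal address and target IQN below are made-up placeholders.
    PORTAL = "192.168.10.20:3260"

    # Ask the array which targets it advertises at that portal.
    subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL], check=True)

    # Log in to one of the discovered targets over the standard NIC.
    subprocess.run(["iscsiadm", "-m", "node",
                    "-T", "iqn.2011-01.com.example:storage.lun1",
                    "-p", PORTAL, "--login"], check=True)

    # List the active session; the LUN now shows up to the OS as a local block device.
    subprocess.run(["iscsiadm", "-m", "session"], check=True)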

To offload the server CPUs, for straightforward booting from an iSCSI SAN, and especially for enterprise-class manageability, converged iSCSI adapters that run both NIC and iSCSI offload are an option. In the past, this would have required the purchase of an iSCSI adapter. But today, some 10GbE LOMs and NICs can provide iSCSI adapter functionality as part of their standard offering or through a pay-as-you-go upgrade model.

So when you’re buying your server, especially if you’re planning to use it for virtualization, you should consider the benefits of a server with a 10GbE LOM for basic I/O scalability, and remember that it’ll also provide storage connectivity with an iSCSI software initiator. And if you’re undecided whether you need the benefits of an iSCSI adapter but want to keep your options open, keep in mind that some server LOM solutions already support iSCSI offload or can be upgraded to support it. This way, you have a 10GbE solution with built-in affordable options for iSCSI.

I know this is somewhat of a stretch from the world of colleges and unfortunately 10GbE doesn’t help my friend’s daughter, but I do hope that she sees the value of having options and applies to Cornell – you never know if a scholarship or financial aid might come through and give her that affordable option.

Ethernet and IP Storage – Today’s Technology Enabling Next Generation Data Centers

I continue to believe that IP based storage protocols will be preferred for future data center deployments. The future of IT is pointing to cloud based architectures, whether internal or external. At the core of the cloud is virtualization. And I believe that Ethernet and IP storage protocols offer the greatest overall value to unlock the potential of virtualization and clouds. Will other storage network technologies work? Of course. But, I’m not talking about whether a network “works”. I’m suggesting that a converged network environment with Ethernet and IP storage offers the best combined value for virtual environments and cloud deployments. I’ve written and spoken about this topic before. And I will likely continue to do so. So, let me mention a few reasons to choose IP storage, iSCSI or NAS, for use in cloud environments.

Mobility. One of the many benefits of server virtualization is the ability to non-disruptively migrate applications from one physical server to another to support load balancing, failover or redundancy, and servicing or updating of hardware. The ability to migrate applications is best achieved with networked storage since the data doesn’t have to move when a virtual machine (VM) moves. But, the network needs to maintain connectivity to the fabric when a VM moves. Ethernet offers a network technology capable of migrating or reassigning network addresses, in this case IP addresses, from one physical device to another. When a VM moves to another physical server, the IP addresses move with it. IP based storage, such as iSCSI, leverages the built in capabilities of TCP/IP over Ethernet to migrate network port addresses without interruption to applications.

Flexibility. Most data centers require a mixture of applications that access either file or block data. With server virtualization, it is likely that you’ll require access to file and block data types on the same physical server for either the guest or parent OS. The ability to use a common network infrastructure for both the guest and parent can reduce cost and simplify management. Ethernet offers support for multiple storage protocols. In addition to iSCSI, Ethernet supports NFS and CIFS/SMB resulting in greater choice to optimize application performance within your budget. FCoE is also supported on an enhanced 10Gb Ethernet network to offer access to an existing FC infrastructure. The added flexibility to interface with existing SAN resources enhances the value of 10Gb as a long-term networking solution.

Performance. Cost. Ubiquity. Other factors that enhance Ethernet storage, and therefore IP storage adoption, include a robust roadmap, favorable economics, and near-universal adoption. The Ethernet roadmap includes 40Gb and 100Gb speeds, which will support storage traffic and will be capable of addressing any foreseeable application requirements. Ethernet today offers considerable economic value as port prices continue to drop. Although Gb speeds offer sufficient bandwidth for most business applications, the cost per Gb of bandwidth with 10 Gigabit Ethernet (10GbE) is now lower than with GbE and therefore offers upside in cost and efficiency. Finally, nearly all new digital devices, including mobile phones, cameras, laptops, servers, and even some home appliances, are being offered with WiFi or Ethernet connectivity. Consolidating onto a single network technology means that the networking infrastructure to the rest of the world is essentially already deployed. How good is that?

Some may view moving to a shared network as kind of scary. The concerns are real. But Ethernet has been a shared networking platform for decades and continues to offer enhanced features, performance, and security to address its expanding range of applications. And just because it can carry other traffic doesn’t mean that it must. Physical isolation of Ethernet networks is just as feasible as with any other networking technology. Some may choose this option. Regardless, selecting a single network technology, even if not shared across all applications, can reduce not only capital expense but also operational expense. Your IT personnel can be trained on a single networking technology versus multiple specialized, single-purpose networks. You may even be able to reduce maintenance and inventory costs to boot.

Customers looking to architect their network and storage infrastructure for today and the future would do well to consider Ethernet and IP storage protocols. The advantages are pretty compelling.

Ethernet Storage Market Momentum Continues

The inexorable growth of the market for Ethernet storage continued in the first half of 2010 – in fact we’re getting very close to Ethernet storage being the majority of networked storage in the Enterprise.

According to IDC’s recent Q2 2010 Worldwide Storage Systems Hardware Tracker, Ethernet Storage (NAS plus iSCSI) revenue market share climbed to 45%, up from 39% in 2009, 32% in 2008 and 28% in 2007, as shown below.

                 2007     2008     2009     Q2 2010
    FC SAN        72%      68%      61%       55%
    iSCSI SAN      6%      10%      13%       15%
    NAS           22%      22%      26%       30%

In terms of capacity market share, we have already seen the crossover point, with Ethernet Storage at 52% of the total PB shipped, up from 47% in 2009, 42% in 2008 and 37% in 2007, as shown in the following table.

                 2007     2008     2009     Q2 2010
    FC SAN        62%      58%      53%       48%
    iSCSI SAN      8%      13%      15%       18%
    NAS           29%      29%      32%       34%

iSCSI over DCB: RELIABILITY AND PREDICTABLE PERFORMANCE

by Gary Gumanow

Welcome back. Following up on the previous blog post on iSCSI over DCB, this post highlights just some of the benefits that DCB can deliver.

DCB extends Ethernet by providing a network infrastructure that virtually eliminates packet loss, enabling improved data networking and management within the DCB network environment with features for priority flow control (P802.1Qbb), enhanced transmission selection (P802.1Qaz), congestion notification (P802.1Qau), and discovery. The result is a more deterministic network behavior. DCB is enabled through enhanced switches, server network adapters, and storage targets.

DCB delivers a “lossless” network and makes network performance extremely predictable. While standard Ethernet performs very well, its performance varies slightly (see graphic). With DCB, the maximum performance is the same, but performance varies very little. This is extremely beneficial for data center managers, enabling them to better predict performance levels and deliver smooth traffic flows. In fact, under test, DCB eliminates retransmissions due to dropped packets, making not only the network more efficient, but the host servers as well.

SEGREGATING AND PRIORITIZING TRAFFIC

In the past, storage networking best practice recommendations included physically separating data network from storage network traffic. Today’s servers commonly have quad-port GbE adapters to ensure sufficient bandwidth, so segregating traffic has been easy – for example, two ports can be assigned for storage networks and two for data networks. In some cases these Ethernet adapters aggregate ports together to deliver even greater throughput for servers.

With the onslaught of virtualization in the data center today, consolidated server environments have a different circumstance. Using virtualization software can simplify connectivity with multiple 10 GbE server adapters – consolidating bandwidth instead of distributing it among multiple ports and a tangle of wires. These 10 GbE adapters handle all the traffic – database, web, management, and storage – improving infrastructure utilization rates. But with traffic consolidated on fewer larger connections, how does IT segregate the storage and data networks, prioritize traffic, and guarantee service levels?

Data Center Bridging includes prioritization functionality, which improves management of traffic flowing over fewer, larger pipes. In addition to setting priority queues, DCB can allocate portions of bandwidth. For example, storage traffic can be configured as higher priority than web traffic – but the administrator can allocate 60% of bandwidth to the storage traffic and 40% to the Web, ensuring operations and predictable performance for all.
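
As a back-of-the-envelope illustration of that 60/40 example (a sketch only, not an actual switch configuration), the guaranteed floor each traffic class gets on a single 10GbE link works out as follows:

    # Back-of-the-envelope math for the 60/40 ETS example on a single 10GbE link.
    LINK_GBPS = 10
    allocation = {"storage": 0.60, "web": 0.40}  # ETS bandwidth shares

    for traffic_class, share in allocation.items():
        # Each class is guaranteed at least this much under congestion;
        # bandwidth left idle by one class can still be used by the other.
        print(f"{traffic_class}: {LINK_GBPS * share:.0f} Gb/s guaranteed")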

Thanks again for checking back here at this blog. Hopefully, you find this information useful and a good use of your last four minutes, and if you have any questions or comments, please let me know.

Regards,

Gary