Beyond Potatoes – Migrating from NFSv3

“It is a mistake to think you can solve any major problems just with potatoes.”
Douglas Adams (1952-2001, English humorist, writer and dramatist)

While there have been many advances and improvements to NFS over the last decade, some IT organizations have elected to continue with NFSv3 – like potatoes, it’s the staple filesystem protocol that just about any UNIX administrator understands.

Although NFSv3 is adequate for many purposes and remains a familiar and well-understood protocol, continuing to deploy it has become increasingly difficult to justify in a modern datacenter. For example, NFSv3 makes promiscuous use of ports, which makes it unsuitable for use over a wide area network (WAN) for a variety of security reasons; and increased server and client bandwidth demands, together with the improved functionality of Network Attached Storage (NAS) arrays, have outstripped NFSv3's ability to deliver high throughput.
NFSv4 and the minor versions that follow it are designed to address many of the issues that NFSv3 poses. NFSv4 also includes features intended to enable its use on global WANs and to improve the performance and resilience of NAS:

  • Firewall-friendly single port operations
  • Internationalization support
  • Replication and migration facilities
  • Mandatory use of strong RPC security flavors that depend on cryptography, with support for access control that is compatible with both UNIX® and Windows®
  • Use of character strings instead of integers to represent user and group identifiers
  • Advanced and aggressive cache management features with delegations
  • Trunking (with NFSv4.1 pNFS, or parallel NFS)

In April 2003, the Network File System (NFS) version 4 Protocol was ratified as an Internet standard, described in RFC-3530, which superseded NFS Version 3 (NFSv3, specified in RFC-1813). Since the ratification of NFSv4, further advances have been made to the standard, notably NFSv4.1 (described in RFC-5661, ratified in January 2010), which included several new features such as parallel NFS (pNFS). Further work is currently underway in the IETF on NFSv4.2.

Delegations with NFSv4

In NFSv3, clients have to function as if there is contention for the files they have opened, even though this is often not the case. As a result of this conservative approach to file locking, there are frequently many unneeded requests from the client to the server to find out whether an open file has been modified by some other client. Even worse, all write I/O in this scenario is required to be synchronous, further impacting client-side performance.
NFSv4 differs by allowing the server to delegate specific actions on a file to the client; this enables more aggressive client caching of data and locks. Via a delegation, a server temporarily cedes control of file updates and locking state to a client, and promises to notify the client if other clients are accessing the file. Once the client holds a delegation, it can perform operations on files whose data has been cached locally, thereby avoiding network latency and optimizing its use of I/O.
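To make the contrast concrete, here is a minimal, purely illustrative sketch in Python. Nothing here is a real NFS client API: the classes, method names, and the FakeServer stub are hypothetical stand-ins that only model the two caching strategies.

```python
# Hypothetical sketch only; not a real NFS client API.

class FakeServer:
    """Stand-in for a remote NFS server."""
    def __init__(self, files):
        self.files = files

    def getattr(self, path):
        return len(self.files[path])   # stand-in for mtime/size attributes

    def read(self, path):
        return self.files[path]


class V3StyleClient:
    """NFSv3-style caching: revalidate with a GETATTR round trip."""
    def __init__(self, server):
        self.server, self.data, self.attrs = server, {}, {}

    def read(self, path):
        attrs = self.server.getattr(path)          # network round trip
        if self.attrs.get(path) != attrs:          # cache may be stale
            self.data[path] = self.server.read(path)
            self.attrs[path] = attrs
        return self.data[path]


class V4StyleClient:
    """NFSv4-style caching: while a delegation is held, serve from cache."""
    def __init__(self, server):
        self.server, self.data, self.delegated = server, {}, set()

    def read(self, path):
        if path in self.delegated:
            return self.data[path]                 # no network traffic
        self.data[path] = self.server.read(path)   # fetch once; assume the
        self.delegated.add(path)                   # server grants a delegation
        return self.data[path]

    def recall(self, path):
        # Server callback: another client wants the file, so the
        # delegation is returned and caching reverts to revalidation.
        self.delegated.discard(path)


server = FakeServer({"/export/a": b"hello"})
client = V4StyleClient(server)
print(client.read("/export/a"))   # fetched over the "wire"
print(client.read("/export/a"))   # served purely from local cache
```

The point of the sketch is the second read: the NFSv3-style client pays a round trip on every access, while the delegated NFSv4-style client touches the network only when the delegation is recalled.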

Trunking with pNFS

Many additional enhancements to NFSv4 are available with NFSv4.1, of which pNFS is a part. NFSv4.1 adds the capability to perform trunking at the NFS level by adding a session layer. The client establishes a session with an NFSv4.1 server and can then create multiple TCP connections to it, each potentially going over a different network interface on the client and arriving on a different interface on the server. Different requests sent over the same session can then travel over different network paths, reducing latency and increasing bandwidth.
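As a rough illustration, the sketch below opens several TCP connections from different client interfaces and round-robins requests across them under one session identifier. A real NFSv4.1 client negotiates sessions with EXCHANGE_ID/CREATE_SESSION operations and frames requests as RPCs; the session id, framing, and dispatch policy here are hypothetical stand-ins for the shape of session trunking.

```python
import itertools
import socket

class TrunkedSession:
    """One session's requests spread across several TCP connections."""

    def __init__(self, server_addr, local_ips):
        # One connection per client interface; each may take a different
        # network path to (a different interface on) the server.
        self.conns = [
            socket.create_connection(server_addr, source_address=(ip, 0))
            for ip in local_ips
        ]
        self.session_id = b"SESSION-0001"           # hypothetical id
        self._next = itertools.cycle(self.conns)    # round-robin dispatch

    def send(self, request: bytes):
        # Every request carries the same session id, but successive
        # requests travel over different connections/paths.
        next(self._next).sendall(self.session_id + request)

# Usage, assuming a reachable server and two local client interfaces:
# session = TrunkedSession(("nfs.example.com", 2049),
#                          ["192.0.2.10", "192.0.2.11"])
# session.send(b"READ ...")
```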
Although client and server implementations of NFSv4.1 are available, they are in the early stages of implementation and adoption. However, to take advantage of them in the future, it is important to plan now for the move to NFSv4 and beyond – and there are many servers and clients available today that support NFSv4. NFSv4 is a mature and stable protocol with many advantages in its own right over its predecessors NFSv3 and NFSv2.

Potatoes and Beyond

Now is the time to make the switchover; there really is no justification for not pursuing NFSv4 as the NFS protocol version of first choice. Although migrating from earlier versions of NFS requires some planning, as there are significant differences between the protocols, the benefits are impressive. To ensure a smooth migration to NFSv4 and beyond, the SNIA Ethernet Storage Forum NFS Special Interest Group has recently published an overview white paper, "Migrating to NFSv4". This covers internationalization support, automatic mounting of NFSv4 filesystems on demand, and TCP protocol support, among other considerations.
NFSv4 and NFSv4.1 have been developed for a reason, and NFSv4.2 is on the horizon. Like the potato, NFSv3 is a staple of the network filesystem world. But as Douglas Adams said, "It is a mistake to think you can solve any major problems just with potatoes." NFSv4 fixes many of NFSv3's deficiencies and represents a major advance that brings improved availability, performance, and security – all the checklist items beyond potatoes that today's users of network attached storage demand.

Everything You Need to Know About iSCSI

Are you considering deploying an iSCSI storage network and would like to learn some of the best practices for configuring the environment, from host to storage? Well, now you can learn from an expert. The SNIA Ethernet Storage Forum will be sponsoring a live webcast with our guest speaker, Dennis Martin from Demartek. Dennis will share first-hand expertise and actionable best practices to effectively deploy iSCSI storage networks. A live Q&A will also be included. It doesn't matter if you have a large, medium, or small environment; Dennis will provide application-specific recommendations that you won't want to miss.

When: April 21st

Time: 8:00 am PT / 11:00 am ET

Free registration: http://www.brighttalk.com/webcast/26785

The SNIA ESF has several other web events planned for the rest of this calendar year. Let us know what topics are important to you. We want to make these events highly educational.

Deploying SQL Server with iSCSI – Answers to your questions

by: Gary Gumanow

Last Thursday (2/24/11), I hosted an Ethernet Storage Forum iSCSI SIG webinar with representatives from Emulex and NetApp to discuss the benefits of iSCSI storage networks in SQL application environments. You can catch a recording of the webcast on BrightTalk here.

The webinar was well attended, and we received so many great questions that we just didn't have time to answer all of them – which brings us to this blogpost. We have included answers to these unanswered questions below.
We’ll be hosting another webinar real soon, so please check back for upcoming ESF iSCSI SIG topics. You’ll be able to register for this event shortly on BrightTalk.com.

Let’s get to the questions. We took the liberty of editing the questions for clarity. Please feel free to comment if we misinterpreted the question.

Question: Is TRILL needed in the data center to avoid pausing of traffic while extending the number of links that can be used?

Answer: The Internet Engineering Task Force (IETF) has developed a new shortest-path frame routing protocol for multi-hop Layer 2 (L2) environments, called Transparent Interconnection of Lots of Links (TRILL). TRILL will enable multipathing for L2 networks and remove the restrictions that single-path Spanning Tree Protocol (STP) networks place on data center environments.

Although TRILL may serve as an alternative to STP, it doesn’t require that STP be removed from an Ethernet infrastructure. Hybrid solutions that use both STP and TRILL are not only possible but also will be the norm for at least the near-term future. TRILL will also not automatically eliminate the risk of a single point of failure, especially in hybrid environments.

Another area where TRILL is not expected to play a role is the routing of traffic across L3 routers. TRILL is expected to operate within a single subnet. While the IETF draft standard document mentions the potential for tunneling data, it is unlikely that TRILL will evolve in a way that will expand its role to cover cross-L3 router traffic. Existing and well-established protocols such as Multiprotocol Label Switching (MPLS) and Virtual Private LAN Service (VPLS) cover these areas and are expected to continue to do so.

In summary, TRILL will enable multipathing for L2 networks.

Question: How do you calculate bandwidth when you only have IOPS?
Answer: Bandwidth is a function of IOPS and I/O size; the formula is simply bandwidth = IOPS × I/O size. Example: 10,000 IOPS × a 4 KiB (4,096-byte) block size = 40.96 MB/sec.
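As a quick sanity check, the same calculation in a few lines of Python:

```python
def bandwidth_mb_per_sec(iops: int, io_size_bytes: int) -> float:
    """Bandwidth (MB/sec) = IOPS x I/O size."""
    return iops * io_size_bytes / 1_000_000

# 10,000 IOPS at a 4 KiB (4,096-byte) block size:
print(bandwidth_mb_per_sec(10_000, 4096))   # -> 40.96 MB/sec
```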

Question: When deploying FCoE, must all 10GbE switches support Data Center Bridging (DCB) and FCoE? Or can some pass through FCoE?
Answer: Today, in order to deploy FCoE, all switches in the data path must support both FCoE forwarding and DCB. Future standards include proposals to allow pass-through of FCoE commands without having to support Fibre Channel services. This will allow for more cost-effective networks where not all switch layers need to support the FCoE storage protocol.
Question: iSCSI performance is comparable to FC and FCoE. Do you expect to see iSCSI overtake FC in the near future?
Answer: FCoE deployments are still very small compared to traditional Fibre Channel and iSCSI. However, industry projections by several analyst firms indicate that Ethernet storage protocols, such as iSCSI and FCoE, will overtake traditional Fibre Channel due to increased focus on shared data center infrastructures to address applications such as private and public clouds. But even the most aggressive forecasts don't show this crossover happening for several years.
Customers looking to deploy new data centers are more likely today to consider iSCSI than in the past. Customers with existing Fibre Channel investments are likely to transition to FCoE in order to extend the investment of their existing FC storage assets. In either case, transitioning to 10Gb Ethernet with DCB capability offers the flexibility to do both.

Question: With 16Gb/s FC ratified, what product considerations does this raise for disk manufacturers?
Answer: We can't speak to what disk manufacturers will or won't do regarding 16Gb/s disks. But the current trend is to move away from Fibre Channel disk drives in favor of Serial Attached SCSI (SAS) and SATA disks, as well as SSDs. 16Gb Fibre Channel will be a reality and will play in the data center, but some vendors predict that its adoption rate will be much slower than that of previous generations.
Question: Why move to 10GbE if you have 8Gb Fibre Channel? The price is about the same, right?
Answer: If your only network requirement is block storage, then Fibre Channel provides a high-performance network to address that requirement. However, if you have a mixture of networking needs, such as NAS, block storage, and LAN, then moving to 10GbE provides sufficient bandwidth and flexibility to support multiple traffic types with fewer resources and lower overall cost.
Question: Is the representation of the number of links accurate when comparing Ethernet to Fibre Channel? The overall bandwidth of the wire may be close, but once protocol overheads are included, the comparison of real bandwidth isn't accurate. For example, FC protocol overhead is only 5% vs. TCP at 25%, and iSCSI framing adds another 4%. So your math on how many FC cables equal 10Gbps cables is not a fair comparison.

Answer: As pointed out in the question, comparing protocol performance requires more than just a comparison of the wire rates of the physical transports. Based upon protocol efficiency, one could conclude that the comparison between FC and TCP/IP is unfair as designed, because Fibre Channel should have produced greater data throughput from a comparable wire rate. However, the data in this case shows that iSCSI offers comparable performance in a real-world application environment, rather than just a benchmark test. The focus of the presentation was iSCSI; FCoE and FC were only meant to provide reference points, and the comparisons were not intended to be exact or precise. 10GbE and iSCSI offer the performance to satisfy business-critical requirements. Customers looking to deploy a storage network should consider a proof of concept to ensure that a new solution can satisfy their specific application requirements.
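For illustration, here is how the overhead figures cited in the question would translate into effective throughput on a nominal 10 Gb/s link. This is a rough model, not a benchmark; real efficiency depends on frame sizes, offloads, and workload, which is exactly why a proof of concept is recommended.

```python
def effective_gbps(wire_rate_gbps: float, *overheads: float) -> float:
    """Discount the raw wire rate by each fractional protocol overhead."""
    for o in overheads:
        wire_rate_gbps *= (1.0 - o)
    return wire_rate_gbps

# Overhead percentages taken from the question above:
print(effective_gbps(10.0, 0.05))          # FC-style 5% overhead -> 9.5
print(effective_gbps(10.0, 0.25, 0.04))    # TCP 25% + iSCSI 4%   -> 7.2
```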

Question: Two FC switches were used during this testing. Was that to address the operational risk of a single point of failure?
Answer: The use of two switches was due to a hardware limitation: each switch had 8 ports, and the test required 8 ports at the target and 8 at the host. Since this was a lab setup, we weren't configuring for high availability. However, the recommendation for any production environment would be to use redundant switches; this applies to iSCSI storage networks as well.
Question: How can iSCSI match all the distributed management and security capabilities of Fibre Channel / FCoE, such as FLOGI, the integrated name server, zoning, etc.?
Answer: The feature lists of the two protocols don't match exactly. The point of this presentation was that iSCSI is closing the performance gap and has enough high-end features to be enterprise-ready.
Question: How strong is the possibility that 40G Ethernet will be bypassed, with a move directly from 10G to 100G?
Answer: Vendors are shipping products today that support 40Gb Ethernet, so it seems clear that there will be a 40GbE market. Time will tell whether customers bypass 40GbE and wait for 100GbE.

Thanks again for checking out our blog. We hope to have you on our next webinar live, but if not, we’ll be updating this blog frequently.

Gary Gumanow – iSCSI SIG Co-chairman, ESF Marketing Chair

SQL Server “rocks” with iSCSI – Emulex and NetApp tell why

The leading storage network technology for mission-critical applications today is Fibre Channel (FC), a highly reliable and high-performing network technology for block storage applications. But for organizations that can't afford single-purpose networks or the added complexity of managing more than one network technology, FC may not be ideal. With the introduction of Fibre Channel over Ethernet (FCoE), deploying your FC storage resources over a shared Ethernet network is now possible. But FCoE isn't the only available option for block storage over Ethernet.

Initially used primarily by small and medium-sized businesses or for less demanding applications, iSCSI is now finding broad application in larger enterprises for mission-critical applications. Some of the drivers for increased iSCSI adoption in the enterprise include lower costs for 10Gb Ethernet components, as well as the drive toward cloud-based infrastructures, which benefit from the increased flexibility and scalability associated with IP network protocols.

On February 24th, the SNIA Ethernet Storage Forum will present a live webcast to discuss the advantages of iSCSI storage for business applications and will show test results demonstrating the performance of SQL Server deployed with 10GbE iSCSI. Hosted by Gary Gumanow, co-chair of the iSCSI SIG and ESF board member, this presentation will include content experts from Emulex and NetApp, along with a live Q&A.

Guest Speakers

Steve Abbott – Sr. Product Marketing Manager, Emulex

Wei Liu – Microsoft Alliance Engineer, NetApp

Date & Time: February 24th, 11am PT

Register today at http://www.brighttalk.com/webcast/25316

SNIA ESF

The SNIA Ethernet Storage Forum is dedicated to educating the IT community on the advantages and best use of Ethernet storage. This presentation is the first in a series of marketing activities that will primarily focus on data center applications during the calendar year 2011.

10GbE and iSCSI – Delivering Affordable Options

Last week a friend and I were talking about our children, and the conversation turned to affordable options for college. It turns out my friend's daughter is doing very well in school and is eager to study agriculture, but she is applying to only a couple of state universities. I suggested that she keep her options open and also apply to top agriculture schools like Cornell, but at $50,000 a year, cost is a major issue – and sadly, the reality is that colleges are becoming less and less affordable each year.

So it wasn't enough for my friend's daughter to have options; she needed affordable ones. I found myself comparing the college admissions world with my world, that of Information Technology, and thought how fortunate we were that technology breakthroughs like 10GbE not only provide improvements in performance and quality of service, but do so while delivering affordable options.

Let me explain.

First, it's important to recognize that the transition from 1GbE to 10GbE is happening now. Several factors are driving this transition, and with server virtualization there is strong market demand for 10GbE. In fact, according to a May 2010 IT Brand Pulse survey, nearly two-thirds of the IT managers questioned indicated they had deployed or were in the process of deploying 10GbE, and half said server virtualization was driving the adoption of 10GbE.

The reality is that 10GbE is already being integrated and deployed onto motherboards, with server manufacturers launching 10GbE as LAN on Motherboard (LOM) solutions on blade, rack, and tower servers. Furthermore, in their July 2010 report, the Dell'Oro Group estimated that 13.4M servers will ship with a 10GbE LOM by 2014, up from 0.5M in 2009 – they predicted that sometime in the back half of 2012, servers shipping with 10GbE will outnumber those shipping with 1GbE. Additionally, overall 10GbE per-port pricing is already less than $400 and is expected to drop further as 10GBase-T solutions are introduced.

But what does this mean for storage?

It comes down to affordable options, and with 10GbE there are multiple IP storage protocols to access both file and block data. 10GbE iSCSI is very compelling because there are even more options from which to choose. The standard baseline today is the iSCSI software initiator that is included with most standard operating systems and runs on most 10GbE NICs – sort of the basic converged 10GbE model.

To offload the server CPUs, to enable straightforward booting from an iSCSI SAN, and especially for enterprise-class manageability, converged iSCSI adapters that provide both NIC and iSCSI offload functionality are an option. In the past, this would have required the purchase of a dedicated iSCSI adapter. But today, some 10GbE LOMs and NICs can provide iSCSI adapter functionality as part of their standard offering or through a pay-as-you-go upgrade model.

So when you're buying your server, especially if you're planning to use it for virtualization, consider the benefits of a server with a 10GbE LOM for basic I/O scalability, and remember that it will also provide storage connectivity with an iSCSI software initiator. And if you're undecided whether you need the benefits of an iSCSI adapter but want to keep your options open, keep in mind that some server LOM solutions already support iSCSI offload or can be upgraded to support it. This way, you have a 10GbE solution with built-in affordable options for iSCSI.

I know this is somewhat of a stretch from the world of colleges, and unfortunately 10GbE doesn't help my friend's daughter, but I do hope that she sees the value of having options and applies to Cornell – you never know if a scholarship or financial aid might come through and give her that affordable option.

Ethernet and IP Storage – Today’s Technology Enabling Next Generation Data Centers

I continue to believe that IP based storage protocols will be preferred for future data center deployments. The future of IT is pointing to cloud based architectures, whether internal or external. At the core of the cloud is virtualization. And I believe that Ethernet and IP storage protocols offer the greatest overall value to unlock the potential of virtualization and clouds. Will other storage network technologies work? Of course. But, I’m not talking about whether a network “works”. I’m suggesting that a converged network environment with Ethernet and IP storage offers the best combined value for virtual environments and cloud deployments. I’ve written and spoken about this topic before. And I will likely continue to do so. So, let me mention a few reasons to choose IP storage, iSCSI or NAS, for use in cloud environments.

Mobility. One of the many benefits of server virtualization is the ability to non-disruptively migrate applications from one physical server to another to support load balancing, failover or redundancy, and servicing or updating of hardware. The ability to migrate applications is best achieved with networked storage since the data doesn’t have to move when a virtual machine (VM) moves. But, the network needs to maintain connectivity to the fabric when a VM moves. Ethernet offers a network technology capable of migrating or reassigning network addresses, in this case IP addresses, from one physical device to another. When a VM moves to another physical server, the IP addresses move with it. IP based storage, such as iSCSI, leverages the built in capabilities of TCP/IP over Ethernet to migrate network port addresses without interruption to applications.

Flexibility. Most data centers require a mixture of applications that access either file or block data. With server virtualization, it is likely that you’ll require access to file and block data types on the same physical server for either the guest or parent OS. The ability to use a common network infrastructure for both the guest and parent can reduce cost and simplify management. Ethernet offers support for multiple storage protocols. In addition to iSCSI, Ethernet supports NFS and CIFS/SMB resulting in greater choice to optimize application performance within your budget. FCoE is also supported on an enhanced 10Gb Ethernet network to offer access to an existing FC infrastructure. The added flexibility to interface with existing SAN resources enhances the value of 10Gb as a long-term networking solution.

Performance. Cost. Ubiquity. Other factors that enhance Ethernet storage, and therefore IP storage adoption, include a robust roadmap, favorable economics, and near-universal adoption. The Ethernet roadmap includes 40Gb and 100Gb speeds, which will support storage traffic and will be capable of addressing any foreseeable application requirements. Ethernet today offers considerable economic value as port prices continue to drop. Although GbE speeds offer sufficient bandwidth for most business applications, the cost per Gb of bandwidth with 10 Gigabit Ethernet (10GbE) is now lower than with GbE, and therefore offers upside in cost and efficiency. Finally, nearly all new digital devices, including mobile phones, cameras, laptops, servers, and even some home appliances, offer Ethernet connectivity, whether wired or over WiFi. Consolidating onto a single network technology means that the networking infrastructure to the rest of the world is essentially already deployed. How good is that?
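To put the cost-per-bandwidth point in concrete terms, here is the arithmetic as a small Python sketch. The sub-$400 10GbE port price echoes the figure mentioned in the previous post; the $100 GbE port price is purely an assumed number for illustration.

```python
# (price per port in $, port speed in Gb/s); the GbE price is assumed.
ports = {"GbE": (100.0, 1), "10GbE": (400.0, 10)}

for name, (price, gbps) in ports.items():
    print(f"{name}: ${price / gbps:.0f} per Gb/s of bandwidth")
# GbE: $100 per Gb/s
# 10GbE: $40 per Gb/s
```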

Some may view moving to a shared network as kind of scary, and the concerns are real. But Ethernet has been a shared networking platform for decades and continues to offer enhanced features, performance, and security to address its expanding range of applications. And just because it can carry other traffic doesn't mean that it must: physical isolation of Ethernet networks is just as feasible as with any other networking technology, and some may choose that option. Regardless, selecting a single network technology, even if it is not shared across all applications, can reduce not only capital expense but also operational expense. Your IT personnel can be trained on a single networking technology instead of multiple specialized single-purpose networks. You may even be able to reduce maintenance and inventory costs to boot.

Customers looking to architect their network and storage infrastructure for today and the future would do well to consider Ethernet and IP storage protocols. The advantages are pretty compelling.

Ethernet Storage Market Momentum Continues

The inexorable growth of the market for Ethernet storage continued in the first half of 2010 – in fact we’re getting very close to Ethernet storage being the majority of networked storage in the Enterprise.

According to IDC’s recent Q2 2010 Worldwide Storage Systems Hardware Tracker, Ethernet Storage (NAS plus iSCSI) revenue market share climbed to 45%, up from 39% in 2009, 32% in 2008 and 28% in 2007, as shown below.

            2007    2008    2009    Q2 2010
FC SAN       72%     68%     61%      55%
iSCSI SAN     6%     10%     13%      15%
NAS          22%     22%     26%      30%

In terms of capacity market share, we have already seen the crossover point, with Ethernet Storage at 52% of the total PB shipped, up from 47% in 2009, 42% in 2008 and 37% in 2007, as shown in the following table.

            2007    2008    2009    Q2 2010
FC SAN       62%     58%     53%      48%
iSCSI SAN     8%     13%     15%      18%
NAS          29%     29%     32%      34%
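For clarity, the "Ethernet storage" shares quoted above are simply the sum of the iSCSI SAN and NAS rows; a quick check in Python:

```python
revenue_q2_2010  = {"FC SAN": 55, "iSCSI SAN": 15, "NAS": 30}   # % share
capacity_q2_2010 = {"FC SAN": 48, "iSCSI SAN": 18, "NAS": 34}   # % share

for label, shares in (("revenue", revenue_q2_2010),
                      ("capacity", capacity_q2_2010)):
    ethernet = shares["iSCSI SAN"] + shares["NAS"]
    print(f"Q2 2010 Ethernet storage {label} share: {ethernet}%")
# -> 45% of revenue and 52% of capacity, matching the figures quoted above
```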


pNFS and Adoption in Academia

A few weeks ago I was invited to present the state of pNFS at Purdue University. They are interested in being one of the early adopters, and I jumped at the opportunity to promote pNFS. The presentation included a deep dive into the protocol and the need for scalability, and I continued with the current state of the protocol and the initial client implementations in Linux and OpenSolaris.

The presentation, and the discussion that followed, addressed some basic questions that I expected, such as why users should trust that NFSv4.1/pNFS will not suffer the same fate as NFSv4.0. This is a legitimate question that the pNFS developers in Linux often ask themselves, and the answer I gave was the same as theirs: pNFS will address many of the needs of HPC. After additional details on scalability, performance, and availability, everyone in the room agreed that pNFS is worth a closer look. I recommended that they start with the current Fedora distribution, which has both server and client support for the pNFS file layout.

As Purdue is a heavy Lustre user, they also asked how Lustre would support pNFS. If you didn't know, there are patches available for Lustre to support pNFS. I introduced the Purdue team to the chief developer of Lustre so they can be among the first to test the prototype patches.

This will help promote pNFS in academia.

We need to continue our team effort to promote pNFS in universities everywhere.

iSCSI over DCB: RELIABILITY AND PREDICTABLE PERFORMANCE

by Gary Gumanow

Following up from the previous blogpost on iSCSI over DCB, this blogpost highlights just some of the benefits that DCB can deliver. Welcome back.

DCB extends Ethernet by providing a network infrastructure that virtually eliminates packet loss, enabling improved data networking and management within the DCB network environment with features for priority flow control (P802.1Qbb), enhanced transmission selection (P802.1Qaz), congestion notification (P802.1Qau), and discovery. The result is a more deterministic network behavior. DCB is enabled through enhanced switches, server network adapters, and storage targets.

DCB delivers a "lossless" network and makes network performance extremely predictable. While standard Ethernet performs very well, its performance varies slightly; with DCB, the maximum performance is the same, but performance varies very little. This is extremely beneficial for data center managers, enabling them to better predict performance levels and deliver smooth traffic flows. In fact, under test, DCB eliminated packet re-transmissions caused by dropped packets, making not only the network more efficient, but the host servers as well.

SEGREGATING AND PRIORITIZING TRAFFIC

In the past, storage networking best practices recommended physically separating data network traffic from storage network traffic. Today's servers commonly have quad-port GbE adapters to ensure sufficient bandwidth, so segregating traffic has been easy – for example, two ports can be assigned to storage networks and two to data networks. In some cases these Ethernet adapters aggregate ports together to deliver even greater throughput for servers.

With the onslaught of virtualization in the data center, consolidated server environments face a different circumstance. Virtualization software can simplify connectivity with a small number of 10GbE server adapters – consolidating bandwidth instead of distributing it among multiple ports and a tangle of wires. These 10GbE adapters handle all the traffic – database, web, management, and storage – improving infrastructure utilization rates. But with traffic consolidated on fewer, larger connections, how does IT segregate the storage and data networks, prioritize traffic, and guarantee service levels?

Data Center Bridging includes prioritization functionality, which improves the management of traffic flowing over fewer, larger pipes. In addition to setting priority queues, DCB can allocate portions of bandwidth. For example, storage traffic can be configured as higher priority than web traffic, while the administrator allocates 60% of the bandwidth to storage traffic and 40% to web traffic, ensuring smooth operations and predictable performance for both.
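As a concrete illustration, the 60/40 split above works out as follows on a 10GbE link. Real DCB/ETS configuration happens on DCB-capable switches and adapters, not in application code; this sketch only shows the arithmetic of the bandwidth shares.

```python
LINK_GBPS = 10.0
allocation = {"storage": 0.60, "web": 0.40}   # ETS-style bandwidth shares

for traffic_class, share in allocation.items():
    print(f"{traffic_class}: guaranteed {share * LINK_GBPS:.1f} Gb/s")
# storage: guaranteed 6.0 Gb/s
# web: guaranteed 4.0 Gb/s
```

These allocations act as minimum guarantees; bandwidth left idle by one traffic class can generally be used by another.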

Thanks again for checking back here at this blog. Hopefully you find this information useful and a good use of your last four minutes. If you have any questions or comments, please let me know.

Regards,

Gary