SMB3 – These Questions Rock!

Earlier this month, the SNIA Ethernet Storage Forum hosted a live webcast on Server Message Block (SMB), “Rockin’ and Rollin’ with SMB3.” Presenting was Ned Pyle, Microsoft SMB Program Manager. If you missed the live event, I encourage you to watch it on-demand. The event drew a big audience and a lot of questions, so as promised, here are answers to them all.

Q. Other than that audit setup, is there a way to determine, via the OS, which SMB version is in use?

A. No. A network capture will tell you, but Windows doesn’t track this explicitly, other than the SMB1 auditing we added specifically to help identify whether SMB1 can safely be removed.
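
If you just want to see which dialect a particular server will negotiate today, one indirect option is to negotiate with it yourself, which is essentially a one-packet version of the network-capture approach. The sketch below is an assumption-laden illustration: it uses the third-party impacket Python library (not a Microsoft tool) and a hypothetical host name, and it only shows what a brand-new connection negotiates, not what existing clients are using.

```python
# Sketch: ask a server which SMB dialect it negotiates with a new client.
# Assumes the third-party "impacket" library (pip install impacket); this is
# not a Microsoft tool, and it only shows what a fresh connection negotiates,
# not what existing sessions are using.
from impacket.smbconnection import SMBConnection

# Dialect values as defined in MS-SMB2 (NEGOTIATE request/response).
DIALECT_NAMES = {
    0x0202: "SMB 2.0.2",
    0x0210: "SMB 2.1",
    0x0300: "SMB 3.0",
    0x0302: "SMB 3.0.2",
    0x0311: "SMB 3.1.1",
}

def probe_dialect(server: str) -> str:
    """Negotiate with `server` on port 445 and report the chosen dialect."""
    conn = SMBConnection(server, server)   # negotiation happens on connect
    try:
        return DIALECT_NAMES.get(conn.getDialect(), str(conn.getDialect()))
    finally:
        conn.close()

if __name__ == "__main__":
    print(probe_dialect("fileserver.example.com"))   # hypothetical host name
```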

Q. SMB 3.1.1 over Ethernet… can you discuss/compare with SMB 3.1.1 over Infiniband?

A. If the question is ‘what’s better, Infiniband or Ethernet’, my answer is always: it depends. I really don’t want to get into a competitive conversation under the auspices of SNIA. I simply recommend looking at the vendor stories and making an informed decision. Overall, Ethernet-based RDMA options like RoCE and iWARP configurations are generally less expensive than Infiniband ones. They all have tremendous performance. They all have their various ups and downs.

Q. Do you have statistics regarding SMB-Direct adoption?

A. It’s tricky, as our telemetry for Server usage is quite inaccurate due to firewall rules preventing servers from reaching the Internet. I can say indirectly that we know of thousands of customer deployments.

Q. What’s the name of the IO application?

A. DiskSpd

Q. I don’t believe your I/O test data; wouldn’t you need to trunk seventeen 10 Gigabit network cards to achieve 168 gigabit I/O capability?

A. This was a misunderstanding; you thought I said 10Gb, but it was 100Gb. We used 100Gb RDMA NICs in this demo with RoCEv2. At that point the bottleneck was the storage; the network had plenty of bandwidth left over.
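
For the arithmetic behind that answer, here is a quick back-of-the-envelope sketch in Python. The 168 Gbit/s figure comes from the question itself; the NIC counts are simple line-rate division, not details of the actual demo rig.

```python
# Back-of-the-envelope check of the bandwidth math in the question above.
import math

TARGET_GBIT_PER_S = 168   # throughput figure quoted in the question

def nics_needed(nic_speed_gbit: float, target_gbit: float = TARGET_GBIT_PER_S) -> int:
    """Smallest number of NICs whose combined line rate covers the target."""
    return math.ceil(target_gbit / nic_speed_gbit)

print(nics_needed(10))                      # 17 x 10GbE ports, as the questioner assumed
print(nics_needed(100))                     # 2 x 100GbE RDMA NICs cover it easily
print(f"{TARGET_GBIT_PER_S / 8:.0f} GB/s")  # 168 Gbit/s is only ~21 GB/s of storage traffic
```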

Q. These are great, but how many of these new features will end up locking out FOSS/GPL implementations of SMB such as SAMBA?

A. Absolutely not! We work with the Samba team and the Linux community to ensure that SMB can be broadly deployed, with all of its capabilities, inside open source software.

Q. NetApp supports CA shares (which uses transparent failover) in two use cases: SQL over SMB and Hyper-V over SMB3.

A. This sounds like someone from NetApp stating a fact, so I will simply say “good!” 🙂

Q. Can you please post links to the tools mentioned in this presentation and to the I/O tests? Is there a comparison using Iometer?

A. Here you go:

  • https://gallery.technet.microsoft.com/DiskSpd-a-robust-storage-6cd2f223
  • https://github.com/Microsoft/diskspd
  • https://github.com/Microsoft/diskspd/tree/master/Frameworks/VMFleet

Q. Are you forced to use SMB1 because of the Windows 2003 issue?

A. Windows Server 2003 and XP (and older, like Win2000) all use SMB1. If they are still around, you will need to leave SMB1 enabled on any machines talking to them.

Q. When will Microsoft officially drop support for SMB1?

A. Overall, for the protocol there is no timeline. It is deprecated, however, so no further work will be done on SMB1 other than critical security patches. SMB1 will start being removed *by default* in a coming release of Windows Server and the Windows 10 client. This doesn’t mean totally removed forever, but instead “missing by default,” where you must directly opt in to adding it back. It will be done on a per-SKU basis, so enterprises are likely to see it first, since they are better equipped to understand it and less likely to need SMB1.

Q. Is there a way to change block size in SMB3 ?

A. In the SMB2_READ processing section, 3.3.5.12 (https://msdn.microsoft.com/en-us/library/cc246729.aspx), the specification says:

The server SHOULD<296> fail the request with STATUS_INVALID_PARAMETER if the Length field is greater than Connection.MaxReadSize.

If Connection.SupportsMultiCredit is TRUE the server MUST validate CreditCharge based on Length, as specified in section 3.3.5.2.5. If the validation fails, it MUST fail the read request with STATUS_INVALID_PARAMETER.

There is similar text for SMB2_WRITE in 3.3.5.13 (https://msdn.microsoft.com/en-us/library/cc246730.aspx).

Then, off to SMB2_NEGOTIATE in 3.3.5.4 (https://msdn.microsoft.com/en-us/library/cc246768.aspx) to discover:

  • MaxReadSize is set to the maximum size, in bytes, of the Length in an SMB2 READ Request (section 2.2.19) that the server will accept on the transport that established this connection. This value SHOULD<231> be greater than or equal to 65536. Connection.MaxReadSize MUST be set to MaxReadSize.
  • MaxWriteSize is set to the maximum size, in bytes, of the Length in an SMB2 WRITE Request (section 2.2.21) that the server will accept on the transport that established this connection. This value SHOULD<232> be greater than or equal to 65536. Connection.MaxWriteSize MUST be set to MaxWriteSize.

<231> Section 3.3.5.4: If the underlying transport is NETBIOS over TCP, Windows servers set MaxReadSize to 65536. Otherwise, MaxReadSize is set based on the following table.

Windows version \ Connection.Dialect                                              2.0.2    All other SMB2 dialects
Windows Vista SP1 \ Windows Server 2008                                           65536    N/A
Windows 7 \ Windows Server 2008 R2                                                65536    1048576
Windows 8 without [MSKB-2934016] \ Windows Server 2012 without [MSKB-2934016]     65536    1048576
All other SMB2 servers                                                            65536    8388608

<232> Section 3.3.5.4: If the underlying transport is NETBIOS over TCP, Windows servers set MaxWriteSize to 65536. Otherwise, MaxWriteSize is set based on the following table.

Windows version \ Connection.Dialect                                              2.0.2    All other SMB2 dialects
Windows Vista SP1 \ Windows Server 2008                                           65536    N/A
Windows 7 \ Windows Server 2008 R2                                                65536    1048576
Windows 8 without [MSKB-2934016] \ Windows Server 2012 without [MSKB-2934016]     65536    1048576
All other SMB2 servers                                                            65536    8388608
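
In other words, the largest single read or write is whatever the server advertises in the negotiate response, and large transfers are paced by the multi-credit rules. Below is a rough, unofficial sketch of the client-side arithmetic this implies: the credit-charge formula is the one given in MS-SMB2 section 3.1.5.2, while splitting an oversized transfer into MaxReadSize-sized chunks is an assumption about typical client behavior, not quoted spec text.

```python
# Rough sketch of the client-side arithmetic implied by the spec text above.
# The credit-charge formula follows MS-SMB2 section 3.1.5.2; chunking an
# oversized transfer into MaxReadSize pieces is an assumed (typical) client
# strategy, not quoted spec text.

def credit_charge(payload_size: int) -> int:
    """CreditCharge = (payload size - 1) / 65536 + 1, per MS-SMB2 3.1.5.2."""
    return (payload_size - 1) // 65536 + 1

def plan_read(total_bytes: int, max_read_size: int = 1048576):
    """Split a large read into requests no bigger than the negotiated MaxReadSize."""
    offset = 0
    while offset < total_bytes:
        length = min(max_read_size, total_bytes - offset)
        yield offset, length, credit_charge(length)
        offset += length

# Example: reading 10 MiB from a server advertising MaxReadSize = 1 MiB (1048576).
for offset, length, charge in plan_read(10 * 1024 * 1024):
    print(f"READ offset={offset:>9} length={length:>8} credit_charge={charge}")
```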

Update: If you missed the live event, it’s now available  on-demand. You can also  download the webcast slides.

Rock n’ Roll with SMB3

Server Message Block (SMB) is the core file-transfer protocol of Windows, MacOS and Samba, and has become widely deployed. It’s ubiquitous – a 30-year-old family of network code.

However, the latest iteration of SMB3 is almost unrecognizable when compared to versions only a few years old. That’s why the SNIA Ethernet Storage Forum (ESF) has invited Microsoft’s Ned Pyle, program manager of the SMB protocol, to speak at our live webcast, “Rockin’ and Rollin’ with SMB3.”

Extensive reengineering has led to advanced capabilities that include multichannel, transparent failover, scale-out, and encryption. SMB Direct makes use of RDMA networking, creates a block transport system, and provides reliable transport for zettabytes of unstructured data worldwide.

SMB3 forms the basis of hyperconverged and scale-out systems for virtualization and SQL Server. It is available on a variety of hardware devices, from printers and network-attached storage appliances to Storage Area Networks. It is often the most prevalent protocol on a network, offering high-performance data transfers as well as efficient end-user access over wide-area connections. Register now for the live event on April 5th to hear:

  • Brief background on SMB
  • An overview of the SMB 3.x family, first released with Windows 8, Windows Server 2012, MacOS 10.10, Samba 4.1, and Linux CIFS 3.12
  • What changed in SMB 3.1.1
  • Understanding SMB security, scenarios, and workloads
  • The deprecation and removal of the legacy SMB1 protocol
  • How SMB3 supports hyperconverged and scale-out storage

This is a unique opportunity to “rock out” with an SMB3 expert on the front lines at Microsoft. We hope to see you on April 5th.

Update: If you missed the live event, it’s now available  on-demand. You can also  download the webcast slides.

Cloud File Services: SMB/CIFS and NFS…in the Cloud – Q&A

At our recent live ESF Webcast, “Cloud File Services: SMB/CIFS and NFS…in the Cloud” we talked about evaporating your existing file server into the cloud. Over 300 people have viewed the Webcast. If you missed it, it’s now available on-demand. It was an interactive session with a lot of great questions from attendees. We did not have time to address them all – so here is a complete Q&A from the Webcast. If you think of additional questions, please feel free to comment on this blog.

Q. Can your Storage OS take advantage of born-in-the-cloud File Storage like Zadara Storage at AWS and Azure?

A. The concept presented is generic in nature. Whichever storage OS the customer chooses to use in the cloud will have its own requirements on the underlying storage beneath it. Most Storage OSes used for Cloud File Services will likely use block or object backends rather than a file backend.

Q. Regarding Cloud File Services for “client file services”: traditional file services require the client and server to be connected and on the same network, and they are tied to identities available in that network. How can the SMB/NFS protocols be used to serve data from the cloud to clients coming from different networks (4G/corporate)? Isn’t REST the appropriate interface for that model?

A. The answer depends on the use case. There are numerous examples of SMB over the WAN, for example, so it’s not far-fetched to imagine someone using Cloud File Services as an alternative to a “Sync & Share” solution for client file services. REST (or similar) may be appropriate for some, while file-based protocols will work better for others. Cloud File Services provides the capability to extend into environments it couldn’t reach before.

Q. Is Manila like VMware VSAN or VASA?

A. Please take a look at the Manila project on OpenStack’s website https://wiki.openstack.org/wiki/Manila

Q. How do you take care of data security while moving data from on-premises to cloud (Storage OS)?

A. The answer depends on the Storage OS you are using for your Cloud File Services platform.   If your Storage OS supports encryption, for example, in its storage-to-storage in-flight data transport, then data security in-flight would be taken care of.   There are many facets to security that need to be thought through, including security at rest, some of which may depend on the environment (private/on-premises, service provider, hyperscalar) the Storage OS is sitting in.

Q. How do you get the data out of the cloud?   I think that’s been a traditional concern with cloud storage.

A. That’s the beauty of Cloud File Services!   With data movement and migration provided at the storage-level by the same Storage OS across all locations, you can simply move the data between on-premises and off-premises and expect similar behavior on both ends.   If you choose to put data into a native environment specific to a hyperscalar or service provider, you run the risk of lock-in.

Q. 1. How does one address the issue of “chatty” applications over the cloud? 2. File services have “poor” performance for small files; how does one address that issue? Block and object storage do address it. 3. Why not expose SMB, NFS, and object interfaces on the compute node?

A. 1. We should take this opportunity to make the applications less chatty! 🙂   One possible solution here is to operate the application and Storage OS in the same environment, in much the same way you would have on-premises.   If you choose a hyperscalar or service provider, for chatty use cases, it may be best to keep the application and storage pieces “closer” together.

2. Newer file protocols are getting much better at this.   SMB 3.02 for instance, was optimized for 8K transactions.   With a modern Storage OS, you will be able to take advantage of new developments.

3. That is precisely the idea: the Storage OS operating in the “compute nodes,” serving out their interfaces, while taking advantage of different backend offerings for cost and scalability.

Q. Most storage arrays (NetApp, EMC, etc.) can provide five nines of resilience; cloud VMs typically offer three nines. How do you get to five nines with CFS?

A. Cloud File Services (CFS) as a platform can span across all of your environments, and as such, the availability guarantees will depend upon each environment in which CFS is operating.

Q. Why are we “adding” another layer? Why can’t we just use powerful “NAS” devices that can have different media like NVMe, Flash SSD, or HDDs?

A. Traditional applications may not want to change, but this architecture should suit those well.   It’s worth examining that “cloud-ready” model.   Is the goal to be “cloud-ready,” or is the goal to support the scaling, failover, and on-demand-ness that the cloud has the ability to provide?   Shared nothing is a popular way of accomplishing some of this, but it may not be the only way.

The existing interfaces provided by hyperscalars do provide abstraction, but if you are building an application, you run a strong risk of lock-in on any particular abstraction.   What is your exit strategy then?   How do you move your data (and applications) out?

By leveraging a common Storage OS across your entire infrastructure (on-premises, service providers, and hyperscalars), you have a very simple exit strategy, and your exit and mobility strategy become very similar, if not the same, with the ability to scale or move across any environment you choose.

Q. How do you virtualize storage OS? What happens to native storage OS hardware/storage?

A. A Storage OS can be virtualized similar to a PC or traditional server OS.   Some pieces may have to be switched or removed, but it is still an operating system.

Q. Why is your Storage displayed as part of your Compute layer?

A. In the hyperscalar model, the Storage OS is sitting in the compute layer because it is, in effect, running as a virtual machine the same as any other.   It can then take advantage of different tiers of storage offered to it.

Q. My concern is that it would be slower as a VM than as a storage controller. There’s really no guarantee of storage performance in the cloud; in fact, most hyperscalers won’t give me a good SLA without boatloads of money. How might you respond to this?

A. Of course with on-premises infrastructure, a company or service provider will have more of a guarantee in the sense that they control the hardware behind it.   However, as we’ve seen, SLA’s continue to improve over time, and costs continue to come down for the Public Cloud.

Q. Does FreeNAS qualify as a Storage OS?

A. I recommend checking with their team.

Q. Isn’t this similar to Hybrid cloud?

A. Cloud File Services (CFS) is one way of looking at Hybrid Cloud.   Savvy readers and listeners will pick up that having the same Storage OS everywhere doesn’t necessarily limit you to only File Services. iSCSI or RESTful interfaces could work exactly the same.

Q. What do you mean by Storage OS? Can you give some examples?

A. As I work for NetApp, one example is Data ONTAP.   EMC has several as well, such as one for the VNX platform.   Most major storage vendors will have their own OS.

Q. I think one of the key questions is data access latency over the WAN: how can I move my data to the cloud, and how can I move it back when needed, for example, when the service is terminated?

A. Latency is a common concern, and connectivity is always important.   Moving your data into and out of the cloud is the beauty of the Cloud File Services platform, as I mentioned in other answers.   If one of your environments goes down (for example, your on-premises datacenter) then you would feasibly be able to shift your workloads over to one of your other environments, similar to a DR situation.   That is one example of where storage replication and application awareness across sites is important.

Q. Running applications like Oracle, Exchange through SMB/NFS (NAS), don’t you think it will be slow compared to FC (block storage)?

A. Oracle has had great success running over NFS, and it is extremely popular. While Exchange doesn’t currently support running directly over SMB, it’s not ludicrous to think that it may happen at some point in the future, in the same way it has for SQL Server.

Q. What about REST and S3 API or are they just for object storage?   What about CINDER?

A. The focus of this presentation was only File Services, but as I mentioned in another answer, if your Storage OS supports these services (like REST or S3), it’s feasible to imagine that you could span them in the same way that we discussed CFS.

Q. Why would a SAN-based application move to NAS?

A. This was discussed in one of the early slides in the presentation (slide 10, I believe). Data mobility and granular management were discussed: it’s easier to move, delete, and otherwise manage files than LUNs, an admin can operate at a more granular level, and it’s easier to operate and maintain (no HBAs, etc.). File protocols are generally considered “easier” to use.

New Webcast: Cloud File Services: SMB/CIFS and NFS…in the Cloud

Imagine evaporating your existing file server into the cloud with the same experience and level of control that you currently have on-premises. On October 1st, ESF will host a live Webcast that introduces the concept of Cloud File Services and discusses the pros and cons you need to consider.

There are numerous companies and startups offering “sync & share” solutions across multiple devices and locations, but for the enterprise, caveats are everywhere. Register now for this Webcast to learn:

  • Key drivers for file storage
  • Split administration with sync & share solutions and on-premises file services
  • Applications over File Services on-premises (SMB 3, NFS 4.1)
  • Moving to the cloud: your storage OS in a hyperscalar or service provider
  • Accommodating existing File Services workloads with Cloud File Services
  • Accommodating  cloud-hosted applications over Cloud File Services

This Webcast will be a vendor-neutral and informative discussion on this hot topic. And since it’s live, we encourage you to bring your questions for our experts. I hope you’ll register today, and we look forward to having you attend on October 1st.

SMB 3.0 – Your Questions Asked and Answered

Last week we had a large and highly-engaged audience at our live Webcast: “SMB 3.0 – New Opportunities for Windows Environments.” We ran out of time answering all the questions during our event so, as promised, here is a recap of all the questions and answers to attendees’ questions. The Webcast is now available on demand at http://snia.org/forums/esf/knowledge/webcasts. You can also download a copy of the presentation slides there.

Q. Have you tested SMB Direct over 40Gb Ethernet or using RDMA?

A. SMB 3 (including SMB Direct) has been demonstrated over 40Gb Ethernet using TCP or RDMA, and over Infiniband using RDMA.

Q. 100 iops, really?

A. If you look at the bottom right of slide 27 (Performance Test Results) you will see that the vertical axis is IOPs/sec (Normalized). This is a common method for comparing alternative storage access methods on the same storage server. I think we could have done a better job in making this clear by labeling the vertical axis as “IOPs (Normalized).”

Q. How does SMB 3.0 weigh against NFS-4.1 (with pNFS)?

A. That’s a deep question that probably deserves a webcast of its own. SMB 3 doesn’t have anything like pNFS. However many Windows workloads don’t need the sophisticated distributed namespace that pNFS provides. If they do, the namespace is stitched together on the client using mounts and DFS-N.

Q. In the iSCSI ODX case, how does server1 (source) know about the filesystem structure being stored on the LUN (server2) i.e. how does it know how to send the writes over to the LUN?

A. The SMB server (source) does not care about the filesystem structure on the LUN (destination). The token mechanism only loosely couples the two systems. They must agree that the client has permission to do the copy, and then they perform the actual copy of a set of blocks. Metadata for the client’s file system representing the copied file on the LUN is part of the client workflow. The client drags and drops the file from the share to the mounted LUN, the client subsystem determines that ODX is available, and the client modifies the file system metadata on the LUN as part of the copy operation, including block maps. ODX is invoked, and the servers just move blocks.
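
To make that division of labor concrete, here is a schematic, assumption-heavy sketch of the token flow. The helper functions are hypothetical stand-ins for the real FSCTL_OFFLOAD_READ / FSCTL_OFFLOAD_WRITE plumbing; the sketch models the sequence described above, not the actual Windows copy engine.

```python
# Schematic simulation of a token-based ODX copy, as described above.
# The helpers are hypothetical stand-ins for the real FSCTL_OFFLOAD_READ /
# FSCTL_OFFLOAD_WRITE plumbing; they only model the flow, not Windows itself.
import secrets

def offload_read(src, offset, length):
    """Stand-in for FSCTL_OFFLOAD_READ: the source hands back an opaque token."""
    return {"token": secrets.token_bytes(16), "offset": offset, "length": length}

def offload_write(dst, offset, length, token):
    """Stand-in for FSCTL_OFFLOAD_WRITE: the destination redeems the token and
    the storage moves the blocks server-side; returns bytes 'written'."""
    return min(length, token["length"])

def odx_copy(src, dst, total_length, chunk=256 * 1024 * 1024):
    """Client-side view of the copy: the client passes tokens, never the data."""
    copied = 0
    while copied < total_length:
        this_chunk = min(chunk, total_length - copied)
        token = offload_read(src, copied, this_chunk)             # 1. get token from source
        copied += offload_write(dst, copied, this_chunk, token)   # 2. redeem it at destination
        # 3. the client updates its own file-system metadata/bookkeeping here
    return copied

print(odx_copy(r"\\server1\share\big.vhdx", r"D:\big.vhdx", 10 * 2**30))
```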

Q. Can ODX copies be within the same share or only between?

A. There is no restriction on ODX in this respect. The source and destination of the copy can be on the same share, different shares, or even completely different protocols, as illustrated in the presentation.

Q. Does SMB 3 provide an API for integration with storage vendor snapshots other than MS VSS?

A. Each storage vendor has to support the Microsoft Remote VSS protocol, which is part of the SMB 3.0 protocol specification. In Windows Server 2012 and Windows 8, the VSS APIs were extended to support UNC share paths.

Q. How does SMB 3 compare to iSCSI rather than FC?

A. Please examine slide 27, which compares SMB 3, FC and iSCSI on the same storage server configuration.

Q. I have a question about SMB and CIFS. I know both are protocols used for sharing, but why has CIFS been adopted by most of the storage vendors? We are using CIFS shares on our NetApps, and I have seen that most of the other storage vendors are also using CIFS on their NAS devices.

A. There has been confusion between the terms “SMB” and “CIFS” ever since CIFS was introduced in the 90s. Fundamentally, the protocol that manages the data transfer between client and server is SMB. It always has been. IMO, CIFS was a marketing term created in response to Sun’s WebNFS. CIFS became popularized with most SMB server vendors calling their product a CIFS server. Usage is slowly changing, but if you have a CIFS server, it talks SMB.

Q. What is required on the client? Is this a driver with multi-path capability? Is this at the file system level in the client? What is needed in transport layer for the failover?

A. No special software or driver is required on the client side as long as it is running Windows 8 or a later operating environment.

Q. Are all these new features cross-platform or is it something only supported by Windows?

A. SMB 3 implementations by different storage vendors will have some set of these features.

Q. Are virtual servers (cloud based) vs. non-virtual transition speeds greatly different?

A. The speed of a transition, i.e., failover, depends on two steps. The first is the time needed to detect the failure, and the second is the time needed to recover from that failure. While both virtual and physical servers support transition, the speed can vary significantly due to different network configurations. See more in the next question.

Q. Is there latency as it fails over?

A. Traditionally, SMB timeouts were associated with lower-level, i.e., TCP, timeouts. Client behavior has varied over the years, but a rule of thumb was detection of a failure in 45 seconds. This error would be passed up the stack to the user/application. With SMB 3 there is a new protocol called SMB Witness. A scale-out SMB server will include nodes providing SMB shares as well as nodes providing the Witness service. A client connects to SMB and Witness. If the node hosting the SMB share fails, the Witness node will notify the client, indicating the new location for the SMB share. This can significantly reduce the time needed for detection. The scale-out SMB server can implement a proprietary mechanism to quickly detect node failure and trigger a Witness notification.
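
To illustrate that notification path, here is a toy model of the client’s side of the exchange. The class and function names are hypothetical; the real mechanism is the SMB Witness (MS-SWN) RPC protocol, and this sketch only captures the register/notify/reconnect sequence described above.

```python
# Toy model of the witness-assisted failover sequence described above.
# Class and method names are hypothetical; the real mechanism is the SMB
# Witness (MS-SWN) RPC protocol. The point is the flow: register with a
# witness node, get notified of a move, and reconnect without waiting for
# long TCP timeouts.

class WitnessClient:
    """Hypothetical stand-in for a client's registration with a witness node."""
    def __init__(self, witness_node: str):
        self.witness_node = witness_node
        self.callbacks = []

    def register(self, on_move):
        """Ask the witness service to watch the node hosting our share."""
        self.callbacks.append(on_move)

    def notify_move(self, new_node: str):
        """Simulate the witness pushing 'your share moved to new_node'."""
        for callback in self.callbacks:
            callback(new_node)

def connect(node: str, share: str) -> str:
    return rf"\\{node}\{share}"          # stand-in for establishing an SMB session

# The client connects to both the SMB share and a witness node.
session = connect("node1", "vmstore")
witness = WitnessClient("node2")

def on_move(new_node: str) -> None:
    global session
    session = connect(new_node, "vmstore")   # fast reconnect, no long timeout wait
    print("reconnected to", session)

witness.register(on_move)

# node1 fails; the witness detects it quickly and notifies the client.
witness.notify_move("node3")
```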

Q. Sync or Async?

A. Whether state movement between server nodes is sync or async depends on vendor implementation. Typically all updated state needs to be committed to stable storage before returning completion to the client.

Q. How fast is this transition with passing state IDs between hosts?

A. The time taken for the transition includes the time needed to detect the failure of Client A and the time needed to re-establish things using Client B. The time taken for both is highly dependent on the nature of the clustered app as well as the supported use case.

Q. We already have FC (using VMware), why drop down to SMB?

A. If you are using VMware with FC, then moving to SMB is not an option. VMware supports the use of NFS for hypervisor storage but not SMB.

Q. What are the top applications on SMB 3.0?

A. Hyper-V, MS-SQL, IIS

Q. How prevalent is true “multiprotocol sharing” taking place with common datasets being simultaneously accessed via SMB and NFS clients?

A. True “multiprotocol sharing” i.e. simultaneous access of a file by NFS & SMB clients is extremely rare. The NFS and SMB locking models don’t lend themselves to that. Sharing of a multiprotocol directory is an important use case. Users may want access to a common area from Linux, OS X and Windows. But this is sequential access by one OS/protocol at a time not all at once.

Q. Do we know growth % split between NFS and SMB?

A. There is no explicit industry tracker for the protocol split, and there is probably not much point in collecting one either, as the protocols aren’t really in competition. There is affinity among applications, OSes, and protocols – MS products tend toward SMB (Hyper-V, SQL Server, …), and non-Microsoft toward NFS (VMware, Oracle, …). Cloud products at the point of consumption are normally HTTP RESTful protocols.

Ethernet Storage Forum – 2012 Year in Review and What to Expect in 2013

As we come to a close of the year 2012, I want to share some of our successes and briefly highlight some new changes for 2013. Calendar year 2012 has been eventful and the SNIA-ESF has been busy. Here are some of our accomplishments:

  • 10GbE – With virtualization and network convergence, as well as the general availability of LOM and 10GBASE-T cabling, this was a “breakout year” for 10GbE. In July, we published a comprehensive white paper titled “10GbE Comes of Age.” We then followed up with a Webcast, “10GbE – Key Trends, Predictions and Drivers.” We ran this live once in the U.S. and once in the U.K., and combined, the Webcast has been viewed by over 400 people!
  • NFS – NFS has also been a hot topic. In June we published a white paper, “An Overview of NFSv4,” highlighting the many improved features NFSv4 has over NFSv3. A Webcast to help users upgrade, “NFSv4 – Plan for a Smooth Migration,” has also been well received, with over 150 viewers to date. A 4-part Webcast series on NFS is now planned. We kicked the series off last month with “Reasons to Start Working with NFSv4 Now” and will continue on this topic during the early part of 2013. Our next NFS Webcast will be “Advances in NFS – NFSv4.1 and pNFS.” You can register for that here.
  • Flash – The availability of solid state devices based on NAND flash is changing the performance efficiencies of storage. Our September Webcast “Flash – Plan for the Disruption” discusses how Flash is driving the need for 10GbE and has already been viewed by more than 150 people.

We have also worked to expand membership and welcome new members Tonian and LSI to the ESF. With this new charter, we expect to see an increase in membership participation as we drive incremental value and establish ourselves as a leading voice for Ethernet storage.

As we move into 2013, we expect two hot trends to continue – the broader use of file protocols in datacenter applications, and the continued push toward datacenter consolidation with the use of Ethernet as a storage network. In order to better address these two trends, we have modified our charter for 2013. Our NFS SIG will be renamed the File Protocol SIG and will focus on promoting not only NFS, but also SMB / CIFS solutions and protocols. The iSCSI SIG will be renamed to the Storage over Ethernet SIG and will focus on promoting data center convergence topics with Ethernet networks, including the use of block and file protocols, such as NFS, SMB, FCoE, and iSCSI, over the same wire. This modified charter will allow us to have a richer conversation around storage trends relevant to your IT environment.

So, here’s to a successful 2012, and to an exciting year ahead.