The Evolution of Congestion Management in Fibre Channel

In 2021, the Fibre Channel (FC) industry introduced Fabric Notifications as a key resiliency mechanism for storage networks, designed to combat congestion, link integrity issues, and delivery errors. Since then, numerous manufacturers of FC SAN solutions have implemented Fabric Notifications, enhancing the overall user experience when deploying FC SANs.
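
To make the concept concrete, a fabric notification can be thought of as a small structured event that an end device registers for and reacts to, for example by throttling its I/O rate when the fabric reports congestion. The sketch below is a simplified illustration only; the notification types, fields, and back-off policy are placeholders, not the actual Fibre Channel definitions.

    from dataclasses import dataclass
    from enum import Enum, auto

    class NotificationType(Enum):
        # Simplified stand-ins for the categories Fabric Notifications address
        CONGESTION = auto()      # fabric reports congestion on a path
        LINK_INTEGRITY = auto()  # link errors detected (e.g., CRC errors, signal loss)
        DELIVERY = auto()        # frames discarded or delivery failures

    @dataclass
    class FabricNotification:
        kind: NotificationType
        port_wwn: str   # worldwide name of the affected port (illustrative field)
        severity: int   # 0 = informational .. 3 = critical (made-up scale)

    def adjust_io_rate(event: FabricNotification, io_rate_limit: float) -> float:
        """Toy host policy: back off on serious congestion, otherwise keep the current rate."""
        if event.kind is NotificationType.CONGESTION and event.severity >= 2:
            return io_rate_limit * 0.5  # halve the host's I/O rate limit
        return io_rate_limit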

On August 27, 2024, the SNIA Data, Networking & Storage Forum is hosting a live webinar, “The Evolution of Congestion Management in Fibre Channel,” for a deep dive into Fibre Channel congestion management. We’ve convened a stellar, multi-vendor group of experts with extensive Fibre Channel knowledge and different technology viewpoints to explore the evolution of Fabric Notifications and the solutions available for this exciting new technology. You’ll learn: Read More

SNIA Networking Storage Forum – New Name, Expanded Charter

Anyone who follows technology knows that it is a fast-paced world with rapid change and constant innovation. SNIA, together with its members, technical work groups, Forums, and Initiatives, continues to embrace new technologies, educate the industry, and develop standards that make technology more accessible and better understood.

At the SNIA Networking Storage Forum, we’ve been at the forefront of diving into technology topics that extend beyond traditional networked storage, providing education on AI, edge, acceleration and offloads, hyperconverged infrastructure, programming frameworks, and more. We still care about and spend a lot of time on networked storage and storage protocols, but we felt it was time that the name of the group better reflected the broad range of timely topics we’re covering. Read More

Here’s Everything You Wanted to Know About Throughput, IOPS, and Latency

Any discussion of storage systems is incomplete without mention of throughput, IOPS, and latency. But what exactly do these terms mean, and why are they important? To answer these questions, the SNIA Networking Storage Forum (NSF) is bringing back our popular webinar series, “Everything You Wanted to Know About Storage, But Were Too Proud to Ask.”

Collectively, these three terms are often referred to as storage performance metrics. Performance can be defined as the effectiveness of a storage system in addressing the I/O needs of an application or workload. Different application workloads have different I/O patterns, and with them come different bottlenecks, so there is no one-size-fits-all storage system. These performance metrics help with storage solution design and selection based on application and workload demands.
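
As a quick back-of-the-envelope illustration of how the three metrics relate, throughput is roughly IOPS multiplied by I/O size, and Little’s Law ties average latency to IOPS and the number of outstanding I/Os. The numbers below are arbitrary examples, not benchmark results.

    def throughput_mib_s(iops: float, io_size_kib: float) -> float:
        """Approximate throughput in MiB/s: IOPS multiplied by I/O size."""
        return iops * io_size_kib / 1024

    def average_latency_ms(outstanding_ios: float, iops: float) -> float:
        """Little's Law: average latency ~= outstanding I/Os / IOPS, shown in milliseconds."""
        return outstanding_ios / iops * 1000

    # Example: 200,000 IOPS at 4 KiB per I/O with 32 outstanding I/Os
    print(throughput_mib_s(200_000, 4))      # ~781 MiB/s
    print(average_latency_ms(32, 200_000))   # 0.16 ms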

Join us on February 7, 2024, for “Everything You Wanted to Know About Throughput, IOPS, and Latency, But Were Too Proud to Ask.” In this webinar, we’ll cover: Read More

Accelerating Generative AI

Generative AI workloads built on large language models are frequently throttled by insufficient resources (e.g., memory, storage, compute, or network dataflow bottlenecks). If not identified and addressed, these dataflow bottlenecks can constrain Gen AI application performance well below optimal levels.

Given the compelling uses across natural language processing (NLP), video analytics, document resource development, image processing, image generation, and text generation, being able to run these workloads efficiently has become critical to many IT and industry segments. The resources that contribute to generative AI performance and efficiency include CPUs, DPUs, GPUs, FPGAs, plus memory and storage controllers. Read More

Considerations and Options for NVMe/TCP Deployment

NVMe®/TCP has gained a lot of attention over the last several years due to its strong performance characteristics and relatively low cost. Since its ratification in 2018, the NVMe/TCP protocol has been enhanced to add features such as Discovery Automation, Authentication, and Secure Channels that make it more suitable for use in enterprise environments. Now, as organizations evaluate their options and consider adopting NVMe/TCP in their environments, many find they need a bit more information before deciding how to move forward.
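
For a sense of what a basic (non-automated) deployment step looks like in practice, the sketch below uses Python’s subprocess module to drive the standard Linux nvme-cli utility: first discovering subsystems on an NVMe/TCP target, then connecting to one. The address, port, and subsystem NQN are placeholders for your environment, and the port varies by target (4420 is common for I/O controllers; discovery controllers often listen on 8009).

    import subprocess

    # Placeholder values -- substitute your target's address, port, and subsystem NQN.
    TARGET_ADDR = "192.0.2.10"
    TARGET_PORT = "4420"
    SUBSYS_NQN = "nqn.2014-08.org.example:subsystem1"

    # Ask the target's discovery controller which subsystems it exposes.
    subprocess.run(
        ["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT],
        check=True,
    )

    # Connect to a discovered subsystem; its namespaces then appear as /dev/nvmeXnY.
    subprocess.run(
        ["nvme", "connect", "-t", "tcp", "-n", SUBSYS_NQN, "-a", TARGET_ADDR, "-s", TARGET_PORT],
        check=True,
    )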

That’s why the SNIA Networking Storage Forum (NSF) is hosting a live webinar on July 19, 2023, “NVMe/TCP: Performance, Deployment and Automation,” where we will provide an overview of deployment considerations and options, and answer questions such as: Read More

Web 3.0 – The Future of Decentralized Storage

Decentralized storage is bridging the gap between Web 2.0 and Web 3.0, and its impact on enterprise storage is significant. The topic of decentralized storage and Web 3.0 will be the focus of an expert panel discussion the SNIA Networking Storage Forum is hosting on June 1, 2023, “Why Web 3.0 is Important to Enterprise Storage.”

In this webinar, we will provide an overview of enterprise decentralized storage and explain why it is more relevant now than ever before. We will delve into the benefits and demands of decentralized storage and discuss the evolution from on-premises, to cloud, to decentralized storage (cloud 2.0). We will also explore various use cases of decentralized storage, including its role in data privacy and security and the potential for decentralized applications (dApps) and blockchain technology. Read More

Programming Frameworks Q&A

Last month, the SNIA Networking Storage Forum made sense of the “wild west” of programming frameworks, covering xPUs, GPUs and computational storage devices at our live webcast, “You’ve Been Framed! An Overview of xPU, GPU & Computational Storage Programming Frameworks.” It was an excellent overview of what’s happening in this space.

There was a lot to digest, so our stellar panel of experts has taken the time to answer the questions from our live audience in this blog.

Q. Why is it important to have open-source programming frameworks?

A. Open-source frameworks enable community support and partnerships beyond what proprietary frameworks offer. In many cases they allow ISVs and end users to write one integration that works with multiple vendors.

Q. Will different accelerators require different frameworks or can one framework eventually cover them all?

Read More

Memory Semantics and Data Movement with CXL and SDXI

Using software to perform memory copies has been the gold standard for applications performing memory-to-memory data movement or system memory operations. With new accelerators and memory types enriching the system architecture, accelerator-assisted memory data movement and transformation need standardization.

At the forefront of this standardization movement is the SNIA Smart Data Accelerator Interface (SDXI), which is designed as an industry-open standard that is extensible, forward-compatible, and independent of I/O interconnect technology.

Adjacently, Compute Express Link™ (CXL™) is an industry-supported Cache-Coherent Interconnect for Processors, Memory Expansion, and Accelerators. CXL is designed to be an industry-open standard interface for high-speed communications, as accelerators are increasingly used to complement CPUs in support of emerging applications such as Artificial Intelligence and Machine Learning.
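
To make accelerator-assisted data movement concrete, the sketch below models a descriptor-based copy offload in the abstract: software fills out a descriptor describing the source, destination, and length, then submits it to a data mover (here, a software fallback) that executes the copy and signals completion. This is an illustrative simplification only, not the actual SDXI descriptor layout or programming interface.

    from dataclasses import dataclass

    @dataclass
    class CopyDescriptor:
        # Illustrative fields only -- not the SDXI specification's descriptor layout.
        src_addr: int     # source offset in a shared address space
        dst_addr: int     # destination offset
        length: int       # bytes to move
        completed: bool = False

    def submit(descriptor: CopyDescriptor, memory: bytearray) -> None:
        """Software fallback for the data mover: copy the bytes and mark completion.

        A real accelerator would consume the same descriptor from a queue in
        memory and signal completion asynchronously.
        """
        src_end = descriptor.src_addr + descriptor.length
        dst_end = descriptor.dst_addr + descriptor.length
        memory[descriptor.dst_addr:dst_end] = memory[descriptor.src_addr:src_end]
        descriptor.completed = True

    # Example: move 4 bytes within a toy "system memory" buffer.
    mem = bytearray(b"DATA----")
    submit(CopyDescriptor(src_addr=0, dst_addr=4, length=4), mem)
    print(mem)  # bytearray(b'DATADATA')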

Read More

SNIA Experts Answer Questions on xPU Accelerator Offload Functions

The popular xPU webcast series hosted by the SNIA Networking Storage Forum continued last month with an in-depth look at the accelerator offload functions of the xPU. Our experts discussed the problems xPUs solve, where in the system they live, and the functions they implement. If you missed the session, you can watch it on-demand and access the presentation slides at the SNIA Educational Library. The Q&A here offers additional insights into the role of the xPU.

Q. Since xPUs can see traffic on the host, doesn’t that widen the surface area for exposure if one were to be compromised?

Read More

SmartNICs to xPUs Q&A

The SNIA Networking Storage Forum kicked off its xPU webcast series last month with “SmartNICs to xPUs – Why is the Use of Accelerators Accelerating?” where SNIA experts defined what xPUs are, explained how they can accelerate offload functions, and cleared up confusion on the many other names associated with xPUs, such as SmartNIC, DPU, IPU, APU, and NAPU. The webcast was highly rated by our audience and already has more than 1,300 views. If you missed it, you can watch it on-demand and download a copy of the presentation slides at the SNIA Educational Library.

The live audience asked some interesting questions, and here are answers from our presenters.

Q. How can we have redundancy on an xPU?

Read More