Accelerating Generative AI

Generative AI workloads built on large language models are frequently throttled by insufficient resources and by dataflow bottlenecks in memory, storage, compute, or the network. If not identified and addressed, these bottlenecks can constrain generative AI application performance well below optimal levels.

Given the compelling uses across natural language processing (NLP), video analytics, document resource development, image processing, image generation, and text generation, running these workloads efficiently has become critical to many IT and industry segments. The resources that contribute to generative AI performance and efficiency include CPUs, DPUs, GPUs, and FPGAs, along with memory and storage controllers. Read More

Considerations and Options for NVMe/TCP Deployment

NVMe®/TCP has gained considerable attention over the last several years thanks to its strong performance characteristics and relatively low cost. Since its ratification in 2018, the NVMe/TCP protocol has been enhanced with features such as Discovery Automation, Authentication, and Secure Channels that make it more suitable for enterprise environments. As organizations evaluate their options and consider adopting NVMe/TCP, many find they need a bit more information before deciding how to move forward.

That’s why the SNIA Networking Storage Forum (NSF) is hosting a live webinar on July 19, 2023, “NVMe/TCP: Performance, Deployment and Automation,” where we will provide an overview of deployment considerations and options and answer questions such as: Read More

FAQ on CXL and SDXI

How are Compute Express Link™ (CXL™) and the SNIA Smart Data Accelerator Interface (SDXI) related? It’s a topic we covered in detail at our recent SNIA Networking Storage Forum webcast, “What’s in a Name? Memory Semantics and Data Movement with CXL and SDXI,” where our experts, Rita Gupta and Shyam Iyer, introduced both SDXI and CXL, highlighted the benefits of each, discussed data movement needs in a CXL ecosystem, and covered SDXI advantages in a CXL interconnect. If you missed the live session, it is available in the SNIA Educational Library along with the presentation slides. The session was highly rated by the live audience, who asked several interesting questions. Here are the answers from our presenters, Rita and Shyam.

Q. Now that SDXI v1.0 is out, can application implementations use SDXI today?

Read More

Memory Semantics and Data Movement with CXL and SDXI

Using software to perform memory copies has been the gold standard for applications performing memory-to-memory data movement or system memory operations. With new accelerators and memory types enriching the system architecture, accelerator-assisted memory data movement and transformation need standardization.

At the forefront of this standardization effort is the SNIA Smart Data Accelerator Interface (SDXI), which is designed as an industry-open standard that is extensible, forward-compatible, and independent of I/O interconnect technology.

In a related effort, Compute Express Link™ (CXL™) is an industry-supported cache-coherent interconnect for processors, memory expansion, and accelerators. CXL is designed to be an industry-open standard interface for high-speed communications, as accelerators are increasingly used to complement CPUs in support of emerging applications such as artificial intelligence and machine learning.
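
To make the data-movement contrast concrete, here is a minimal C sketch, assuming a purely illustrative descriptor format: it compares a synchronous software memcpy with handing a copy descriptor to a data-mover device. The copy_desc structure, the fake_device_process() stub, and the polling loop are hypothetical stand-ins, not the descriptor formats or completion semantics that the SDXI specification actually defines.

    /* Conceptual sketch only: a CPU memcpy versus a descriptor-based copy
     * offload in the style SDXI standardizes. The structures and "device"
     * below are illustrative assumptions, not the SDXI specification. */
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical copy descriptor: source, destination, length, done flag. */
    struct copy_desc {
        const void  *src;
        void        *dst;
        size_t       len;
        volatile int completed;   /* the device sets this when the copy is done */
    };

    /* Baseline software path: the CPU itself moves every byte. */
    static void cpu_copy(void *dst, const void *src, size_t len)
    {
        memcpy(dst, src, len);
    }

    /* Stand-in for a data-mover accelerator consuming a descriptor.
     * On real hardware the CPU only builds and submits the descriptor;
     * the device performs the move and signals completion. */
    static void fake_device_process(struct copy_desc *d)
    {
        memcpy(d->dst, d->src, d->len);   /* done by hardware in practice */
        d->completed = 1;
    }

    static void offload_copy(void *dst, const void *src, size_t len)
    {
        struct copy_desc d = { .src = src, .dst = dst, .len = len, .completed = 0 };
        fake_device_process(&d);          /* submit to the (simulated) device */
        while (!d.completed)              /* poll, or block on an interrupt   */
            ;
    }

    int main(void)
    {
        char src[64] = "memory-to-memory data movement";
        char a[64] = {0}, b[64] = {0};

        cpu_copy(a, src, sizeof src);       /* software path           */
        offload_copy(b, src, sizeof src);   /* descriptor/offload path */
        printf("%s\n%s\n", a, b);
        return 0;
    }

Either way the application sees the same result; the difference is that in the offload model the CPU spends its cycles building and submitting descriptors rather than moving the bytes itself.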

Read More

SNIA Experts Answer Questions on xPU Accelerator Offload Functions

The popular xPU webcast series hosted by the SNIA Networking Storage Forum continued last month with an in-depth look at the accelerator offload functions of the xPU. Our experts discussed the problems xPUs solve, where in the system they live, and the functions they implement. If you missed the session, you can watch it on-demand and access the presentation slides in the SNIA Educational Library. The Q&A here offers additional insights into the role of the xPU.

Q. Since xPUs can see traffic on the host, doesn’t that widen the surface area for exposure if an xPU were to be compromised?

Read More

xPU Accelerator Offload Functions

In our first xPU webcast, “SmartNICs and xPUs: Why is the Use of Accelerators Accelerating,” we discussed the trend to deploy dedicated accelerator chips to assist or offload the main CPU. These new accelerators (xPUs) go by multiple names, such as SmartNIC, DPU, IPU, APU, and NAPU. If you missed the presentation, I encourage you to check it out in the SNIA Educational Library, where you can watch it on-demand and access the presentation slides.
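
As a concrete mental model for what “offloading a function” means, the short C sketch below contrasts a host-CPU code path with an accelerator path and shows the probe-and-fall-back pattern applications commonly use. The xpu_* names are hypothetical assumptions of mine, not any vendor’s API; real xPUs expose their offloads through vendor SDKs, kernel drivers, or transparent dataplane processing rather than an interface like this.

    /* Illustrative sketch only: "offloading" a simple checksum from the
     * host CPU to an accelerator. The xpu_* functions are hypothetical
     * stand-ins, not a real SmartNIC/DPU SDK. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Software path: the host CPU computes the checksum itself. */
    static uint32_t checksum_on_cpu(const uint8_t *buf, size_t len)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < len; i++)
            sum += buf[i];
        return sum;
    }

    /* Hypothetical offload path: hand the buffer to the accelerator and
     * collect its result. Here the "device" is simulated in software. */
    static bool xpu_available(void) { return false; }   /* pretend probe */
    static uint32_t xpu_checksum(const uint8_t *buf, size_t len)
    {
        return checksum_on_cpu(buf, len);   /* hardware would do this */
    }

    /* Applications typically take the accelerated path when the device is
     * present and fall back to the CPU otherwise. */
    static uint32_t checksum(const uint8_t *buf, size_t len)
    {
        return xpu_available() ? xpu_checksum(buf, len)
                               : checksum_on_cpu(buf, len);
    }

    int main(void)
    {
        uint8_t payload[] = { 1, 2, 3, 4, 5 };
        printf("checksum = %u\n", checksum(payload, sizeof payload));
        return 0;
    }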

The second webcast in the SNIA Networking Storage Forum xPU series is “xPU Accelerator Offload Functions,” where our SNIA experts will take a deeper dive into the accelerator offload functions of the xPU. We’ll discuss the problems xPUs are being deployed to solve, where in the system they live, and the functions they implement, focusing on:

Read More

Keeping Edge Data Secure Q&A

The complex and changing structure of edge computing, together with its network connections, massive real-time data, challenging operating environments, distributed edge-cloud collaboration, and other characteristics, creates a multitude of security challenges. This was the topic of our SNIA Networking Storage Forum (NSF) live webcast “Storage Life on the Edge: Security Challenges,” where SNIA security experts Thomas Rivera, CISSP, CIPP/US, CDPSE and Eric Hibbard, CISSP-ISSAP, ISSMP, ISSEP, CIPP/US, CIPT, CISA, CDPSE, CCSK debated whether existing security practices and standards are adequate for this emerging area of computing. If you missed the presentation, you can view it on-demand here.

It was a fascinating discussion and as promised, Eric and Thomas have answered the questions from our live audience.

Q. What complexities are introduced from a security standpoint for edge use cases?

Read More

Experts Discuss Key Edge Storage Security Challenges

The complex and changing structure of edge computing, together with its network connections, massive real-time data, challenging operating environments, distributed edge-cloud collaboration, and other characteristics, creates a multitude of security challenges. It’s a topic the SNIA Networking Storage Forum (NSF) will take on as our “Storage Life on the Edge” webcast series continues. Join us on April 27, 2022, for “Storage Life on the Edge: Security Challenges,” where I’ll be joined by security experts Thomas Rivera, CISSP, CIPP/US, CDPSE and Eric Hibbard, CISSP-ISSAP, ISSMP, ISSEP, CIPP/US, CIPT, CISA, CDPSE, CCSK as they explore these challenges and wade into the debate over whether existing security practices and standards are adequate for this emerging area of computing. Our discussion will cover:

Read More

Another Great Storage Debate: Hyperconverged vs. Disaggregated vs. Centralized

The SNIA Networking Storage Forum’s “Great Storage Debate” webcast series is back! This time, SNIA experts will discuss the ongoing evolution of the data center, in particular how storage is allocated and managed. There are three competing visions of how storage should be done: Hyperconverged Infrastructure (HCI), Disaggregated Storage, and Centralized Storage. Join us on May 4, 2021, for our live webcast, “Great Storage Debate: Hyperconverged vs. Disaggregated vs. Centralized.”

IT architects, storage vendors, and industry analysts argue constantly over which approach is best, and even over the exact definition of each. Isn’t Hyperconverged constrained? Is Disaggregated designed only for large cloud service providers? Is Centralized storage only for legacy applications?

Tune in to debate these questions and more:  

Read More