Q&A for Accelerating Gen AI Dataflow Bottlenecks

Generative AI is front page news everywhere you look. With advancements happening so quickly, it is hard to keep up. The SNIA Networking Storage Forum recently convened a panel of experts from a wide range of backgrounds to talk about Gen AI in general and specifically discuss how dataflow bottlenecks can constrain Gen AI application performance well below optimal levels. If you missed this session, “Accelerating Generative AI: Options for Conquering the Dataflow Bottlenecks,” it’s available on-demand at the SNIA Educational Library.

We promised to provide answers to our audience questions, and here they are.

Q: If ResNet-50 is a dinosaur from 2015, which model would you recommend using instead for benchmarking?

A: Setting aside the unfair aspersions being cast on the venerable ResNet-50, which is still used for inferencing benchmarks 😊, we suggest checking out the MLCommons website. In the benchmarks section you’ll find use cases covering both Training and Inference, with benchmarks that can tell you more about how effectively your infrastructure will handle your intended workload. Read More
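For a quick sanity check of your own hardware before diving into the full MLPerf suites, a simple throughput measurement can be useful. Below is a minimal sketch, assuming PyTorch and torchvision are installed; the batch size and iteration counts are arbitrary illustrative choices, and any newer model can be dropped in place of ResNet-50 to compare results.

```python
# Minimal inference-throughput sketch (illustrative, not an MLPerf run).
import time

import torch
from torchvision.models import resnet50

model = resnet50().eval()                 # randomly initialized; fine for timing
batch = torch.randn(32, 3, 224, 224)      # synthetic ImageNet-sized batch

with torch.no_grad():
    for _ in range(5):                    # warm-up iterations
        model(batch)
    iterations = 20
    start = time.perf_counter()
    for _ in range(iterations):
        model(batch)
    elapsed = time.perf_counter() - start

print(f"{iterations * batch.shape[0] / elapsed:.1f} images/sec")
```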

Hidden Costs of AI Q&A

At our recent SNIA Networking Storage Forum webinar, “Addressing the Hidden Costs of AI,” our expert team explored the impacts of AI, including sustainability and the areas where hidden technical and infrastructure costs can lurk. If you missed the live event, you can watch it on-demand in the SNIA Educational Library. Questions from the audience ranged from training Large Language Models to the fundamental infrastructure changes AI will bring, and more. Here are our presenters’ answers to the audience’s questions.

Q: Do you have a sense of where the best tradeoff lies between the cost of high I/O speed and the cost of GPU time? Is it always best to spend the maximum and get the highest I/O speed possible?

A: It depends on what you are trying to do. If you are training a Large Language Model (LLM), you’ll have a large collection of GPUs communicating with one another regularly (e.g., during All-Reduce operations) at throughput rates of up to 900 GB/s per GPU! For this kind of use case, it makes sense to use the fastest network option available: any money saved by choosing a cheaper, slightly less performant transport will be more than offset by the cost of GPUs sitting idle while waiting for data.

If you are more interested in fine-tuning an existing model or using Retrieval-Augmented Generation (RAG), you won’t need quite as much network bandwidth and can choose a more economical connectivity option.
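To make that tradeoff concrete, here is a back-of-envelope sketch in Python. Every number in it (GPU hourly cost, idle fractions, network premium) is an illustrative assumption, not a vendor figure; plug in your own estimates.

```python
# Back-of-envelope sketch of the "idle GPU vs. faster network" tradeoff.
# All numbers below are illustrative assumptions, not vendor figures.
gpu_cost_per_hour = 4.00          # assumed cost of one GPU ($/hr)
num_gpus = 64
idle_fraction_slow_net = 0.20     # assumed GPU time stalled on a slower fabric
idle_fraction_fast_net = 0.05     # assumed stall time on a faster fabric
fast_net_premium_per_hour = 30.0  # assumed extra cost of the faster fabric ($/hr)

wasted_slow = num_gpus * gpu_cost_per_hour * idle_fraction_slow_net
wasted_fast = num_gpus * gpu_cost_per_hour * idle_fraction_fast_net
savings = wasted_slow - wasted_fast

print(f"GPU cost wasted on slow fabric: ${wasted_slow:.2f}/hr")
print(f"Net benefit of faster fabric:   ${savings - fast_net_premium_per_hour:.2f}/hr")
```

With these assumed numbers the faster fabric pays for itself; the point is that GPU idle time, not the network line item, usually dominates the math for large training jobs.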

It’s worth noting… Read More

Accelerating Generative AI

Workloads using generative artificial intelligence trained on large language models are frequently throttled by insufficient resources (e.g., memory, storage, compute, or network). If not identified and addressed, these dataflow bottlenecks can constrain Gen AI application performance well below optimal levels.
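The first step is measuring where the time actually goes. Below is a minimal sketch, assuming PyTorch, that separates time spent waiting on the data pipeline from time spent computing; if the wait time rivals the compute time, the workload is input-bound, and faster storage or networking will help more than faster GPUs.

```python
# Minimal sketch of spotting a dataflow bottleneck in a training loop.
import time

import torch
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in for a real dataset; num_workers=0 keeps this portable.
dataset = TensorDataset(torch.randn(2048, 1024), torch.randint(0, 10, (2048,)))
loader = DataLoader(dataset, batch_size=64, num_workers=0)
model = torch.nn.Linear(1024, 10)

data_time = compute_time = 0.0
t0 = time.perf_counter()
for x, y in loader:
    t1 = time.perf_counter()
    data_time += t1 - t0          # time blocked waiting for the next batch
    model.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    t0 = time.perf_counter()
    compute_time += t0 - t1       # time spent in forward/backward

print(f"data wait: {data_time:.2f}s  compute: {compute_time:.2f}s")
```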

Given the compelling uses across natural language processing (NLP), video analytics, document resource development, image processing, image generation, and text generation, being able to run these workloads efficiently has become critical to many IT and industry segments. The resources that contribute to generative AI performance and efficiency include CPUs, DPUs, GPUs, FPGAs, plus memory and storage controllers. Read More

Addressing the Hidden Costs of AI

The latest buzz around generative AI often ignores the massive costs of running and powering the technology. Understanding the sustainability and cost impacts of AI, and how to address them effectively, will be the topic of our next SNIA Networking Storage Forum (NSF) webinar, “Addressing the Hidden Costs of AI.” On February 27, 2024, our SNIA experts will offer insights on the potentially hidden technical and infrastructure costs associated with generative AI. You’ll also learn best practices and potential solutions to consider as they discuss: Read More

You’ve Been Framed! An Overview of Programming Frameworks

With the emergence of GPUs, xPUs (DPU, IPU, FAC, NAPU, etc.) and computational storage devices for host offload and accelerated processing, a sprawling wild west of frameworks is emerging, all vying to become one of the preferred programming software stacks that best integrates the application layer with these underlying processing units.

On October 26, 2022, the SNIA Networking Storage Forum will break down what’s happening in the world of frameworks in our live webcast, “You’ve Been Framed! xPU, GPU & Computational Storage Programming Frameworks.”

We’ve convened an impressive group of experts who will provide an overview of programming frameworks that support:

Read More