As applications grow more complex and users demand more from their workloads, there is a renewed trend toward deploying dedicated accelerator chips to offload work from the main CPU. These new accelerators (xPUs) go by many names, such as SmartNIC (Smart Network Interface Card), DPU, IPU, APU, and NAPU. How do they differ from the GPU, the TPU, and the venerable CPU? xPUs can accelerate and offload functions including math, networking, storage, compression, cryptography, security, and management.
It’s a topic the SNIA Networking Storage Forum will spotlight in our three-part xPU webcast series. The first webcast, “SmartNICs to xPUs – Why is the Use of Accelerators Accelerating?” on May 19, 2022, will cover key topics about, and clarify questions surrounding, xPUs, including:
- xPU Definition: What is an xPU (SmartNIC, DPU, IPU, APU, NAPU), GPU, TPU, CPU? A focus on high-level architecture and definition of the xPU.
- Trends and Workloads: What is driving the trend to use hardware accelerators again after years of software-defined everything? What types of workloads are typically offloaded or accelerated? How do cost and power factor in?
- Deployment and Solutions: What are the pros and cons of dedicated accelerator chips versus running everything on the CPU?
- Market Landscape: Who provides these new accelerators? The CPU, storage, networking, and/or cloud vendors?
Register here to join us on May 19th to get the answers to these questions. Part 2 of this series will take a deep dive into accelerator offload functions, and Part 3 will focus on deployment and solutions. Keep an eye on this blog and follow us on Twitter @SNIANSF for dates and details of the upcoming sessions.