A Q&A on the Open Programmable Infrastructure (OPI) Project

Last month, the SNIA Networking Storage Forum hosted several experts leading the Open Programmable Infrastructure (OPI) project in a live webcast, “An Introduction to the OPI (Open Programmable Infrastructure) Project.” The project was created to address a new class of cloud and datacenter infrastructure components. This new infrastructure element, often referred to as a Data Processing Unit (DPU), Infrastructure Processing Unit (IPU) or, as a general term, xPU, takes the form of a server-hosted PCIe add-in card or on-board chip(s), containing one or more ASICs or FPGAs, usually anchored around a single powerful SoC device.

Our OPI experts provided an introduction to the OPI Project and then explained lifecycle provisioning, APIs, use cases, proofs of concept, and the developer platform. If you missed the live presentation, you can watch it on demand and download a PDF of the slides at the SNIA Educational Library. The attendees at the live session asked several interesting questions. Here are answers to them from our presenters.

Q. Are there any plans for OPI to use GraphQL for API definitions since GraphQL has a good development environment, better security, and a well-defined, typed, schema approach?

Read More

FAQ on CXL and SDXI

How are Compute Express Link™ (CXL™) and the SNIA Smart Data Accelerator Interface (SDXI) related? It’s a topic we covered in detail at our recent SNIA Networking Storage Forum webcast, “What’s in a Name? Memory Semantics and Data Movement with CXL and SDXI,” where our experts, Rita Gupta and Shyam Iyer, introduced both SDXI and CXL, highlighted the benefits of each, discussed data movement needs in a CXL ecosystem, and covered SDXI advantages in a CXL interconnect. If you missed the live session, it is available in the SNIA Educational Library along with the presentation slides. The session was highly rated by the live audience, who asked several interesting questions. Here are answers to them from our presenters, Rita and Shyam.
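At its core, SDXI standardizes a memory-to-memory data mover driven by descriptors that software places in memory for the mover to consume. The snippet below is only a loose conceptual sketch of that model; the field names and the ring handling are simplified illustrations invented here, not the SDXI v1.0 descriptor format.

```python
from dataclasses import dataclass

# Conceptual sketch of descriptor-driven memory-to-memory data movement,
# the general model SDXI standardizes. Fields are simplified for illustration.
@dataclass
class CopyDescriptor:
    src: int      # source offset in the memory region
    dst: int      # destination offset in the memory region
    length: int   # number of bytes to move

def run_data_mover(memory: bytearray, ring: list) -> None:
    # A hardware data mover would consume descriptors asynchronously;
    # here we simply drain the ring in order, software-style.
    for d in ring:
        memory[d.dst:d.dst + d.length] = memory[d.src:d.src + d.length]

mem = bytearray(b"hello---")
run_data_mover(mem, [CopyDescriptor(src=0, dst=5, length=3)])
```

The point of the descriptor abstraction is that the producer of work (software) and the consumer (a CPU thread today, an accelerator tomorrow) agree only on the descriptor layout, which is what lets the same software drive different movers.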

Q. Now that SDXI v1.0 is out, can application implementations use SDXI today?

Read More

An Overview of the Linux Foundation OPI (Open Programmable Infrastructure)

A new class of cloud and datacenter infrastructure components is emerging into the marketplace. This new infrastructure element, often referred to as a Data Processing Unit (DPU), Infrastructure Processing Unit (IPU) or, as a general term, xPU, takes the form of a server-hosted PCIe add-in card or on-board chip(s), containing one or more ASICs or FPGAs, usually anchored around a single powerful SoC device.

The Open Programmable Infrastructure (OPI) project has been created to address the configuration, operation, and lifecycle of these devices. It also has the goal of fostering an open software ecosystem for DPUs/IPUs covering edge, datacenter, and cloud use cases. The project intends to delineate what a DPU/IPU is; to define frameworks and architecture for DPU/IPU-based software stacks applicable to any vendor's hardware solution; to create a rich open-source application ecosystem; to integrate with existing open-source projects aligned to the same vision, such as the Linux kernel, IPDK.io, DPDK, DASH, and SPDK; and to create new APIs for interaction with and between the elements of the DPU/IPU ecosystem.

Read More

Programming Frameworks Q&A

Last month, the SNIA Networking Storage Forum made sense of the “wild west” of programming frameworks, covering xPUs, GPUs and computational storage devices at our live webcast, “You’ve Been Framed! An Overview of xPU, GPU & Computational Storage Programming Frameworks.” It was an excellent overview of what’s happening in this space.

There was a lot to digest, so our stellar panel of experts has taken the time to answer the questions from our live audience in this blog.

Q. Why is it important to have open-source programming frameworks?

A. Open-source frameworks enable community support and partnerships beyond what proprietary frameworks allow. In many cases they let ISVs and end users write one integration that works with multiple vendors.
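The “one integration, multiple vendors” idea can be sketched in a few lines of Python. Everything here is invented for illustration (the class and method names are not from any real framework): orchestration code is written once against a shared abstract interface, and each vendor supplies a back-end behind it.

```python
from abc import ABC, abstractmethod

# Hypothetical vendor-neutral accelerator interface, for illustration only.
class Accelerator(ABC):
    @abstractmethod
    def provision(self, image: str) -> str:
        """Load a firmware/OS image and return a status string."""

# Two pretend vendor back-ends implementing the same contract.
class VendorA(Accelerator):
    def provision(self, image: str) -> str:
        return f"vendor-a booted {image}"

class VendorB(Accelerator):
    def provision(self, image: str) -> str:
        return f"vendor-b booted {image}"

def provision_fleet(devices, image):
    # The integration is written once, against the interface,
    # and works unchanged with any conforming vendor back-end.
    return [d.provision(image) for d in devices]

results = provision_fleet([VendorA(), VendorB()], "dpu-os-1.0")
```

With a proprietary framework, the `provision_fleet` equivalent would typically have to be rewritten per vendor; an open, shared interface moves that cost to the back-end implementers instead.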

Q. Will different accelerators require different frameworks or can one framework eventually cover them all?

Read More

You’ve Been Framed! An Overview of Programming Frameworks

With the emergence of GPUs, xPUs (DPU, IPU, FAC, NAPU, etc.) and computational storage devices for host offload and accelerated processing, a wild west of frameworks is emerging, all vying to be among the preferred programming software stacks that best integrate the application layer with these underlying processing units.

On October 26, 2022, the SNIA Networking Storage Forum will break down what’s happening in the world of frameworks in our live webcast, “You’ve Been Framed! xPU, GPU & Computational Storage Programming Frameworks.”

We’ve convened an impressive group of experts who will provide an overview of programming frameworks that support:

Read More

A Deep Dive on xPU Deployment and Solutions

Our first and second webcasts in this xPU webcast series explained what xPUs are, how they work, and what they can do. If you missed them, they are available to watch in the SNIA Educational Library. On August 24, 2022, the SNIA Networking Storage Forum will host the third webcast in this series, “xPU Deployment and Solutions Deep Dive,” where our xPU experts will explain next steps for deployments, discussing:

When to Deploy:

  • Pros and cons of dedicated accelerator chips versus running everything on the CPU
  • xPU use cases across hybrid, multi-cloud and edge environments
  • Cost and power considerations
Read More

SNIA Experts Answer Questions on xPU Accelerator Offload Functions

The popular xPU webcast series hosted by the SNIA Networking Storage Forum continued last month with an in-depth look at the accelerator offload functions of the xPU. Our experts discussed the problems xPUs solve, where in the system they live, and the functions they implement. If you missed the session, you can watch it on-demand and access the presentation slides at the SNIA Educational Library. The Q&A here offers additional insights into the role of the xPU.

Q. Since xPUs can see traffic on the host, doesn’t that widen the surface area for exposure if one were to be compromised?

Read More

SmartNICs to xPUs Q&A

The SNIA Networking Storage Forum kicked off its xPU webcast series last month with “SmartNICs to xPUs – Why is the Use of Accelerators Accelerating?” where SNIA experts defined what xPUs are, explained how they can accelerate offload functions, and cleared up confusion on many other names associated with xPUs, such as SmartNIC, DPU, IPU, APU and NAPU. The webcast was highly rated by our audience and already has more than 1,300 views. If you missed it, you can watch it on-demand and download a copy of the presentation slides at the SNIA Educational Library.

The live audience asked some interesting questions and here are answers from our presenters.

Q. How can we have redundancy on an xPU?

Read More

xPU Accelerator Offload Functions

In our first xPU webcast, “SmartNICs to xPUs – Why is the Use of Accelerators Accelerating?,” we discussed the trend to deploy dedicated accelerator chips to assist or offload the main CPU. These new accelerators (xPUs) go by many names, such as SmartNIC, DPU, IPU, APU and NAPU. If you missed the presentation, I encourage you to check it out in the SNIA Educational Library, where you can watch it on-demand and access the presentation slides.

The second webcast in the SNIA Networking Storage Forum xPU series is “xPU Accelerator Offload Functions,” where our SNIA experts will take a deeper dive into the accelerator offload functions of the xPU. We’ll discuss what problems xPUs are designed to solve, where in the system they live, and the functions they implement, focusing on:

Read More

SmartNICs to xPUs – Why is the Use of Accelerators Accelerating?

As applications continue to increase in complexity and users demand more from their workloads, there is a trend to once again deploy dedicated accelerator chips to assist by offloading work from the main CPU. These new accelerators (xPUs) go by many names, such as SmartNIC (Smart Network Interface Card), DPU, IPU, APU and NAPU. How are these different from the GPU, TPU and the venerable CPU? xPUs can accelerate and offload functions including math, networking, storage, compression, cryptography, security and management.
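As a toy illustration of the offload pattern, work is routed to an accelerator when one is present and falls back to the CPU otherwise. The `Accelerator` class below is hypothetical (real xPU offload goes through vendor SDKs or kernel interfaces), with zlib compression standing in for the offloaded workload:

```python
import zlib

# Sketch of the offload pattern: try a (hypothetical) accelerator path,
# fall back to the CPU implementation when no xPU is present.
class Accelerator:
    available = False  # pretend no xPU is present on this host

    def compress(self, data: bytes) -> bytes:
        raise NotImplementedError("would be handled by xPU hardware")

def compress(data: bytes, accel: Accelerator) -> bytes:
    if accel.available:
        return accel.compress(data)   # offloaded path
    return zlib.compress(data)        # CPU fallback

out = compress(b"x" * 1000, Accelerator())
```

The same shape applies to the other functions named above (crypto, storage, networking): the application calls one API, and the dispatch decision of xPU versus CPU is made underneath it.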

It’s a topic that the SNIA Networking Storage Forum will spotlight in our 3-part xPU webcast series. The first webcast, on May 19, 2022, “SmartNICs to xPUs – Why is the Use of Accelerators Accelerating?” will cover key topics about xPUs and clarify questions surrounding them, including…

Read More