A Deep Dive on xPU Deployment and Solutions

Our first and second webcasts in this xPU series explained what xPUs are, how they work, and what they can do. If you missed them, they are available to watch here in the SNIA Educational Library. On August 24, 2022, the SNIA Networking Storage Forum will host the third webcast in the series, “xPU Deployment and Solutions Deep Dive,” where our xPU experts will explain next steps for deployments, discussing:

When to Deploy:
  • Pros and cons of dedicated accelerator chips versus running everything on the CPU
  • xPU use cases across hybrid, multi-cloud and edge environments
  • Cost and power considerations
Read More

SNIA Experts Answer Questions on xPU Accelerator Offload Functions

The popular xPU webcast series hosted by the SNIA Networking Storage Forum continued last month with an in-depth look at the accelerator offload functions of the xPU. Our experts discussed the problems xPUs solve, where in the system they live, and the functions they implement. If you missed the session, you can watch it on-demand and access the presentation slides in the SNIA Educational Library. The Q&A here offers additional insights into the role of the xPU.

Q. Since xPUs can see traffic on the host doesn’t that widen the surface area for exposure if it were to be compromised?

Read More

SmartNICs to xPUs Q&A

The SNIA Networking Storage Forum kicked off its xPU webcast series last month with “SmartNICs to xPUs – Why is the Use of Accelerators Accelerating?” where SNIA experts defined what xPUs are, explained how they can accelerate and offload functions, and cleared up confusion about the many other names associated with xPUs, such as SmartNIC, DPU, IPU, APU, and NAPU. The webcast was highly rated by our audience and already has more than 1,300 views. If you missed it, you can watch it on-demand and download a copy of the presentation slides at the SNIA Educational Library.

The live audience asked some interesting questions and here are answers from our presenters.

Q. How can we have redundancy on an xPU?

Read More

xPU Accelerator Offload Functions

In our first xPU webcast, “SmartNICs to xPUs – Why is the Use of Accelerators Accelerating?” we discussed the trend to deploy dedicated accelerator chips to assist or offload the main CPU. These new accelerators (xPUs) go by multiple names, such as SmartNIC, DPU, IPU, APU, and NAPU. If you missed the presentation, I encourage you to check it out in the SNIA Educational Library, where you can watch it on-demand and access the presentation slides.

The second webcast in the SNIA Networking Storage Forum xPU series, “xPU Accelerator Offload Functions,” takes a deeper dive into the accelerator offload functions of the xPU. Our SNIA experts will discuss what problems xPUs solve, where in the system they live, and the functions they implement, focusing on:

Read More

Keeping Edge Data Secure Q&A

The complex and changeable structure of edge computing, together with its network connections, massive real-time data, challenging operating environment, distributed edge-cloud collaboration, and other characteristics, creates a multitude of security challenges. This was the topic of our SNIA Networking Storage Forum (NSF) live webcast “Storage Life on the Edge: Security Challenges,” where SNIA security experts Thomas Rivera, CISSP, CIPP/US, CDPSE and Eric Hibbard, CISSP-ISSAP, ISSMP, ISSEP, CIPP/US, CIPT, CISA, CDPSE, CCSK debated whether existing security practices and standards are adequate for this emerging area of computing. If you missed the presentation, you can view it on-demand here.

It was a fascinating discussion and as promised, Eric and Thomas have answered the questions from our live audience.

Q. What complexities are introduced from a security standpoint for edge use cases?

Read More

Storage Implications of Doing More at the Edge

In our SNIA Networking Storage Forum webcast series, “Storage Life on the Edge,” we’ve been examining the many ways the edge is impacting how data is processed, analyzed, and stored. I encourage you to check out the sessions we’ve done to date:

On July 12, 2022, we continue the series with “Storage Life on the Edge: Accelerated Performance Strategies” where our SNIA experts will discuss the need for faster computing, access to storage, and movement of data at the edge as well as between the edge and the data center, covering:

Read More

SmartNICs to xPUs – Why is the Use of Accelerators Accelerating?

As applications continue to increase in complexity and users demand more from their workloads, there is a trend to again deploy dedicated accelerator chips to assist the main CPU by offloading work from it. These new accelerators (xPUs) have multiple names, such as SmartNIC (Smart Network Interface Card), DPU, IPU, APU, and NAPU. How are these different from the GPU, TPU, and the venerable CPU? xPUs can accelerate and offload functions including math, networking, storage, compression, cryptography, security, and management.
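
As a simplified illustration of the offload idea at the NIC level, the sketch below shells out to the standard Linux ethtool utility (“ethtool -k”) to list which offload features an interface currently has enabled, such as checksum or TCP segmentation offload. The interface name “eth0” and the presence of ethtool are assumptions for illustration only; xPUs extend this concept to far richer functions (storage, cryptography, management) than the classic NIC offloads shown here.

# Minimal sketch (assumes a Linux host with the ethtool utility installed
# and a network interface named "eth0"): list the offload features that
# are currently enabled on that interface.
import subprocess

def enabled_offloads(interface="eth0"):
    """Return the names of offload features ethtool reports as 'on'."""
    output = subprocess.run(
        ["ethtool", "-k", interface],
        capture_output=True, text=True, check=True,
    ).stdout
    features = []
    for line in output.splitlines():
        name, sep, state = line.partition(":")
        if sep and state.strip().startswith("on"):
            features.append(name.strip())
    return features

if __name__ == "__main__":
    for feature in enabled_offloads():
        print(feature)  # e.g. tx-checksumming, tcp-segmentation-offload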

It’s a topic that the SNIA Networking Storage Forum will spotlight in our 3-part xPU webcast series. The first webcast on May 19, 2022 “SmartNICs to xPUs – Why is the Use of Accelerators Accelerating?” will cover key topics about, and clarify questions surrounding, xPUs, including…

Read More

Experts Discuss Key Edge Storage Security Challenges

The complex and changeable structure of edge computing, together with its network connections, massive real-time data, challenging operating environment, distributed edge-cloud collaboration, and other characteristics, creates a multitude of security challenges. It’s a topic the SNIA Networking Storage Forum (NSF) will take on as our “Storage Life on the Edge” webcast series continues. Join us on April 27, 2022 for “Storage Life on the Edge: Security Challenges,” where I’ll be joined by security experts Thomas Rivera, CISSP, CIPP/US, CDPSE and Eric Hibbard, CISSP-ISSAP, ISSMP, ISSEP, CIPP/US, CIPT, CISA, CDPSE, CCSK as they explore these challenges and wade into the debate over whether existing security practices and standards are adequate for this emerging area of computing. Our discussion will cover:

Read More

Processing and Managing Edge Data Q&A

The SNIA Networking Storage Forum (NSF) kicked off our “Storage Life on the Edge” webcast series with a session on managing data from the edge to the cloud and back. We were fortunate to have a panel of experts, Dan Cummins, John Kim, and David McIntyre, explain key considerations when managing and processing data generated at the edge. If you missed this introductory session, it’s available on-demand, along with the presentation slides, at the SNIA Educational Library.

Our presenters spent a good portion of the session answering questions from our live audience. Here are answers to them all.

Q. Could an application be deployed simultaneously at near-edge, far edge and functional edge?

Read More

Object Storage: Got Questions?

Over 900 people (and counting) have watched our SNIA Networking Storage Forum (NSF) webcast, “Object Storage: Trends, Use Cases,” where our expert panelists had a lively discussion on object storage characteristics, use cases, and performance acceleration. If you have not seen this session yet, we encourage you to check it out on-demand. The conversation included several interesting questions related to object storage. As promised, here are answers to them:

Q: Object storage enables many new capabilities but also creates new challenges, such as the need for geographic and local load balancers in a distributed, scale-out infrastructure that do not themselves become a bottleneck for the object services or drive unsustainable cost. Are there any solutions available today that have these features built in?

A: Some object storage solutions have features such as load balancing and geographic distribution built into the software, though often the storage administrator must manually configure parts of these features at the network and/or server level. Most cloud object storage (STaaS) implementations include a distributed, scale-out infrastructure, with load balancing, as part of the service.
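
For readers who want to see where such a load balancer sits from the application’s point of view, here is a minimal sketch using the boto3 S3 client: the application targets a single, stable, load-balanced endpoint, and the object store’s scale-out layer distributes requests across the nodes or sites behind it. The endpoint URL and bucket name below are hypothetical placeholders, not a reference to any particular product.

# Minimal sketch: an S3-compatible client pointed at a load-balanced
# object storage endpoint. The endpoint URL and bucket name are
# hypothetical; credentials come from the usual AWS config/environment.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",  # virtual IP of the load balancer,
                                                 # not an individual storage node
)

# The client never sees how many storage nodes or sites sit behind the
# endpoint; geo-distribution and scale-out are handled server-side.
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello, object storage")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())

The design point is that the load balancer, whether a global DNS-based balancer or a local L4/L7 appliance, stays transparent to the S3 API: capacity is added behind the endpoint rather than by reconfiguring clients.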

Read More