Storage for Automotive Q&A

At our recent SNIA Networking Storage Forum (NSF) webcast “Revving up Storage for Automotive,” our expert presenters, Ryan Suzuki and John Kim, discussed the storage implications of vehicles turning into data centers on wheels. If you missed the live event, it is available on-demand together with the presentation slides.

Our audience asked several interesting questions on this quickly evolving industry. Here are John and Ryan’s answers to them.

Q: What do you think the current storage landscape is missing to support the future of IoV [Internet of Vehicles]? Are there any identified cases of missing features from storage (edge/cloud) which are preventing certain ideas from being implemented and deployed?

Read More

Storage Life on the Edge

Cloud-to-Edge infrastructures are rapidly growing. It is expected that by 2025, up to 75% of all data generated will be created at the Edge. However, “Edge” is a tricky word, and you’ll get a different definition depending on who you ask. The physical edge could be in a factory, retail store, hospital, car, or plane, at a cell tower, or on your mobile device. The network edge could be a top-of-rack switch, a server running host-based networking, or a 5G base station.

The Edge means putting servers, storage, and other devices outside the core data center and closer to both the data sources and the users of that data—both edge sources and edge users could be people or machines.

Read More

A Q&A on Discovery Automation for NVMe-oF IP-Based SANs

In order to fully unlock the potential of NVMe® IP-based SANs, we first need to address the manual and error-prone process that is currently used to establish connectivity between NVMe Hosts and NVM subsystems. Several leading companies in the industry have joined together through NVM Express to collaborate on innovations that simplify and automate this discovery process.

This was the topic of discussion at our recent SNIA Networking Storage Forum webcast “NVMe-oF: Discovery Automation for IP-based SANs,” where our experts, Erik Smith and Curtis Ballard, took a deep dive into the work being done to address these issues. If you missed the live event, you can watch it on demand here and get a copy of the slides. Erik and Curtis did not have time to answer all the questions during the live presentation. As promised, here are answers to them all.

Q. Is the Centralized Discovery Controller (CDC) highly available, and is this visible to the hosts? Do they see a pair of CDCs on the network and retry requests to a secondary if a primary is not available?

Read More

Cabling, Connectors and Transceivers Questions Answered

Our recent live SNIA Networking Storage Forum webcast, “Next-generation Interconnects: The Critical Importance of Connectors and Cables,” provided an outstanding tutorial on the latest in the impressive array of data center infrastructure components designed to address expanding requirements for higher bandwidth and lower power. Our presenters covered common pluggable connectors and media types, copper cabling and transceivers, and real-world use cases. If you missed the live event, it is available on-demand.

We ran out of time to answer all the questions from the live audience. As promised, here are answers to them all.

Q. For 25GbE, is the industry consolidating on one of the three options?

Read More

Revving Up Storage for Automotive

Each year cars become smarter and more automated. In fact, the automotive industry is effectively transforming the vehicle into a data center on wheels. Connectedness, autonomous driving, and media & entertainment all bring more and more storage onboard and into networked data centers. But all the storage in (and for) a car is not created equal. There are tens, if not hundreds, of different processors in a car today. Some are attached to storage, some are not, and each application demands different characteristics from the storage device.

The SNIA Networking Storage Forum (NSF) is exploring this fascinating topic on December 7, 2021 at our live webcast “Revving Up Storage for Automotive” where industry experts from both the storage and automotive worlds will discuss:

Read More

Keeping Pace with Object Storage Trends & Use Cases

Object storage has been among the most popular topics we’ve covered in the SNIA Networking Storage Forum. On November 16, 2021, we will take this topic on again at our live webcast “Object Storage: Trends, Use Cases.” Moving beyond the mechanics of object storage, our expert panel will focus on recent object storage trends, problems object storage can solve, and real-world use cases including ransomware protection.

So, what’s new? Object storage has traditionally been seen as an archival storage platform, but it is now being employed as a platform for primary data. In this webcast, we’ll highlight how this is happening and discuss:

Read More

Fibre Channel SAN Hosts and Targets Q&A

At our recent SNIA Networking Storage Forum (NSF) webcast “How Fibre Channel Hosts and Targets Really Communicate” our Fibre Channel (FC) experts explained exactly how Fibre Channel works, starting with the basics on the FC networking stack, link initialization, port types, and flow control, and then dove into the details on host/target logins and host/target IO. It was a great tutorial on Fibre Channel. If you missed it, you can view it on-demand. The audience asked several questions during the live event. Here are answers to them all:

Q. What is the most common problem that we face in the FC protocol?

A. Much the same as with any other network protocol, congestion is the most common problem found in FC SANs. It can take several forms, including (but not limited to) host oversubscription and unbalanced “fan-in/fan-out” ratios of host ports to storage ports, and it is probably the single largest generator of support cases. Another common issue is the “host cannot see target” class of problem.
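As a rough illustration of the fan-in/fan-out math behind host oversubscription, the sketch below (the port speeds and counts are hypothetical, not from the webcast) computes the ratio of aggregate host bandwidth to a shared storage port:

```python
def oversubscription_ratio(host_port_gbps, storage_port_gbps):
    """Ratio of aggregate host port bandwidth to one storage port's
    bandwidth. Values well above 1.0 suggest a fan-in ratio that can
    produce congestion if many hosts drive I/O simultaneously."""
    return sum(host_port_gbps) / storage_port_gbps

# Hypothetical example: twelve 16GFC host ports fanned in to a
# single 32GFC storage port.
ratio = oversubscription_ratio([16] * 12, 32)
print(ratio)  # 6.0 -> heavily oversubscribed
```

In practice, acceptable fan-in ratios depend on the actual I/O profile of the hosts, not just the link speeds, which is why congestion shows up as a support case rather than a design-time error.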

Read More

Storage for AI Q&A

What types of storage are needed for different aspects of AI? That was one of the many topics covered in our SNIA Networking Storage Forum (NSF) webcast “Storage for AI Applications.” It was a fascinating discussion and I encourage you to check it out on-demand. Our panel of experts answered many questions during the live roundtable Q&A. Here are answers to those questions, as well as the ones we didn’t have time to address.

Q. What are the different data set sizes and workloads in AI/ML in terms of data set size, sequential/random access, and write/read mix?

A. Data sets vary enormously from use case to use case, from GBs to possibly hundreds of PB. In general, the workloads are very read-heavy, perhaps 95%+ reads. While sequential reads would be preferable, the access patterns tend to be closer to random. In addition, different use cases have very different data item sizes: some may be GBs large, while others may be <1 KB. These different sizes have a direct impact on storage performance and may change how you decide to store the data.
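The workload shape described above can be sketched with a small synthetic generator. This is illustrative only: the 95% read fraction, offset range, and I/O sizes are assumptions drawn from the answer, not measured values.

```python
import random

def synthetic_ai_workload(n_ops, read_fraction=0.95, max_offset=2**40,
                          io_sizes=(4096, 65536, 1048576)):
    """Generate (op, offset, size) tuples approximating an AI training
    I/O pattern: heavily read-dominated, near-random offsets, mixed
    I/O sizes. All parameters are illustrative assumptions."""
    rng = random.Random(42)  # fixed seed so the sketch is reproducible
    ops = []
    for _ in range(n_ops):
        op = "read" if rng.random() < read_fraction else "write"
        offset = rng.randrange(max_offset)   # near-random access
        size = rng.choice(io_sizes)          # mixed item sizes
        ops.append((op, offset, size))
    return ops

workload = synthetic_ai_workload(10_000)
reads = sum(1 for op, _, _ in workload if op == "read")
print(reads / len(workload))  # roughly 0.95
```

A generator like this is handy for benchmarking candidate storage systems before real training data is available, since the read/write mix and item-size distribution dominate observed performance.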

Read More

Automating Discovery for NVMe IP-based SANs

NVMe® IP-based SANs (including transports such as TCP, RoCE, and iWARP) have the potential to provide significant benefits in application environments ranging from the Edge to the Data Center. However, before we can fully unlock the potential of the NVMe IP-based SAN, we first need to address the manual and error-prone process that is currently used to establish connectivity between NVMe Hosts and NVM subsystems. This process requires administrators to explicitly configure each Host to access the appropriate NVM subsystems in its environment. In addition, any time an NVM Subsystem interface is added or removed, a Host administrator may need to explicitly update the configuration of impacted hosts to reflect this change.

Due to the decentralized nature of this configuration process, using it to manage connectivity for more than a few Host and NVM subsystem interfaces is impractical, and it adds complexity when deploying an NVMe IP-based SAN in environments that require a high degree of automation.
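A toy model makes the scaling problem concrete. In the sketch below (all host names, addresses, and ports are hypothetical), each host keeps its own explicit list of NVM subsystem transport endpoints, so adding a single subsystem interface requires one update per host:

```python
# Hypothetical per-host configuration: each host explicitly lists the
# (transport, address, port) endpoints of the subsystems it may access.
host_configs = {
    f"host{i}": {("tcp", "192.168.0.10", 4420)}
    for i in range(100)
}

def add_subsystem_interface(configs, iface):
    """Without centralized discovery, an administrator must update each
    impacted host's configuration individually when an NVM subsystem
    interface is added. Returns the number of per-host changes."""
    updates = 0
    for entries in configs.values():
        entries.add(iface)
        updates += 1
    return updates

changes = add_subsystem_interface(host_configs,
                                  ("tcp", "192.168.0.11", 4420))
print(changes)  # 100 -> one fabric change, one update per host
```

This O(hosts) update cost for every fabric change is exactly what a Centralized Discovery Controller is meant to eliminate: hosts query one discovery service instead of carrying static endpoint lists.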

Read More

Next-generation Interconnects: The Critical Importance of Connectors and Cables

Modern data centers consist of hundreds of subsystems connected with optical transceivers, copper cables, and industry standards-based connectors. As data demands escalate, the throughput of these interconnects must increase rapidly, which shrinks the maximum reach of copper cabling. At the same time, data centers are expanding in size, with nodes stretching further apart, making longer-reach optical technologies much more popular. However, optical interconnect technologies are more costly and complex than copper, with many new buzzwords and technology concepts.

The rate of change driven by the vast uptick in data demand accelerates new product development at an incredible pace. While much of the enterprise is still on 10/40/100GbE and 128GFC speeds, the optical standards bodies are beginning to deliver 800G, with 1.6Tb transceivers already in discussion! The introduction of new technologies creates a paradigm shift that requires changes and adjustments throughout the network.

Read More