Recent Articles


Marvell showcases its new no-compromise Open RAN solution with ecosystem partners using best of cloud, wireless compute architectures

By Peter Carson, Senior Director Solutions Marketing, Marvell

Marvell’s 5G Open RAN architecture leverages its OCTEON Fusion processor and underscores collaborations with Arm and Meta to drive adoption of no-compromise 5G Open RAN solutions

The wireless industry's no-compromise 5G Open RAN platform will be on display at Mobile World Congress 2022. The Marvell-designed solution builds on the company's extensive compute collaboration with Arm and raises expectations for Open RAN capabilities in ecosystem initiatives like the Meta Connectivity Evenstar program, which aims to expand global adoption of Open RAN. Last year at MWC, Marvell announced it had joined the Evenstar program. This year, Marvell's new 5G Open RAN Accelerator will be on display at the Arm booth at MWC 2022. The OCTEON Fusion processor, which integrates 5G in-line acceleration and Arm Neoverse CPUs, is the foundation for Marvell's Open RAN DU reference design.

5G is going mainstream with the rapid rollout of next-generation networks by every major operator worldwide. The ability of 5G to reliably provide high-bandwidth, extremely low-latency connectivity is powering applications like the metaverse, autonomous driving, industrial IoT, private networks and many more. 5G is a massive undertaking that is set to transform entire industries and serve the world's diverse connectivity needs for years to come. But the wireless networks at the center of this revolution are themselves undergoing a major transformation, not just in feeds and speeds but in architecture. More specifically, significant portions of the 5G radio access network (RAN) are moving into the cloud.

Read more


No-Compromise 5G Open RAN: Compute Architecture

By Peter Carson, Senior Director Solutions Marketing, Marvell

Introduction

5G networks are evolving to a cloud-native architecture with Open RAN at the center. This explainer series is aimed at de-mystifying the challenges and complexity in scaling these emerging open and virtualized radio access networks. Let’s start with the compute architecture.

The Problem 

Open RAN systems based on legacy compute architectures consume an excessive number of CPU cores, and an excessive amount of energy, to support 5G Layer 1 (L1) and other data-centric processing such as security, networking and storage virtualization. As illustrated in the diagram below, this leaves very few host compute resources available for the tasks the server was originally designed to support. These systems typically offload a small subset of 5G L1 functions, such as forward error correction (FEC), from the host to an external FPGA-based accelerator, but execute that processing offline. This kind of look-aside (offline) processing of time-critical L1 functions outside the data path adds latency that degrades system performance.

Image:  Limitations of Open RAN systems based on general purpose processors
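To see why the look-aside detour matters, here is a minimal Python sketch of a slot-processing latency budget. The budget, compute times and PCIe hop cost are invented, illustrative numbers (not measurements of OCTEON Fusion, any FPGA accelerator, or any figure from this article); the point is only that a look-aside round trip consumes headroom that in-line processing keeps.

```python
# Hypothetical latency-budget sketch contrasting look-aside vs. in-line
# 5G L1 acceleration. All figures below are illustrative placeholders.

SLOT_BUDGET_US = 500  # example slot-processing budget in microseconds

def lookaside_latency_us(l1_compute_us=180, pcie_hop_us=30, host_copy_us=40):
    """Host runs most of L1, ships FEC to an external accelerator over PCIe
    and waits for the result to come back (two PCIe hops plus host copies)."""
    return l1_compute_us + 2 * pcie_hop_us + host_copy_us

def inline_latency_us(l1_pipeline_us=150):
    """L1 runs in the data path on the accelerator; no detour through the host."""
    return l1_pipeline_us

if __name__ == "__main__":
    for name, latency in (("look-aside", lookaside_latency_us()),
                          ("in-line", inline_latency_us())):
        headroom = SLOT_BUDGET_US - latency
        print(f"{name:>10}: {latency:4d} us used, {headroom:4d} us of budget left")
```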

Read more


Next Evolution for Storage Networking: Self-driving SANs

By Todd Owens, Technical Marketing Manager, Marvell,
and Jacqueline Nguyen, Field Marketing Manager, Marvell

Storage area network (SAN) administrators know they play a pivotal role in keeping mission-critical workloads up and running. The workloads and applications that run on the infrastructure they manage are key to the company's overall business success.

Like any infrastructure, issues do arise from time to time, and the ability to identify transient links or address SAN congestion quickly and efficiently is paramount. Today, SAN administrators typically rely on proprietary tools and software from the Fibre Channel (FC) switch vendors to monitor the SAN traffic. When SAN performance issues arise, they rely on their years of experience to troubleshoot the issues.

What creates congestion in a SAN anyway?

Refresh cycles for servers and storage are typically shorter and more frequent than those of SAN infrastructure. This results in servers and storage arrays that run at different speeds being connected to the same SAN. Legacy servers and storage arrays may connect to the SAN at 16GFC bandwidth while newer servers and storage are connected at 32GFC.

Fibre Channel SANs use buffer credits to manage the flow of traffic in the SAN. When slower devices intermix with faster devices on the SAN, a slow device can return buffer credits more slowly than frames arrive for it, causing what is called "slow drain" congestion. This is a well-known issue in FC SANs that can be time-consuming to troubleshoot, and with newer FC-NVMe arrays the problem can be magnified. But those days are soon coming to an end with the introduction of what we can refer to as the self-driving SAN.
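A toy model makes the slow-drain mechanism concrete. The Python sketch below is purely illustrative; the credit count, arrival rate and drain rate are invented assumptions, not parameters of any real fabric or vendor tool. It shows that when a device returns credits more slowly than frames arrive for it, frames back up at the switch port.

```python
# Minimal, hypothetical simulation of "slow drain" in an FC SAN.
# A switch egress port can only transmit a frame toward a device while it
# holds a buffer-to-buffer credit; the device returns one credit (R_RDY)
# per frame it drains. All numbers are illustrative.

def simulate(credits=8, arrival_per_tick=4, drain_per_tick=2, ticks=20):
    """Return the frame backlog at the switch when the device drains frames
    slower than they arrive (e.g. a 16GFC array fed by 32GFC hosts)."""
    backlog = 0           # frames waiting at the switch for a credit
    available = credits   # credits currently held by the switch port
    for _ in range(ticks):
        backlog += arrival_per_tick          # new frames destined to the device
        sent = min(backlog, available)       # can only send while credits last
        backlog -= sent
        available -= sent
        available = min(credits, available + drain_per_tick)  # R_RDYs returned
    return backlog

if __name__ == "__main__":
    print("fast device backlog:", simulate(drain_per_tick=4))  # keeps up -> 0
    print("slow device backlog:", simulate(drain_per_tick=2))  # credits starve
```

With the fast device the backlog stays at zero; with the slow device it grows every tick, which is exactly the condition that spills over and congests shared links in a real fabric.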

Read more


Optical Technologies for 5G Access Networks

By Matt Bolig, Director, Product Marketing, Networking Interconnect, Marvell

There’s been a lot written about 5G wireless networks in recent years. It’s easy to see why; 5G technology supports game-changing applications like autonomous driving and smart city infrastructure. The infrastructure investment needed to bring this new reality to fruition will take many years and hundreds of billions of dollars globally, as Figure 1 below illustrates.

Figure 1: Cumulative Global 5G RAN Capex in $B (source: Dell’Oro, July 2021)

When considering where capital is invested in 5G, one underappreciated aspect is just how much wired infrastructure is required to move massive amounts of data through these wireless networks. 

Read more


Marvell and Ingrasys Collaborate to Power Ceph Cluster with EBOF in Data Centers

By Khurram Malik, Senior Manager, Technical Marketing, Marvell

A massive amount of data is being generated at the edge, in the data center and in the cloud, driving scale-out Software-Defined Storage (SDS), which, in turn, is enabling the industry to modernize data centers for large-scale deployments. Ceph is an open-source, distributed object store and massively scalable SDS platform, contributed to by a wide range of major high-performance computing (HPC) and storage vendors. Ceph's BlueStore back end removes a key cluster performance bottleneck by letting users store objects directly on raw block devices and bypass the file system layer, which is especially important in boosting the adoption of NVMe SSDs in Ceph clusters. A Ceph cluster with EBOF provides a scalable, high-performance and cost-optimized solution and is a perfect fit for many HPC applications. Traditional data storage technology leverages special-purpose compute, networking and storage hardware to optimize performance and requires proprietary software for management and administration. As a result, IT organizations can neither scale out such systems easily nor deploy petabyte- or exabyte-scale storage cost-effectively from a CAPEX and OPEX perspective.

Ingrasys (a subsidiary of Foxconn) is collaborating with Marvell to introduce an Ethernet Bunch of Flash (EBOF) storage solution that truly enables a scale-out architecture for data center deployments. The EBOF architecture disaggregates storage from compute, provides limitless scalability and better utilization of NVMe SSDs, and deploys single-ported NVMe SSDs in a high-availability configuration at the enclosure level with no single point of failure.

Image: Ceph cluster with EBOF in data centers

Ceph is deployed on commodity hardware and built on multi-petabyte storage clusters, and it is highly flexible due to its distributed nature. Using EBOF in a Ceph cluster lets storage capacity scale up and scale out at an optimized cost and facilitates high-bandwidth utilization of the SSDs. A typical rack-level Ceph solution includes a networking switch for client and cluster connectivity; a minimum of three monitor nodes per cluster for high availability and resiliency; and Object Storage Daemon (OSD) hosts for data storage, replication and data recovery operations. Ceph traditionally recommends a minimum of three replicas, distributed so that copies of the data are stored on different storage nodes, but this results in lower usable capacity and consumes more bandwidth. Another challenge is that data redundancy and replication are compute-intensive and add significant latency. To overcome these challenges, Ingrasys has introduced a more efficient Ceph cluster rack built around its management software, the Ingrasys Composable Disaggregate Infrastructure (CDI) Director.
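The capacity and bandwidth cost of three-way replication is simple arithmetic. The short Python sketch below works it through for an invented example configuration; the enclosure size and drive capacity are illustrative assumptions, not Ingrasys or Marvell figures.

```python
# Back-of-the-envelope illustration of why 3x replication lowers usable
# capacity and multiplies write traffic in a Ceph cluster.
# Enclosure size and drive capacity below are made-up examples.

def usable_capacity_tb(raw_tb, replicas=3):
    """With N-way replication, every object is stored N times."""
    return raw_tb / replicas

if __name__ == "__main__":
    raw_tb = 24 * 8  # e.g. one EBOF enclosure: 24 NVMe SSDs x 8 TB = 192 TB raw
    print(f"raw capacity:              {raw_tb} TB")
    print(f"usable at 3x replication:  {usable_capacity_tb(raw_tb):.0f} TB")
    print("cluster writes per client TB written: 3 TB (one copy per replica)")
```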

Read more