Archive for the ‘Networking’ Category

August 31st, 2020

Arm processors in the Data Center

By Raghib Hussain, Chief Strategy Officer and Executive Vice President, Networking and Processors Group

Last week, Marvell announced a change in our strategy for ThunderX, our Arm-based server-class processor product line. I’d like to take the opportunity to put some more context around that announcement, and our future plans in the data center market.

ThunderX is a product line that we started at Cavium, prior to our merger with Marvell in 2018. At Cavium, we had built many generations of successful processors for infrastructure applications, including our Nitrox security processor and OCTEON infrastructure processor. These processors have been deployed in the world’s most demanding data-plane applications such as firewalls, routers, SSL-acceleration, cellular base stations, and Smart NICs. Today, OCTEON is the most scalable and widely deployed multicore processor in the market.

As co-founder of Cavium, I had a strong belief that Arm-based processors also had a role to play in next generation data centers. One size simply doesn’t fit all anymore, so we started the ThunderX product line for the server market. It was a bold move, and we knew it would take significant time and investment to come to fruition. In fact, we have spent six years now building multiple generations of products, developing the ecosystem, the software, and working with customers to qualify systems for production deployment in large data centers. ThunderX2 was the industry’s first Arm-based processor capable of powering dual socket servers that could go toe-to-toe with x86-based solutions, and clearly established the performance credentials for Arm in the server market. We moved the bar higher yet again with ThunderX3, as we discussed at Hot Chips 32.

Today, we see strong ecosystem support and a significant opportunity for Arm-based processors in the data center. But the real market opportunity for server-class Arm processors is in customized solutions, optimized for the use cases at hyperscale data center operators. This should be no surprise, as the power of the Arm architecture has always been in its ability to be integrated into highly optimized designs tailored for specific use cases, and we see hyperscale datacenter applications as no different.

Our rich IP portfolio, decades of processor expertise with Nitrox, OCTEON, and ThunderX, combined with our new custom ASIC capability, and investment in the latest TSMC 5nm process node, puts Marvell in a unique position to address this market opportunity. So to us, this market driven change just makes sense. We look forward to partnering with our customers and helping to deliver highly optimized solutions tailored to their unique needs.

August 27th, 2020

How to Reap the Benefits of NVMe over Fabric in 2020

By Todd Owens, Technical Marketing Manager, Marvell

As native Non-volatile Memory Express (NVMe®) share-storage arrays continue enhancing our ability to store and access more information faster across a much bigger network, customers of all sizes – enterprise, mid-market and SMBs – confront a common question: what is required to take advantage of this quantum leap forward in speed and capacity?

Of course, NVMe technology itself is not new; it is commonly found in laptops, servers and enterprise storage arrays. NVMe provides an efficient command set specific to memory-based storage, delivers increased performance by running over PCIe 3.0 or PCIe 4.0 bus architectures, and, with 64,000 command queues of 64,000 commands each, offers far more scalability than other storage protocols.
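To put those queue numbers in perspective, here is a quick back-of-the-envelope comparison; the single-queue depth of 32 is a typical AHCI/SATA figure, assumed for illustration rather than taken from this article:

```python
# Theoretical outstanding-command capacity: NVMe vs. a single-queue protocol.
# The SATA/AHCI queue depth of 32 is a typical value, assumed for comparison.
nvme_queues, nvme_depth = 64_000, 64_000
sata_queues, sata_depth = 1, 32

nvme_capacity = nvme_queues * nvme_depth      # 4,096,000,000 commands
sata_capacity = sata_queues * sata_depth      # 32 commands

print(f"NVMe can keep {nvme_capacity:,} commands in flight")
print(f"That is {nvme_capacity // sata_capacity:,}x a single-queue device")
```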


Unfortunately, most of the NVMe in use today is held captive in the system in which it is installed. While there are a few storage vendors offering NVMe arrays on the market today, the vast majority of enterprise datacenter and mid-market customers are still using traditional storage area networks, running SCSI protocol over either Fibre Channel or Ethernet Storage Area Networks (SAN).

The newest storage networks, however, will be enabled by what we call NVMe over Fabric (NVMe-oF) networks. As with SCSI today, NVMe-oF will offer users a choice of transport protocols. Today, there are three standard protocols that will likely make significant headway into the marketplace. These include:

  • NVMe over Fibre Channel (FC-NVMe)
  • NVMe over RoCE RDMA (NVMe/RoCE)
  • NVMe over TCP (NVMe/TCP)

If NVMe over Fabrics is to achieve its true potential, however, three major elements need to align. First, users will need an NVMe-capable storage network infrastructure in place. Second, all of the major operating system (O/S) vendors will need to provide support for NVMe-oF. Third, customers will need disk array systems that feature native NVMe. Let’s look at each of these in order.

  1. NVMe Storage Network Infrastructure

In addition to Marvell, several leading network and SAN connectivity vendors support one or more varieties of NVMe-oF infrastructure today. This storage network infrastructure (also called the storage fabric) is made up of two main components: the host adapter, which provides server connectivity to the storage fabric; and the switch infrastructure, which provides all the traffic routing, monitoring and congestion management.

For FC-NVMe, today’s enhanced 16Gb Fibre Channel (FC) host bus adapters (HBA) and 32Gb FC HBAs already support FC-NVMe. This includes the Marvell® QLogic® 2690 series Enhanced 16GFC, 2740 series 32GFC and 2770 Series Enhanced 32GFC HBAs.

On the Fibre Channel switch side, no significant changes are needed to transition from SCSI-based connectivity to NVMe technology, as the FC switch is agnostic about the payload data. The job of the FC switch is simply to route FC frames from point to point and deliver them in order, with the lowest possible latency. That means any 16GFC or greater FC switch is fully FC-NVMe compatible.

A key decision regarding FC-NVMe infrastructure, however, is whether or not to support both legacy SCSI and next-generation NVMe protocols simultaneously. When customers eventually deploy new NVMe-based storage arrays (and many will over the next three years), they are not going to simply discard their existing SCSI-based systems. In most cases, customers will want individual ports on individual server HBAs that can communicate using both SCSI and NVMe, concurrently. Fortunately, Marvell’s QLogic 16GFC/32GFC portfolio does support concurrent SCSI and NVMe, all with the same firmware and a single driver. This use of a single driver greatly reduces complexity compared to alternative solutions, which typically require two (one for FC running SCSI and another for FC-NVMe).

If we look at Ethernet, which is the other popular transport protocol for storage networks, there is one option for NVMe-oF connectivity today and a second option on the horizon. Currently, customers can already deploy NVMe/RoCE infrastructure to support NVMe connectivity to shared storage. This requires RoCE RDMA-enabled Ethernet adapters in the host, and Ethernet switching that is configured to support a lossless Ethernet environment. There are a variety of 10/25/50/100GbE network adapters on the market today that support RoCE RDMA, including the Marvell FastLinQ® 41000 Series and the 45000 Series adapters. 

On the switching side, most 10/25/100GbE switches that have shipped in the past 2-3 years support data center bridging (DCB) and priority flow control (PFC), and can support the lossless Ethernet environment needed to support a low-latency, high-performance NVMe/RoCE fabric.

While customers may have to reconfigure their networks to enable these features and set up the lossless fabric, these features will likely be supported in any newer Ethernet switch or director. One point of caution: with lossless Ethernet networks, scalability is typically limited to only 1 or 2 hops. For high scalability environments, consider alternative approaches to the NVMe storage fabric.

One such alternative is NVMe/TCP. This is a relatively new protocol (NVM Express Group ratification in late 2018), and as such is not widely available today. However, the advantage of NVMe/TCP is that it runs on today’s TCP stack, leveraging TCP’s congestion control mechanisms. That means there’s no need for a tuned environment (like that required with NVMe/RoCE), and NVMe/TCP can scale right along with your network. Think of NVMe/TCP in the same way as you do iSCSI today. Like iSCSI, NVMe/TCP will provide good performance, work with existing infrastructure, and be highly scalable. For those customers seeking the best mix of performance and ease of implementation, NVMe/TCP will be the best bet.
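The trade-offs above can be distilled into a rough decision sketch. This is purely illustrative; the function name and the two-hop threshold are our own simplifications of the guidance in this article, not a Marvell tool:

```python
def suggest_nvme_transport(has_fc_san: bool,
                           lossless_ethernet: bool,
                           hops: int) -> str:
    """Rough NVMe-oF transport chooser based on the guidance above."""
    if has_fc_san:
        return "FC-NVMe"       # reuse the existing 16GFC/32GFC fabric
    if lossless_ethernet and hops <= 2:
        return "NVMe/RoCE"     # tuned and fast, but limited scalability
    return "NVMe/TCP"          # standard TCP stack, scales with the network

print(suggest_nvme_transport(True, False, 1))    # FC-NVMe
print(suggest_nvme_transport(False, True, 2))    # NVMe/RoCE
print(suggest_nvme_transport(False, False, 5))   # NVMe/TCP
```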

Because there is limited operating system (O/S) support for NVMe/TCP (more on this below), I/O vendors are not currently shipping firmware and drivers that support NVMe/TCP. But a few, like Marvell, have adapters that, from a hardware standpoint, are NVMe/TCP-ready; all that will be required is a firmware update in the future to enable the functionality. Notably, Marvell will support NVMe over TCP with full hardware offload on its FastLinQ adapters in the future. This will enable our NVMe/TCP adapters to deliver high performance and low latency that rivals NVMe/RoCE implementations.

  2. Operating System Support

While it’s great that there is already infrastructure to support NVMe-oF implementations, that’s only the first part of the equation. Next comes O/S support. When it comes to support for NVMe-oF, the major O/S vendors are all in different places; here is a current (August 2020) summary:

  O/S                        FC-NVMe   NVMe/RoCE   NVMe/TCP
  RHEL / SUSE Linux          Yes       Yes         Limited
  VMware ESXi 7.0            Yes       Yes         No
  Microsoft Windows Server   No        No          No

The major Linux distributions from RHEL and SUSE support both FC-NVMe and NVMe/RoCE and have limited support for NVMe/TCP. VMware, beginning with ESXi 7.0, supports both FC-NVMe and NVMe/RoCE but does not yet support NVMe/TCP. Microsoft Windows Server currently uses its SMB Direct network protocol and offers no support for any NVMe-oF technology today.

With VMware ESXi 7.0, be aware of a couple of caveats: VMware does not currently support FC-NVMe or NVMe/RoCE in vSAN or with vVols implementations. However, support for these configurations, along with support for NVMe/TCP, is expected in future releases.

  3. Storage Array Support

A few storage array vendors have released mid-range and enterprise-class storage arrays that are NVMe-native. NetApp sells arrays, available today, that support both NVMe/RoCE and FC-NVMe. Pure Storage offers NVMe arrays that support NVMe/RoCE, with plans to support FC-NVMe and NVMe/TCP in the future. In late 2019, Dell EMC introduced its PowerMax line of flash storage that supports FC-NVMe. This year and next, other storage vendors will be bringing arrays to market that support both NVMe/RoCE and FC-NVMe. We expect storage arrays that support NVMe/TCP to become available in the same time frame.

Future-proof your investments by anticipating NVMe-oF tomorrow

Altogether, we are not too far away from having all the elements in place to make NVMe-oF a reality in the data center. If you expect the servers you are deploying today to operate for the next five years, there is no doubt they will need to connect to NVMe-native storage during that time. So plan ahead.

The key from an I/O and infrastructure perspective is to make sure you are laying the groundwork today to be able to implement NVMe-oF tomorrow. Whether that’s Fibre Channel or Ethernet, customers should be deploying I/O technology that supports NVMe-oF today. Specifically, that means deploying 16GFC enhanced or 32GFC HBAs and switching infrastructure for Fibre Channel SAN connectivity. This includes the Marvell QLogic 2690, 2740 or 2770-series Fibre Channel HBAs. For Ethernet, this includes Marvell’s FastLinQ 41000/45000 series Ethernet adapter technology.

These advances represent a big leap forward and will deliver great benefits to customers. The sooner we build industry consensus around the leading protocols, the faster these benefits can be realized.

For more information on Marvell Fibre Channel and Ethernet technology, go to www.marvell.com. For technology specific to our OEM customer servers and storage, go to www.marvell.com/hpe or www.marvell.com/dell.

July 23rd, 2020

Telemetry: Can You See the Edge?

By Suresh Ravindran, Senior Director, Software Engineering

So far in our series Living on the Network Edge, we have looked at trends driving Intelligence and Performance to the network edge. In this blog, let’s look into the need for visibility into the network.

As automation trends evolve, the number of connected devices is seeing explosive growth. IDC estimates that there will be 41.6 billion connected IoT devices generating a whopping 79.4 zettabytes of data in 2025¹. A significant portion of this traffic will be video and sensor flows, which will need to be intelligently processed for applications such as personalized user services, inventory management, intrusion prevention and load balancing across a hybrid cloud model. Networking devices will need the ability to intelligently manage processing resources to efficiently handle these huge data flows.
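A quick back-of-the-envelope calculation shows what those IDC figures imply per device:

```python
# Average data generated per connected IoT device, from the IDC 2025 forecast.
total_bytes = 79.4e21    # 79.4 zettabytes
devices = 41.6e9         # 41.6 billion connected IoT devices
per_device_tb = total_bytes / devices / 1e12
print(f"~{per_device_tb:.1f} TB of data per device in 2025")   # ~1.9 TB
```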

How do you see what you can’t see?

But is your network edgy enough? To handle this growth, we’ve seen intelligence pushed to the network edge for application-aware engineering and for inferencing applications running in hybrid clouds. To keep up with billions of mobile devices running ever-denser applications, we addressed wireless offloading as one method to alleviate the burden on cellular networks. That approach, however, increases the load on edge and enterprise networks, which now need intelligent flow processing to use LAN and WAN bandwidth efficiently. With intelligence and performance in place, we also need to address the growing complexity of “seeing” how network switching resources are being utilized. Visibility through network telemetry is fundamental to empowering AI-driven automation, performance, security and troubleshooting. To be proactive and predictive, networks need to be built with switches that look beyond the obvious, with intelligent telemetry capabilities.

Intelligent telemetry for effective network visibility

Increased use of analytics and AI for performance monitoring, detection, troubleshooting and response has been ranked a top priority for organizations pursuing their vision of the ideal network. IT professionals leverage telemetry to characterize workload behaviors, such as network bandwidth and timing patterns, and to determine whether applications are causing jitter or low-bandwidth issues. Historically, telemetry functions have tracked events in hindsight, but they are now increasingly used to analyze and predict: living on the network edge means monitoring, predicting and managing anomalies for proactive infrastructure automation and application responses.

An effective telemetry solution also requires network devices to stream a wide range of metadata about network flows and switch resource usage in real time. As streaming telemetry header formats evolve, it is equally important for the switch silicon’s pipeline to be programmable, so that it can adapt to changes in telemetry tools while performing at line rate.

Successfully living at the network edge means detecting and adjusting algorithms in real time. It won’t be enough to move intelligence to the edge and increase the performance for workloads if you can’t see what is happening within the network. Network visibility is crucial in managing workloads to reliably deliver customer and enterprise service level agreements predictively. Telemetry, Intelligence and Performance are critical technologies for the growing borderless campus as mobility and cloud applications proliferate and drive networking functions. In our next blog, we will discuss Security as part of our insights and TIPS to Living on the Network Edge.  Watch out for the edge …

# # #

¹ Worldwide Global DataSphere IoT device and data forecast (2019-2023), IDC

July 16th, 2020

The Need for Speed at the Edge

By George Hervey, Principal Architect, Marvell

In the previous TIPS to Living on the Edge, we looked at the trend of driving network intelligence to the edge. With the capacity enabled by the latest wireless networks, like 5G, the infrastructure will enable the development of innovative applications. These applications often employ a high-frequency activity model, for example video or sensors, where the activities are often initiated by the devices themselves, generating massive amounts of data moving across the network infrastructure. Cisco’s VNI Forecast Highlights predicts that global business mobile data traffic will grow six-fold from 2017 to 2022, an annual growth rate of 42 percent¹, requiring a performance upgrade of the network.
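The two figures quoted are consistent, as a quick check shows: compounding 42 percent annually over the five years from 2017 to 2022 gives roughly six-fold growth.

```python
# Sanity-check the Cisco forecast: 42% CAGR over 2017-2022.
cagr = 0.42
years = 2022 - 2017
growth = (1 + cagr) ** years
print(f"{growth:.2f}x over {years} years")   # ~5.77x, i.e. roughly six-fold
```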

Wireless Offload

How do networks with dense wireless connections address these overwhelming bandwidth and connection challenges? One answer is wireless offload. Whether it’s a big-box retail store with 1,000 customers, a 60,000-seat stadium or a convention center with 200,000 attendees, the amount of data to be delivered is enormous. Carrying that data over wireless has hit a critical inflection point in capacity, driving the need to offload traffic to a wired network. This trend of wireless offload requires ever-higher performance at the network edge, so users experience the high-performance connectivity and low-latency response times they have grown to expect.

New Performance Paradigm

Deployment of 5G and Wi-Fi 6 is enabled by advanced wireless access technologies, including MIMO and higher-frequency spectrum. The capacity being delivered will quickly be consumed by the growing number of devices and new applications. In fact, higher bandwidth at the access layer was a major force behind the definition of Multi-Gig Ethernet. This new performance paradigm will have an impact on all layers of the network, motivating an increase in uplink port speeds to handle the added access bandwidth. Additionally, stacking link capacity will increase to facilitate efficient port deployments and help handle the growth in attached clients.
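As a rough illustration of why uplink speeds must rise with access bandwidth, consider a hypothetical access switch; all figures here are assumptions for illustration, not a specific Marvell product:

```python
# Hypothetical access switch: Multi-Gig access ports vs. uplink capacity.
ports, access_gbps = 48, 2.5        # 48 Multi-Gig (2.5GBASE-T) access ports
uplinks, uplink_gbps = 4, 25        # four 25GbE uplinks

access_capacity = ports * access_gbps      # 120 Gb/s of access bandwidth
uplink_capacity = uplinks * uplink_gbps    # 100 Gb/s of uplink bandwidth

print(f"Access: {access_capacity:.0f} Gb/s, uplink: {uplink_capacity:.0f} Gb/s")
print(f"Oversubscription ratio: {access_capacity / uplink_capacity:.1f}:1")
```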

Network capacity increases enable the adoption of higher bandwidth services, support for emerging real-time applications and an expansion of concurrent active devices on networks. Ironically, the resulting trends and future innovations will continue to drive the need for increased network performance.

Performance is the second part in a series of TIPS that will discuss essential technologies for the growing borderless campus as mobility and cloud applications proliferate and drive networking functions. Telemetry challenges and insights will inspire our next TIPS to Living on the Network Edge.

# # #

¹ Cisco VNI Complete Forecast Highlights

July 8th, 2020

Driving Network Intelligence and Processing to the Edge

By George Hervey, Principal Architect, Marvell

The mobile phone has become an essential part of our lives as we move toward more advanced stages of the “always on, always connected” model. Our phones provide instant access to data and communication mediums, and that access influences the decisions we make and, ultimately, our behavior.

According to Cisco, global mobile networks will support more than 12 billion mobile devices and IoT connections by 2022.¹ And these mobile devices will support a variety of functions. Already, our phones replace gadgets and enable services. Why carry around a wallet when your phone can provide Apple Pay, Google Pay or make an electronic payment? Who needs to carry car keys when your phone can unlock and start your car or open your garage door? Applications now also include live streaming services that enable VR/AR experiences and sharing in real time. While future services and applications seem unlimited to the imagination, they require next-generation data infrastructure to support and facilitate them.

The Need for Intelligence at the Edge
Network connectivity and traffic growth continue to increase as the adoption of new data-intensive applications drives bandwidth requirements and the need for a smarter infrastructure: one that can, through intelligence, recognize specific application and infrastructure needs and deliver processing at the edge when necessary. Even as network speeds increase with advancements in multi-gigabit Ethernet and 400GE backbone connections, the bandwidth unleashed by the latest 5G and Wi-Fi technologies will continue to create bottlenecks in the backhaul. Edge processing helps avoid moving massive amounts of data across networks. This higher level of network intelligence allows the network to deliver complex software-defined infrastructure management without user intervention, manage inference engines, apply policies and, most importantly, deliver proactive application functionality. The result is a better user experience: a near real-time interactive platform with low latency, high reliability and a secure infrastructure.

With bandwidth demand growing so much, how do we address it at scale? If we draw a parallel with cloud data centers, we see that one way to scale out and handle the added bandwidth and number of nodes is to add processing to the edge of the network. This was accomplished in data centers through the use of SmartNICs to offload complex processing tasks, including packet processing, security and virtualization, from the servers. A similar approach is being taken in carrier networks through the deployment of SD-WAN/uCPE/vCPE appliances placed at the edge to provide intelligence alongside reduced connectivity costs. This approach becomes problematic, however, in enterprise networks, where a variety of endpoint capabilities are needed and the first point of uniformity is the network’s access layer.

Taking Advantage of Artificial Intelligence (AI)
Yet another challenge is created when legacy methods are used for deploying services in enterprise networks – such as centralized firewalls and authentication servers. With the expected increase in devices accessing the network and more bandwidth needed per device, these legacy constraints can result in bottlenecks. To address these issues, one must truly live on the network edge, pushing out the processing closer to the demand and making it more intelligent. Network OEMs, IT infrastructure owners and service providers will need to take advantage of the new generations of artificial intelligence (AI) and network function offloads at the access layer of the enterprise network.

TIPS to Living on the Network Edge
This is the first in a series providing TIPS about essential technologies that will be needed for the growing borderless campus as mobility and cloud applications proliferate and move networking functions from the core to the edge. Today, we discussed the trend toward expanded Network Intelligence. In Part 2, we will look at the Performance levels needed as we provide more insights and TIPS to Living on the Network Edge.

¹ Cisco 2022 Mobile Visual Network Forecast Update

April 2nd, 2018

Understanding Today’s Network Telemetry Requirements

By Tal Mizrahi, Feature Definition Architect, Marvell

There have, in recent years, been fundamental changes to the way in which networks are implemented, as data demands have necessitated a wider breadth of functionality and elevated degrees of operational performance. Accompanying all this is a greater need for accurate measurement of such performance benchmarks in real time, plus in-depth analysis in order to identify and subsequently resolve any underlying issues before they escalate.

The rapidly accelerating speeds and rising levels of complexity exhibited by today’s data networks mean that monitoring activities of this kind are becoming increasingly difficult to execute. Consequently, more sophisticated and inherently flexible telemetry mechanisms are now being mandated, particularly for data center and enterprise networks.

A broad spectrum of options is available when looking to extract telemetry material, whether that be passive monitoring, active measurement, or a hybrid approach. An increasingly common practice is to piggy-back telemetry information onto the data packets passing through the network. This tactic is utilized within both in-situ OAM (IOAM) and in-band network telemetry (INT), as well as in an alternate marking performance measurement (AM-PM) context.
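As a concrete illustration of piggy-backed telemetry, the sketch below parses a simplified per-hop record loosely modeled on INT metadata (switch ID, ports, latency). The 12-byte layout here is an assumption for illustration, not the actual INT wire format:

```python
import struct

# switch_id (u32), ingress_port (u16), egress_port (u16), latency_ns (u32)
RECORD = struct.Struct("!IHHI")   # 12 bytes per hop, network byte order

def parse_hops(payload: bytes):
    """Return one (switch_id, in_port, out_port, latency_ns) tuple per hop."""
    return [RECORD.unpack_from(payload, off)
            for off in range(0, len(payload), RECORD.size)]

# Two fabricated hop records for demonstration
blob = RECORD.pack(7, 1, 3, 4200) + RECORD.pack(9, 2, 5, 1800)
for switch_id, in_port, out_port, latency_ns in parse_hops(blob):
    print(f"switch {switch_id}: port {in_port}->{out_port}, {latency_ns} ns")
```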

At Marvell, our approach is to provide a diverse and versatile toolset through which a wide variety of telemetry approaches can be implemented, rather than being confined to a specific measurement protocol. To learn more about this subject, including longstanding passive and active measurement protocols, and the latest hybrid-based telemetry methodologies, please view the video below and download our white paper.

WHITE PAPER, Network Telemetry Solutions for Data Center and Enterprise Networks

February 22nd, 2018

Marvell to Demonstrate CyberTAN White Box Solution Incorporating the Marvell ARMADA 8040 SoC Running Telco Systems NFVTime Universal CPE OS at Mobile World Congress 2018

By Maen Suleiman, Senior Software Product Line Manager, Marvell

As more workloads are moving to the edge of the network, Marvell continues to advance technology that will enable the communication industry to benefit from the huge potential that network function virtualization (NFV) holds. At this year’s Mobile World Congress (Barcelona, 26th Feb to 1st Mar 2018), Marvell, along with some of its key technology collaborators, will be demonstrating a universal CPE (uCPE) solution that will enable telecom operators, service providers and enterprises to deploy needed virtual network functions (VNFs) to support their customers’ demands.

The ARMADA® 8040 uCPE solution, one of several ARMADA edge computing solutions to be introduced to the market, will be located at the Arm booth (Hall 6, Stand 6E30) and will run the Telco Systems NFVTime uCPE operating system (OS) with two deployed off-the-shelf VNFs, provided by 6WIND and Trend Micro respectively, that enable virtual routing and security functionalities. The CyberTAN white box solution is designed to bring significant improvements in both cost effectiveness and system power efficiency compared to traditional offerings, while also maintaining the highest degrees of security.

CyberTAN white box solution incorporating Marvell ARMADA 8040 SoC

 

The CyberTAN white box platform comprises several key Marvell technologies that together deliver an integrated solution designed to enable significant hardware cost savings. The platform incorporates the power-efficient Marvell® ARMADA 8040 system-on-chip (SoC), based on the Arm Cortex®-A72 quad-core processor with up to 2GHz CPU clock speed, and an on-board Marvell E6390x Link Street® Ethernet switch. The Marvell Ethernet switch supports a 10G uplink and 8 x 1GbE ports with integrated PHYs, four of which are auto-media GbE ports (combo ports).

The CyberTAN white box benefits from the Marvell ARMADA 8040 processor’s rich feature set and robust software ecosystem, including:

  • both commercial and industrial grade offerings
  • dual 10G connectivity, 10G Crypto and IPSEC support
  • SBSA compliance
  • Arm TrustZone support
  • broad software support from the following: UEFI, Linux, DPDK, ODP, OPTEE, Yocto, OpenWrt, CentOS and more

In addition, the uCPE platform supports Mini PCI Express (mPCIe) expansion slots that can host Marvell advanced 11ac/11ax Wi-Fi or additional wired/wireless connectivity; up to 16GB of DDR4 DIMM memory; 2 x M.2 SATA, one SATA and eMMC options for storage; and SD and USB expansion slots for additional storage or other wired/wireless connectivity such as LTE.

At the Arm booth, Telco Systems will demonstrate its NFVTime uCPE operating system on the CyberTAN white box, with its zero-touch provisioning (ZTP) feature. NFVTime is an intuitive NFVi-OS that facilitates the entire process of deploying VNFs onto the uCPE, avoiding the complex and frustrating management and orchestration activities normally associated with putting NFV-based services into action. The demonstration will include two main VNFs:

  • A 6WIND virtual router VNF based on 6WIND Turbo Router which provides high performance, ready-to-use virtual routing and firewall functionality; and
  • A Trend Micro security VNF based on Trend Micro’s Virtual Function Network Suite (VNFS) that offers elastic and high-performance network security functions which provide threat defense and enable more effective and faster protection.

Please contact your Marvell sales representative to arrange a meeting at Mobile World Congress or drop by the Arm booth (Hall 6, Stand 6E30) during the conference to see the uCPE solution in action.