Archive for the ‘Networking’ Category

July 8th, 2020

Driving Network Intelligence and Processing to the Edge

By George Hervey, Principal Architect, Marvell


The mobile phone has become such an essential part of our lives as we move towards more advanced stages of the “always on, always connected” model. Our phones provide instant access to data and communication media, and that access influences the decisions we make and, ultimately, our behavior.

According to Cisco, global mobile networks will support more than 12 billion mobile devices and IoT connections by 2022.¹ These mobile devices will support a variety of functions. Already, our phones replace gadgets and enable services. Why carry around a wallet when your phone can make an electronic payment through Apple Pay or Google Pay? Who needs to carry car keys when your phone can unlock and start your car or open your garage door? Applications now also include live streaming services that enable VR/AR experiences and real-time sharing. While future services and applications seem limited only by the imagination, they require next-generation data infrastructure to support and facilitate them.

The Need for Intelligence at the Edge
Network connectivity and traffic growth continue to increase as the adoption of new data-intensive applications drives bandwidth requirements and the need for a smarter infrastructure, one that can, through intelligence, recognize specific application and infrastructure needs and deliver processing at the edge when necessary. While network speeds increase with advancements in multi-gigabit Ethernet and 400GE backbone connections, the bandwidth made available by the latest 5G and Wi-Fi technologies will continue to create a bottleneck in the backhaul. Edge processing helps avoid the need to move massive amounts of data across networks. This higher level of network intelligence allows the network to deliver complex software-defined infrastructure management without user intervention, manage inference engines, apply policies and, most importantly, deliver proactive application functionality. This enhances the user experience by offering a near real-time interactive platform with low latency, high reliability and a secure infrastructure.

With bandwidth demand growing so quickly, how do we address it at scale? If we draw a parallel with cloud data centers, one way to scale out and handle the added bandwidth and number of nodes is to add processing to the edge of the network. Data centers accomplished this through the use of smartNICs to offload complex processing tasks, including packet processing, security and virtualization, from the servers. Carrier networks are achieving something similar by deploying SD-WAN/uCPE/vCPE appliances at the edge to provide that intelligence alongside reduced connectivity costs. However, this approach becomes problematic in enterprise networks, where a wide variety of endpoint capabilities are needed and the first point of uniformity is the network’s access layer.

Taking Advantage of Artificial Intelligence (AI)
Yet another challenge is created when legacy methods are used for deploying services in enterprise networks – such as centralized firewalls and authentication servers. With the expected increase in devices accessing the network and more bandwidth needed per device, these legacy constraints can result in bottlenecks. To address these issues, one must truly live on the network edge, pushing out the processing closer to the demand and making it more intelligent. Network OEMs, IT infrastructure owners and service providers will need to take advantage of the new generations of artificial intelligence (AI) and network function offloads at the access layer of the enterprise network.

TIPS to Living on the Network Edge
This is the first in a series providing TIPS about essential technologies that will be needed for the growing borderless campus as mobility and cloud applications proliferate and move networking functions from the core to the edge. Today, we discussed the trend toward expanded Network Intelligence. In Part 2, we will look at the Performance levels needed as we provide more insights and TIPS to Living on the Network Edge.

¹ Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2017–2022

April 2nd, 2018

Understanding Today’s Network Telemetry Requirements

By Tal Mizrahi, Feature Definition Architect, Marvell

There have, in recent years, been fundamental changes to the way in which networks are implemented, as data demands have necessitated a wider breadth of functionality and elevated degrees of operational performance. Accompanying all this is a greater need for accurate measurement of such performance benchmarks in real time, plus in-depth analysis in order to identify and subsequently resolve any underlying issues before they escalate.

The rapidly accelerating speeds and rising levels of complexity exhibited by today’s data networks mean that monitoring activities of this kind are becoming increasingly difficult to execute. Consequently, more sophisticated and inherently flexible telemetry mechanisms are now being mandated, particularly for data center and enterprise networks.

A broad spectrum of options is available when looking to extract telemetry data, whether through passive monitoring, active measurement, or a hybrid approach. An increasingly common practice is to piggy-back telemetry information onto the data packets passing through the network. This tactic is utilized within both in-situ OAM (IOAM) and in-band network telemetry (INT), as well as in an alternate marking performance measurement (AM-PM) context.
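
To make the piggy-backing idea concrete, the sketch below appends a small per-hop metadata record to a packet at each switch and recovers the per-hop data at a sink. It is only an illustration: the record layout, field choices and hop-count handling are assumptions for the example and do not follow the actual IOAM or INT wire formats.

```python
import struct
import time

# Hypothetical per-hop telemetry record: switch ID, ingress timestamp (ns)
# and queue occupancy. Real IOAM/INT formats differ; this only illustrates
# the idea of adding metadata to packets already in flight.
HOP_RECORD = struct.Struct("!IQI")  # 4 + 8 + 4 = 16 bytes per hop

def add_hop_metadata(packet: bytes, switch_id: int, queue_depth: int) -> bytes:
    """Append this hop's telemetry record to the packet."""
    record = HOP_RECORD.pack(switch_id, time.time_ns(), queue_depth)
    return packet + record

def extract_path_metadata(packet: bytes, hop_count: int):
    """At the sink, strip the trailing records and recover the per-hop data."""
    meta_len = hop_count * HOP_RECORD.size
    payload, metadata = packet[:-meta_len], packet[-meta_len:]
    hops = [HOP_RECORD.unpack_from(metadata, i * HOP_RECORD.size)
            for i in range(hop_count)]
    return payload, hops

# Example: a packet traverses two switches before reaching the sink.
pkt = b"original payload"
pkt = add_hop_metadata(pkt, switch_id=0x0A, queue_depth=12)
pkt = add_hop_metadata(pkt, switch_id=0x0B, queue_depth=3)
payload, hops = extract_path_metadata(pkt, hop_count=2)
for sw, ts, q in hops:
    print(f"switch {sw:#x}: timestamp {ts} ns, queue depth {q}")
```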

At Marvell, our approach is to provide a diverse and versatile toolset through which a wide variety of telemetry approaches can be implemented, rather than being confined to a specific measurement protocol. To learn more about this subject, including longstanding passive and active measurement protocols, and the latest hybrid-based telemetry methodologies, please view the video below and download our white paper.

WHITE PAPER: Network Telemetry Solutions for Data Center and Enterprise Networks

February 22nd, 2018

Marvell to Demonstrate CyberTAN White Box Solution Incorporating the Marvell ARMADA 8040 SoC Running Telco Systems NFVTime Universal CPE OS at Mobile World Congress 2018

By Maen Suleiman, Senior Software Product Line Manager, Marvell

As more workloads move to the edge of the network, Marvell continues to advance technology that will enable the communication industry to benefit from the huge potential that network function virtualization (NFV) holds. At this year’s Mobile World Congress (Barcelona, 26th Feb to 1st Mar 2018), Marvell, along with some of its key technology collaborators, will be demonstrating a universal CPE (uCPE) solution that will enable telecom operators, service providers and enterprises to deploy the virtual network functions (VNFs) needed to support their customers’ demands.

The ARMADA® 8040 uCPE solution, one of several ARMADA edge computing solutions to be introduced to the market, will be located at the Arm booth (Hall 6, Stand 6E30) and will run the Telco Systems NFVTime uCPE operating system (OS) with two deployed off-the-shelf VNFs, provided by 6WIND and Trend Micro respectively, that enable virtual routing and security functionality. The CyberTAN white box solution is designed to bring significant improvements in both cost effectiveness and system power efficiency compared to traditional offerings, while also maintaining the highest degrees of security.

CyberTAN white box solution incorporating Marvell ARMADA 8040 SoC


The CyberTAN white box platform comprises several key Marvell technologies that together form an integrated solution designed to enable significant hardware cost savings. The platform incorporates the power-efficient Marvell® ARMADA 8040 system-on-chip (SoC), based on the quad-core Arm Cortex®-A72 processor with CPU clock speeds of up to 2GHz, and an on-board Marvell E6390x Link Street® Ethernet switch. The Marvell Ethernet switch supports a 10G uplink and 8 x 1GbE ports with integrated PHYs, four of which are auto-media GbE ports (combo ports).

The CyberTAN white box benefits from the Marvell ARMADA 8040 processor’s rich feature set and robust software ecosystem, including:

  • both commercial and industrial grade offerings
  • dual 10G connectivity, 10G Crypto and IPSEC support
  • SBSA compliancy
  • Arm TrustZone support
  • broad software support from the following: UEFI, Linux, DPDK, ODP, OPTEE, Yocto, OpenWrt, CentOS and more

In addition, the uCPE platform supports Mini PCI Express (mPCIe) expansion slots that can enable Marvell advanced 11ac/11ax Wi-Fi or additional wired/wireless connectivity; up to 16GB of DDR4 DIMM memory; 2 x M.2 SATA, one SATA and eMMC options for storage; and SD and USB expansion slots for additional storage or other wired/wireless connectivity such as LTE.

At the Arm booth, Telco Systems will demonstrate its NFVTime uCPE operating system on the CyberTAN white box, with its zero-touch provisioning (ZTP) feature. NFVTime is an intuitive NFVi-OS that facilitates the entire process of deploying VNFs onto the uCPE, and avoids the complex and frustrating management and orchestration activities normally associated with putting NFV-based services into action. The demonstration will include two main VNFs, with a generic deployment sketch following the list:

  • A 6WIND virtual router VNF based on 6WIND Turbo Router which provides high performance, ready-to-use virtual routing and firewall functionality; and
  • A Trend Micro security VNF based on Trend Micro’s Virtual Function Network Suite (VNFS) that offers elastic and high-performance network security functions which provide threat defense and enable more effective and faster protection.
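
For readers unfamiliar with what deploying a VNF involves in practice, the sketch below launches a prebuilt VNF disk image as a KVM guest using the generic virt-install tool. This is only a hedged illustration of the general idea: NFVTime handles image onboarding, network wiring and service chaining through its own orchestration, and the image path, VM sizing and bridge names below are placeholders rather than part of any Marvell or Telco Systems product.

```python
import subprocess

# Hypothetical illustration only: boot a router VNF image as a KVM guest on an
# Arm-based uCPE box using the generic virt-install tool. All names and paths
# below are placeholders chosen for the example.
VNF_IMAGE = "/var/lib/vnf/virtual-router.qcow2"   # placeholder disk image

cmd = [
    "virt-install",
    "--name", "virtual-router-vnf",
    "--memory", "2048",
    "--vcpus", "2",
    "--disk", f"path={VNF_IMAGE},format=qcow2",
    "--import",                      # boot the prebuilt VNF image directly
    "--network", "bridge=wan-br",    # WAN-facing bridge (placeholder name)
    "--network", "bridge=lan-br",    # LAN-facing bridge (placeholder name)
    "--noautoconsole",
]
subprocess.run(cmd, check=True)
```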

Please contact your Marvell sales representative to arrange a meeting at Mobile World Congress or drop by the Arm booth (Hall 6, Stand 6E30) during the conference to see the uCPE solution in action.

October 3rd, 2017

Celebrating 20 Years of Wi-Fi – Part I

By Prabhu Loganathan, Senior Director of Marketing for Connectivity Business Unit, Marvell

You can’t see it, touch it, or hear it – yet Wi-Fi® has had a tremendous impact on the modern world – and will continue to do so. From our home wireless networks, to offices and public spaces, the ubiquity of high-speed connectivity without reliance on cables has radically changed the way computing happens. It would not be much of an exaggeration to say that, because of ready access to Wi-Fi, we are able to lead better lives – using our laptops, tablets and portable electronic goods in a far more straightforward manner, with a high degree of mobility, no longer having to worry about a complex tangle of wires tying us down.

Though it may be hard to believe, it is now two decades since the original 802.11 standard was ratified by the IEEE®. This first in a series of blogs will look at the history of Wi-Fi to see how it has overcome numerous technical challenges and evolved into the ultra-fast, highly convenient wireless standard that we know today. We will then go on to discuss what it may look like tomorrow.

Unlicensed Beginnings
While we now think of 802.11 wireless technology as predominantly connecting our personal computing devices and smartphones to the Internet, it was in fact initially invented as a means to connect up humble cash registers. In the late 1980s, NCR Corporation, a maker of retail hardware and point-of-sale (PoS) computer systems, had a big problem. Its customers – department stores and supermarkets – didn’t want to dig up their floors each time they changed their store layout.

A recent FCC ruling, which had opened up certain frequency bands for unlicensed use, inspired what would prove to be a game-changing idea. By using wireless connections in the unlicensed spectrum (rather than conventional wireline connections), electronic cash registers and PoS systems could be easily moved around a store without the retailer having to perform major renovation work.

Soon after this, NCR allocated the project to an engineering team out of its Netherlands office. They were set the challenge of creating a wireless communication protocol. These engineers succeeded in developing ‘WaveLAN’, which would be recognized as the precursor to Wi-Fi. Rather than preserving this as a purely proprietary protocol, NCR could see that by establishing it as a standard, the company would be able to position itself as a leader in the wireless connectivity market as it emerged. By 1990, the IEEE 802.11 working group had been formed, based on wireless communication in unlicensed spectra.

Using what were at the time innovative spread spectrum techniques to reduce interference and improve signal integrity in noisy environments, the original incarnation of Wi-Fi was finally formally standardized in 1997. It operated with a throughput of just 2 Mbits/s, but it set the foundations of what was to come.

Wireless Ethernet
Though the 802.11 wireless standard was released in 1997, it didn’t take off immediately. Slow speeds and expensive hardware hampered its mass market appeal for quite a while – but things were destined to change. 10 Mbit/s Ethernet was the networking standard of the day. The IEEE 802.11 working group knew that if they could equal that, they would have a worthy wireless competitor. In 1999, they succeeded, creating 802.11b. This used the same 2.4 GHz ISM frequency band as the original 802.11 wireless standard, but it raised the throughput supported considerably, reaching 11 Mbits/s. Wireless Ethernet was finally a reality.

Soon after 802.11b was established, the IEEE working group also released 802.11a, an even faster standard. Rather than using the increasingly crowded 2.4 GHz band, it ran on the 5 GHz band and offered speeds up to a lofty 54 Mbits/s.

Because it occupied the 5 GHz frequency band, away from the popular (and thus congested) 2.4 GHz band, it had better performance in noisy environments; however, the higher carrier frequency also meant it had reduced range compared to 2.4 GHz wireless connectivity. Thanks to cheaper equipment and better nominal ranges, 802.11b proved to be the most popular wireless standard by far. But, while it was more cost effective than 802.11a, 802.11b still wasn’t at a low enough price bracket for the average consumer. Routers and network adapters would still cost hundreds of dollars.

That all changed following a phone call from Steve Jobs. Apple was launching a new line of computers at the time and wanted to make wireless networking functionality part of it. The terms set were tough – Apple expected to have the cards at a $99 price point, but of course the volumes involved could potentially be huge. Lucent Technologies, which by this stage had taken over NCR’s wireless LAN business, agreed.

While it was a difficult pill to swallow initially, the Apple deal finally put Wi-Fi in the hands of consumers and pushed it into the mainstream. PC makers saw Apple computers beating them to the punch and wanted wireless networking as well. Soon, key PC hardware makers including Dell, Toshiba, HP and IBM were all offering Wi-Fi.

Microsoft also got on the Wi-Fi bandwagon with Windows XP. Working with engineers from Lucent, Microsoft made Wi-Fi connectivity native to the operating system. Users could get wirelessly connected without having to install third party drivers or software. With the release of Windows XP, Wi-Fi was now natively supported on millions of computers worldwide – it had officially made it into the ‘big time’.

This blog post is the first in a series that charts the eventful history of Wi-Fi. The second part, which is coming soon, will bring things up to date and look at current Wi-Fi implementations.


July 17th, 2017

Rightsizing Ethernet

By George Hervey, Principal Architect, Marvell

Implementation of cloud infrastructure is occurring at a phenomenal rate, outpacing Moore’s Law. Annual growth is believed to be 30x, and as much as 100x in some cases. In order to keep up, cloud data centers are having to scale out massively, with hundreds, or even thousands, of servers becoming a common sight.

At this scale, networking becomes a serious challenge. More and more switches are required, thereby increasing capital costs, as well as management complexity. To tackle the rising expense issues, network disaggregation has become an increasingly popular approach. By separating the switch hardware from the software that runs on it, vendor lock-in is reduced or even eliminated. OEM hardware could be used with software developed in-house, or from third party vendors, so that cost savings can be realized.

Though network disaggregation has tackled the immediate problem of hefty capital expenditures, it must be recognized that operating expenditures are still high. The number of managed switches basically stays the same. To reduce operating costs, the issue of network complexity has to also be tackled.

Network Disaggregation
Almost every application we use today, whether at home or in the work environment, connects to the cloud in some way. Our email providers, mobile apps, company websites, virtualized desktops and servers, all run on servers in the cloud.

For these cloud service providers, this incredible growth has been both a blessing and a challenge. As demand increases, Moore’s law has struggled to keep up. Scaling data centers today involves scaling out – buying more compute and storage capacity, and subsequently investing in the networking to connect it all. The cost and complexity of managing everything can quickly add up.

Until recently, networking hardware and software had often been tied together. Buying a switch, router or firewall from one vendor would require you to run their software on it as well. Larger cloud service providers saw an opportunity. These players often had no shortage of skilled software engineers. At the massive scales they ran at, they found that buying commodity networking hardware and then running their own software on it would save them a great deal in terms of Capex.

This disaggregation of the software from the hardware may have been financially attractive, however it did nothing to address the complexity of the network infrastructure. There was still a great deal of room to optimize further.

802.1BR
Today’s cloud data centers rely on a layered architecture, often in a fat-tree or leaf-spine structural arrangement. Rows of racks, each with top-of-rack (ToR) switches, are then connected to upstream switches on the network spine. The ToR switches are, in fact, performing simple aggregation of network traffic. Using relatively complex, energy consuming switches for this task results in a significant capital expense, as well as management costs and no shortage of headaches.

Through the port extension approach, outlined in the IEEE 802.1BR standard, the aim has been to streamline this architecture. By replacing ToR switches with port extenders, port connectivity is extended directly from the rack to the upstream switches. Management is consolidated into the much smaller number of switches located at the upper-layer network spine, eliminating the dozens or possibly hundreds of switches at the rack level.

The reduction in switch management complexity of the port extender approach has been widely recognized, and various network switches on the market now comply with the 802.1BR standard. However, not all the benefits of this standard have actually been realized.
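
A rough back-of-the-envelope comparison shows why the consolidation matters. The rack and spine counts below are illustrative assumptions, not measured deployment figures.

```python
# Illustrative comparison of managed devices in a conventional leaf-spine
# design versus an 802.1BR port-extender design (assumed counts).
racks = 100                 # racks, each with one ToR device
spine_switches = 8          # managed switches at the spine/aggregation layer

# Conventional design: every ToR is a fully managed switch.
managed_conventional = racks + spine_switches

# 802.1BR design: ToR devices are unmanaged port extenders; only the
# controlling bridges at the spine remain managed.
managed_port_extender = spine_switches

print(f"Conventional leaf-spine: {managed_conventional} managed devices")
print(f"802.1BR port extension:  {managed_port_extender} managed devices")
print(f"Reduction: {managed_conventional / managed_port_extender:.1f}x")
```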

The Next Step in Network Disaggregation
Though many of the port extenders on the market today fulfill 802.1BR functionality, they do so using legacy components. Instead of being optimized for 802.1BR itself, they rely on traditional switches. This, as a consequence, limits the potential cost and power benefits that the new architecture offers.

Designed from the ground up for 802.1BR, Marvell’s Passive Intelligent Port Extender (PIPE) offering is specifically optimized for this architecture. PIPE is interoperable with 802.1BR-compliant upstream bridge switches from all the industry’s leading OEMs. It enables fan-less, cost-efficient port extenders to be deployed, which thereby provide upfront savings as well as ongoing operational savings for cloud data centers. Power consumption is lowered and switch management complexity is reduced by an order of magnitude.

The first wave in network disaggregation was separating switch software from the hardware that it ran on. 802.1BR’s port extender architecture is bringing about the second wave, where ports are decoupled from the switches which manage them. The modular approach to networking discussed here will result in lower costs, reduced energy consumption and greatly simplified network management.

July 7th, 2017

Extending the Lifecycle of 3.2T Switch-Based Architecture

By Yaron Zimmerman, Senior Staff Product Line Manager, Marvell

and Yaniv Kopelman, Networking and Connectivity CTO, Marvell

The growth witnessed in the expansion of data centers has been completely unprecedented, driven by the exponential increases in cloud computing and cloud storage demand. While Gigabit switches proved more than sufficient just a few years ago, today even 3.2 Terabit (3.2T) switches, which currently serve as the fundamental building blocks upon which data center infrastructure is constructed, are being pushed to their full capacity.

While network demands have increased, Moore’s law (which effectively defines the semiconductor industry) has not been able to keep up. Instead of scaling at the silicon level, data centers have had to scale out. This has come at a cost, though, with ever-increasing capital and operational expenditure and greater latency all resulting. Facing this challenging environment, a different approach is going to have to be taken. In order to accommodate current expectations economically, while still having the capacity for future growth, data centers (as we will see) need to move towards a modularized approach.


Scaling out the data center

Data centers are destined to have to contend with demands for substantially heightened network capacity – as a greater number of services, plus more data storage, start migrating to the cloud. This increase in network capacity, in turn, results in demand for more silicon to support it.

To meet increasing networking capacity, data centers are buying ever more powerful Top-of-Rack (ToR) leaf switches. In turn, these consume more power – which impacts the overall power budget and means that less power is available for the data center servers. Not only does this lead to power being wasted unnecessarily, it also pushes up the associated thermal management costs and the overall Opex. As these data centers scale out to meet demand, they often have to add more complex hierarchical structures to their architecture as well – thereby increasing latencies for both north-south and east-west traffic in the process.

The price of silicon per gate is not going down either. We used to enjoy cost reductions as process sizes decreased from 90 nm to 65 nm to 40 nm. That is no longer strictly true, however. As process sizes shrink below the 28 nm node, yields are decreasing and prices are consequently going up. To address the problems of cloud-scale data centers, traditional methods will not be applicable. Instead, we need to take a modularized approach to networking.

PIPEs and Bridges

Today’s data centers often run on a multi-tiered leaf and spine hierarchy. Racks with ToR switches connect to the network spine switches. These, in turn, connect to core switches, which subsequently connect to the Internet. Both the spine and the top of the rack layer elements contain full, managed switches.

By following a modularized approach, it is possible to remove the ToR switches and replace them with simple IO devices – port extenders specifically. This effectively extends the IO ports of the spine switch all the way down to the ToR. What results is a passive ToR that is unmanaged. It simply passes the packets to the spine switch. Furthermore, by taking a whole layer out of the management hierarchy, the network becomes flatter and is thus considerably easier to manage.

The spine switch now acts as the controlling bridge. It is able to manage the layer which was previously taken care of by the ToR switch. This means that, through such an arrangement, it is possible to disaggregate the IO ports of the network that were previously located at the ToR switch, from the logic at the spine switch which manages them. This innovative modularized approach is being facilitated by the increasing number of Port Extenders and Control Bridges now being made available from Marvell that are compatible with the IEEE 802.1BR bridge port extension standard.
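
To give a flavor of how the controlling bridge tells traffic from different extended ports apart, the sketch below adds and strips a simplified 802.1BR-style E-TAG (EtherType 0x893F) carrying an E-CID. The field layout here is deliberately reduced to a single identifier and is not the exact bit layout defined by the standard.

```python
import struct

ETAG_ETHERTYPE = 0x893F  # EtherType used by the 802.1BR E-TAG

def add_etag(frame: bytes, ecid: int) -> bytes:
    """Insert a simplified E-TAG after the destination/source MAC addresses.

    The real tag carries priority bits plus ingress and egress E-CIDs split
    across base/extension fields; here the six tag-control bytes are reduced
    to a bare E-CID purely to illustrate the encapsulation step.
    """
    dst_src, rest = frame[:12], frame[12:]               # MACs / rest of frame
    etag = struct.pack("!HHI", ETAG_ETHERTYPE, 0, ecid)  # 8 bytes, simplified
    return dst_src + etag + rest

def strip_etag(frame: bytes):
    """At the other end, recover the E-CID and the original frame."""
    tpid, _, ecid = struct.unpack_from("!HHI", frame, 12)
    assert tpid == ETAG_ETHERTYPE, "not an E-tagged frame"
    return ecid, frame[:12] + frame[20:]

# Example: the port extender tags a frame arriving on extended port 42, and
# the controlling bridge recovers both the port identity and the frame.
original = bytes(12) + b"\x08\x00" + b"payload"   # dummy MACs + IPv4 EtherType
tagged = add_etag(original, ecid=42)
ecid, recovered = strip_etag(tagged)
assert ecid == 42 and recovered == original
```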

Solving Data Center Scaling Challenges

The modularized port-extender and control bridge approach allows data centers to address the full breadth of their scaling challenges. Port extenders address latency by flattening the hierarchy. Instead of having conventional ‘leaf’ and ‘spine’ tiers, the port extender simply extends the IO ports of the spine switch to the ToR. Each server in the rack has a near-direct connection to the managing switch. This improves latency for north-south bound traffic.

The port extender also aggregates traffic from 10 Gbit Ethernet ports into higher-throughput uplinks, allowing terabit switches that only have 25, 40 or 100 Gbit Ethernet ports to communicate directly with 10 Gbit Ethernet edge devices. The passive port extender is a greatly simplified device compared to a managed switch, which means lower up-front costs as well as lower power consumption and a simpler network management scheme. Rather than dealing with both leaf and spine switches, network administration simply needs to focus on the managed switches at the spine layer.
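
The aggregation step is easy to quantify. The port counts and speeds below are assumptions chosen for the example, not the specification of any particular product.

```python
# Illustrative aggregation math for a port extender front-ending a spine
# switch (assumed port counts and speeds).
downlink_ports = 48          # 10 GbE server-facing ports on the extender
downlink_speed_gbps = 10
uplink_ports = 4             # 100 GbE ports towards the controlling bridge
uplink_speed_gbps = 100

downlink_capacity = downlink_ports * downlink_speed_gbps   # 480 Gbit/s
uplink_capacity = uplink_ports * uplink_speed_gbps          # 400 Gbit/s

print(f"Downlink capacity: {downlink_capacity} Gbit/s")
print(f"Uplink capacity:   {uplink_capacity} Gbit/s")
print(f"Oversubscription:  {downlink_capacity / uplink_capacity:.1f}:1")
```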

With no end in sight to the ongoing progression of network capacity, cloud-scale data centers will always have ever-increasing scaling challenges to attend to. The modularized approach described here makes those challenges solvable.

June 21st, 2017

Making Better Use of Legacy Infrastructure

By Ron Cates, Senior Director, Product Marketing, Networking Business Unit, Marvell

The flexibility offered by wireless networking is revolutionizing the enterprise space. High-speed Wi-Fi®, provided by standards such as IEEE 802.11ac and 802.11ax, makes it possible to deliver next-generation services and applications to users in the office, no matter where they are working.

However, the higher wireless speeds involved are putting pressure on the cabling infrastructure that supports the Wi-Fi access points around an office environment. 1 Gbit/s Ethernet was more than adequate for older wireless standards and applications. Now, with greater reliance on the new generation of Wi-Fi access points and their higher uplink speeds, the older infrastructure is starting to show strain. At the same time, in the server room itself, demand for high-speed storage and faster virtualized servers is placing pressure on the performance levels offered by the core Ethernet cabling that connects these systems together and to the wider enterprise infrastructure.

One option is to upgrade to a 10 Gbit/s Ethernet infrastructure, but such a migration can be prohibitively expensive. The Cat 5e cabling that exists in many office and industrial environments is not designed to cope with such elevated speeds. To make use of 10 Gbit/s equipment, that old cabling needs to come out and be replaced by a new copper infrastructure based on the Cat 6a standard. Cat 6a cabling can support 10 Gbit/s Ethernet over the full 100-meter reach, whereas you would be lucky to run 10 Gbit/s at half that distance over Cat 5e cable.

In contrast to data-center environments that are designed to cope easily with both server and networking infrastructure upgrades, enterprise cabling lying in ducts, in ceilings and below floors is hard to reach and swap out. This is especially true if you want to keep the business running while the switchover takes place.

Help is at hand with the emergence of the IEEE 802.3bz™ and NBASE-T® set of standards and the transceiver technology that goes with them. 802.3bz and NBASE-T make it possible to transmit at speeds of 2.5 Gbit/s or 5 Gbit/s across conventional Cat 5e or Cat 6 cabling at distances up to the full 100 meters. The transceiver technology leverages advances in digital signal processing (DSP) to make these higher speeds possible without demanding a change to the cabling infrastructure.

The NBASE-T technology, a companion to the IEEE 802.3bz standard, incorporates novel features such as downshift, which responds dynamically to interference from other sources in the cable bundle. The result is a lower speed, but the downshift technology has the advantage that it does not cut off communication unexpectedly, providing time to diagnose the problem interferer in the bundle and perhaps reroute it to sit alongside less sensitive cables that carry lower-speed signals. This is where the new generation of high-density transceivers comes in.

There are now transceivers coming onto the market that support data rates all the way from legacy 10 Mbit/s Ethernet up to the full 5 Gbit/s of 802.3bz/NBASE-T – and will auto-negotiate the most appropriate data rate with the downstream device. This makes it easy for enterprise users to upgrade the routers and switches that support their core network without demanding upgrades to all the client devices. Further features, such as Virtual Cable Tester® functionality, make it easier to diagnose faults in the cabling infrastructure without resorting to the use of specialized network instrumentation.
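
As a rough illustration of the rate selection and downshift behavior described above, the sketch below picks the fastest rate both link partners advertise and steps down while the channel cannot sustain it. Real 802.3bz/NBASE-T PHYs do this through IEEE 802.3 auto-negotiation and link training; the rate table and quality check here are placeholders.

```python
# Simplified illustration of multi-rate link selection with downshift.
# The supported rates and the downshift trigger below are placeholders that
# only mirror the behavior described in the surrounding text.
SUPPORTED_RATES_MBPS = [5000, 2500, 1000, 100, 10]   # highest preferred

def negotiate_rate(partner_rates, link_quality_ok) -> int:
    """Pick the fastest rate both ends support, then downshift while the
    channel (e.g. alien crosstalk in the bundle) cannot sustain it."""
    common = [r for r in SUPPORTED_RATES_MBPS if r in partner_rates]
    for rate in common:                   # try the fastest rate first
        if link_quality_ok(rate):
            return rate                   # keep the link up, possibly slower
    return 0                              # no workable rate found

# Example: the far end supports up to 5 Gbit/s, but interference in the
# cable bundle only allows 2.5 Gbit/s to run cleanly.
rate = negotiate_rate(
    partner_rates={10, 100, 1000, 2500, 5000},
    link_quality_ok=lambda r: r <= 2500,
)
print(f"Link established at {rate} Mbit/s")
```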

Transceivers and PHYs designed for switches can now support eight 802.3bz/NBASE-T ports in one chip, thanks to the integration made possible by leading-edge processes. These transceivers are designed not only to be more cost-effective but also to consume far less power and PCB real estate than PHYs designed for 10 Gbit/s networks. This means they present a much more optimized solution, with numerous benefits from a financial, thermal and logistical perspective.

The result is a networking standard that meshes well with the needs of modern enterprise networks – and lets that network and the equipment evolve at its own pace.