Archive for the ‘Enterprise Services’ Category

July 8th, 2020

Driving Network Intelligence and Processing to the Edge

By George Hervey, Principal Architect, Marvell


The mobile phone has become an essential part of our lives as we move towards more advanced stages of the “always on, always connected” model. Our phones provide instant access to data and communication media, and that access influences the decisions we make and, ultimately, our behavior.

According to Cisco, global mobile networks will support more than 12 billion mobile devices and IoT connections by 2022.1 And these mobile devices will support a variety of functions. Already, our phones replace gadgets and enable services. Why carry a wallet when your phone can make electronic payments through Apple Pay or Google Pay? Who needs car keys when your phone can unlock and start your car or open your garage door? Applications now also include live streaming services that enable VR/AR experiences and real-time sharing. While future services and applications seem limited only by the imagination, they require next-generation data infrastructure to support and facilitate them.

The Need for Intelligence at the Edge
Network connectivity and traffic growth continue to increase as the adoption of new data-intensive applications drives bandwidth requirements and the need for a smarter infrastructure — an infrastructure that can, through intelligence, recognize specific application and infrastructure needs and deliver processing at the edge when necessary. While network speeds increase with advancements in multi-gigabit Ethernet and 400GE backbone connections, the bandwidth made available by the latest 5G and Wi-Fi technologies will continue to create bottlenecks in the backhaul. Edge processing helps avoid the need to move massive amounts of data across networks. This higher level of network intelligence allows the network to deliver complex software-defined infrastructure management without user intervention, manage inference engines, apply policies and, most importantly, deliver proactive application functionality. This enhances the user experience by offering a near real-time interactive platform with low latency, high reliability and secure infrastructure.

With bandwidth demand growing so much, how do we address it at scale? If we draw a parallel with cloud data centers, we see that one way to scale out and handle the added bandwidth and number of nodes is to add processing to the edge of the network. This was accomplished in data centers through the use of SmartNICs to offload complex processing tasks, including packet processing, security and virtualization, from the servers. A similar approach is being taken in carrier networks through the deployment of SD-WAN/uCPE/vCPE appliances placed at the edge to provide the intelligence alongside reduced connectivity costs. However, this approach becomes problematic in enterprise networks, where a variety of endpoint capabilities are needed and the first point of uniformity is the network’s access layer.

Taking Advantage of Artificial Intelligence (AI)
Yet another challenge arises when legacy methods – such as centralized firewalls and authentication servers – are used for deploying services in enterprise networks. With the expected increase in devices accessing the network and more bandwidth needed per device, these legacy constraints can result in bottlenecks. To address these issues, one must truly live on the network edge, pushing the processing out closer to the demand and making it more intelligent. Network OEMs, IT infrastructure owners and service providers will need to take advantage of the new generations of artificial intelligence (AI) and network function offloads at the access layer of the enterprise network.

TIPS to Living on the Network Edge
This is the first in a series providing TIPS about essential technologies that will be needed for the growing borderless campus as mobility and cloud applications proliferate and move networking functions from the core to the edge. Today, we discussed the trend toward expanded Network Intelligence. In Part 2, we will look at the Performance levels needed as we provide more insights and TIPS to Living on the Network Edge.

1 Cisco 2022 Mobile Visual Network Forecast Update

June 7th, 2018

Versatile New Ethernet Switch Simultaneously Addresses Multiple Industry Sectors

By Ran Gu, Marketing Director of Switching Product Line, Marvell

Due to ongoing technological progression and underlying market dynamics, Gigabit Ethernet (GbE) technology with 10 Gigabit uplink speeds is starting to proliferate into the networking infrastructure across a multitude of different applications where elevated levels of connectivity are needed: SMB switch hardware, industrial switching hardware, SOHO routers, enterprise gateways and uCPEs, to name a few. The new Marvell® Link Street™ 88E6393X, which combines a broad array of functionality with scalability and cost-effectiveness, provides a compelling switch IC solution with the scope to serve multiple industry sectors.

The 88E6393X switch IC incorporates both 1000BASE-T PHY and 10 Gbps fiber port capabilities, while requiring only 60% of the power budget necessitated by competing solutions. Despite its compact package, this new switch IC offers 8 triple speed (10/100/1000) Ethernet ports, plus 3 XFI/SFI ports, and has a built-in 200 MHz microprocessor. Its SFI support means that the switch can connect to a fiber module without the need to include an external PHY – thereby saving space and bill-of-materials (BoM) costs, as well as simplifying the design. It complies with the IEEE 802.1BR port extension standard and can also play a pivotal role in lowering the management overhead and keeping operational expenditures (OPEX) in check. In addition, it includes L3 routing support for IP forwarding purposes.

Adherence to the latest time sensitive networking (TSN) protocols (such as 802.1AS, 802.1Qat, 802.1Qav and 802.1Qbv) enables delivery of the low latency operation mandated by industrial networks. The 256-entry ternary content-addressable memory (TCAM) allows for real-time deep packet inspection (DPI) and policing of the data content being transported over the network (with access control and policy control lists being referenced). The denial of service (DoS) prevention mechanism is able to detect illegal packets and mitigate the security threat of DoS attacks.
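
To make the idea concrete, below is a minimal Python sketch of ternary (value/mask) rule matching of the kind a TCAM performs in hardware. The field layout, rule set and actions are illustrative assumptions only and do not represent the 88E6393X’s actual ACL or TCAM entry format.

    # Ternary (value/mask) matching, the lookup style a TCAM implements in hardware.
    # Field widths, rules and actions are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class TcamEntry:
        value: int    # pattern bits to match
        mask: int     # 1-bits are "care", 0-bits are "don't care"
        action: str   # e.g. "permit", "deny", "police"

    def lookup(entries, key: int) -> str:
        """Return the action of the first (highest-priority) matching entry."""
        for entry in entries:  # a real TCAM evaluates all entries in parallel
            if (key & entry.mask) == (entry.value & entry.mask):
                return entry.action
        return "permit"        # default action when nothing matches

    # Example key: a 32-bit field built from (IP protocol << 16) | destination port
    rules = [
        TcamEntry(value=(6 << 16) | 23, mask=0xFFFFFFFF, action="deny"),    # TCP/23 (telnet)
        TcamEntry(value=(6 << 16) | 0,  mask=0xFFFF0000, action="police"),  # any other TCP
    ]

    print(lookup(rules, (6 << 16) | 23))   # -> deny
    print(lookup(rules, (6 << 16) | 443))  # -> police
    print(lookup(rules, (17 << 16) | 53))  # -> permit (UDP, no rule matches)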

The 88E6393X device, working in conjunction with a high performance ARMADA® network processing system-on-chip (SoC), can offload some of the packet processing activities so that the CPU’s bandwidth can be better focused on higher level activities. Data integrity is upheld, thanks to the quality of service (QoS) support across 8 traffic classes. In addition, the switch IC presents a scalable solution. The 10 Gbps interfaces provide non-blocking uplink to make it possible to cascade several units together, thus creating higher port count switches (16, 24, etc.).

This new product release combines a small footprint, lower power consumption, extensive security and inherent flexibility to deliver a highly effective switch IC solution for the SMB, enterprise, industrial and uCPE markets.

 

April 2nd, 2018

Understanding Today’s Network Telemetry Requirements

By Tal Mizrahi, Feature Definition Architect, Marvell

There have, in recent years, been fundamental changes to the way in which networks are implemented, as data demands have necessitated a wider breadth of functionality and elevated degrees of operational performance. Accompanying all this is a greater need for accurate measurement of such performance benchmarks in real time, plus in-depth analysis in order to identify and subsequently resolve any underlying issues before they escalate.

The rapidly accelerating speeds and rising levels of complexity exhibited by today’s data networks mean that monitoring activities of this kind are becoming increasingly difficult to execute. Consequently, more sophisticated and inherently flexible telemetry mechanisms are now being mandated, particularly for data center and enterprise networks.

A broad spectrum of options is available when looking to extract telemetry material, whether that be passive monitoring, active measurement, or a hybrid approach. An increasingly common practice is the piggy-backing of telemetry information onto the data packets that are passing through the network. This tactic is being utilized within both in-situ OAM (IOAM) and in-band network telemetry (INT), as well as in an alternate marking performance measurement (AM-PM) context.
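
As a rough illustration of the piggy-backing concept, the short Python sketch below appends a per-hop metadata record (node ID plus ingress and egress timestamps) to a packet and decodes the accumulated records at the sink. The record layout is an assumption made for illustration; it is not the actual IOAM or INT wire format.

    # Piggy-backed telemetry in the spirit of INT/IOAM: each node a packet traverses
    # appends a small metadata record. The layout here is illustrative only.
    import struct
    import time

    RECORD_FMT = "!IQQ"  # node_id (u32), ingress_ns (u64), egress_ns (u64), big-endian

    def append_telemetry(packet: bytes, node_id: int, ingress_ns: int, egress_ns: int) -> bytes:
        """Append one per-hop telemetry record to the packet."""
        return packet + struct.pack(RECORD_FMT, node_id, ingress_ns, egress_ns)

    def extract_telemetry(packet: bytes, original_len: int):
        """At the telemetry sink, decode the records accumulated along the path."""
        record_size = struct.calcsize(RECORD_FMT)
        return [struct.unpack_from(RECORD_FMT, packet, off)
                for off in range(original_len, len(packet), record_size)]

    # Example: a packet crossing two hops
    pkt = b"payload"
    now = time.time_ns()
    pkt = append_telemetry(pkt, node_id=1, ingress_ns=now, egress_ns=now + 1200)
    pkt = append_telemetry(pkt, node_id=2, ingress_ns=now + 5000, egress_ns=now + 6100)

    for node, t_in, t_out in extract_telemetry(pkt, original_len=len(b"payload")):
        print(f"node {node}: residence time {t_out - t_in} ns")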

At Marvell, our approach is to provide a diverse and versatile toolset through which a wide variety of telemetry approaches can be implemented, rather than being confined to a specific measurement protocol. To learn more about this subject, including longstanding passive and active measurement protocols, and the latest hybrid-based telemetry methodologies, please view the video below and download our white paper.

White Paper: Network Telemetry Solutions for Data Center and Enterprise Networks

January 11th, 2018

Storing the World’s Data

By the Marvell PR Team

Storage is the foundation for a data-centric world, but how tomorrow’s data will be stored is the subject of much debate. What is clear is that data growth will continue to rise significantly. According to a report compiled by IDC titled ‘Data Age 2025’, the amount of data created will grow at an almost exponential rate. This amount is predicted to surpass 163 Zettabytes by the middle of the next decade (almost 8 times what it is today, and nearly 100 times what it was back in 2010). Increasing use of cloud-based services, the widespread roll-out of Internet of Things (IoT) nodes, virtual/augmented reality applications, autonomous vehicles, machine learning and the whole ‘Big Data’ phenomenon will all play a part in the new data-driven era that lies ahead.

Further down the line, the building of smart cities will lead to an additional ramp-up in data levels, with highly sophisticated infrastructure being deployed to alleviate traffic congestion, make utilities more efficient and improve the environment, among other aims. A very large proportion of the data of the future will need to be accessed in real time. This will have implications for the technology utilized and also for where the stored data is situated within the network. Additionally, there are serious security considerations that need to be factored in.

So that data centers and commercial enterprises can keep overhead under control and make operations as efficient as possible, they will look to follow a tiered storage approach, using the most appropriate storage media so as to lower the related costs. Decisions on the media utilized will be based on how frequently the stored data needs to be accessed and the acceptable degree of latency. This will require the use of numerous different technologies to make it fully economically viable – with cost and performance being important factors.
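
The sketch below captures that tiering decision in a few lines of Python: a tier is chosen from how frequently the data is accessed and how much latency is acceptable. The tier names and thresholds are illustrative assumptions, not recommendations.

    # Illustrative tiering policy: pick a storage tier from access frequency and
    # latency tolerance. Thresholds and tier names are assumptions for this sketch.
    def choose_tier(accesses_per_day: float, max_latency_ms: float) -> str:
        if max_latency_ms < 1 and accesses_per_day > 1000:
            return "NVMe SSD"                   # hot data, near real-time access
        if accesses_per_day > 10:
            return "SATA SSD"                   # warm data
        if accesses_per_day > 0.1:
            return "HDD"                        # cool data, capacity-optimized
        return "archive (tape / cold cloud)"    # rarely touched data

    for name, freq, lat in [("session cache", 50_000, 0.2),
                            ("daily reports", 40, 50),
                            ("old backups", 0.01, 5_000)]:
        print(f"{name:15s} -> {choose_tier(freq, lat)}")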

There is now a wide variety of storage media options, some long established and others still emerging. Hard disk drives (HDDs) in certain applications are being replaced by solid state drives (SSDs), and with the migration from SATA to NVMe in the SSD space, NVMe is enabling the full performance capabilities of SSD technology. HDD capacities continue to increase substantially, and their overall cost-effectiveness adds to their appeal. The immense data storage requirements generated by the cloud mean that HDDs are witnessing considerable traction in this space.

There are other forms of memory on the horizon that will help to address the challenges that increasing storage demands will set. These range from higher capacity 3D stacked flash to completely new technologies, such as phase-change memory with its rapid write times and extensive operational lifespan. The advent of NVMe over fabrics (NVMf) based interfaces offers the prospect of high bandwidth, ultra-low latency SSD data storage that is at the same time extremely scalable.

Marvell was quick to recognize the ever-growing importance of data storage, has continued to make this sector a major focus, and has established itself as the industry’s leading supplier of both HDD controllers and merchant SSD controllers.

Within a period of only 18 months after its release, Marvell shipped over 50 million of its 88SS1074 SATA SSD controllers with NANDEdge™ error-correction technology. Thanks to its award-winning 88NV11xx series of small form factor DRAM-less SSD controllers (based on a 28nm CMOS semiconductor process), the company is able to offer the market high performance NVMe memory controller solutions that are optimized for incorporation into compact, streamlined handheld computing equipment, such as tablet PCs and ultrabooks. These controllers are capable of supporting read speeds of 1600MB/s, while drawing only minimal power from the available battery reserves. Marvell also offers solutions like its 88SS1092 NVMe SSD controller, designed for new compute models that enable the data center to share storage data to further maximize cost and performance efficiencies.

The unprecedented growth in data means that more storage will be required. Emerging applications and innovative technologies will drive new ways of increasing storage capacity, improving latency and ensuring security. Marvell is in a position to offer the industry a wide range of technologies to support data storage requirements, addressing both SSD and HDD implementations and covering all accompanying interface types, from SAS and SATA through to PCIe and NVMe.

Check out www.marvell.com to learn more about how Marvell is storing the world’s data.

November 6th, 2017

The USR-Alliance – Enabling an Open Multi-Chip Module (MCM) Ecosystem

By Gidi Navon, System Architect, Marvell

The semiconductor industry is witnessing exponential growth and rapid changes to its bandwidth requirements, as well as increasing design complexity, emergence of new processes and integration of multi-disciplinary technologies. All this is happening against a backdrop of shorter development cycles and fierce competition. Other technology-driven industry sectors, such as software and hardware, are addressing similar challenges by creating open alliances and open standards. This blog does not attempt to list all the open alliances that now exist —  the Open Compute Project, Open Data Path and the Linux Foundation are just a few of the most prominent examples. One technological area that still hasn’t embraced such open collaboration is Multi-Chip-Module (MCM), where multiple semiconductor dies are packaged together, thereby creating a combined system in a single package.

The MCM concept has been around for a while, generating multiple technological and market benefits, including:

  • Improved yield – Instead of creating large monolithic dies with low yield and higher cost (which sometimes cannot even be fabricated), splitting the silicon into multiple dies can significantly improve the yield of each building block and the combined solution (a simple yield model is sketched after this list). Better yield consequently translates into lower costs.
  • Optimized process – The final MCM product is a mix-and-match of units fabricated in different processes, which enables the process selection to be optimized for specific IP blocks with similar characteristics.
  • Multiple fabrication plants – Different fabs, each with its own unique capabilities, can be utilized to create a given product.
  • Product variety – New products are easily created by combining different numbers and types of devices to form innovative and cost‑optimized MCMs.
  • Short product cycle time – Dies can be upgraded independently, which makes it easier to add new product capabilities and/or correct issues within a given die. For example, integrating a new type of I/O interface can be achieved without having to re-spin other parts of the solution that are stable and don’t require any change (thus avoiding wasted time and money).
  • Economy of scale – Each die can be reused in multiple applications and products, increasing its volume and yield as well as the overall return on the initial investment made in its development.
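
As a concrete illustration of the yield benefit noted above, the short Python sketch below applies a simple Poisson defect model (yield = exp(-area × defect density)) to compare one large monolithic die with the same silicon split into four smaller dies. The defect density and die areas are illustrative assumptions only.

    # Simple Poisson yield model: the probability a die has zero killer defects
    # falls off exponentially with its area. Numbers below are illustrative only.
    import math

    def poisson_yield(area_cm2: float, defects_per_cm2: float) -> float:
        return math.exp(-area_cm2 * defects_per_cm2)

    D0 = 0.5         # assumed defect density, defects per cm^2
    big_die = 4.0    # one monolithic 4 cm^2 die

    monolithic = poisson_yield(big_die, D0)          # one defect kills the whole part
    per_small_die = poisson_yield(big_die / 4, D0)   # each 1 cm^2 die is tested separately;
                                                     # only known-good dies go into the MCM

    print(f"monolithic 4 cm^2 die yield: {monolithic:.1%}")     # ~13.5%
    print(f"per 1 cm^2 die yield:        {per_small_die:.1%}")  # ~60.7%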

Sub-dividing large semiconductor devices and mounting the resulting dies on an MCM has, in effect, made the MCM the new printed circuit board (PCB) – providing a smaller footprint, lower power, higher performance and expanded functionality.

Now, imagine that the benefits listed above are not confined to a single chip vendor, but instead are shared across the industry as a whole. By opening and standardizing the interface between dies, it is possible to introduce a true open platform, wherein design teams in different companies, each specializing in different technological areas, are able to create a variety of new products beyond the scope of any single company in isolation.

This is where the USR Alliance comes into action. The alliance has defined an Ultra Short Reach (USR) link, optimized for communication across the very short distances between the components contained in a single package. This link provides high bandwidth with less power and smaller die size than existing very short reach (VSR) PHYs, which cross package boundaries and connectors and need to deal with challenges that simply don’t exist inside a package. The USR PHY is based on a multi-wire differential signaling technique optimized for MCM environments.

There are many applications in which the USR link can be implemented. Examples include CPUs, switches and routers, FPGAs, DSPs, analog components and a variety of long reach electrical and optical interfaces.

Figure 1: Example of a possible MCM layout

Marvell is an active promoter member of the USR Alliance and is working to create an ecosystem of interoperable components, interconnects, protocols and software that will help the semiconductor industry bring more value to the market.  The alliance is working on creating PHY, MAC and software standards and interoperability agreements in collaboration with the industry and other standards development organizations, and is promoting the development of a full ecosystem around USR applications (including certification programs) to ensure widespread interoperability.

To learn more about the USR Alliance visit: www.usr-alliance.org

October 19th, 2017

Celebrating 20 Years of Wi-Fi – Part III

By Prabhu Loganathan, Senior Director of Marketing for Connectivity Business Unit, Marvell

Standardized in 1997, Wi-Fi has changed the way that we compute. Today, almost every one of us uses a Wi-Fi connection on a daily basis, whether it’s for watching a show on a tablet at home, using our laptops at work, or even transferring photos from a camera. Millions of Wi-Fi-enabled products are being shipped each week, and it seems this technology is constantly finding its way into new device categories.

Since its humble beginnings, Wi-Fi has progressed at a rapid pace. While the initial standard allowed for just 2 Mbit/s data rates, today’s Wi-Fi implementations support speeds on the order of gigabits per second. This last installment in our three-part blog series covering the history of Wi-Fi looks at what is next for the wireless standard.

Gigabit Wireless

The latest 802.11 wireless technology to be adopted at scale is 802.11ac. It extends 802.11n, enabling improvements specifically in the 5 GHz band, with 802.11n technology used in the 2.4 GHz band for backwards compatibility.

By sticking to the 5 GHz band, 802.11ac is able to benefit from a huge 160 MHz channel bandwidth, which would be impossible in the already crowded 2.4 GHz band. In addition, beamforming and support for up to 8 MIMO streams raise the speeds that can be supported. Depending on configuration, data rates can range from a minimum of 433 Mbit/s to multiple gigabits in cases where both the router and the end-user device have multiple antennas.
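
For readers curious where figures like 433 Mbit/s come from, the short Python sketch below reproduces the standard 802.11ac PHY-rate arithmetic: data subcarriers × bits per subcarrier × coding rate × spatial streams, divided by the OFDM symbol time. The parameter values reflect commonly published 802.11ac MCS tables and are included here for illustration.

    # 802.11ac (VHT) PHY-rate arithmetic. Parameter values are taken from commonly
    # published MCS tables and are illustrative.
    def vht_rate_mbps(data_subcarriers: int, bits_per_subcarrier: int,
                      coding_rate: float, streams: int, symbol_time_us: float) -> float:
        bits_per_symbol = data_subcarriers * bits_per_subcarrier * coding_rate * streams
        return bits_per_symbol / symbol_time_us   # bits per microsecond == Mbit/s

    # 80 MHz channel, 256-QAM (8 bits/subcarrier), rate-5/6 coding, short guard interval
    print(vht_rate_mbps(234, 8, 5/6, streams=1, symbol_time_us=3.6))  # ~433 Mbit/s
    print(vht_rate_mbps(234, 8, 5/6, streams=4, symbol_time_us=3.6))  # ~1733 Mbit/s

    # 160 MHz channel with 8 spatial streams approaches the ~6.9 Gbit/s 802.11ac maximum
    print(vht_rate_mbps(468, 8, 5/6, streams=8, symbol_time_us=3.6))  # ~6933 Mbit/s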

If that’s not fast enough, the even more cutting-edge 802.11ad standard (which is now starting to appear on the market) uses 60 GHz ‘millimeter wave’ frequencies to achieve data rates of up to 7 Gbit/s, even without MIMO. The major catch is that at 60 GHz frequencies, wireless range and penetration are greatly reduced.

Looking Ahead

Now that we’ve achieved Gigabit speeds, what’s next? Besides high speeds, the IEEE 802.11 working group has recognized that low speed, power efficient communication is in fact also an area with a great deal of potential for growth. While Wi-Fi has traditionally been a relatively power-hungry standard, the upcoming protocols will have attributes that allow Wi-Fi to target areas like the Internet of Things (IoT) market with much more energy-efficient communication.

20 Years and Counting

Although it has been around for two whole decades as a standard, Wi-Fi has managed to constantly evolve and keep up with the times. From the dial-up era to broadband adoption, to smartphones and now as we enter the early stages of IoT, Wi-Fi has kept on developing new technologies to adapt to the needs of the market. If history can be used to give us any indication, then it seems certain that Wi-Fi will remain with us for many years to come.

October 11th, 2017

Bringing IoT Intelligence to the Enterprise Edge by Supporting Google Cloud IoT Core Public Beta on ESPRESSObin and MACCHIATObin Community Platforms

By Aviad Enav Zagha, Sr. Director Embedded Processors Product Line Manager, Networking Group, Marvell

Though the projections made by market analysts still differ to a considerable degree, there is little doubt about the huge future potential that implementation of Internet of Things (IoT) technology has within an enterprise context. It is destined to lead to billions of connected devices being in operation, all sending captured data back to the cloud, from which analysis can be undertaken or actions initiated. This will make existing business/industrial/metrology processes more streamlined and allow a variety of new services to be delivered.

With large numbers of IoT devices to deal with in any given enterprise network, the challenges of efficiently and economically managing them all without latency issues, while ensuring that elevated levels of security are upheld, are going to prove daunting. In order to put the least possible strain on cloud-based resources, we believe the best approach is to distribute some intelligence outside the core and place it at the enterprise edge, rather than following a purely centralized model. This arrangement places computing functionality much nearer to where the data is being acquired and makes responding to it considerably easier. IoT devices will then have a local edge hub that can reduce the overhead of real-time communication over the network. Rather than relying on cloud servers far away from the connected devices to take care of the ‘heavy lifting’, these activities can be done closer to home. Deterministic operation is maintained thanks to lower latency, bandwidth is conserved (thus saving money), and the likelihood of data corruption or security breaches is dramatically reduced.

Sensors and data collectors in the enterprise, industrial and smart city segments are expected to generate more than 1 GB of information per day, some of it needing a response within a matter of seconds. Therefore, in order for the network to accommodate this large amount of data, computing functionality will migrate from the cloud to the network edge, forming a new market of edge computing.

In order to accelerate the widespread propagation of IoT technology within the enterprise environment, Marvell now supports the multifaceted Google Cloud IoT Core platform. Cloud IoT Core is a fully managed service through which devices can be managed and securely connected at the large scales that will be characteristic of most IoT deployments.

Through its IoT enterprise edge gateway technology, Marvell is able to provide the necessary networking and compute capabilities (as well as the prospect of localized storage) to act as a mediator between the connected devices within the network and the related cloud functions. By providing the control element needed, as well as collecting real-time data from IoT devices, the IoT enterprise gateway technology serves as a key consolidation point for interfacing with the cloud and also has the ability to temporarily control managed devices if an event occurs that makes cloud services unavailable. In addition, the IoT enterprise gateway can perform the role of a proxy manager for lightweight, rudimentary IoT devices that (in order to keep power consumption and unit cost down) may not possess any intelligence.

Through the introduction of advanced ARM®-based community platforms, Marvell is able to facilitate enterprise implementations using Cloud IoT Core. The recently announced Marvell MACCHIATObin™ and Marvell ESPRESSObin™ community boards support open source applications, local storage and networking facilities. At the heart of each of these boards is Marvell’s high performance ARMADA® system-on-chip (SoC), which supports Google Cloud IoT Core Public Beta.

Via Cloud IoT Core, along with other related Google Cloud services (including Pub/Sub, Dataflow, Bigtable, BigQuery, Data Studio), enterprises can benefit from an all-encompassing IoT solution that addresses the collection, processing, evaluation and visualization of real-time data in a highly efficient manner. Cloud IoT Core features certificate-based authentication and transport layer security (TLS), plus an array of sophisticated analytical functions.
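
To give a feel for what connecting a device looks like in practice, below is a minimal Python sketch of a device authenticating to the Cloud IoT Core MQTT bridge with a JWT signed by its private key and publishing one telemetry message. The project, region, registry and device names are placeholders, and the endpoint, client ID and topic formats follow Google’s public documentation as we understand it; please verify the details against the current Cloud IoT Core documentation.

    # Minimal device-side sketch for the Cloud IoT Core MQTT bridge (paho-mqtt 1.x API).
    # All names below are placeholders; verify endpoint/topic details against Google's docs.
    import datetime
    import jwt                      # PyJWT
    import paho.mqtt.client as mqtt

    PROJECT, REGION = "my-gcp-project", "us-central1"
    REGISTRY, DEVICE = "edge-gateway-registry", "espressobin-01"

    def make_jwt(project_id: str, private_key_path: str) -> str:
        """Create the short-lived JWT that the MQTT bridge accepts as the password."""
        now = datetime.datetime.utcnow()
        claims = {"iat": now, "exp": now + datetime.timedelta(minutes=60), "aud": project_id}
        with open(private_key_path, "r") as f:
            return jwt.encode(claims, f.read(), algorithm="RS256")

    client_id = (f"projects/{PROJECT}/locations/{REGION}/"
                 f"registries/{REGISTRY}/devices/{DEVICE}")
    client = mqtt.Client(client_id=client_id)
    client.username_pw_set(username="unused",   # the bridge ignores the username
                           password=make_jwt(PROJECT, "rsa_private.pem"))
    client.tls_set()                            # TLS is mandatory

    client.connect("mqtt.googleapis.com", 8883)
    client.loop_start()
    client.publish(f"/devices/{DEVICE}/events", payload=b'{"temp_c": 21.5}', qos=1)
    client.loop_stop()
    client.disconnect()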

Over time, the enterprise edge is going to become more intelligent. Consequently, mediation between IoT devices and the cloud will be needed, as will cost-effective processing and management. With the combination of Marvell’s proprietary IoT gateway technology and Google Cloud IoT Core, it is now possible to migrate a portion of network intelligence to the enterprise edge, leading to various major operational advantages.

Please visit MACCHIATObin Wiki and ESPRESSObin Wiki for instructions on how to connect to Google’s Cloud IoT Core Public Beta platform.