Building Data Infrastructure for the Future

Archive for the 'Storage' Category

  • August 17, 2023

    Marvell Bravera SC5 SSD Controller Wins FMS 2023 Best of Show Award

    By Kristin Hehir, Senior Manager, PR and Marketing, Marvell

     

    Marvell and Memblaze were honored with the “Most Innovative Customer Implementation” award at the Flash Memory Summit (FMS), the industry’s largest conference featuring flash memory and other high-speed memory technologies, last week.
    Powered by the Marvell® Bravera™ SC5 controller, Memblaze developed the PBlaze 7 7940 GEN5 SSD family, delivering an impressive 2.5 times the performance and 1.5 times the power efficiency of conventional PCIe 4.0 SSDs, with ~55/9µs read/write latency1. This makes the SSD ideal for business-critical applications and high-performance workloads like machine learning and cloud computing. In addition, Memblaze utilized the innovative sustainability features of Marvell’s Bravera SC5 controllers for greater resource efficiency, reduced environmental impact, and streamlined development efforts and inventory management.

  • June 13, 2023

    FC-NVMe Goes Mainstream for HPE Next-Generation Block Storage

    By Todd Owens, Field Marketing Director, Marvell

    While Fibre Channel (FC) has been around for a couple of decades now, the Fibre Channel industry continues to develop the technology in ways that keep it in the forefront of the data center for shared storage connectivity. Always a reliable technology, continued innovations in performance, security and manageability have made Fibre Channel I/O the go-to connectivity option for business-critical applications that leverage the most advanced shared storage arrays.

    A recent development that highlights the progress and significance of Fibre Channel is Hewlett Packard Enterprise’s (HPE) recent announcement of the latest offering in its Storage as a Service (SaaS) lineup with 32Gb Fibre Channel connectivity. HPE GreenLake for Block Storage MP powered by HPE Alletra Storage MP hardware features a next-generation platform connected to the storage area network (SAN) using either traditional SCSI-based FC or NVMe over FC connectivity. This innovative solution not only provides customers with highly scalable capabilities but also delivers cloud-like management, allowing HPE customers to consume block storage any way they desire – own and manage, outsource management, or consume on demand.

    HPE GreenLake for Block Storage powered by Alletra Storage MP

    At launch, HPE is providing FC connectivity for this storage system to the host servers and supporting both FC-SCSI and native FC-NVMe. HPE plans to provide additional connectivity options in the future, but the fact they prioritized FC connectivity speaks volumes of the customer demand for mature, reliable, and low latency FC technology.

  • January 04, 2023

    Software-Defined Networking for the Software-Defined Vehicle

    By Amir Bar-Niv, VP of Marketing, Automotive Business Unit, Marvell and John Heinlein, Chief Marketing Officer, Sonatus and Simon Edelhaus, VP SW, Automotive Business Unit, Marvell

    The software-defined vehicle (SDV) is one of the newest and most interesting megatrends in the automotive industry. As we discussed in a previous blog, the reason that this new architectural—and business—model will be successful is the advantages it offers to all stakeholders:

    • The OEMs (car manufacturers) will gain new revenue streams from aftermarket services and new applications;
    • The car owners will easily upgrade their vehicle features and functions; and
    • The mobile operators will profit from increased vehicle data consumption driven by new applications.

    What is a software-defined vehicle? While there is no official definition, the term reflects the change in the way software is being used in vehicle design to enable flexibility and extensibility. To better understand the software-defined vehicle, it helps to first examine the current approach.

    Today’s electronic control units (ECUs) that manage car functions do include software; however, the software in each ECU is often incompatible with, and isolated from, the other modules. When updates are required, the vehicle owner must visit the dealer service center, which inconveniences the owner and is costly for the manufacturer.

  • November 28, 2022

    A Truly Great Hack – Winning Over SONiC Users

    By Kishore Atreya, Director of Product Management, Marvell

    Recently the Linux Foundation hosted its annual ONE Summit for open networking, edge projects and solutions. For the first time, this year’s event included a “mini-summit” for SONiC, an open source networking operating system targeted for data center applications that’s been widely adopted by cloud customers. A variety of industry members gave presentations, including Marvell’s very own Vijay Vyas Mohan, who presented on the topic of Extensible Platform Serdes Libraries. In addition, the SONiC mini-summit included a hackathon to motivate users and developers to innovate new ways to solve customer problems. 

    So, what could we hack?

    At Marvell, we believe that SONiC has utility not only for the data center, but to enable solutions that span from edge to cloud. Because it’s a data center NOS, SONiC is not optimized for edge use cases. It requires an expensive bill of materials to run, including a powerful CPU, a minimum of 8 to 16GB DDR, and an SSD. In the data center environment, these HW resources contribute less to the BOM cost than do the optics and switch ASIC. However, for edge use cases with 1G to 10G interfaces, the cost of the processor complex, primarily driven by the NOS, can be a much more significant contributor to overall system cost. For edge disaggregation with SONiC to be viable, the hardware cost needs to be comparable to that of a typical OEM-based solution. Today, that’s not possible.

  • October 26, 2022

    Tasting Notes for 64G Fibre Channel

    By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell

    While age is just a number, and so is a new speed for Fibre Channel (FC), the number itself is often irrelevant – it’s the maturity that matters, kind of like a bottle of wine! Today, as we make a toast to the data center and pop open (announce) the Marvell® QLogic® 2870 Series 64G Fibre Channel HBAs, take a glass and sip your way into its maturity to find notes of trust and reliability alongside operational simplicity, in-depth visibility, and consistent performance.

    Big words on the label? I will let you be the sommelier as you work through your glass and my writings.

    Marvell QLogic 2870 series 64GFC HBAs

  • October 12, 2022

    The Evolution of Cloud Storage and Memory

    By Gary Kotzur, CTO, Storage Products Group, Marvell and Jon Haswell, SVP, Firmware, Marvell

    The nature of storage is changing much more rapidly than it ever has historically. This evolution is being driven by expanding amounts of enterprise data and the inexorable need for greater flexibility and scale to meet ever-higher performance demands.

    If you look back 10 or 20 years, there used to be a one-size-fits-all approach to storage. Today, however, there is the public cloud, the private cloud, and the hybrid cloud, which is a combination of both. All these clouds have different storage and infrastructure requirements. What’s more, the data center infrastructure of every hyperscaler and cloud provider is architecturally different and is moving towards a more composable architecture. All of this is driving the need for highly customized cloud storage solutions as well as demanding the need for a comparable solution in the memory domain.

  • September 26, 2022

    SONiC: It’s Not Just for Switches Anymore

    By Amit Sanyal, Senior Director, Product Marketing, Marvell

    SONiC (Software for Open Networking in the Cloud) has steadily gained momentum as a cloud-scale network operating system (NOS) by offering a community-driven approach to NOS innovation. In fact, 650 Group predicts that revenue for SONiC hardware, controllers and OSs will grow from around US$2 billion today to around US$4.5 billion by 2025. 

    Those using it know that the SONiC open-source framework shortens software development cycles; and SONiC’s Switch Abstraction Interface (SAI) provides ease of porting and a homogeneous edge-to-cloud experience for data center operators. It also speeds time-to-market for OEMs bringing new systems to the market.

    The bottom line: more choice is good when it comes to building disaggregated networking hardware optimized for the cloud. Over recent years, SONiC-using cloud customers have benefited from consistent user experience, unified automation, and software portability across switch platforms, at scale.

    As the utility of SONiC has become evident, other applications are lining up to benefit from this open-source ecosystem.

    A SONiC Buffet: Extending SONiC to Storage

    SONiC capabilities in Marvell’s cloud-optimized switch silicon include high availability (HA) features, RDMA over converged ethernet (RoCE), low latency, and advanced telemetry. All these features are required to run robust storage networks.

    Here’s one use case: EBOF. The capabilities above form the foundation of Marvell’s Ethernet-Bunch-of-Flash (EBOF) storage architecture. By disaggregating storage from compute, the new EBOF architecture addresses the non-storage bottlenecks that constrain the performance of the traditional Just-a-Bunch-of-Flash (JBOF) architecture it replaces.

    EBOF architecture replaces the bottleneck components found in JBOF – CPUs, DRAM and SmartNICs – with an Ethernet switch, and it’s here that SONiC is added to the plate. Marvell has, for the first time, applied SONiC to storage, specifically for services enablement, including the NVMe-oF™ (NVM Express over Fabrics) discovery controller, and out-of-band management for EBOF using Redfish® management. This implementation is in production today on the Ingrasys ES2000 EBOF storage solution. (For more on this topic, check out this, this, and this.)

    Marvell has now extended SONiC NOS to enable storage services, thus bringing the benefits of disaggregated open networking to the storage domain.

    OK, tasty enough, but what about compute?

    How Would You Like Your Arm Prepared?

    I prefer Arm for my control plane processing, you say. Why can’t I manage those switch-based processors using SONiC, too, you ask? You’re in luck. For the first time, SONiC is the OS for Arm-based, embedded control plane processors, specifically the control plane processors found on Marvell® Prestera® switches. SONiC-enabled Arm processing allows SONiC to run on lower-cost 1G systems, reducing the bill-of-materials, power, and total cost of ownership for both management and access switches.

    In addition to embedded processors, with the OCTEON® family, Marvell offers a smorgasbord of Arm-based processors. These can be paired with Marvell switches to bring the benefits of the Arm ecosystem to networking, including Data Processing Units (DPUs) and SmartNICs.

    By combining SONiC with Arm processors, we’re setting the table for the broad Arm software ecosystem - which will develop applications for SONiC that can benefit both cloud and enterprise customers.

    The Third Course

    So, you’ve made it through the SONiC-enabled switching and on-chip control processing courses, but there’s something more you need to round out the meal. Something to reap the full benefit of your SONiC experience. PHY, of course. Whether your taste runs to copper or optical media, PAM or coherent modulation, Marvell provides a complete SONiC-enabled portfolio by offering SONiC with our (not baked) Alaska® Ethernet PHYs and optical modules built using Marvell DSPs.

    Room for Dessert?

    Finally, by enabling SONiC across the data center and enterprise switch portfolio we’re able to bring operators the enhanced telemetry and visibility capabilities that are so critical to effective service-level validation and troubleshooting. For more information on Marvell telemetry capabilities, check out this short video:

     

    The Drive Home

    Disaggregation has lowered the barrier-to-entry for market participants - unleashing new innovations from myriad hardware and software suppliers. By making use of SONiC, network designers can readily design and build disaggregated data center and enterprise networks.

    For its part, Marvell’s goal is simple: help realize the vision of an open-source standardized network operating system and accelerate its adoption.

  • August 05, 2022

    Marvell SSD Controller Wins FMS 2022 Best of Show Award

    By Kristin Hehir, Senior Manager, PR and Marketing, Marvell


    FMS best of show award

    Flash Memory Summit (FMS), the industry’s largest conference featuring data storage and memory technology solutions, presented its 2022 Best of Show Awards at a ceremony held in conjunction with this week’s event. Marvell was named a winner alongside Exascend for the collaboration of Marvell’s edge and client SSD controller with Exascend’s high-performance memory card.

    Honored as the “Most Innovative Flash Memory Consumer Application,” the Exascend Nitro CFexpress card powered by Marvell’s PCIe® Gen 4, 4-NAND channel 88SS1321 SSD controller enables digital storage of ultra-HD video and photos in extreme temperature environments where ruggedness, endurance and reliability are critical. The Nitro CFexpress card is unique in controller, hardware and firmware architecture in that it combines Marvell’s 12nm process node, low-power, compact form factor SSD controller with Exascend’s innovative hardware design and Adaptive Thermal Control™ technology.

    The Nitro card is the highest-capacity VPG400 CFexpress card on the market, with up to 1 TB of storage, and is certified VPG400 by the CompactFlash® Association under its stringent Video Performance Guarantee Profile 4 (VPG400) qualification. Marvell’s 88SS1321 controller helps drive the Nitro card’s 1,850 MB/s sustained read and 1,700 MB/s sustained write for ultimate performance.
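    As a quick sanity check on what those sustained rates mean in practice, here is a back-of-the-envelope calculation (a sketch assuming the card's full 1 TB capacity and perfectly sustained throughput, which real workloads will not maintain):

```python
# Back-of-the-envelope: time to fill or read a 1 TB card at the quoted
# sustained rates (1 TB = 10**12 bytes; rates in MB/s = 10**6 bytes/s).
CAPACITY_BYTES = 1_000_000_000_000  # 1 TB

def seconds_to_transfer(capacity_bytes: int, rate_mb_per_s: int) -> float:
    """Ideal transfer time, ignoring file-system and thermal overhead."""
    return capacity_bytes / (rate_mb_per_s * 1_000_000)

read_s = seconds_to_transfer(CAPACITY_BYTES, 1850)   # sustained read
write_s = seconds_to_transfer(CAPACITY_BYTES, 1700)  # sustained write

print(f"Full-card read:  {read_s / 60:.1f} minutes")   # ~9.0 minutes
print(f"Full-card write: {write_s / 60:.1f} minutes")  # ~9.8 minutes
```

    In other words, sustained throughput at these rates can move the card's entire capacity in under ten minutes.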

    “Consumer applications, such as high-definition photography and video capture using professional photography and cinema cameras, require the highest performance from their storage solution. They also require the reliability to address the dynamics of extreme environmental conditions, both indoors and outdoors,” said Jay Kramer, Chairman of the Awards Program and President of Network Storage Advisors Inc. “We are proud to recognize the collaboration of Marvell’s SSD controllers with Exascend’s memory cards, delivering 1,850 MB/s of sustained read and 1,700 MB/s sustained write for ultimate performance addressing the most extreme consumer workloads. Additionally, Exascend’s Adaptive Thermal Control™ technology provides an IP67 certified environmental hardening that is dustproof, water resistant and tackles the issue of overheating and thermal throttling.”

    More information on the 2022 Flash Memory Summit Best of Show Award Winners can be found here.

  • December 06, 2021

    Marvell and Ingrasys Team Up to Power Ceph Clusters with EBOF in Data Centers

    By Khurram Malik, Senior Manager, Technical Marketing, Marvell

    A massive amount of data is being generated at the edge, in the data center and in the cloud, driving scale-out Software-Defined Storage (SDS) which, in turn, is enabling the industry to modernize data centers for large-scale deployments. Ceph is an open-source, distributed object storage and massively scalable SDS platform, contributed to by a wide range of major high-performance computing (HPC) and storage vendors. Ceph BlueStore back-end storage removes the Ceph cluster performance bottleneck by allowing users to store objects directly on raw block devices and bypass the file system layer, which is especially critical in boosting the adoption of NVMe SSDs in Ceph clusters. A Ceph cluster with EBOF provides a scalable, high-performance and cost-optimized solution and is a perfect use case for many HPC applications. Traditional data storage technology leverages special-purpose compute, networking, and storage hardware to optimize performance and requires proprietary software for management and administration. As a result, IT organizations can neither scale out easily nor deploy petabyte- or exabyte-scale data storage affordably from a CAPEX and OPEX perspective.
    Ingrasys (subsidiary of Foxconn) is collaborating with Marvell to introduce an Ethernet Bunch of Flash (EBOF) storage solution which truly enables scale-out architecture for data center deployments. EBOF architecture disaggregates storage from compute and provides limitless scalability, better utilization of NVMe SSDs, and deploys single-ported NVMe SSDs in a high-availability configuration on an enclosure level with no single point of failure.

    Power Ceph Cluster with EBOF in Data Centers

    Ceph is deployed on commodity hardware and built on multi-petabyte storage clusters. It is highly flexible due to its distributed nature. EBOF use in a Ceph cluster enables added storage capacity to scale up and scale out at an optimized cost and facilitates high-bandwidth utilization of SSDs. A typical rack-level Ceph solution includes a networking switch for client and cluster connectivity; a minimum of 3 monitor nodes per cluster for high availability and resiliency; and Object Storage Daemon (OSD) hosts for data storage, replication, and data recovery operations. Traditionally, Ceph recommends a minimum of 3 replicas to distribute copies of the data and ensure that the copies are stored on different storage nodes, but this results in lower usable capacity and consumes more bandwidth. Another challenge is that data redundancy and replication are compute-intensive and add significant latency. To overcome these challenges, Ingrasys has introduced a more efficient Ceph cluster rack developed with management software – the Ingrasys Composable Disaggregate Infrastructure (CDI) Director.
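    The capacity and bandwidth cost of 3x replication mentioned above is easy to quantify. A minimal sketch (the raw capacity figure and the 4+2 erasure-coding comparison are illustrative assumptions, not Ingrasys or Ceph project figures):

```python
# Sketch: usable capacity under Ceph's default 3-replica pool versus an
# illustrative 4+2 erasure-coded pool.
def usable_capacity_replicated(raw_tb: float, replicas: int = 3) -> float:
    # Every object is stored 'replicas' times on different OSD nodes.
    return raw_tb / replicas

def usable_capacity_ec(raw_tb: float, k: int = 4, m: int = 2) -> float:
    # k data chunks + m coding chunks per object.
    return raw_tb * k / (k + m)

raw = 1000.0  # TB of raw NVMe capacity in the rack (illustrative)
print(f"3-replica usable: {usable_capacity_replicated(raw):.0f} TB")  # 333 TB
print(f"4+2 EC usable:    {usable_capacity_ec(raw):.0f} TB")          # 667 TB
# Client writes are also amplified 3x (replication) vs 1.5x (4+2 EC) on
# the cluster network - the bandwidth cost the post refers to.
```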

  • January 11, 2021

    By Todd Owens, Field Marketing Director, Marvell

    While there are 32GB micro-SD and USB boot device options available today – and VMware requires as much as 128GB of storage for the OS while Microsoft Storage Spaces Direct needs 200GB – these solutions simply don’t have the storage capacity needed. Using hardware RAID controllers and disk drives in the server bays is another option. However, this adds significant cost and complexity to a server configuration just to meet the OS requirement. The proper solution for separating the OS from user data is the HPE NS204i-p NVMe OS Boot Device.
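    The sizing gap described above is simple to check. A minimal sketch using the capacities quoted in the post (the dictionary labels are just illustrative names):

```python
# Can a 32 GB boot device hold the OS footprint? Capacities in GB,
# using the figures quoted in the post.
OS_FOOTPRINT_GB = {
    "VMware (OS storage requirement)": 128,
    "Microsoft Storage Spaces Direct": 200,
}
BOOT_DEVICE_GB = 32  # typical micro-SD / USB boot option

for os_name, needed in OS_FOOTPRINT_GB.items():
    fits = BOOT_DEVICE_GB >= needed
    print(f"{os_name}: needs {needed} GB -> fits on 32 GB device? {fits}")
# Neither fits, which is why a dedicated NVMe boot device makes sense.
```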

  • November 12, 2020

    By Lindsey Moore, Marketing Coordinator, Marvell

    Marvell wins FMS Award for Most Innovative Technology

    Flash Memory Summit, the industry's largest trade show dedicated to flash memory and solid-state storage technology, presented its 2020 Best of Show Awards yesterday in a virtual ceremony. Marvell, alongside Hewlett Packard Enterprise (HPE), was named a winner for "Most Innovative Flash Memory Technology" in the controller/system category for the Marvell NVMe RAID accelerator in the HPE OS Boot Device.

    Last month, Marvell introduced the industry’s first native NVMe RAID 1 accelerator, a state-of-the-art technology for virtualized, multi-tenant cloud and enterprise data center environments which demand optimized reliability, efficiency, and performance. HPE is the first of Marvell's partners to support the new accelerator in the HPE NS204i-p NVMe OS Boot Device offered on select HPE ProLiant servers and HPE Apollo systems. The solution lowers data center total cost of ownership (TCO) by offloading RAID 1 processing from costly and precious server CPU resources, maximizing application processing performance.

  • October 27, 2020

    By Shahar Noy, Senior Director, Product Marketing

    You are an avid gamer. You spend countless hours in forums deciding between ASUS TUF components and researching the Radeon RX 500 or GeForce RTX 20, to ensure games show at their best on your hard-earned PC gaming rig. You made your selection and can’t stop bragging about your system’s ray-tracing capabilities and how realistic the “Forza Motorsport 7” view from your McLaren F1 GT cockpit is as you drive through the legendary Le Mans circuit at dusk. You are very proud of your machine, and the year 2020 is turning out to be good: Microsoft finally launched the gorgeous-looking “Flight Simulator 2020,” and CD Projekt just announced that the beloved, award-winning “The Witcher 3” is about to get an upgrade to take advantage of the myriad hardware updates available to serious gamers like you. You have your dream system in hand and life can’t be better.

  • August 03, 2018

    Building an Infrastructure Powerhouse: Marvell and Cavium Join Forces!

    By Todd Owens, Field Marketing Director, Marvell

    Marvell and Cavium

    Marvell completed its acquisition of Cavium on July 6, 2018, and the integration is proceeding smoothly. Cavium will be fully integrated into Marvell. As one company, our shared mission is to develop and deliver semiconductor solutions that process, move, store and secure the world’s data faster and more securely. Together, the combined company is an infrastructure powerhouse, serving customers in cloud/data center, enterprise/campus, service provider, SMB/SOHO, industrial and automotive markets.

    infrastructure powerhouse

    When it comes to doing business with HPE, the first thing to know is that it is business as usual. For the I/O and processor technology supplied to HPE, the partners you worked with before the acquisition remain the same. Marvell is a leading supplier of storage technology – including very high-speed read channels, high-performance processors and transceivers – found in the vast majority of the hard disk drive (HDD) and solid-state drive (SSD) modules used in today’s HPE ProLiant and HPE Storage products.

    Our industry-leading QLogic® 8/16/32Gb Fibre Channel and FastLinQ® 10/20/25/50Gb Ethernet I/O technology will continue to power HPE server connectivity and storage solutions. These products remain a key part of HPE’s intelligent I/O portfolio and are known for outstanding performance, flexibility and reliability.

       

    Marvell FastLinQ Ethernet and QLogic Fibre Channel I/O adapter portfolio

    We will continue to deliver ThunderX2® Arm® processor technology for HPC servers, such as the HPE Apollo 70 for high-performance computing applications. We will also continue to deliver the Ethernet networking technology embedded in HPE Servers and Storage, as well as the Marvell ASIC technology used in the iLO5 baseboard management controller (BMC) in all HPE ProLiant and HPE Synergy Gen10 servers.

      iLO 5 for HPE ProLiant Gen10 runs on a Marvell SoC


    This all sounds great, but what will actually change as a result of the merger?

    Today, the combined company has a much broader technology portfolio that will help HPE deliver differentiated solutions at the edge, in the network and in the data center.

    Marvell has industry-leading switch technology, scaling from 1GbE to 100GbE and beyond. This allows us to provide connectivity from the IoT edge all the way to the data center and cloud. Our Intelligent NIC technology delivers compression, encryption and more, enabling customers to analyze network traffic faster and smarter than ever before. Our security solutions and enhanced SoC and processor capabilities will help our HPE-focused design teams work with HPE to innovate the next generation of servers and solutions.

    As the integration progresses, you will notice changes in branding and in where information is located. Specific product brands will remain – including Arm ThunderX2, QLogic for Fibre Channel and FastLinQ for Ethernet – but most products will transition from the Cavium brand to the Marvell brand. Our web resources and email addresses are changing as well. For example, you can now access the HPE microsite at www.marvell.com/hpe, and you will soon be able to reach us at hpesolutions@marvell.com. The collateral you use today will be updated over time: the HPE-specific NIC and HPE Ethernet quick-reference guides, the Fibre Channel quick-reference guide and the presentation materials have already been updated, and updates will continue over the coming months.

    In short, we keep growing and getting better. As one team with outstanding technology, we are more focused than ever on helping HPE and its partners and customers grow. Contact us today to learn more – you can find the right contact for your area here. We are excited about this new beginning and our strengthened position: “critical to I/O and infrastructure!”

  • April 05, 2018

    VMware vSAN ReadyNode Recipes Can Use Substitutions

    By Todd Owens, Field Marketing Director, Marvell

    When you are baking a cake, you sometimes substitute in different ingredients to make the result better. The same can be done with VMware vSAN ReadyNode configurations, or recipes. Some changes to the documented configurations can make the end solution much more flexible and scalable. In this VMware blog, the author outlines the server elements in the bill of materials (BOM) that can change, including:
    • CPU
    • Memory
    • Caching Tier
    • Capacity Tier
    • NIC
    • Boot Device
    However, changes can only be made with devices that are certified as supported by VMware. The list of certified I/O devices can be found in the VMware vSAN Compatibility Guide, and the full portfolio of NICs, FlexFabric Adapters and Converged Network Adapters from HPE and Cavium is supported. If we zero in on the HPE recipes for vSAN ReadyNode configurations, here are the substitutions you can make for I/O adapters. OK, so we know what substitutions we can make in these vSAN storage solutions. What are the benefits to the customer of making this change? There are several benefits to the HPE/Cavium technology compared to the other adapter offerings.
    • HPE 520/620 Series adapters support Universal RDMA – the ability to support both RoCE and IWARP RDMA protocols with the same adapter.
      • NPAR (Network Partitioning) allows for virtualization of the physical adapter port. SR-IOV offload moves management of the VM network from the hypervisor (CPU) to the adapter. With HPE/Cavium adapters, these two technologies can work together to optimize connectivity for virtual server environments and offload the hypervisor (and thus the CPU) from managing VM traffic, while providing full Quality of Service at the same time.
    • Offloads in general – In addition to RDMA, Storage and SR-IOV Offloads mentioned above, HPE/Cavium Ethernet adapters also support TCP/IP Stateless Offloads and DPDK small packet acceleration offloads as well. Each of these offloads moves work from the CPU to the adapter, reducing the CPU utilization associated with I/O activity. As mentioned in my previous blog, because these offloads bypass tasks in the O/S Kernel, they also mitigate any performance issues associated with Spectre/Meltdown vulnerability fixes on X86 systems.
    • Adapter Management integration with vCenter – All HPE/Cavium Ethernet adapters are managed by Cavium’s QCC utility which can be fully integrated into VMware v-Center. This provides a much simpler approach to I/O management in vSAN configurations.
    In summary, if you are looking to deploy vSAN ReadyNode, you might want to fit in a substitution or two on the I/O front to take advantage of all the intelligent capabilities available in Ethernet I/O adapters from HPE/Cavium.
  • March 02, 2018

    Connecting Shared Storage – iSCSI or Fibre Channel

    By Todd Owens, Field Marketing Director, Marvell

    One of the questions we get quite often is which protocol is best for connecting servers to shared storage?

    What customers need to do is look at the reality of what they need from a shared storage environment and make a decision based on cost, performance and manageability.

      List of Hewlett Packard Enterprise (HPE) component prices. Notes: 1. An optical transceiver is needed at both the adapter and switch ports for 10GbE networks, so cost per port is twice the transceiver cost. 2. FC switch pricing includes full-featured management software and licenses. 3. FC Host Bus Adapters (HBAs) ship with transceivers, so only one additional transceiver is needed for the switch port.

    So if we do the math, the cost per port looks like this: 

    10GbE iSCSI with SFP+ Optics = $437 + $2,734 + $300 = $3,471

    10GbE iSCSI with 3 meter Direct Attach Cable (DAC) = $437 + $269 + $300 = $1,006

    16GFC with SFP+ Optics = $773 + $405 + $1,400 = $2,578

    Note, in my example, I chose 3 meter cable length, but even if you choose shorter or longer cables (HPE supports from 0.65 to 7 meter cable lengths), this is still the lowest cost connection option. Surprisingly, the cost of the 10GbE optics makes the iSCSI solution with optical connections the most expensive configuration.
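    The per-port totals above are simply adapter + cable/transceiver + switch-port sums. A minimal sketch reproducing the arithmetic (prices are the HPE list prices quoted in the comparison above):

```python
# Cost-per-port arithmetic from the comparison above (USD list prices).
def cost_per_port(adapter: int, cable_or_optic: int, switch_port: int) -> int:
    return adapter + cable_or_optic + switch_port

iscsi_optics = cost_per_port(437, 2734, 300)   # 10GbE iSCSI, SFP+ optics
iscsi_dac    = cost_per_port(437, 269, 300)    # 10GbE iSCSI, 3 m DAC
fc_16g       = cost_per_port(773, 405, 1400)   # 16GFC, SFP+ optics

print(iscsi_optics, iscsi_dac, fc_16g)  # 3471 1006 2578
```

    Laying the sums out this way makes the surprise obvious: the optics, not the protocol, dominate the 10GbE iSCSI cost.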

    It really comes down to distance and the number of connections required. The DAC cables can only span up to 7 meters or so. That means customers have only limited reach within or across racks. If customers have multiple racks or distance requirements of more than 7 meters, FC becomes the more attractive option, from a cost perspective.

    • Latency is an order of magnitude lower for FC compared to iSCSI. Latency of Brocade Gen 5 (16Gb) FC switching (using cut-through switch architecture) is in the 700 nanoseconds range and for 10GbE it is in the range of 5 to 50 microseconds. The impact of latency gets compounded with iSCSI should the user implement 10GBASE-T connections in the iSCSI adapters. This adds another significant hit to the latency equation for iSCSI.
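    To see how the figures quoted above support the "order of magnitude" claim, a quick calculation of the ratios (a sketch using only the numbers from the bullet above):

```python
# Latency comparison from the figures above.
FC_SWITCH_LATENCY_S = 700e-9           # Brocade Gen 5 FC, cut-through: ~700 ns
ISCSI_LATENCY_RANGE_S = (5e-6, 50e-6)  # 10GbE iSCSI: 5 to 50 microseconds

low, high = (t / FC_SWITCH_LATENCY_S for t in ISCSI_LATENCY_RANGE_S)
print(f"iSCSI latency is roughly {low:.0f}x to {high:.0f}x FC switch latency")
# -> roughly 7x to 71x
```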

    Figure 1: Cavium’s iSCSI Hardware Offload IOPS Performance


    Keep in mind, Ethernet network management hasn’t really changed much. Network administrators create virtual LANs (vLANs) to separate network traffic and reduce congestion. These network administrators have a variety of tools and processes that allow them to monitor network traffic, run diagnostics and make changes on the fly when congestion starts to impact application performance.

    On the FC side, companies like Cavium and HPE have made significant improvements on the software side of things to simplify SAN deployment, orchestration and management. Technologies like fabric-assigned port worldwide name (FA-WWN) from Cavium and Brocade enable the SAN administrator to configure the SAN without having HBAs available and allow a failed server to be replaced without having to reconfigure the SAN fabric. Cavium and Brocade have also teamed up to improve the FC SAN diagnostics capability with Gen 5 (16Gb) Fibre Channel fabrics by implementing features such as Brocade ClearLink™ diagnostics, Fibre Channel ping (FC ping) and Fibre Channel traceroute (FC traceroute), link cable beacon (LCB) technology and more. HPE’s Smart SAN for HPE 3PAR provides the storage administrator the ability to zone the fabric and map the servers and LUNs to an HPE 3PAR StoreServ array from the HPE 3PAR StoreServ management console.

    In many enterprise environments, there are typically dozens of network administrators. In those same environments, there may be fewer than a handful of SAN administrators. Yes, there are lots of LAN-connected devices that need to be managed and monitored, but far fewer SAN-connected devices.

    Well, it depends. If application performance is the biggest criteria, it’s hard to beat the combination of bandwidth, IOPS and latency of the 16GFC SAN. If compatibility and commonality with existing infrastructures is a critical requirement, 10GbE iSCSI is a good option (assuming the 10GbE infrastructure exists in the first place). If security is a key concern, FC is the best choice. When is the last time you heard of a FC network being hacked into?

  • January 11, 2018

    Storing the World’s Data

    By Marvell PR Team

    In today’s data-centric world, data storage is the cornerstone on which everything is built, but how the data of the future will be stored is a widely debated question. What is clear is that data growth will continue its steep upward trajectory. An IDC report titled “Data Age 2025” states that the volume of data generated worldwide will grow at a near-exponential rate. By 2025, it is expected to exceed 163 zettabytes – almost 8 times today’s volume and nearly 100 times that of 2010. Growing use of cloud services, widespread deployment of Internet of Things (IoT) nodes, virtual and augmented reality applications, autonomous vehicles, machine learning and the rise of “big data” will all play important roles in the coming data-driven era.

    Looking ahead, building smart cities will call for highly sophisticated infrastructure to reduce traffic congestion, make utilities more efficient and improve the environment, among many other aims, pushing data volumes higher still. A large proportion of future data will need to be accessed in real time. That requirement will shape the technologies we adopt and where data is stored on the network. Important security considerations must be addressed as well.

    Consequently, to keep costs under control and operate as efficiently as possible, data centers and commercial enterprises will take a tiered approach to storage, placing data on the most appropriate medium to lower the associated cost. The choice of medium will depend on how frequently the data is accessed and how much latency can be tolerated. This will require the use of numerous different technologies to make it fully economically viable, with cost and performance being important factors.
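    The tiering principle described here – match the medium to access frequency and latency tolerance – can be sketched as a simple policy function (the thresholds and tier names are illustrative assumptions, not an industry standard):

```python
# Illustrative tier-selection policy: hot data goes to fast, expensive
# media; cold data to cheap, high-capacity media.
def choose_tier(accesses_per_day: float, max_latency_ms: float) -> str:
    if max_latency_ms < 1 or accesses_per_day > 1000:
        return "NVMe SSD"                     # hot: lowest latency, highest cost/TB
    if max_latency_ms < 20 or accesses_per_day > 10:
        return "SATA SSD / fast HDD"          # warm
    return "high-capacity HDD / archive"      # cold: cost-optimized

print(choose_tier(5000, 0.5))   # NVMe SSD
print(choose_tier(50, 10))      # SATA SSD / fast HDD
print(choose_tier(0.1, 500))    # high-capacity HDD / archive
```

    Real placement engines weigh many more factors (durability, egress cost, compliance), but the cost/latency trade-off above is the core of the decision.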

    There are many different storage media on the market today. Some are well established; others are still emerging. In some applications, hard disk drives (HDDs) are being replaced by solid-state drives (SSDs), and within the SSD space the migration from SATA to NVMe is enabling SSD technology to realize its full performance potential. HDD capacities continue to rise substantially, and improving overall price/performance keeps them attractive. Cloud computing guarantees enormous data storage demand, which means HDDs retain strong momentum in that space.

    Other forms of memory will emerge to help us meet the challenges of ever-growing storage demand. These include higher-capacity 3D stacked flash as well as entirely new technologies such as phase-change memory, which offers fast writes and longer endurance. With the advent of NVMe over Fabrics (NVMf), the outlook for high-bandwidth, ultra-low-latency, massively scalable SSD storage is bright.

    Marvell recognized early on that data storage would become increasingly important, and has made the area a focus ever since, becoming a leading supplier of HDD controllers and consumer-grade SSD controllers.

    Within just 18 months of launch, shipments of the Marvell 88SS1074 SATA SSD controller with NANDEdge™ error-correction technology surpassed 50 million units. With the award-winning 88NV11xx series of small-form-factor DRAM-less SSD controllers, built on a 28nm CMOS process, Marvell offers the market optimized, high-performance NVMe storage controller solutions for compact, streamlined handheld computing devices such as tablets and ultrabooks. These controllers support 1600 MB/s read speeds while consuming very little power, helping to conserve battery life. Marvell also offers solutions such as the 88SS1092 NVMe SSD controller, designed for new computing models that allow data centers to share stored data, dramatically reducing cost and improving performance efficiency.

    Data growth beyond anything previously imagined means more storage will be needed. Emerging applications and innovative technologies will drive entirely new ways to increase capacity, reduce latency and ensure security. Marvell provides the industry with a range of technologies to meet data storage needs, supporting both SSD and HDD implementations with all the accompanying interface types, from SAS and SATA through to PCIe and NVMe. To learn more about how Marvell products store the world’s data, visit www.marvell.com.

  • December 13, 2017

    Marvell NVMe DRAM-less SSD Controller Wins 2017 ACE Award

    By Sander Arts, Interim VP of Marketing, Marvell

    Leading representatives of the global technology sector gathered at the San Jose Convention Center last week to hear the results of this year’s Annual Creativity in Electronics (ACE) Awards. Organized by the prestigious electronics engineering publications EDN and EE Times, this celebrated awards program recognizes the most innovative products brought to market in the previous 12 months, along with the most visionary executives and the most promising start-ups. Winners in each category were chosen by a judging panel made up of the magazines’ editorial teams and several highly respected independent judges. The Marvell 88NV1160 high-performance controller for non-volatile memory express (NVMe), introduced earlier this year, beat strong competition from the likes of Diodes Inc. and Cypress Semiconductor to win the coveted Logic/Interface/Memory category. Marvell gained two further nominations at the awards, with the 98PX1012 Prestera PX Passive Intelligent Port Extender (PIPE) also featured in the Logic/Interface/Memory category, while the 88W8987xA automotive wireless combo SoC was among those cited in the Automotive category.

Designed for next-generation streamlined portable computing devices such as high-end tablets and ultrabooks, the 88NV1160 NVMe solid-state drive (SSD) controller delivers 1600MB/s read speeds while keeping power consumption at the extremely low levels (<1.2W) these devices require. Based on a low-power 28nm CMOS process, each controller IC incorporates a dual-core 400MHz Arm® Cortex®-R5 processor.

Through its use of host memory buffer technology, the 88NV1160 exhibits far lower latency than competing devices, which is what enables its elevated read speeds. By utilizing its embedded SRAM, the controller does not need to rely on external DRAM, thereby simplifying the memory controller implementation. This in turn substantially reduces the board space required and lowers the overall bill-of-materials cost.

The 88NV1160's NANDEdge™ low-density parity check error-correction functionality raises SSD endurance and ensures long-term system reliability throughout the end product's lifespan. The controller's built-in 256-bit AES encryption engine safeguards stored metadata against potential security breaches. Furthermore, the compact form factor of these DRAM-less ICs makes them well suited to multi-chip package integration.

Consumers now expect their portable electronics to offer greater computing resources so they can run the wealth of attractive new software applications coming onto the market, make full use of cloud-based services, and enjoy augmented reality and gaming. While delivering such functionality, these devices must also support longer battery life between charges to further enhance the user experience. Advanced ICs that pair strong processing performance with improved power efficiency are what customers need, and the 88NV1160 meets exactly that need. "We're excited to honor this robust group for their dedication to their craft and efforts in bettering the industry for years to come," said Nina Brown, Vice President of Events at UBM Americas. "The judging panel was given the difficult task of selecting winners from an incredibly talented group of finalists and we'd like to thank all of those participants for their amazing work and also honor their achievements. These awards aim to shine a light on the best in today's electronics realm and this group is the perfect example of excellence within both an important and complex industry."

  • October 17, 2017

Unleashing the Full Potential of Flash with NVMe

By Jeroen Dorgelo, Director of Strategy, Marvell Storage Group

Today's flash drives have a dirty little secret: many of them still use legacy interfaces. Although the SATA and SAS protocols have gone through several generations since they were first introduced, they are still based on decades-old design assumptions made for rotating magnetic platters. These legacy protocols have become a bottleneck that prevents today's SSDs from reaching their potential.

NVMe is a new storage interface standard designed specifically for SSDs. With its massively parallel architecture, it enables today's SSDs to deliver their full performance potential. Held back by price and compatibility, NVMe took some time to gain traction in the market, but it has now truly arrived.

Legacy Serial Technologies

SATA is still the most common storage interface today. Most hard drives, and increasingly most flash storage devices, connect over SATA. The latest generation of SATA, SATA III, has a 600MB/s bandwidth limit. While this is adequate for everyday consumer applications, it falls far short for enterprise servers. Even I/O-intensive consumer use cases, such as video editing, can run up against this limit.

SATA was originally released in 2000 as a serial replacement for the older PATA parallel interface standard. SATA uses the Advanced Host Controller Interface (AHCI), which provides a single command queue with a depth of 32 commands. This command-queue architecture is well suited to conventional rotating media, but is limiting when used with flash.

If SATA is the standard storage interface for consumer drives, SAS is more common in enterprise applications. Released originally in 2004, SAS is likewise a serial replacement for an older parallel standard, SCSI. Designed for enterprise applications, SAS storage is usually more expensive to implement than SATA, but it has significant advantages over SATA for data center use, such as longer cable lengths, multipath I/O and better error reporting. SAS also has a higher bandwidth limit of 1200MB/s.

While SAS supports a queue depth of 254 commands, far more than SATA's 32, it still supports only a single command queue, just like SATA. With its deeper queue and higher bandwidth limit, SAS delivers better overall performance than SATA, yet it remains far from an ideal interface for flash.

    NVMe - Massive Parallelism 

Introduced in 2011, NVMe was designed from the ground up for flash storage. Developed by a consortium of storage companies, its principal goal is to overcome the bottlenecks that SATA and SAS impose on flash performance.

Whereas SATA is limited to 600MB/s and SAS to 1200MB/s of bandwidth, as mentioned above, NVMe runs over the PCIe bus, and its bandwidth is theoretically limited only by PCIe bus speed. With current PCIe standards delivering 1GB/s or more per lane, and PCIe connections generally offering multiple lanes, bus speed almost never constrains an NVMe-based SSD.
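The per-lane arithmetic behind that claim is easy to check. A rough sketch (ignoring packet and protocol overhead above the line coding, so real-world throughput is somewhat lower):

```python
# Rough usable bandwidth of a PCIe link. Assumes 128b/130b line coding,
# as used by PCIe 3.0 and later; protocol overhead above line coding
# (TLP headers, flow control) is ignored for simplicity.
def pcie_bandwidth_gbs(gigatransfers_per_s: float, lanes: int) -> float:
    encoding_efficiency = 128 / 130  # 128 payload bits per 130 line bits
    bits_per_byte = 8
    return gigatransfers_per_s * encoding_efficiency * lanes / bits_per_byte

# PCIe 3.0 runs at 8 GT/s per lane; a typical NVMe SSD uses an x4 link.
print(round(pcie_bandwidth_gbs(8.0, 4), 2))  # ~3.94 GB/s
```

At roughly 3.94GB/s for a common x4 link, even a single NVMe connection offers several times the ceiling of SATA III or SAS.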

NVMe is designed for massive parallelism, offering up to 64K command queues, each with a queue depth of up to 64K commands. This parallelism suits both the random-access nature of flash storage and the multi-core, multi-threaded processors in today's computers. The NVMe protocol is also streamlined: compared with AHCI, its optimized command set does more work with fewer operations. NVMe I/O operations typically require fewer commands than their SATA or SAS equivalents, reducing latency. For enterprise customers, NVMe also supports many enterprise storage features, such as multipath I/O and robust error reporting and management.
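A toy model makes the contrast with AHCI concrete. The sketch below is purely illustrative (the class and names are invented for this example, not an NVMe API): each CPU core gets its own submission/completion queue pair, so cores never contend for the single 32-entry queue that AHCI provides.

```python
from collections import deque

# Illustrative model of NVMe-style queuing: one submission/completion
# queue pair per core, each with a deep queue, instead of one shared
# 32-entry queue for the whole host as in AHCI.
class QueuePair:
    def __init__(self, depth: int):
        self.depth = depth
        self.submission = deque()
        self.completion = deque()

    def submit(self, command: str) -> bool:
        if len(self.submission) >= self.depth:
            return False  # queue full; caller must back off
        self.submission.append(command)
        return True

    def process(self) -> None:
        # The "controller" drains the submission queue and posts completions.
        while self.submission:
            self.completion.append(("done", self.submission.popleft()))

# Each of four cores submits to its own queue pair, with no shared lock.
cores = {core: QueuePair(depth=1024) for core in range(4)}
cores[0].submit("read lba=0")
cores[3].submit("write lba=512")
for qp in cores.values():
    qp.process()
print(sum(len(qp.completion) for qp in cores.values()))  # 2
```

The real protocol allows up to 64K such queue pairs, which is what lets many cores and many threads drive a flash device at full speed simultaneously.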

With their speed, low latency and ability to handle high IOPS, NVMe SSDs have become a hot commodity in enterprise data centers. Companies that place a premium on low latency and high IOPS, such as high-frequency trading firms, database providers and web application hosting companies, were the earliest adopters of NVMe SSDs and remain its most loyal supporters.

Barriers to NVMe Adoption

Despite its high performance, NVMe has historically been seen as a relatively costly technology, and that cost has hindered its adoption in consumer storage. When NVMe first appeared, comparatively few operating systems supported it, and its high price made it even less attractive to ordinary consumers, many of whom could not take advantage of its greater speed anyway.

That is changing. NVMe SSD prices are coming down, in some cases approaching parity with SATA drives. This is driven not only by market forces but also by new technical innovations, such as DRAM-less NVMe SSDs.

Because DRAM is a significant item in an SSD's bill-of-materials (BoM) cost, DRAM-less SSDs can be built more cheaply and priced more competitively. With the release of NVMe 1.2, Host Memory Buffer (HMB) support allows a DRAM-less SSD to borrow host system memory as its DRAM buffer, yielding better performance. DRAM-less SSDs that take full advantage of HMB support can achieve performance similar to DRAM-based SSDs while saving cost, space and power.

NVMe SSDs are also more power-efficient than their predecessors. The NVMe protocol itself is already highly efficient, but the PCIe link it runs over can consume considerable standby power. New NVMe SSDs support efficient, autonomous sleep-state transitions, allowing them to match or even beat SATA SSD power consumption.

All of this means NVMe is more practical than ever across a wide range of use cases, from large data centers, which save capital expenditure through lower SSD costs and operating expenditure through lower power consumption, to power-sensitive mobile and portable devices such as laptops, tablets and smartphones, which can now consider NVMe as well.

The Need for Speed

The race for speed is a given in enterprise applications, but do consumer applications really need the kind of speed NVMe provides? For anyone who has ever installed extra memory, bought a larger hard drive (or SSD) or ordered a faster Internet connection, the answer is self-evident.

Today's consumer use cases are not yet pushing the limits of SATA drives, quite possibly in part because SATA remains the most common interface for consumer storage. Current video recording and editing, gaming and file server applications are already approaching the performance limits of consumer SSDs, and future use cases will certainly push them further. With NVMe now nearing SATA price points, it makes sense to build storage that can stand the test of time.

  • August 31, 2017

Securing Embedded Storage with Hardware Encryption

By Jeroen Dorgelo, Director of Strategy, Marvell Storage Group

For industrial, military and a multitude of modern business applications, data security is of course highly important. While software-based encryption often works well for consumer and enterprise environments, in the context of the embedded systems used in industrial and military applications, an approach that is simpler and inherently more robust is usually called for.

Self-encrypting drives use onboard cryptographic processors to secure data at the drive level. This not only increases drive security automatically, but does so transparently to both the user and the host operating system. By automatically encrypting data in the background, these devices provide the simplicity of use and resilient data security that embedded systems require.

Embedded vs Enterprise Data Security

Embedded and enterprise storage often both demand strong data security. Depending on the industry sector involved, this usually means the security of customer (or possibly patient) records, military data or business data. Beyond the kinds of data involved, however, the two have little in common. Embedded storage is used quite differently from enterprise storage, which leads to very different approaches to securing that data.

Enterprise storage usually consists of interconnected disk arrays housed in racks in a data center, whereas embedded storage is typically just a solid-state drive (SSD) installed in an embedded computer or device. Enterprises often control the physical security of their data centers and enforce software access controls on corporate networks (and applications). Embedded devices, on the other hand, such as tablets, industrial computers, smartphones or medical devices, are often used in the field, in comparatively unsecure environments. Data security in this context has no option but to operate at the device level.

Hardware-Based Full Disk Encryption

Access control in embedded applications is far from guaranteed, so the more automated and transparent data security can be made in this context, the better.

Full-disk, hardware-based encryption has proven to be the best way to achieve this. Full disk encryption (FDE) achieves a higher degree of security and transparency by automatically encrypting everything on a drive. Whereas file-based encryption requires users to choose which files or folders to encrypt and to supply passwords or keys to decrypt them, FDE works completely transparently. All data written to the drive is encrypted, yet once authenticated, a user can access the drive as easily as an unencrypted one. This not only makes FDE easier to use, it also makes it a more reliable method of encryption, since all data is automatically protected. Even files the user forgets to encrypt or has no access to (such as hidden files, temporary files and swap space) are automatically secured.

While FDE can also be implemented in software, hardware-based FDE performs better and is inherently more secure. Hardware-based FDE operates at the drive level, in the form of a self-encrypting SSD. The SSD controller contains a hardware encryption engine, and the private keys are stored within the drive itself.

Because software-based FDE relies on the host processor to perform encryption, it is usually slower, whereas hardware-based FDE has much lower overhead since it can take advantage of the drive's integrated crypto-processor. Hardware-based FDE can also encrypt the drive's master boot record, which software-based encryption cannot do.

Hardware-centric FDE is transparent not only to the user but also to the host operating system. It works quietly in the background and requires no special software to run. Besides optimizing ease of use, this also means sensitive encryption keys are kept separate from the host operating system and memory, with all private keys stored in the drive itself.

Improving Data Security

Hardware-based FDE not only provides the transparent, easy-to-use encryption the market badly needs, it also offers data security advantages specific to modern SSDs. NAND cells have a limited lifespan, and modern SSDs use advanced wear-leveling algorithms to extend that lifespan as far as possible. Rather than overwriting the original NAND cells when data is updated, writes are moved around the drive as files change, often leaving multiple copies of the same piece of data scattered across the SSD. This wear leveling is highly effective, but it makes file-based encryption and data erasure much more difficult, since there are now multiple copies of the data to encrypt or erase.

FDE solves both the encryption and the erasure problem for SSDs. Since all data is encrypted, there is no concern about unencrypted data remnants. And because the encryption method used (typically 256-bit AES) is extremely secure, erasing the drive is as simple as erasing the private keys.
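A toy sketch shows why key destruction suffices, under loudly stated assumptions: this is not real drive firmware or real AES; a SHA-256 counter keystream stands in for the drive's encryption engine, and the function names are invented for illustration. The point is that every wear-leveled copy on the NAND is ciphertext, so destroying the one key renders all copies unrecoverable at once.

```python
import hashlib
import itertools
import os

# Toy stand-in for a drive's encryption engine (NOT real FDE or real AES):
# a SHA-256 counter-mode keystream XORed with the data.
def keystream(key: bytes, length: int) -> bytes:
    out = bytearray()
    for counter in itertools.count():
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        if len(out) >= length:
            return bytes(out[:length])

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = os.urandom(32)  # in a real self-encrypting drive, this never leaves the drive
# Wear leveling leaves several copies of the same record scattered across the NAND:
copies = [xor_cipher(key, b"patient record") for _ in range(3)]

print(xor_cipher(key, copies[0]))  # b'patient record' - readable while the key exists
# "Secure erase" is now just: forget `key`. No copy anywhere needs overwriting.
```

Every copy is ciphertext from the moment it is written, which is exactly why crypto-erase works regardless of how many stale copies wear leveling has left behind.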

Solving Embedded Data Security

Embedded devices are often a security challenge for IT departments, as they are frequently used in uncontrolled environments and may end up in the hands of unauthorized users. While enterprise IT has the luxury of enforcing company-wide data security policies and access controls, applying that approach to embedded devices in industrial or field environments is far harder.

Hardware-based FDE is the simple answer to data security in embedded applications. Self-encrypting drives with onboard crypto-processors have very low processing overhead and work in the background, transparently to users and the host operating system. That ease of use also improves security, since administrators do not need to rely on users to implement security policies, and private keys are never exposed to software or the operating system.

  • March 08, 2017

NVMe Controllers Take Data Centers Beyond the Limits of Legacy Rotational Media: High-Speed, Cost-Effective Shared NVMe SSD Storage Enters Its Second Generation

By Nick Ilyadis, VP of Portfolio Technology, Marvell

Marvell Debuts Its Second-Generation NVM Express SSD Controller, the 88SS1092, at the OCP Summit

Data Center SSDs: NVMe and Where We've Been

When solid-state drives (SSDs) were first introduced into the data center, the infrastructure mandated they work within the confines of the then-current bus technology, such as Serial ATA (SATA) and Serial Attached SCSI (SAS), developed for rotational media. Even the fastest HDDs could not match SSD speeds, so the throughput of those buses became a bottleneck that kept SSD technology from showing its full advantage. PCIe, a high-bandwidth bus already widely used for networking, graphics and other add-in devices, was a viable alternative, but pairing the PCIe bus with storage protocols originally developed for HDDs (such as AHCI) still could not exploit the performance of non-volatile media. The NVM Express (NVMe) industry working group was therefore formed to create a standardized set of protocols and commands developed for the PCIe bus, allowing multiple paths to take advantage of SSDs in the data center. The NVMe specification was designed from the ground up to deliver high-bandwidth and low-latency storage access for current and future NVM technologies.

NVMe streamlines the command submission and completion path, supporting up to 64K commands in a single I/O queue. It also adds support for many enterprise capabilities, such as end-to-end data protection (compatible with the T10 DIF and DIX standards), enhanced error reporting and virtualization. All told, NVMe is a scalable storage protocol designed to unlock the full performance of PCIe SSDs and better serve enterprise, data center and consumer applications alike.

    SSD Network Fabrics 

New NVMe controllers from companies like Marvell allowed the data center to share storage data to further maximize cost and performance efficiencies. Instead of provisioning storage separately for each server, SSDs can be built into storage clusters, increasing the data center's total storage capacity. In addition, shared data can be accessed conveniently by establishing common areas that serve it to other servers. These new computing models thus let data centers not only exploit the full speed of SSDs, but also deploy SSDs more economically across the whole data center, lowering overall cost and simplifying maintenance. A heavily loaded server does not need extra SSDs of its own; capacity is allocated to it dynamically from the storage pool.

Here is a simple example of how these network fabrics work: if a system has ten servers, each with an SSD on its PCIe bus, the SSDs can be combined into a pooled cluster, providing not only additional storage but also a means of centralizing and sharing data access. Say one server is only 10 percent utilized while another is overloaded; the SSD pool can give the overloaded server more storage without adding SSDs to it. Scale this example up to hundreds of servers, and you can see how great the cost, maintenance and performance efficiencies become.
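The allocation logic in that example can be sketched in a few lines. This is a hypothetical model written for illustration (the class and method names are invented, not a Marvell or NVMe API): every server's SSD contributes capacity to one pool, and busy servers draw from the pool instead of receiving dedicated drives.

```python
# Minimal sketch of pooled SSD capacity allocation (illustrative only).
class SsdPool:
    def __init__(self, ssd_capacities_gb):
        # All drives contribute their capacity to a single shared pool.
        self.free_gb = sum(ssd_capacities_gb)
        self.allocations = {}

    def allocate(self, server: str, size_gb: int) -> bool:
        if size_gb > self.free_gb:
            return False  # pool exhausted; a real system would add drives here
        self.free_gb -= size_gb
        self.allocations[server] = self.allocations.get(server, 0) + size_gb
        return True

# Ten servers each contribute one 1000 GB SSD, as in the example above.
pool = SsdPool([1000] * 10)
pool.allocate("server-3", 2500)  # an overloaded server takes 2.5 drives' worth
pool.allocate("server-7", 100)   # a lightly loaded server takes only what it needs
print(pool.free_gb)  # 7400
```

No server "owns" a drive: server-3 consumes more than two physical SSDs' worth of capacity without any hardware being added to it, which is the cost and maintenance win described above.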

Marvell introduced its first NVMe SSD controller to serve this new data center storage architecture. That product supported up to four PCIe 3.0 lanes and could be configured for either 4GB/s or 2GB/s of bandwidth, as the host required. It used NVMe advanced command handling to deliver outstanding IOPS performance. To make full use of PCIe bus bandwidth, Marvell's innovative NVMe design employed extensive hardware assists to enhance data delivery over the PCIe link. This helped resolve the traditional host-control bottlenecks and unleash the true performance of flash.

The Second Generation of NVMe Controllers Has Arrived!

Marvell's second-generation NVMe SSD controller, the 88SS1092, has completed in-house testing as well as third-party OS and platform compatibility testing. The Marvell® 88SS1092 is thus ready to boost next-generation storage and data center systems, and made its debut at the Open Compute Project (OCP) Summit held in San Jose, California in March 2017.

The Marvell 88SS1092 is Marvell's second-generation NVMe SSD controller, capable of PCIe 3.0 x4 end points to provide a full 4GB/s interface to the host and help remove performance bottlenecks. While the new controller advances solid-state storage toward a more fully flash-optimized architecture for greater performance, it also includes Marvell's third-generation error-correcting low-density parity check (LDPC) technology for additional reliability enhancement, an endurance boost and TLC NAND device support on top of MLC NAND.

Today, the speed and cost advantages of shared NVMe SSD storage are not only real, they have already reached their second generation. The network paradigm has shifted. By using the NVMe protocol to unlock the full performance of SSDs, new architectures can be built that move beyond the limits of legacy rotational media. SSD performance can be raised further still, while SSD storage pools and new network fabrics enable pooled storage and shared data access. In today's data centers, the NVMe working group's hard work is being put into practice, as new controllers and technologies help optimize the performance and cost efficiency of SSD technology.

    Marvell 88SS1092 Second-Generation NVMe SSD Controller

New process and advanced NAND controller design highlights are summarized in the 88SS1092 feature chart.

     

  • January 17, 2017

    Marvell Honored with 2016 Analysts’ Choice Award by The Linley Group for its Storage Processor

    By Marvell PR Team

We pride ourselves on delivering innovative solutions to help our global customers store, move and access data—fast, securely, reliably and efficiently. In further recognition of our world-class technology, we are excited to share that The Linley Group, one of the most prominent semiconductor analyst firms, has selected Marvell’s ARMADA® SP (storage processor) as the Best Embedded Processor in its 2016 Analysts' Choice Awards.

    Honoring the best and the brightest in semiconductor technology, the Analysts' Choice Awards recognize the solutions that deliver superior power, performance, features and pricing for their respective end applications and markets.

    To learn more about Marvell’s SP solution, visit: http://www.marvell.com/storage/armada-sp/.

  • October 10, 2014

Error Correction for Solid-State Drives

By Engling Yeo, Director, Embedded Low-Power Flash Controller Business


Lower Cost, Better Reliability?

Like many technological innovations, solid-state drives (SSDs) started out delivering high performance at a high price. Data centers saw the value, and as the technology progressed, OEMs saw the potential of smaller, lighter form factors (giving rise to products such as the Apple MacBook Air) and believed this class of product would gradually become mainstream consumer technology. And with mainstream consumer technology comes acute price sensitivity. While end users may not be eager to discuss error-correction (ECC) schemes, insisting that price is what they care about most, those same users would be furious if a cheap SSD lost their data! So engineers have to care about things like ECC, and we happen to enjoy that conversation.

So let's have that conversation. As noted, the consumer markets that use solid-state embedded storage, or NAND flash devices, are extremely cost-sensitive. Much of what Marvell does can be summed up as "signal processing" to mitigate issues that affect consumer-grade storage at a fundamental level. The basic building block of any solid-state storage product is the floating-gate transistor cell. The floating gate stores discrete levels of electron charge, and these levels translate into one or more stored bits. NAND flash manufacturers generally take two approaches to increasing storage density: 1) physical scaling, packing floating-gate devices ever closer together, and 2) storing more bits per storage element (the current state of the art stores three bits per floating-gate transistor). Both approaches, however, increase the probability of errors when the bits are read back. Marvell's challenge is to create enhanced ECC techniques which, applied in an efficient hardware architecture, achieve the same data integrity on high-density NAND flash that would otherwise exhibit a higher raw bit-error rate.

On top of this complexity, each floating-gate transistor sustains only a limited number of program/erase (P/E) cycles, beyond which the probability of error rises above a critical level, leaving the transistor effectively unusable or uncorrectable. The limit exists because the high voltages applied during erasure physically damage the device. As P/E cycles accumulate, the probability of error increases. A good error-correction strategy mitigates this effect and thereby extends the usable life of the device.
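To make the idea of correcting raw bit errors concrete, here is the classic Hamming(7,4) code, which protects 4 data bits with 3 parity bits and can correct any single flipped bit. This is purely a teaching example: real SSD controllers, including Marvell's, use far stronger codes such as LDPC, but the principle of adding redundancy so that read-back errors can be detected and repaired is the same.

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits, correcting any
# single-bit error. Codeword layout (1-based positions): p1 p2 d1 p3 d2 d3 d4.
def encode(nibble):
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4  # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4  # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4  # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(codeword):
    c = list(codeword)
    # Recompute each parity group; the syndrome spells out the error position.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean; else 1-based flipped position
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]  # extract the data bits

stored = encode([1, 0, 1, 1])
stored[4] ^= 1            # one worn cell reads back wrong
print(decode(stored))     # [1, 0, 1, 1] - the data survives the bit error
```

An SSD controller does this at vastly larger scale: the stronger the code, the higher the raw error rate (and so the denser or more worn the NAND) it can tolerate while still returning clean data.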

Marvell is now developing its third generation of low-density parity check codes for solid-state storage applications. Marvell's goal is to provide effective ECC management and strategies that give consumers a lower cost per unit of storage without sacrificing reliability. And that is something worth talking about!

Archives