November 14, 2012

PLX Looks to Bring PCIe Fabric to Market

Nicole Hemsoth

PLX Technology, a maker of PCIe switches and bridges, has been spreading the word about using PCIe as a general-purpose HPC interconnect. At SC12 this week, the company went a step further down that path, unveiling its upcoming ExpressFabric technology. We asked Vijay Meduri, executive vice president of engineering at PLX, to talk about the rationale for using PCIe as an interconnect fabric.

HPCwire: Your company has been talking about PCI Express as an external interconnect for some time. What’s the status of the technology and product set, and are there any HPC deployments you can point to or any in the pipeline?

Vijay Meduri: PLX offers a wide portfolio of PCIe switches supporting multiple usage models with a range of features. We are currently shipping PCIe Gen3 switches, which run at eight gigatransfers per second, and customers have launched products based on the latest family of devices.

PCIe Gen2, which delivers five gigatransfers per second, is a mature technology and has been used in HPC markets such as GPGPU computing boxes and mainframes. Early interconnect usage focused on multi-host failover systems and on extending PCIe links over cables for I/O expansion boxes and embedded systems.

With the introduction of PCIe Gen3, the performance, cost, and power advantages over earlier alternatives have become significant, and new architectures are under development that will fully exploit them.
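
For a rough sense of what those transfer rates mean in practice, the back-of-the-envelope calculation below converts raw signaling rates into usable per-lane bandwidth, using the line-encoding overheads defined in the PCIe specifications (8b/10b for Gen2, 128b/130b for Gen3). Treat it as an illustrative sketch, not a benchmark.

    #include <stdio.h>

    /* Effective per-lane, per-direction PCIe bandwidth after line encoding.
     * Gen2 signals at 5 GT/s with 8b/10b encoding (20% overhead);
     * Gen3 signals at 8 GT/s with 128b/130b encoding (~1.5% overhead). */
    static double effective_gbps(double gtps, double payload_bits, double coded_bits)
    {
        return gtps * payload_bits / coded_bits;  /* Gb/s per lane, per direction */
    }

    int main(void)
    {
        double gen2 = effective_gbps(5.0, 8.0, 10.0);    /* 4.00 Gb/s = 500 MB/s  */
        double gen3 = effective_gbps(8.0, 128.0, 130.0); /* ~7.88 Gb/s ~ 985 MB/s */

        printf("Gen2: %.2f Gb/s per lane, %.1f GB/s per direction for x16\n",
               gen2, gen2 * 16.0 / 8.0);
        printf("Gen3: %.2f Gb/s per lane, %.1f GB/s per direction for x16\n",
               gen3, gen3 * 16.0 / 8.0);
        return 0;
    }

An x16 Gen3 link thus carries roughly 15.8 GB/s in each direction before packet-level protocol overhead, which is the headroom behind the cost and power advantages described above.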

Some examples of production systems that use PCIe as an interconnect are the Dell C410x HPC computing appliance and the IBM System Z mainframe.  Dell EqualLogic storage products have successfully replaced Ethernet bridges for inter-processor communication and now use native PCIe.

These systems utilize PLX software gaskets that allow their Ethernet applications to run seamlessly – an example of the enterprise-class robustness of a PCIe fabric.  Clustered Systems has announced prototype products with PCIe as the main interconnect.
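
To see what such a gasket does in miniature, consider the sketch below. It is purely illustrative: the device path, window layout, and doorbell offset are hypothetical, not PLX’s actual driver interface. The principle is that an Ethernet frame is copied into a PCIe memory window that the fabric maps into the peer host’s address space, and a register write then signals its arrival.

    #include <fcntl.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define WIN_SIZE 0x2000   /* hypothetical window size     */
    #define DOORBELL 0x1000   /* hypothetical doorbell offset */

    /* Push one Ethernet frame to the peer host through a PCIe memory
     * window, e.g. the BAR of a non-transparent bridge port. */
    static int send_frame(const char *bar_path, const void *frame, uint32_t len)
    {
        int fd = open(bar_path, O_RDWR);
        if (fd < 0)
            return -1;

        uint8_t *win = mmap(NULL, WIN_SIZE, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        if (win == MAP_FAILED) {
            close(fd);
            return -1;
        }

        memcpy(win, frame, len);                      /* frame lands in peer memory */
        *(volatile uint32_t *)(win + DOORBELL) = len; /* doorbell notifies the peer */

        munmap(win, WIN_SIZE);
        close(fd);
        return 0;
    }

On Linux such a window could be reached by memory-mapping a BAR exposed through sysfs (a path of the form /sys/bus/pci/devices/<BDF>/resource0); a production gasket instead registers a virtual network device, so unmodified sockets applications run over the tunnel, which is what makes the migration seamless to the application.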

HPCwire: What developments have made PCIe as an external interconnect viable? Why hasn’t it caught on sooner?

Meduri: PCIe has been successful in fan-out, dual- and multi-host computing, and enterprise storage applications, and it is now beginning to gain traction in rack-level scale-out and other HPC applications. There are both technical and business reasons why this migration to new markets has been slow.

The primary technical hurdles were hardware features and software support. Hardware features that enable rack-scale topologies, failover, and redundancy were not inherent in earlier devices. Key enhancements had to be made to overcome the legacy “address-domain” limitations of traditional PCI/PCI-X, while maintaining software compatibility and staying within the extensions permitted by the PCIe base specification.
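
Concretely, the “address-domain” limitation is that each PCIe root complex enumerates its own flat address space, so two hosts cannot simply be wired together. Non-transparent bridging, the usual workaround, translates addresses at the domain boundary. The toy model below illustrates the idea; the structure and its fields are invented for illustration.

    #include <stdint.h>

    /* Toy model of non-transparent bridge address translation: a write by
     * host A into its local aperture is redirected into host B's memory. */
    struct nt_window {
        uint64_t local_base;  /* aperture in host A's address space      */
        uint64_t peer_base;   /* target region in host B's address space */
        uint64_t size;
    };

    /* Translate a host-A physical address into the host-B address the
     * bridge would emit, or return 0 on a miss. */
    static uint64_t nt_translate(const struct nt_window *w, uint64_t addr)
    {
        if (addr < w->local_base || addr >= w->local_base + w->size)
            return 0;
        return w->peer_base + (addr - w->local_base);
    }

Each host keeps its own enumeration and driver stack; only accesses that hit the aperture cross the domain boundary, which is how a PCIe fabric can remain compatible with unmodified system software.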

The traditional usage for PCI/PCI-X has been within the box due to the electrical limitations of the bus, but these limitations no longer exist with PCIe. It has taken some time for the industry to apply and innovate around the new capabilities; the first wave of products was just legacy bus replacements.

Software gaskets that make the migration seamless for existing applications were initially unavailable, but significant progress has been made in this area by leveraging existing open-source software stacks from Ethernet and InfiniBand.

On the business side, startups have succeeded technically in implementing PCIe-based rack-level fabrics, but they have not financially survived the “system business model” of competing against the larger players. In contrast, PLX is focused on providing silicon solutions, with the appropriate software and reference system support to enable the usage model. By being disciplined in how we design these enhancements, the investment is incremental for us, which lets us enter new markets without jeopardizing our volume markets.

HPCwire: Briefly, what are the advantages of PCIe over traditional HPC fabrics like InfiniBand and Ethernet? Will it complement or, in some cases, replace those technologies?

Meduri: PCIe is a strong alternative in small- to medium-sized clusters — 20 to 1,000 nodes, in one to eight racks — and allows the numerous bridges and switches in converged rack usage models to be consolidated. The usage model anticipates co-existence with large-scale fabrics like Ethernet and InfiniBand for scale-out, and the PCIe convergence model and its ongoing software development are being implemented to be compatible and co-exist with Ethernet and other fabrics in the data center.

With the advent of SSD-based tiered storage, micro-servers for web-centric applications, GPGPU computing, and commodity cloud usage models, there is an increasing need for a cost-effective, low-latency, standards-based fabric at the right price points. There is significant innovation in the industry to address these markets. PLX has focused on letting system vendors pick from an array of CPU suppliers and I/O vendors and connect them over a standard PCIe fabric.

Related Posts

  • Ethernet Tunneling through PCI Express Inter-Processor Communication, Low Latency Storage IO
  • A Case for PCI Express as a High-Performance Cluster Interconnect
  • NextIO Brings I/O Virtualization to HPC
  • Intel’s B