SOURCE: PathScale, Inc.

June 21, 2005 00:01 ET

ISC 2005: PathScale's New InfiniPath™ Interconnect for InfiniBand™ Continues to Set Low-Latency Performance Records for HPC Interconnects

Comprehensive MPI Benchmarks Prove That InfiniPath Clusters Scale Better Than the Competition and Uniquely Exploit Multi-Processor Nodes and Dual-Core Processors

HEIDELBERG, GERMANY and MOUNTAIN VIEW, CA -- (MARKET WIRE) -- June 21, 2005 -- PathScale, developer of innovative software and hardware solutions to accelerate the performance and efficiency of Linux® clusters, has released new benchmark results that show its new InfiniPath™ interconnect for InfiniBand™ dramatically outperforms competitive interconnect solutions by providing the lowest latency across a broad spectrum of cluster-specific benchmarks. These results were released today at the International Supercomputer Conference 2005 in Heidelberg, Germany.

The InfiniPath HTX™ Adapter is a low-latency cluster interconnect for InfiniBand™ that plugs into standard HyperTransport technology-based HTX slots on AMD Opteron™ servers. Optimized for communications-sensitive applications, InfiniPath is the industry's lowest-latency Linux cluster interconnect for message passing (MPI) and TCP/IP applications.

PathScale InfiniPath achieved an MPI latency of 1.32 microseconds (as measured by the standard MPI "ping-pong" benchmark), an n1/2 message size of 385 bytes, and a TCP/IP latency of 6.7 microseconds. These results represent performance advantages of 50 to 200 percent over the recently announced Mellanox and Myricom interconnect products. InfiniPath also produced industry-leading results on more comprehensive metrics that predict how real applications will perform.
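
For readers unfamiliar with the "ping-pong" methodology, the sketch below shows how such a latency figure is typically obtained: two MPI ranks exchange a small message repeatedly, and half the averaged round-trip time approximates the one-way latency. This is a generic illustration, not PathScale's benchmark code; the iteration count and zero-byte payload are illustrative assumptions.

/*
 * Minimal MPI ping-pong latency sketch (generic example, assumed parameters).
 * Run with exactly two ranks, one per node, e.g.: mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 10000;   /* assumed repetition count */
    const int msg_size = 0;    /* zero-byte payload, as in the headline figure */
    char buf[1];
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("one-way latency: %.2f us\n", (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}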

"When evaluating interconnect performance for HPC applications, it is essential to go beyond the simplistic zero-byte latency and peak streaming bandwidth benchmarks," said Art Goldberg, COO of PathScale. "InfiniPath delivers the industry's best performance on simple MPI benchmarks and provides dramatically better results on more meaningful interconnect metrics such as n1/2 message size (or half-power point), latency across a spectrum of message sizes, and latency across multiprocessor nodes. These are important benchmarks that give better indications of real world application performance. We challenge users to benchmark their own applications on an InfiniPath cluster and see what the impact of this breakthrough performance means to them."

PathScale InfiniPath uniquely exploits multi-processor nodes and dual-core processors to deliver greater effective bandwidth as additional CPUs are added. Existing serial-offload HCA designs cause messages to queue up when multiple processors try to access the adapter at once. By contrast, the unique messaging parallelization capabilities of InfiniPath enable multiple processors or cores to send messages simultaneously, maintaining constant latency while dramatically improving small-message capacity, further reducing the n1/2 message size, and substantially increasing effective bandwidth.
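
The multi-pair sketch below illustrates the scenario described above, where every core on a node sends concurrently and the aggregate message rate is measured. It is a generic example in the style of multi-pair MPI benchmarks, not PathScale's test code; the rank layout (2*N ranks, N per node, rank i paired with rank i+N) and message size are assumptions.

/*
 * Multi-pair message-rate sketch (generic example, assumed parameters).
 * Launch with an even number of ranks split evenly across two nodes;
 * each rank on node 0 ping-pongs with its partner on node 1, all pairs
 * concurrently, and rank 0 reports the aggregate message rate.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 100000;   /* assumed repetition count */
    char buf[8];                /* small illustrative payload */
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int pairs = size / 2;
    int peer  = (rank < pairs) ? rank + pairs : rank - pairs;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank < pairs) {
            MPI_Send(buf, (int)sizeof buf, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, (int)sizeof buf, MPI_CHAR, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(buf, (int)sizeof buf, MPI_CHAR, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, (int)sizeof buf, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - t0;

    if (rank == 0)
        printf("aggregate rate: %.0f messages/sec across %d pairs\n",
               2.0 * iters * pairs / elapsed, pairs);

    MPI_Finalize();
    return 0;
}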

"We compared the performance of PathScale's InfiniPath interconnect on a 16-node/32-CPU test run with VASP, a quantum mechanics application used frequently in our facility, and found that VASP running on InfiniPath was about 50 percent faster than on Myrinet," said Martin Cuma, Scientific Applications Programmer for the Center for High-Performance Computing at the University of Utah. "Standard benchmarks do not give an accurate picture of how well an interconnect will perform in a real-world environment. Performance improvement will vary with different applications due to their parallelization strategies, but InfiniPath almost always delivers better performance than other interconnects when you scale it to larger systems and run communications-intensive scientific codes. InfiniPath has proven to be faster and to scale better for our parallel applications than other cluster interconnect solutions that we tested."

PathScale InfiniPath Performance Results

PathScale has published a white paper that includes a technical analysis of several application benchmarks that compare the new InfiniPath interconnect with competitive interconnects. This PathScale white paper can be downloaded from: www.pathscale.com/whitepapers.html

PathScale Customer Benchmark Center

PathScale has established a fully integrated InfiniPath cluster at its Customer Benchmark Center in Mountain View, California. Potential customers and ISVs are invited to remotely test their own MPI and TCP/IP applications and personally experience the clear performance advantages of the InfiniPath low-latency interconnect.

InfiniPath Availability

Over 25 leading Linux system vendors around the world have signed on to resell the InfiniPath HTX Adapter, including vendors in every major European market. The InfiniPath HTX Adapter card will ship in late June and is orderable immediately from vendors participating in the PathScale FastPath™ reseller program as described at www.pathscale.com/authorized_resellers.html

About PathScale

Based in Mountain View, California, PathScale develops innovative software and hardware technologies that substantially increase the performance and efficiency of Linux clusters, the next significant wave in high-end computing. Applications that benefit from PathScale's technologies include seismic processing, complex physical modeling, EDA simulation, molecular modeling, biosciences, econometric modeling, computational chemistry, computational fluid dynamics, finite element analysis, weather modeling, resource optimization, decision support and data mining. PathScale's investors include Adams Street Partners, Charles River Ventures, Enterprise Partners Venture Capital, CMEA Ventures, ChevronTexaco Technology Ventures and the Dow Employees Pension Plan. For more details, visit www.pathscale.com, send email to sales@pathscale.com or telephone 1-650-934-8100.

Contact Information

  • Media Contact:
    David Wright
    MediaBridge Public Relations®
    1-650-618-1544