SOURCE: Mellanox Technologies

November 14, 2005 10:30 ET

"Mellanox InfiniBand Accelerated" Clusters Continue Explosive Growth

InfiniBand-Based Supercomputers Listed on Top500 Nearly 3X Last Year's Total

SEATTLE, WA -- (MARKET WIRE) -- November 14, 2005 -- SUPERCOMPUTING 2005 -- Mellanox™ Technologies, Ltd., the leader in business and technical computing interconnects, announced today that InfiniBand is the fastest-growing cluster interconnect reported in the latest Top500 list of the world's most powerful computers. The number of supercomputers on the list using the InfiniBand interconnect is nearly three times the total reported on the November 2004 list, and InfiniBand-based supercomputers hold two of the top five most prestigious ranking positions. The list (available at www.Top500.org) is published twice a year and ranks the top 500 commercially available computer systems according to the Linpack benchmark rating. The list is also presented at Supercomputing 2005 -- the world's leading conference on high-performance computing (HPC), networking and storage. Other highlights of InfiniBand interconnect usage on the November 2005 Top500 list include:

--  The two highest ranked supercomputers built using 100% commercially
    available components are based on industry-standard InfiniBand interconnect
--  InfiniBand is the only high-speed, low-latency interconnect to show
    an increase in the number of new supercomputers added to the list
--  The average efficiency of all reported InfiniBand-based supercomputers
    is 75% -- far superior to the 54% average efficiency of Gigabit
    Ethernet-connected clusters (see the worked example below)
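
For reference, the Top500 efficiency figure is simply the measured Linpack result (Rmax) divided by the system's theoretical peak performance (Rpeak). The short Python sketch below illustrates the calculation; the input figures are hypothetical, chosen only to reproduce the 75% and 54% averages quoted above.

    # Top500 "efficiency" is measured Linpack performance (Rmax) as a
    # fraction of theoretical peak performance (Rpeak). The figures below
    # are hypothetical, not taken from any specific system on the list.
    def efficiency(rmax_gflops, rpeak_gflops):
        """Return Linpack efficiency as a percentage."""
        return 100.0 * rmax_gflops / rpeak_gflops

    # A hypothetical InfiniBand cluster at 75% efficiency...
    print("%.0f%%" % efficiency(7500.0, 10000.0))   # -> 75%
    # ...versus a hypothetical Gigabit Ethernet cluster at 54%
    print("%.0f%%" % efficiency(5400.0, 10000.0))   # -> 54%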
    
"InfiniBand's strong growth matched with the decline in usage of proprietary interconnects demonstrates that high-performance compute clusters favor industry-standard technologies," said Thad Omura, vice president of product marketing at Mellanox Technologies. "Widely available 10Gb/s InfiniBand fabric solutions are proven to efficiently scale multi-thousand node supercomputers today."

Also noteworthy on the list is an increase in the percentage of clusters versus other architecture types: 72% of the systems on the list are clusters, up from the 59% reported in November 2004. Because the Top500 list is a bellwether indicator for the broader business and technical computing market, the trend toward the clustered computing model reflects increasing demand for InfiniBand solutions.

"By clustering commodity servers, sites can build economical systems with superior processing power compared to monolithic systems. These clustered systems have the capability to incrementally scale-out in response to future, increased computing demands," said Michael Kagan, vice president of architecture at Mellanox Technologies. "InfiniBand enables the development of these systems with an optimal price/performance ratio while also eliminating costly forklift upgrades required by symmetric multi-processing (SMP) server architectures."

Mellanox congratulates the following sites reporting InfiniBand fabric usage on the November 2005 Top500.org list:

Installation Site (Rank)
NASA/Ames Research Center/NAS (4)
Sandia National Laboratories (5)
Virginia Tech (20)
Institute of Physical and Chemical Res. (RIKEN) (38)
University of Sherbrooke (51)
NASA/Goddard Space Flight Center (55)
NCSA (59)
Wright-Patterson Air Force Base/DoD ASC (61)
University of Oklahoma (67)
KTH - Royal Institute of Technology (70)
Hewlett-Packard (101, 128)
Galactic Computing (Shenzhen) Ltd. (130)
Los Alamos National Laboratory (132)
Trinity College Dublin (226)
SARA (Stichting Academisch Rekencentrum) (277)
Sandia National Laboratories (286)
Intel (289)
Institute of Genomics and Integrative Biology (293)
Texas Advanced Computing Center/Univ. of Texas (301)
NERSC/LBNL (305)
AMD Development Center (323)
Arizona State Univ. HPC Center (328)
United Institute of Informatics Problems (330)
Sun (368, 369)
Universitaet Paderborn - PC2 (371)
HWW/Universitaet Stuttgart (372)
Qualcomm (377)
Lawrence Livermore National Laboratory (408)
Come See Mellanox (Booth #902) at Supercomputing 2005

"Mellanox InfiniBand Accelerated" solutions will be on display in Mellanox's booth, InfiniBand vendor booths, the SCinet InfiniBand fabric, and StorCloud at Supercomputing 2005, November 14-17. A wide range of InfiniBand servers, switches and storage products will be displayed that take advantage of InfiniBand's price/performance benefits in high-performance computing, enterprise data centers, departmental workgroup clusters, and personal supercomputer environments.

About InfiniBand®

InfiniBand is an industry-standard interconnect technology defined by the InfiniBand Trade Association to deliver the exceptional I/O fabric performance demanded by data centers, high-performance computing, and embedded environments. Today, InfiniBand solutions provide high-bandwidth, low-latency 10 and 20Gb/s server-to-server and server-to-storage connections, and 30Gb/s and 60Gb/s switch-to-switch connections, with a defined roadmap to 120Gb/s performance. The InfiniBand architecture standardizes remote direct memory access (RDMA) and 100% reliable transport -- hallmark capabilities of the industry's lowest-latency, highest-bandwidth network fabric available today.
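
For context, these link rates follow from InfiniBand's lane arithmetic: a link aggregates 1, 4, or 12 lanes, SDR lanes signal at 2.5Gb/s, and DDR and QDR double and quadruple the per-lane rate. The minimal Python sketch below reproduces the figures quoted above; note that usable data rates are somewhat lower because of 8b/10b encoding.

    # InfiniBand link bandwidth = per-lane signaling rate x link width.
    # SDR lanes signal at 2.5Gb/s; DDR doubles that, and QDR doubles it
    # again.
    LANE_RATE_GBPS = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}

    def link_rate(width, generation):
        """Signaling rate in Gb/s for an InfiniBand link of a given width."""
        return width * LANE_RATE_GBPS[generation]

    print(link_rate(4, "SDR"))    # 10.0  -- 4X SDR server connections
    print(link_rate(4, "DDR"))    # 20.0  -- 4X DDR server connections
    print(link_rate(12, "SDR"))   # 30.0  -- 12X SDR switch-to-switch
    print(link_rate(12, "DDR"))   # 60.0  -- 12X DDR switch-to-switch
    print(link_rate(12, "QDR"))   # 120.0 -- 12X QDR roadmap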

With the InfiniBand industry delivering affordable server, storage, and networking platforms; open-source, interoperable software stacks; and cluster, grid, storage, and virtualization solutions, InfiniBand is the optimal fabric technology for world-class computing.

About Mellanox

Mellanox's field-proven offering of interconnect solutions for communications, storage, and compute clustering is changing the shape of both business and technical computing. As the leader in industry-standard, InfiniBand-based silicon and card-based solutions, Mellanox is the driving force behind the most cost-effective, highest-performance interconnect solutions available. For more information, visit www.mellanox.com.

Contact Information

  • Contact:
    Mellanox Technologies, Inc.
    Thad Omura
    Vice President of Product Marketing
    408-970-3400