Today, Mellanox announced the immediate availability of its MetroX TX6100 solution, which enables InfiniBand and Ethernet RDMA connectivity between data centers. Mellanox says that MetroX allows for rapid disaster recovery and improves utilization of remote storage and compute infrastructure across long distances and multiple geographic sites.
"A common problem facing data-driven researchers is the time cost of moving their data between systems, from machines in one facility to the next, which can slow their computations and delay their results," said Mike Shuey, HPC systems manager at Purdue University. "Mellanox's MetroX solution lets us unify systems across campus, and maintain the high-speed access our researchers need for intricate simulations -- regardless of the physical location of their work."
Purdue University recently deployed the MetroX TX6100 over a six-kilometer link to connect its computational clusters to remote storage facilities. The link gives researchers access to the university's off-site supercomputers and lets Purdue allocate its limited data center space more efficiently, resulting in higher facilities utilization.
Mellanox says that its MetroX technology is commonly used for long-reach connectivity within a data center as well as between nodes up to ten kilometers apart. "The demand for long-haul interconnect technologies continues to increase as organizations deploy remote, agile systems," said Gilad Shainer, vice president of marketing at Mellanox. "Mellanox's MetroX RDMA systems provide the highest performing interconnect solution over long distances."
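To put those distances in perspective, here is a back-of-envelope sketch (not a Mellanox figure) of the extra latency that fiber distance alone adds between sites. It assumes the standard approximation that signals in silica fiber travel at roughly two-thirds the vacuum speed of light (about 2.0e8 m/s) and ignores switch and forwarding latency:

```python
C_FIBER_M_PER_S = 2.0e8  # approximate signal speed in optical fiber (~2/3 c)

def one_way_delay_us(distance_km: float) -> float:
    """One-way fiber propagation delay in microseconds for a given span."""
    return distance_km * 1_000 / C_FIBER_M_PER_S * 1e6

# Purdue's 6 km deployment and the 10 km reach cited above:
for km in (6, 10):
    print(f"{km} km span: {one_way_delay_us(km):.0f} us one-way, "
          f"{2 * one_way_delay_us(km):.0f} us round trip")
# 6 km span: 30 us one-way, 60 us round trip
# 10 km span: 50 us one-way, 100 us round trip
```

Even at the full ten-kilometer reach, propagation adds on the order of tens of microseconds per round trip, which is why RDMA traffic can remain practical across metro-scale links.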