Enabling the future starts with the first generation of ExpressFabric. This 1U top-of-rack switch has a relatively simple design: three PLX PCIe switch chips sit under the black heat sinks, and the networking card at the rear of the switch provides all Ethernet communication for the cluster.
Utilizing PCIe simplifies application integration: PCIe speeds up existing protocols, and the software stack already understands InfiniBand and Ethernet, so applications see the network interfaces they already expect.
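As a rough illustration of what that means for software, here is a minimal sketch: a plain TCP receiver written against the standard sockets API. Assuming the fabric's software stack presents ordinary Ethernet semantics to the host OS, as the description above implies, code like this would run unchanged over ExpressFabric. The port number is a placeholder, not anything from the demo.

```python
import socket

# A stock TCP receiver using nothing but the standard library. If the fabric
# looks like Ethernet to the OS, unmodified code like this runs over
# ExpressFabric just as it would over a conventional NIC.
# The port number (5001) is a placeholder for this sketch.
def run_receiver(port: int = 5001) -> None:
    with socket.create_server(("", port)) as server:
        conn, _addr = server.accept()
        with conn:
            # Drain incoming data until the sender closes the connection.
            while conn.recv(1 << 20):
                pass

if __name__ == "__main__":
    run_receiver()
```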
QSFP+ connectors line the edge of the switch.
This is a working cluster of servers connected to the ExpressFabric switch. Copper and optical interconnects are both used to demonstrate the flexibility of the existing cabling technology: copper handles most close-quarters connections, while optical can stretch its legs up to 100 meters.
There are monitors and USB connections to each server in this mockup. In an actual deployment, the eight cables in the middle would be gone, and the simple row of four cables on the right would provide all communication between the servers. The switch has plenty of ports to allow more servers, and multiple connections per server, if required.
The current implementation tops out at 20 Gbit/s, and we caught a picture of it running at 19.1 Gbit/s during the demo.
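For context on where a figure like 19.1 Gbit/s comes from, here is a rough sketch of how such point-to-point numbers are typically measured: stream a large buffer to a receiver (like the one sketched earlier), time the transfer, and convert bytes per second to gigabits per second. This is generic benchmarking arithmetic, not PLX's own test code; the host name, port, and transfer sizes are placeholders.

```python
import socket
import time

CHUNK = b"\x00" * (1 << 20)   # send 1 MiB at a time
TOTAL_BYTES = 8 * (1 << 30)   # stream 8 GiB for a stable average

def measure_throughput(host: str, port: int = 5001) -> float:
    """Send TOTAL_BYTES to a listening receiver and return the rate in Gbit/s."""
    sent = 0
    with socket.create_connection((host, port)) as sock:
        start = time.perf_counter()
        while sent < TOTAL_BYTES:
            sock.sendall(CHUNK)
            sent += len(CHUNK)
        elapsed = time.perf_counter() - start
    # bytes/s -> Gbit/s: multiply by 8 bits per byte, divide by 1e9
    return sent * 8 / elapsed / 1e9

if __name__ == "__main__":
    # "node2.example" is a placeholder for the receiving server's address.
    print(f"{measure_throughput('node2.example'):.1f} Gbit/s")
```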