[Figure: Schematic diagram of the connection between Flux nodes and the U-M campus network, showing the connection speeds and paths]
While any high-bandwidth application will work with this gateway and the Flux compute nodes, the example we have is bandwidth to NFS storage. A researcher in the School of Public Health has a high-bandwidth NFS file server connected to the campus backbone at 10Gbps. He also has a Flux allocation and asked us to ensure that the network path to his storage servers goes through the InfiniBand to Ethernet Gateway.
Once the configuration was in place (it is available to anyone using Flux), his compute jobs read data from his NFS server at between 4.6Gbps and 9.2Gbps, approaching the maximum speed of the 10Gbps interface on the file server.
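If you would like to check what your own jobs see, a quick way is to time a large sequential read from the NFS mount on a compute node. The sketch below is a minimal, hypothetical example (the file path is a placeholder, and the operating system's page cache can inflate results if the file was read recently); it is not how the figures above were measured.

```python
#!/usr/bin/env python3
"""Rough sequential-read throughput check for a file on an NFS mount."""
import time

# Hypothetical path to a large file on your NFS mount; substitute your own.
TEST_FILE = "/nfs/example-lab/large_test_file.dat"
CHUNK = 64 * 1024 * 1024  # read in 64 MiB chunks

total_bytes = 0
start = time.monotonic()
with open(TEST_FILE, "rb", buffering=0) as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        total_bytes += len(data)
elapsed = time.monotonic() - start

# Convert bytes/second to gigabits/second for comparison with link speeds.
gbps = (total_bytes * 8) / elapsed / 1e9
print(f"Read {total_bytes / 1e9:.1f} GB in {elapsed:.1f} s, about {gbps:.2f} Gbps")
```

For the result to reflect the network rather than local caching, the test file should be larger than the node's memory or not recently read.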
[Figure: Network traffic approaching 10Gbps in January 2014; this traffic was between Flux nodes and one file server attached at 10Gbps]
The gateway hardware balances traffic across the two 10Gbps links to the campus network, ensuring that the gateway itself does not become a bottleneck between Flux and campus.
[Figure: Balanced network traffic across the two 10Gbps Ethernet connections to campus minimizes network bottlenecks]
If you are interested in making use of this high-bandwidth Ethernet connection, please let us know at hpc-support@umich.edu.