A student recently asked us what kinds of optical networking systems are used in Flux. Thinking this information might be of interest to the community, we decided to post the answer here as well.
Flux uses two main networking technologies: InfiniBand within the cluster, and Ethernet to the rest of campus and the Internet.
The type of InfiniBand (IB) network we use is called QDR 4x, the QDR standing for quad data rate. In QDR IB, each data lane has a raw signaling rate of 10 Gbit/s, and there are four lanes per link, so each link has a raw data rate of 40 Gbit/s. QDR IB can run over copper or fiber optic cables, and the vast majority of the IB cables used in Flux are fiber optic. Our fiber IB cables come pre-terminated with QSFP connectors, so it is not entirely obvious what kind of lasers are used. That said, my understanding is that there are actually eight fiber strands in a QDR IB cable: four 10 Gbit/s strands for each direction of data transfer.
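As a quick sanity check on those numbers, here is the lane arithmetic in a few lines of Python. One detail worth noting (a standard property of QDR IB, not something specific to Flux): QDR links use 8b/10b encoding, so only 8 of every 10 bits on the fiber carry payload.

```python
# Back-of-the-envelope check of the QDR 4x figures.
# QDR IB signals at 10 Gbit/s per lane; 8b/10b encoding means
# 8 of every 10 raw bits are data.
LANES = 4
RAW_PER_LANE_GBIT = 10
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b

raw_link = LANES * RAW_PER_LANE_GBIT          # 40 Gbit/s raw
effective_link = raw_link * ENCODING_EFFICIENCY  # 32 Gbit/s of data

print(f"raw: {raw_link} Gbit/s")
print(f"effective: {effective_link:.0f} Gbit/s")
```

This is why QDR is often quoted as "40 Gbit/s" but delivers at most 32 Gbit/s of actual data per link, before any protocol overhead.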
On the Ethernet side, we use multiple 10 Gbit/s 10GBASE-SR links between the Flux access switches and the distribution switches that serve them. Two distribution switches serve Flux; each has a 100 Gbit/s Ethernet link to the campus backbone and a 100 Gbit/s Ethernet link to the other distribution switch.
The 100 Gbit/s link between the distribution switches is 100GBASE-SR10, and the links from the distribution switches to the campus backbone are 100GBASE-LR4/ER4.
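Those PHY names encode how each link is built up from lanes, which can be easy to forget. The table below is a sketch of the lane structure of the standards mentioned above, per IEEE 802.3 (the reach figures are nominal and depend on fiber grade):

```python
# Lane structure of the Ethernet PHYs mentioned above (per IEEE 802.3).
# Each entry: (lanes, Gbit/s per lane, medium, nominal reach).
PHYS = {
    "10GBASE-SR":    (1, 10, "multimode fiber, 850 nm",                 "~300 m on OM3"),
    "100GBASE-SR10": (10, 10, "parallel multimode fiber, 850 nm",       "~100 m on OM3"),
    "100GBASE-LR4":  (4, 25, "single-mode fiber, 4 WDM wavelengths",    "10 km"),
    "100GBASE-ER4":  (4, 25, "single-mode fiber, 4 WDM wavelengths",    "40 km"),
}

for name, (lanes, rate, medium, reach) in PHYS.items():
    print(f"{name}: {lanes} x {rate} Gbit/s = {lanes * rate} Gbit/s, {medium}, {reach}")
```

Note the contrast: SR10 gets to 100 Gbit/s with ten parallel fiber pairs, while LR4/ER4 multiplex four 25 Gbit/s wavelengths onto a single fiber pair, which is what makes them practical for long campus and metro runs.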
The ARC Data Science Platform (a.k.a. Fladoop or Flux Hadoop) uses nine 40 Gbit/s Ethernet connections within the datacenter; each of these is a 40GBASE-SR4 link.