Computer performance is normally measured in flops: the number of floating-point operations (adds, multiplies, etc.) a system can perform per second. In scientific computing we are normally interested in double-precision (DP, 64-bit) numbers. In general, switching to single precision (SP, 32-bit floats) doubles both performance and the number of values that fit in memory. This isn't true in all cases, though; e.g., the Nvidia Tesla K10 (GK104) delivers 4,580 SP GFlops but only 190 DP GFlops, roughly a 24:1 ratio.
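Theoretical peak flops are just the product of node count, cores per node, clock rate, and flops per cycle. A minimal sketch of that arithmetic (the node counts and clock speeds below are hypothetical, not the actual Flux values):

```python
def peak_gflops(nodes, cores_per_node, clock_ghz, dp_flops_per_cycle):
    """Theoretical peak double-precision GFlops for a homogeneous cluster."""
    return nodes * cores_per_node * clock_ghz * dp_flops_per_cycle

# Hypothetical example: 100 nodes, 16 cores/node, 2.5 GHz, 8 DP flops/cycle
print(peak_gflops(100, 16, 2.5, 8))  # 32000.0 GFlops = 32 TFlops
```

Real codes rarely reach this peak; it is an upper bound set by the hardware, which is why measured benchmarks (e.g. LINPACK) come in lower.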
So how fast is each part of Flux:
| Purchase | node ct. | cores/node | cores | clock (GHz) | DP flops/cycle | DP GFlops |
|---|---|---|---|---|---|---|
- Anything in italics is entering service and is not yet available.
- The highlighted elements are accelerators (GPUs or Phis).
- The 40 K20X GPUs in FluxG are faster than Flux1 and Flux2 combined, at 9% of the cost.
- Machines marked Private are part of FOE.
- Machines from flux4 or newer support the AVX instruction set, which doubles the performance of vectorized code.
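The AVX doubling in the last bullet shows up directly in the flops-per-cycle term of the peak calculation: a pre-AVX (SSE) core that issues one 2-wide add and one 2-wide multiply per cycle retires 4 DP flops/cycle, while an AVX core with 4-wide units retires 8. A sketch with a hypothetical 16-core, 2.5 GHz node (assumed numbers, not a specific Flux purchase):

```python
def node_peak_gflops(cores, clock_ghz, dp_flops_per_cycle):
    """Theoretical peak double-precision GFlops for a single node."""
    return cores * clock_ghz * dp_flops_per_cycle

# Same hypothetical node, before and after AVX widens the vector units
sse = node_peak_gflops(16, 2.5, 4)  # 160.0 GFlops
avx = node_peak_gflops(16, 2.5, 8)  # 320.0 GFlops
print(avx / sse)  # 2.0
```

Note the doubling only applies to code the compiler (or programmer) actually vectorizes; scalar code sees no benefit from the wider units.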