Friday, December 20, 2013

In line with the University's holiday schedule, the CAEN HPC group will be on holiday from December 25th through January 1st. Nyx and Flux will be operational during this time, and we will be monitoring these systems to ensure everything is operating appropriately.
Staff will be monitoring the ticket system, but in general will only respond to critical or systems-related issues during the holiday break and will address non-critical issues after the holiday. As a reminder, the 2013-14 Winter Outage begins immediately after the holiday break: there will be no access to the cluster or storage systems as of 6am on January 2nd.
As always, email any questions you may have to hpc-support@umich.edu and have a great holiday.
CAEN HPC Staff
Thursday, December 12, 2013
SPINEVOLUTION on Flux
SPINEVOLUTION is, according to its web site, "a highly efficient computer program for the numerical simulation of NMR experiments and spin dynamics in general."
Its installation on Flux involves a few more nuances than most software, but Flux is a general-purpose platform, and SPINEVOLUTION can be installed and run on it.
The LSA Research Support and Advocacy group, and Mark Montague in particular, have documented the installation and use of SPINEVOLUTION on Flux at https://sites.google.com/a/umich.edu/flux-support/software/lsa/spinevolution
While SPINEVOLUTION is a narrowly focused software package, the details of its installation on Flux may be applicable to other specialized packages as well.
For more information, please send email to hpc-support@umich.edu.
Undergraduate student job in web data visualization
The CAEN HPC group would like to improve the graphical reporting of much of the data available from the cluster.
In the past, we would run commands via scripts and parse the output and make graphs.
The most recent versions of the cluster management software present some (and increasingly more) of the information via a REST-ful interface that returns JSON-formatted results.
In addition, JavaScript graphing libraries are improving in usefulness and usability. Among these are d3.js, JavaScript InfoVis Toolkit, Chart.js, Google Charts and others.
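To give a concrete sense of the kind of work involved, here is a minimal sketch, in TypeScript, of fetching usage data from a REST interface that returns JSON and grouping it into per-product series that one of the libraries above could plot. The endpoint URL and the JSON field names are hypothetical placeholders, not the actual interface the cluster management software exposes.

    // Minimal sketch: pull usage records from a hypothetical REST endpoint that
    // returns JSON, then group them by Flux product so each product can be drawn
    // as its own (possibly stacked) series. Field names are assumptions.
    interface UsageRecord {
      timestamp: string;   // e.g. "2013-12-12T00:00:00Z" (assumed field)
      coresInUse: number;  // assumed field
      product: string;     // e.g. "standard", "largemem", "gpu", "foe" (assumed)
    }

    async function fetchUsage(url: string): Promise<UsageRecord[]> {
      const response = await fetch(url);
      if (!response.ok) {
        throw new Error(`Request failed: ${response.status}`);
      }
      return (await response.json()) as UsageRecord[];
    }

    function toSeries(records: UsageRecord[]): Map<string, { x: string; y: number }[]> {
      const series = new Map<string, { x: string; y: number }[]>();
      for (const r of records) {
        const points = series.get(r.product) ?? [];
        points.push({ x: r.timestamp, y: r.coresInUse });
        series.set(r.product, points);
      }
      return series;
    }

    // Example use (the URL is a placeholder):
    // fetchUsage("https://example.umich.edu/usage?range=week").then(toSeries).then(console.log);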
Our current usage graphs (an example of which is below) do not differentiate different types of Flux products (regular nodes, larger memory nodes, GPU nodes, FOE nodes, etc.) and do not separate utilization by Flux project accounts or by Flux user accounts.
Figure 1: The current Flux usage graphs do not differentiate between Flux projects, do not offer different time scales, and are generally of limited use.

We would like:
- an overview page that improves on the current usage graphs
- a way to see daily, weekly, monthly, and yearly detail
- a way to see Flux products (as above) individually and stacked together
- a place for this to live (locally? MiServer? AWS? Google Sites?)
- a web site akin to http://flux-stats/?account-flux(|m|g) that provides per-Flux-project reports including:
- allocated cores over time
- running cores (by user) over time
- current resources in use per total:
- x running / x allocated cores
- y running / y allocated GB RAM
- the current queue, represented as running jobs:
  job owner | # cores in use | # GB RAM in use | times (start, running, total) | job name | job ID
  and queued jobs:
  job owner | # cores req’d | # GB RAM req’d | time req’d | job name | job ID
- some heuristic advice (see the sketch after this list) along the lines of:
- if you had X more cores, then Y more jobs would start
- if you had A more GB of RAM, then B more jobs would start
- “you would save money by switching the allocations in project G from standard Flux to larger memory Flux”, etc.
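As a very rough sketch of the kind of heuristic meant above, the following TypeScript estimates how many queued jobs could start if a project had X more cores. The queued-job record and the greedy, cores-only model are assumptions for illustration; a real estimate would also have to account for memory, walltime, and the scheduler's actual policies.

    // Simplified "if you had X more cores, then Y more jobs would start" estimate.
    // The QueuedJob shape and the greedy, cores-only model are assumptions for
    // illustration only.
    interface QueuedJob {
      coresRequired: number;
      gbRamRequired: number;
    }

    function jobsStartableWithExtraCores(
      queue: QueuedJob[],
      freeCores: number,
      extraCores: number
    ): number {
      let available = freeCores + extraCores;
      let startable = 0;
      // Walk the queue in order, starting each job that still fits.
      for (const job of queue) {
        if (job.coresRequired <= available) {
          available -= job.coresRequired;
          startable += 1;
        }
      }
      return startable;
    }

    // Example: 4 cores free now, 12 more allocated.
    const queue: QueuedJob[] = [
      { coresRequired: 8, gbRamRequired: 16 },
      { coresRequired: 4, gbRamRequired: 8 },
      { coresRequired: 2, gbRamRequired: 4 },
    ];
    console.log(jobsStartableWithExtraCores(queue, 4, 12)); // prints 3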
Email us at coe-hpc-jobs@umich.edu if you are interested.