Start-Up & Retention Packages
Including Flux in College of Engineering Start-Up and Retention Packages
To use Flux in a College of Engineering faculty start-up or retention package, chairs and deans should:
- Work with the prospective faculty member and College High Performance Computing (HPC) or Office of Research Cyberinfrastructure (ORCI) staff to determine whether Flux is suitable for the faculty member's research.
- Determine an allocation size and budget for Flux for the duration of the agreement.
- Describe the Flux allocation in the offer letter.
Using Flux is the recommended approach to providing HPC resources to College of Engineering faculty. The College of Engineering discourages buying equipment to support computing activities that can be reasonably supported by Flux.
Flux is primarily a computing resource. Storage, networking, and other "Cyberinfrastructure" requirements will not be addressed by Flux and should be discussed with the Associate Dean for Academic Affairs.
Flux is an HPC Linux-based cluster intended to support parallel and other applications that are not suitable for departmental or individual computers. Each Flux compute node comprises multiple CPU cores with at least 4 GB of RAM per core; Flux has over 8,000 cores. All compute nodes are interconnected with InfiniBand networking.
Computing jobs on Flux are managed through a combination of the Moab Scheduler, the Terascale Open-Source Resource and QUEue Manager (Torque) and the GOLD Allocation Manager from Adaptive Computing.
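To illustrate how jobs run under this stack, the sketch below shows a minimal Torque (PBS) batch script. The job name, account name, executable, and resource values are hypothetical placeholders, not actual Flux settings; real values would come from the allocation described in the offer letter.

```shell
#!/bin/bash
# Minimal Torque/PBS batch script (illustrative only; the names and
# values below are hypothetical placeholders, not real Flux settings).

#PBS -N example_job            # job name shown by the scheduler
#PBS -A example_allocation     # allocation to charge (tracked by Gold)
#PBS -l nodes=1:ppn=4          # request 1 node, 4 cores
#PBS -l pmem=4gb               # memory per core
#PBS -l walltime=02:00:00      # wall-clock time limit
#PBS -m abe                    # mail on abort, begin, and end

cd "$PBS_O_WORKDIR"            # start in the directory the job was submitted from
./my_program                   # hypothetical executable
```

A script like this would be submitted with `qsub`; Moab decides when and where it runs, Torque launches it, and Gold debits the associated allocation.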
Flux Configuration has a detailed description of the Flux cluster.
The system also includes high-speed scratch storage using the Lustre parallel network file system, also connected via InfiniBand. This file system lets researchers store data on a short-term basis while performing calculations; it is not intended for long-term storage or archival purposes.
All Flux nodes are interconnected with quad data rate (QDR) InfiniBand, delivering up to 40 Gbps of bandwidth with less than 5 μs of latency.
Flux is connected to the University of Michigan’s campus backbone to provide access to student and researcher desktops as well as other campus computing and storage systems. The campus backbone provides connectivity to the commodity Internet and the research networks Internet2 and MiLR.
The Flux cluster includes a comprehensive suite of commercial and open-source research software, including major compilers and many common research-specific applications such as Mathematica, MATLAB, R, and Stata.
Data Center Facilities
Flux is housed in an HP Performance Optimized Data Center (POD) 240a that is professionally run by Information and Technology Services. The HP POD has batteries that provide sufficient power for a graceful shutdown of Flux. The high cooling efficiency of the POD reduces the overall cost of Flux allocations.
Flux computing services are provided through a collaboration of University of Michigan units: The Office of Research Cyberinfrastructure (in the Office of the VP of Research and the Provost’s Office), the College of Engineering's central IT group, CAEN, and Information and Technology Services, as well as the computing groups in schools and colleges at the University.
The following steps will help you prepare a faculty start-up or retention package for a computationally focused researcher who may be able to use Flux:
- Determine the suitability of Flux for the prospective faculty member by consulting with College HPC staff (Dr. Ken Powell) to consider whether a large computing resource is required and whether Flux can meet the research needs of the faculty member.
- Determine the size of the Flux allocation for this start-up package, including the changes or flexibility in the size of the allocation over time. For more information about estimating Flux allocations see Flux Sizing or contact firstname.lastname@example.org.
- Determine an appropriate budget to include in the offer letter. For more information about estimating a budget for Flux see Flux Costing or contact email@example.com.
- Copy into the offer letter the appropriate parts of the Flux description above and of the Language for Offer Letters section below.
- Plan for the end of the start-up/retention period or the exhaustion of the funds. At that time, the allocation on Flux expires and no more jobs associated with that Flux project can run. The user accounts and data are not purged at the end of the Flux project. The same accounts and data may be used for other projects or additional funds may be deposited into the original project to make it usable again.
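To make the sizing and budgeting steps above concrete, here is a small worked example of the core-month arithmetic in Python. Every figure in it (core count, duration, utilization, and dollar rate) is a hypothetical placeholder, not an actual Flux rate; real numbers should come from the Flux Sizing and Flux Costing contacts listed above.

```python
# Rough core-month and budget estimate for a hypothetical start-up package.
# All figures below are illustrative placeholders, not actual Flux rates.

cores = 64              # sustained cores the group expects to use
months_per_year = 12
years = 3               # duration of the start-up period
utilization = 0.5       # fraction of the time those cores are actually busy

core_months = cores * months_per_year * years * utilization
print(core_months)      # -> 1152.0

rate_per_core_month = 11.72          # hypothetical $/core-month
budget = core_months * rate_per_core_month
print(round(budget, 2))              # -> 13501.44
```

The resulting core-month figure would fill in the "X core-months" blank in the offer-letter language below the list, and the budget figure the "$Z" commitment.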
Language for Offer Letters
This text may be copied and modified for use in offer letters.
For your research computing needs, you will be provided with an allocation of X core-months of computing time on U-M's campus-level high-performance computing cluster, Flux. This allocation may be used during your Y-year start-up period.
This allocation is flexible and may be managed by you to accommodate your demands and usage style over the allocation's lifetime. The number of cores you can use at one time is flexible: for example, you can use as few cores as you need starting out, and more cores as your computing requirements grow.
The computing costs represent a commitment of $Z.
Flux includes compute hardware that is interconnected with high-speed networking, a high-speed parallel scratch file system, a rich suite of scientific and engineering libraries and applications, high-quality data-center space and professional systems support and administration.
More information about high performance computing in the College, including a description of Flux, is available on the College's website, http://www.engin.umich.edu/caen/hpc/.