Update - The Helium system was decommissioned on December 1, 2016 as planned. Home accounts remain available until January 4, 2017, as documented here: https://wiki.uiowa.edu/display/hpcdocs/Accessing+Your+Helium+Account
The HPC policy committee originally set the retirement date for the Helium HPC system as October 1, 2016. Due to delays in University purchasing and in the acquisition of the replacement Argon HPC system, the current estimated date is December 1, 2016. This date remains tentative; further updates on the timeline will be posted as they become available. Additional information related to the retirement follows:
- Initial hardware warranty expirations began in August 2015. Every effort will be made to keep the out-of-warranty system running, but there is a small chance of a catastrophic failure that cannot easily be repaired. Compute nodes that fail out of warranty will not be repaired.
- Failed compute nodes will first be removed from the UI queue on Helium. As a result, the UI queue will shrink until the system is retired.
- All home accounts and scratch filesystems on Helium will be decommissioned along with the system. Data will not be automatically migrated to other locations.
- No new accounts will be created on Helium after July 1, 2016.
- No new central software updates will be done on Helium after July 1, 2016.
- /scratch on Helium will be retired on July 1, 2016.
Frequently Asked Questions
What does retirement or decommissioning mean? - This means that the Helium HPC cluster is scheduled to be turned off on December 1, 2016. No logins to the system or computations will be possible after this date.
If I invested in Helium, do I get a discount on Argon? - Yes, please contact email@example.com for details.
If I have an account on Helium will you automatically create an account for me on newer systems? - No, to request access to newer cluster systems please visit this page.
If I'm using custom compiled software on Helium, will it work on the newer cluster systems? - Custom compiled applications will need to be recompiled on newer systems.
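A typical rebuild on a new cluster looks roughly like the sketch below. The module name and source path are assumptions for illustration; run `module avail` on the target system to see what toolchains are actually installed.

```shell
# Sketch: rebuild a custom application on the new cluster so its binaries
# link against the new system's compilers and libraries.
rebuild_app() {
    module load gcc        # assumed compiler module name; check `module avail`
    cd "$1" || return 1    # path to your application's source tree
    make clean && make     # full rebuild against the new toolchain
}
# Usage (on the new cluster): rebuild_app ~/src/myapp
```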
If I'm using centrally installed software on Helium will it be available on newer systems? - Maybe. The HPC team starts with central installations of widely used packages such as R, Python, and the Intel compilers. Additional central installs can be requested by contacting firstname.lastname@example.org, but staffing capacity is limited, so central installation of a package can take some time.
How does the system retirement affect my data (storage)? - It depends on where you are storing your data:
- Helium Home Accounts, /scratch, and /nfsscratch - These locations are part of the Helium system's infrastructure and will be retired along with the rest of the system. All important data stored in these locations should be migrated to a new location before the retirement; afterward, data stored there will not be accessible unless otherwise communicated. A one-month grace period applies to home accounts: a read-only copy of home accounts will be accessible via CIFS at storage01.hpc.uiowa.edu until January 4, 2017. For assistance accessing this data, please contact email@example.com.
- Neon Home Accounts and /nfsscratch - These are separate data storage systems from those on Helium and will not be affected by the Helium retirement.
- Paid HPC/Large Scale Storage File Shares - Individuals and groups using purchased shared storage will not be affected by the Helium retirement.
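The read-only CIFS copy of Helium home accounts mentioned above can be reached from a Linux machine along these lines. This is a sketch: it assumes the cifs-utils package is installed, and the share name "home" is a placeholder; confirm the actual share name with the HPC team.

```shell
# Sketch: mount the read-only copy of Helium home accounts via CIFS.
# The share name "home" is a placeholder, not confirmed.
mount_helium_home() {
    sudo mkdir -p /mnt/helium-home
    sudo mount -t cifs //storage01.hpc.uiowa.edu/home /mnt/helium-home \
        -o username="$1",ro    # read-only; authenticate with your HawkID
}
# Usage: mount_helium_home your_hawkid
# Then copy what you need, e.g.: cp -r /mnt/helium-home/your_hawkid/project ~/backup/
```

On Windows, the same copy should be reachable as a network location at \\storage01.hpc.uiowa.edu (share name as confirmed by the HPC team).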
I have data on Helium in /scratch, /nfsscratch, or my home account that I wish to keep. What are my migration options?
Directions for one method of copying data from Helium to Neon are available here: https://wiki.uiowa.edu/display/hpcdocs/Accessing+Your+Helium+Account
- If you are using the Neon HPC system, you can transfer data to home and scratch filesystems on Neon. Please remember that /scratch, /nfsscratch, and home accounts are NOT backed up and are not designed for long-term storage, so important data should also be stored elsewhere.
- For long-term data storage needs in the HPC environment, we recommend the paid Large Scale Storage service, available for $80/TB/year including backups. Contact firstname.lastname@example.org to request a file share.
- The Research Data Storage Service provides 5TB of long-term data storage to each faculty member on campus at no cost. It is not available from HPC systems, but data can be transferred from HPC systems to this space.
- OneDrive provides 1TB of long-term cloud storage to every individual with a HawkID. It is not mounted on HPC systems, but data can be transferred from HPC systems to this space.