Neon Cluster Frequently Asked Questions

The Neon HPC cluster is the second collaborative cluster system on the University of Iowa campus. It brings together twenty-five groups (investors) from seven colleges, each of which purchased compute capacity to build a shared campus resource.

If you have any questions, please contact hpc-sysadmins@iowa.uiowa.edu.

System Availability & General Questions

When will the Neon system be available? – Below is our current timeline. This could still change if major unanticipated issues occur.

·      October 3, 2013 - Infrastructure Equipment Delivered
·      October 10, 2013 - Infrastructure Racked/Cabled
·      October 21, 2013 - Remaining Cluster Equipment Delivered
·      October 22-25, 2013 - Vendor Onsite for Hardware Installation & Integration
·      November 1, 2013 - Initial Software Load Complete
·      November 8, 2013 - System Testing/Validation Complete
·      November 11, 2013 - Preliminary Job Run Testing Begins
·      November 22, 2013 - System Testing/Software Improvements/Preliminary Job Run Testing Complete
·      December 4, 2013 - System Opens to Investors
·      January 6, 2014 - System Opens to Non-Investors
·      March 3, 2014 - All Shared Storage Available on Both Helium & Neon

Will non-investors have access to the Neon system? – Yes, the University of Iowa has committed funds to create a UI queue similar to that on Helium.

Can I still buy compute nodes in the Neon system? – Yes. Node purchases are available until July 31, 2014, or until all available physical space has been filled.

Will the Helium cluster be retired when the Neon system becomes available? – No, we anticipate that Helium will remain available through at least the end of 2016.


System Configuration

What are the major differences between Neon & Helium? - The Neon system runs a newer operating system, CentOS 6 (a version of Linux). Neon also uses newer, faster processors, its nodes contain more memory, and accelerator cards are more common.

What resources will be available in the UI queue on Neon? - Numbers may still change, but the configuration will be similar to the following:

            Qty 28 Standard Nodes (64 GB memory, 16 cores, 2.6 GHz) – 448 cores
            Qty 5 High-Memory Nodes (512 GB memory, 24 cores, 2.9 GHz) – 120 cores
            Qty 4 Xeon Phi Accelerators
            Qty 2 NVIDIA K20 GPU Accelerators
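
For illustration, below is a minimal sketch of submitting a batch job to the UI queue. It assumes Neon uses the same Grid Engine-style scheduler as Helium (the qlogin command mentioned later in this FAQ is a Grid Engine command), and myjob.sh is a hypothetical job script.

    # Sketch only: assumes a Grid Engine scheduler and a queue named UI, as on Helium.
    qsub -q UI myjob.sh    # submit the hypothetical script myjob.sh to the UI queue
    qstat -u $USER         # check the status of your pending and running jobs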

How many cores will the Neon system have? – At launch, the system will have 2,600 standard cores and 1,800 Xeon Phi accelerator cores.

Will the software on Neon be the same as on Helium? – No. The software stack will be similar, but Neon runs CentOS 6 instead of CentOS 5, so many applications need to be rebuilt or updated (see the Software section below).
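
Once logged in, you can confirm which release a node is running by reading the standard CentOS release file:

    cat /etc/redhat-release    # prints, for example, "CentOS release 6.x (Final)" on Neon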


How do I access the Neon system? - The Neon system is primarily accessed using the SSH protocol. We also run FreeNX (NoMachine) on the system to provide a remote desktop graphical environment.
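
As an illustration, a terminal login might look like the following. The hostname neon.hpc.uiowa.edu is an assumption patterned on Helium's naming, and hawkid stands in for your University of Iowa HawkID; use the login address published in the cluster documentation.

    # Assumed hostname; replace with the published login address.
    ssh hawkid@neon.hpc.uiowa.edu
    # Add -X to forward X11 graphics for lightweight GUI applications:
    ssh -X hawkid@neon.hpc.uiowa.edu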

Can I run interactive jobs on Neon compute nodes? - Yes. While you are required to submit jobs through the batch scheduler, you can request an interactive session using the qlogin command: https://wiki.uiowa.edu/display/hpcdocs/Qlogin+for+Interactive+Sessions
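
A minimal interactive-session sketch follows; the queue name is illustrative and the defaults on Neon may differ, so consult the wiki page above for the supported options.

    qlogin          # request an interactive session with default resources
    qlogin -q UI    # illustrative: request a session in a specific queue
    exit            # leave the session and release the compute node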


Software

Will all of my software be installed by default on Neon? – No. Because of the operating system change, a significant portion of applications require updates, so very few applications will be carried over from Helium by default. Once the Neon system is available, users may request additional software installs. We anticipate these requests will follow the already published guidelines here:

http://hpc.uiowa.edu/resources/software

What software will be installed by default on Neon?

  • CMake 2.8.11
  • GCC 4.4 & 4.8 series (includes GNU Fortran compilers)
  • FFTW 3.3.3
  • Git
  • Intel Composer 2013 (compilers + MKL)
  • IOzone
  • iperf
  • Java 1.7.0_13
  • MATLAB R2013b
  • NetCDF
  • Open MPI 1.6.2-1 (tentative version)
  • Python 2.6.6-29
  • R 3.0.2
  • Subversion (svn)
  • Tecplot 2013R1
  • TotalView 8.8
  • Valgrind
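
Assuming Neon exposes these packages through environment modules as Helium does (an assumption; the module name below is illustrative), a typical workflow would be:

    module avail           # list the software modules installed on the cluster
    module load R/3.0.2    # illustrative module name: add R to your environment
    module list            # show the modules currently loaded
    R --version            # confirm the loaded version is on your PATH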


Storage

When will shared storage on Helium be available on Neon? - Our current timeline is March 3, 2014. If you have a strong need for this functionality, please contact hpc-sysadmins@iowa.uiowa.edu and we will attempt to prioritize your file share.

Will home and scratch storage be shared between Helium and Neon? – No. Each system will have its own home accounts and scratch storage space.
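
Because the home areas are separate, any data you need on both clusters must be copied over explicitly. A sketch using rsync follows; helium.hpc.uiowa.edu is an assumed hostname, and hawkid and ~/project are placeholders.

    # Run from a Neon login node; hostname and paths are placeholders.
    rsync -av hawkid@helium.hpc.uiowa.edu:~/project/ ~/project/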

Can I use my RDSS 3TB file share with the HPC clusters? - No. RDSS is not intended for use with high-performance systems and cannot be used directly with the HPC clusters.

Will performance of my shared storage be poor if it is not located near Neon (for example, shared storage housed near Helium while computing on Neon)? - No, we expect performance to be very similar. Our initial testing has not shown significant performance differences, and we will continue to monitor this as the system moves into production.
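
If you would like to spot-check write throughput on a particular share yourself, a rough test with dd is sketched below; /path/to/share is a placeholder mount point and the file size is arbitrary.

    # Writes a 1 GB test file and reports throughput; remove the file afterward.
    dd if=/dev/zero of=/path/to/share/testfile bs=1M count=1024 conv=fsync
    rm /path/to/share/testfile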