SCS Clusters

Information on access, software modules, queues, job submission, quotas, storage, and X-forwarding for the SCS computational cluster (lop).

Introduction and Access

SCS maintains a Linux computational cluster, lop (FQDN: lop.scs.illinois.edu), which is available to anyone in the School of Chemical Sciences for instruction or research. There is currently no charge associated with this service. To get access, fill out the form located here: https://go.scs.illinois.edu/cluster-account . Once you have access, you will log in with your NetID and password (the same password used for e-mail, VPN, etc.) using the SSH protocol.
To reach the cluster you must either be on campus or connected to the VPN.
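
For example, from a terminal on campus or on the VPN (yournetid below is a placeholder; replace it with your own NetID):

  ssh yournetid@lop.scs.illinois.edu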

Accessing software:

While some of the installed software will just “work” when you run it after logging in, most packages require you to load a module first. To do so (a sample session follows this list):

  1. Type this command to see the available modules to load
    1. module avail
  2. To load a specific module, type the following command (gaussian is used as an example, replace with the one you want)
    1. module load gaussian/g16
    2. If you get a message such as "ERROR: Module 'X' depends on one of the modules 'Y'", you need to load module(s) Y first.
  3. To see which modules you currently have loaded
    1. module list
  4. To unload a module (to load a different version, for example)
    1. module unload gaussian/g16
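
A typical session, using the gaussian/g16 module from the example above, might look like this:

  module avail                 # see what is available
  module load gaussian/g16     # load Gaussian 16
  module list                  # confirm what is loaded
  module unload gaussian/g16   # unload it when finished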

If there is software not currently on the cluster that you would like installed, e-mail scs-help@illinois.edu and ask whether that is possible.

Queues on Lop:

amd16smt: This queue is composed of dual-socket AMD Opteron nodes with 4 physical cores per socket and SMT (Simultaneous MultiThreading) enabled, yielding 16 cores per node. Each node has 64 GB of memory.
gpu1: This queue is composed of nodes populated with NVIDIA GPUs for use in CUDA calculations. Compute-1-20 contains 4x GTX 980s. Compute-1-21 has 4x TITAN Xs. This queue is only to be used for CUDA calculations. Never submit a CPU compute job to this queue.
gpu2: This queue is composed of nodes populated with NVIDIA GPUs for use in CUDA calculations. Compute-1-22 and Compute-1-23 both have 4x Tesla K80s. This queue is only to be used for CUDA calculations. Never submit a CPU compute job to this queue.
ib1: This queue is composed of dual-socket AMD Opteron nodes with 4 physical cores per socket and SMT enabled, for 16 cores total. Nodes have 64 GB of memory. They are connected together via InfiniBand for multi-node calculations (e.g. MPI). NOTE: InfiniBand is currently unavailable (10-25-2022).
ib1-bigmem: This queue is just like ib1 except the nodes have SIGNIFICANTLY more memory. A good place for high-memory jobs.
ib2: Same specs as the ib1 queue, but with 24 total cores per node instead of 16.
Intel24: This queue is composed of dual-socket Intel Xeon nodes with 12 physical cores per socket and HyperThreading disabled, for 24 cores total per node. They have 256 GB of memory per node.
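
To direct a job to a particular queue, pass the queue name to qsub (see Job Scheduler below) with the -q option. For example (myjob.sh is a placeholder for your own submission script):

  qsub -q amd16smt myjob.sh     # a CPU job on the amd16smt nodes
  qsub -q gpu2 myjob.sh         # a CUDA job on the gpu2 nodes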

Job Scheduler:

The clusters are installed using Rocks, which uses SGE (Sun Grid Engine) as its job scheduler (see the SGE man pages). All calculations MUST be run through the scheduler.
Some useful commands:
qstat: This will show the status of your jobs in the queue. Adding -f shows the status of all the queues and nodes, and adding -u "*" shows all jobs, so combined: qstat -f -u "*" . Seeing all jobs can be helpful for showing you what is currently available.
qdel: This is used with a job ID to cancel a job, for example: qdel 12345. If that doesn't work, try adding -f: qdel -f 12345. If you still cannot cancel a job, contact scs-help@illinois.edu and we can cancel it.
qsub: This is how you submit a job. Read through the man page for all the options.
 
NEVER run a job directly on the headnode, lop. Always submit the job to the queue.
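
A minimal example submission script is sketched below. The queue and module are taken from the sections above; the job name, script name, and input file are placeholders to adjust for your own calculation:

  #!/bin/bash
  #$ -N g16_example          # job name (placeholder)
  #$ -q amd16smt             # queue to run in (see "Queues on Lop" above)
  #$ -cwd                    # run from the directory the job was submitted from
  #$ -j y                    # merge stdout and stderr into one output file

  module load gaussian/g16   # load the software the job needs
  g16 input.com              # run the calculation (input.com is a placeholder input file)

Save it as, for example, g16_example.sh, submit it with qsub g16_example.sh, and check its progress with qstat.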

Quotas:

There is currently a 148-core per-user limit enforced on Lop to ensure that resources are available to all. In the future, SCS Computing plans to make this an adaptive quota so that resources are not left idle if only a few people are trying to use them, but for now it is static.
If you have a pressing need to go above the quota, contact us at scs-help@illinois.edu and we will evaluate the request on a case-by-case basis.
The command qquota will show how many cores you are currently using. NOTE: There is no output if you have no jobs running.

Job Duration:

Any job running on the cluster for longer than 7 days is subject to termination without warning. We will attempt to contact the user ahead of time to see if an extension can be worked out, but if the situation is time sensitive this may not be possible. If you have a job or two that you need to run for an extended period of time, contact scs-help@illinois.edu to request an exception. This exception WILL NOT be granted for more than two jobs at a time.

Storage:

Each user has a quota of 250 GB on Lop. If more storage is required beyond that (for example, for large data sets being calculated), contact scs-help@illinois.edu to see if more can be allocated on a case-by-case basis. Also remember the Storage Policy - Home Directories from the access form.
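
For a rough check of how much of that space your home directory is using, standard Linux tools work, for example:

  du -sh ~    # total size of your home directory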

Students in courses are treated differently from SCS students, staff, and faculty: their access to Lop will be revoked after the end of finals for the semester and all of their data will be deleted. If a student wishes to keep anything, it must be copied off of lop before the end of the semester.

Software Tutorials: 

Below are links to tutorials for software on the cluster. Many of these are out of date, but we are working to make them current:

Unix/Linux Primer

Intro to Gaussian Part I

Intro to Gaussian Part II

Tutorial - Quantum Chemistry - Simulating Vibrationally-resolved Electronic Spectra Using Gaussian 

Determining the pKa of Simple Molecules using Gaussian

Referencing for Research:

When writing a paper based on research done using the SCS Cluster, please add a statement similar to "We are grateful to the School of Chemical Sciences Computing for support and access to the SCS HPC Cluster."

X-forwarding:

A few software packages have a graphical interface, so you will need to set up X forwarding to use them.

Windows Instructions:

MobaXterm seems to work best, requiring little configuration right after install: https://mobaxterm.mobatek.net/ . Just install it, put lop.scs.illinois.edu into the Quick Connect box, and you're off. See below for instructions on getting Xming to work, though it has not been as reliable since the end of 2020.

  •  Xming-mesa instructions 


      1. Download and install Xming-mesa (not regular Xming). It can be found here: https://sourceforge.net/projects/xming/files/Xming-mesa/
      2. Run the installer
      3. Open Notepad as administrator
      4. In Notepad, click File –> Open and open the X0.hosts file in the Xming install directory
      5. Under localhost, on a new line, put in 130.126.43.205 (this is the IP for lop). Save the file and close Notepad
      6. Download and install PuTTY, found here: https://the.earth.li/~sgtatham/putty/latest/w64/putty-64bit-0.74-installer.msi
      7. Launch xming
      8. Launch putty
      9. Expand the SSH tree under Connection and click on X11. Check the box for "Enable X11 forwarding"
      10. Click on the top option on the right "Session" 
      11. In the Host Name box put lop.scs.illinois.edu then click the Open button
      12. A box will pop up prompting you to "login as" – put in your NetID and hit Enter
      13. It will then prompt you for a password. Use your NetID password. Nothing will show as you type; this is standard Unix behavior.
      14. You will get a prompt. You are now connected to Lop.

Mac Instructions:

  1. Download and install XQuartz 2.7.7 (do not get any other version; the others don't seem to work with GaussView)
  2. Install the downloaded dmg
  3. Reboot your computer entirely
  4. Log back in after the reboot and launch XQuartz. DO NOT RUN UPDATES; always decline.
  5. If an xterm window did not open when you launched XQuartz, right click/control click on it in the Dock and launch xterm
  6. In the opened xterm type: ssh -X netid@lop.scs.illinois.edu
  7. Put in your NetID password when prompted
  8. This will give errors about being unable to connect. That is fine; this step seems to set something in the user profile
  9. Load the Gaussian module (module load gaussian/g16) and then type gv. This will fail, saying it cannot connect to the X server
  10. Type exit to disconnect
  11. Type ssh -Y netid@lop.scs.illinois.edu
  12. Load the Gaussian module and launch gv. This should bring up the GaussView window properly.
  13. If it does not work, open a new xterm window. In this window type the following (without the quotes): "xhost +local:" then repeat steps 6-12
  14. If the above steps do not work, try making a brand new account on your Mac and follow the above steps under it to see if it works there
  15. If it still does not work, contact scs-help@illinois.edu 
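
Once X forwarding is working, a typical session (just the terminal commands from the steps above, with netid replaced by your own NetID) looks like:

  ssh -Y netid@lop.scs.illinois.edu
  module load gaussian/g16
  gv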


