
SCS Clusters

Information on the SCS Cluster, Lop

 

Introduction and Access

SCS maintains a Linux computational cluster, lop (FQDN: lop.scs.illinois.edu), which is available for instruction or research to anyone in the School of Chemical Sciences at UIUC.

To get access, fill out the form located here: https://go.scs.illinois.edu/cluster-account. Although the form asks for a CFOP (University account number), there is currently no charge associated with this service.

Once you have access, you will log in with your NetID and password (the same password used for e-mail, VPN, etc.) using the SSH protocol. To reach the cluster, you must either be connected to a campus network (wired or wireless) or be using the campus VPN, as SSH is not allowed through the campus firewall.
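
For example, from a terminal on a campus network or the VPN ([netid] is a placeholder for your own NetID):

    ssh [netid]@lop.scs.illinois.edu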

Using software on lop

While some of the installed software will just “work” when you try to run it after logging in, most packages require you to load a module first. To do so:

  1. module avail
    • Shows you all available modules

  2. module load [module name]
    • Used to load a specific module. Example:  module load gaussian/g16
    • NOTE: If you get a message such as "ERROR: Module 'X' depends on one of the modules 'Y'", this means that you need to load module(s) 'Y' first, then load module 'X'.

  3. module list
    • Used to see which modules you currently have loaded

  4. module unload [module name]
    • Used to unload a module - useful if you want to load a different version.  Example:  module unload gaussian/g16
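
Put together, a typical session might look like the following (using gaussian/g16 from the examples above; substitute whichever module you need):

    module avail                  # list all available modules
    module load gaussian/g16      # load Gaussian 16
    module list                   # confirm which modules are loaded
    module unload gaussian/g16    # unload before loading a different version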

If there is any software you would like installed that is not currently on the cluster, e-mail scs-help@illinois.edu to ask whether that is possible.

Queues on Lop

  • amd16smt: This queue is composed of dual-socket AMD Opteron CPUs with 4 physical cores each and SMT (Simultaneous MultiThreading) enabled, yielding 16 cores per node. Each node has 64 GB of memory.

  • gpu1: This queue is composed of nodes populated with Nvidia GPUs for use in CUDA calculations. Compute-1-20 contains 4x GTX 980s; Compute-1-21 has 4x TITAN Xs. This queue is only to be used for CUDA calculations. Never submit a CPU compute job to this queue.

  • gpu2: This queue is composed of nodes populated with Nvidia GPUs for use in CUDA calculations. Compute-1-22 and Compute-1-23 both have 4x Tesla K80s. This queue is only to be used for GPU calculations. Never submit a CPU compute job to this queue.

  • ib2: This queue is composed of dual-socket AMD Opteron CPUs with 4 physical cores each and SMT enabled, for 24 total cores. Nodes have 64 GB of memory.

  • intel24: This queue is composed of dual-socket Intel Xeon CPUs with 12 physical cores each and HyperThreading disabled, for 24 cores total per node. They have 256 GB of memory per node.

  • intel72smt: This queue is composed of dual-socket Intel Xeon CPUs with 18 physical cores each and HyperThreading enabled, for 72 cores total per node. They have 96 GB of memory per node.

  • remecmorisato: This queue consists of a GPU node purchased with a donation for Chemistry course development. While the courses that need it are using it, it is restricted to those course members; at all other times it is open for all on Lop to use. It has four NVIDIA A40 GPUs, 512 GB of RAM, and 80 CPU cores (40 physical and 40 logical).

Job Scheduler

The cluster is installed using Rocks, which uses SGE (Sun Grid Engine) as its job scheduler (see the SGE man pages). All calculations MUST be run using the scheduler.

Some useful commands:

  • qstat: This will show the status of your jobs in the queue.
    • To see all jobs, use qstat -f -u "*". This can be helpful for seeing what resources are currently in use.
      • -f shows the status of all the queues and nodes
      • -u "*" shows jobs from all users

  • qdel: This is used with a job ID to cancel a job. Example: qdel 12345. If it doesn't work, try adding -f, as in qdel -f 12345. If you still cannot cancel a job, contact scs-help@illinois.edu and we can cancel it for you.

  • qsub: This is how you submit a job. Read through the man page (man qsub) for all the options. Some useful flags (all case sensitive) are listed below, followed by a sample submission script:

    • -V - this exports all environment variables along with the job. It is recommended to use this for all submissions
    • -q QUEUENAME - this specifies which queue will handle the job
    • -pe NAME # - this specifies which parallel environment to use (NAME) and how many cores to use (#)
    • -l RESOURCE=# - this specifies a resource to request (RESOURCE) and how many to use (#). This is needed for GPU jobs (-l slots_gpu=1)
    • -e FILENAME - writes job errors to the file FILENAME
    • -o FILENAME - writes the standard output of the batch job to FILENAME. This is different from the output of the calculation.
    • -N JOBNAME - sets the name of the running job to JOBNAME.
    • -cwd - Tells the scheduler to use the directory the job is launched from as the working directory.
  • qquota: Shows how many cores you are currently using. NOTE: There is no output if you have no jobs running. See Core-Limitation Quota below for more information.
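
As a sketch, a submission script for the intel24 queue might look like the following. The script name, job name, input file, and parallel environment name are placeholders, not values specific to lop; run qconf -spl on the cluster to list the parallel environments actually configured.

    #!/bin/bash
    #$ -V                 # export all environment variables to the job
    #$ -q intel24         # queue to submit to
    #$ -pe smp 24         # parallel environment and core count (PE name is a placeholder)
    #$ -N myjob           # job name
    #$ -cwd               # run from the directory the job was submitted from
    #$ -o myjob.out       # batch standard output
    #$ -e myjob.err       # batch errors

    module load gaussian/g16
    g16 input.com

Save this as, for example, myjob.sh and submit it with qsub myjob.sh. For a GPU job, you would instead target gpu1 or gpu2 and request a GPU slot, e.g. qsub -q gpu1 -l slots_gpu=1 myjob.sh.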

IMPORTANT: All jobs must be submitted to a queue.  Any job found running on the head node will be killed without warning.

X-forwarding

A few software packages have a graphical interface, so to use them you will need to set up X forwarding.

Windows Instructions:

MobaXterm (recommended)

MobaXterm is the recommended solution on Windows, as it tends to work best right after installation with little configuration. Download MobaXterm from https://mobaxterm.mobatek.net/. Just install it, type lop.scs.illinois.edu into the quick connect box, and you're off.

Xming-mesa instructions

NOTE: Xming-mesa has not been very reliable since an upgrade at the end of 2020.

    1. Download Xming-mesa (not normal Xming). It can be found here: https://sourceforge.net/projects/xming/files/Xming-mesa/
    2. Run the installer
    3. Open Notepad as administrator
    4. Click on File -> Open in Notepad and open the "X0.hosts" file in the Xming install directory, as shown in the picture below

[Screenshot: opening X0.hosts in Notepad (scsclustersimage.png)]

    5. Under localhost, on a new line, put in 130.126.43.205 (this is the IP for lop). Save the file and close Notepad
    6. Download and install PuTTY, found here: https://the.earth.li/~sgtatham/putty/latest/w64/putty-64bit-0.74-installer.msi
    7. Launch Xming
    8. Launch PuTTY
    9. Expand the SSH tree under "Connection" and click on X11. Check the box for Enable X11 forwarding
    10. Click on the top option on the left, "Session"
    11. In the Host Name box, put lop.scs.illinois.edu, then click the Open button
    12. A box will pop up prompting you to "login as" – put in your NetID and hit Enter
    13. It will then prompt you for a password. Use your NetID password. Nothing will show as you type; this is standard Unix behavior.
    14. You will get a prompt. You are now connected to Lop.

Mac Instructions:

  1. Download xQuartz 2.7.7 (do not get any other version; the others don't seem to work with GaussView)
  2. Install the downloaded dmg
  3. Reboot your computer entirely
  4. Log back in after the reboot and launch xQuartz. DO NOT RUN UPDATES; always decline.
  5. If an xterm window did not open when you launched xQuartz, right-click/control-click it in the Dock and launch xterm
  6. In the opened xterm type: ssh -X [netid]@lop.scs.illinois.edu 
  7. Put in your netID password when prompted
  8. This will give errors about being unable to connect. This is fine; this step seems to set something in the user profile
  9. Load the gaussian module (module load gaussian/g16) and then type gv. This will fail, saying it cannot connect to the X server
  10. Type exit to disconnect
  11. Type ssh -Y [netid]@lop.scs.illinois.edu
  12. Load the gaussian module and launch gv. This should bring up the GaussView window properly.
  13. If it does not work, open a new xterm window. In this window type the following: xhost +local: and then repeat steps 6-12
  14. If the above steps do not work, try making a brand-new account on your Mac and follow the above steps under it to see if it works there
  15. If it still does not work, contact scs-help@illinois.edu 
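
Condensed, the terminal portion of the steps above looks like this ([netid] is your NetID):

    ssh -X [netid]@lop.scs.illinois.edu    # first connection; connection errors here are expected
    module load gaussian/g16
    gv                                     # will fail to connect to the X server the first time
    exit
    ssh -Y [netid]@lop.scs.illinois.edu    # second connection
    module load gaussian/g16
    gv                                     # GaussView should now open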

Limits for using Lop: 

Core-Limitation Quota:

There is currently a 320-core-per-user limit enforced on Lop to ensure that resources are available to all. Each queue also has an individual limit, which is lower and differs per queue. In the future, SCS Computing plans to make this an adaptive quota so that resources are not left idle when only a few people are trying to use them, but for now it is static.

If you have a pressing need to go above the quota, contact us at scs-help@illinois.edu and we will evaluate the request on a case-by-case basis.

The command qquota will show how many cores you are currently using. NOTE: There is no output if you have no jobs running.

Job Duration:

Any job running on the cluster for longer than 7 days is subject to termination without warning. We will attempt to contact the user ahead of time to see whether an extension can be worked out, but if the situation is time sensitive this may not be possible.

If you have a job or two that you need to run for an extended period of time, contact scs-help@illinois.edu to request an exception. This exception WILL NOT be granted for more than two jobs at a time.

Storage Quota:

Each user has a quota of 250 GB on Lop. If more storage is required beyond that (for example, large data sets being calculated), contact scs-help@illinois.edu to see if more can be allocated on a case-by-case basis. Also remember the Storage Policy - Home Directories from the access form.
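
To see how much of your quota you are currently using, a standard Unix command such as du will work (a generic example; lop may also provide its own quota-reporting tool):

    du -sh ~    # total size of your home directory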

Students in courses are treated differently from SCS students, staff, and faculty using Lop for research. Course students' access to Lop will be revoked after the end of finals each semester, and all of their data will be deleted. If a student wishes to keep anything, they must copy it off lop before the end of the semester.

Software Tutorials: 

Below are links to tutorials for software on the cluster. Many of these are out of date, but we are working to make them current.

Referencing for Research:

When writing a paper from research done using the SCS Cluster, please add a statement similar to:

"We are grateful to the School of Chemical Sciences Computing for support and access to the SCS HPC Cluster."


