Using The Grid

After your account has been created, you may access the environment via either SSH or SFTP:

  • Hostname:
  • Username: Your NetID
  • Password: Your NetID password

All jobs must be run through the Sun Grid Engine (SGE) queue. Jobs that are not submitted through the queue will be killed. Don't worry if you are running MPI (Message Passing Interface) jobs, you will be able to run those just fine through the queue (see: Going Parallel).

  • You may assemble/compile/link programs on the grid, but these are the only exceptions to the rule.
  • Please do not compile OpenOffice on the grid.
  • Instructions are provided in Bash Command Basics on how to compile and link programs using the 'make' utility.

You have an allocated amount of space on the grid based on your account.

You can manipulate your code and jobs on the Grid using Shell commands as well as Sun Grid Engine (SGE) specific commands.  Jobs can also be run on multiple CPUs (parallel programming).  Parallel environments will be discussed below.

Three important commands for the grid are qsub, qstat, and qdel, which will be discussed in detail.

  • qsub - creates a job from commands
  • qstat - gathers information on running jobs
  • qdel - deletes a job

For help with Bash commands refer to the Bash Commands Basics section.

Setup a Work Space

There are disk quotas in place on the grid. You can check your current quota usage at any time (this is just a sample; your quota may differ):

[user@hnode ~]$ quota
Size    Used    Avail    Use%    Mounted
1.0G    259M    765M     26%     /home/user
10G     899M    9.1G     9%      /scratch/user


This user’s home directory is currently limited to 1GB, but there is a /scratch area with a higher quota of 10GB. For that reason, this user would probably prefer to run any jobs that use or generate large data sets from /scratch, e.g.:

[user@hnode ~]$ cd scratch
[user@hnode scratch]$ mkdir sge
[user@hnode scratch]$ cd sge
[user@hnode sge]$ qsub large_data_job.sge

Key: cd - change directory; mkdir - create directory; qsub - create job from command

Simple Examples

Your First Job

We will start with the qsub command (an SGE command), which creates jobs from the commands you give it; the -cwd option, introduced below, directs job output to your working directory. True to programmer form, the first example is "hello world".

[user@hnode sge]$ qsub
echo hello world
Your job 1159 ("STDIN") has been submitted

Key: echo - prints text on screen

qsub reads in bash commands (from the keyboard, terminated with Ctrl-D, or from a file) and will create a job from those commands. SGE will run the command and write its output to our home directory, instead of our current working directory (we will fix this in the next example).

[user@hnode sge]$ ls
### Nothing in this directory?!
[user@hnode sge]$ ls ~            # the '~' is your home directory
[user@hnode sge]$ cat ~/STDIN.o1159
hello world
[user@hnode sge]$

Key: ls - list files in a directory; cat - list contents of a file

We can see that SGE did run the job we told it to run, simple as it may be. It even collected the output written to STDOUT into a file labeled STDIN.o1159. The base of the output file name comes from the source of your input: if your input is a script file, the output uses that script's file name; in this case we issued the commands via STDIN, so the job itself was named STDIN. At the end we see o1159. The "o" is for output, as in STDOUT, and there is a companion file ending in e1159, the "e" being for STDERR (error). The number 1159 is the unique Job ID (JID) that SGE assigned to this job, so the next job submitted should get 1160.
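The naming scheme can be sketched in plain bash (no SGE involved; the values simply mirror the example above):

```shell
job_name="STDIN"   # taken from the script file name, or STDIN for keyboard input
job_id=1159        # the unique Job ID assigned by SGE
echo "${job_name}.o${job_id}"   # name of the STDOUT file
echo "${job_name}.e${job_id}"   # name of the STDERR file
```

Running this prints STDIN.o1159 and STDIN.e1159, matching the files SGE created.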

Now what about that file placement? SGE will run jobs in your home directory unless you explicitly issue it the -cwd flag (an add-on to the main command). Then it will use the CWD (Current Working Directory) of the shell (user interface) when the qsub command is called. Let’s try that again, this time with the -cwd flag.

[user@hnode sge]$ qsub -cwd
echo hello world again
Your job 1160 ("STDIN") has been submitted
[user@hnode sge]$ ls
STDIN.e1160 STDIN.o1160
[user@hnode sge]$ cat STDIN.o1160
hello world again
[user@hnode sge]$


Much better, we got our file output to the current working directory.

Note that SGE is set up to schedule at intervals, so it can take as long as 15 seconds for a queued job to begin.

Scripting Jobs

Now we want to create a job file that we can submit. We do this by writing a simple bash script: a plain text file containing commands which are executed when the script runs on the Grid. The only difference between an SGE script and a regular bash script is that you can put SGE options in comments that begin with #$. Bash will ignore them, because they are comments, but SGE will still parse them to get information about how to run the job.

Let's remove the files we aren't using:

[user@hnode sge]$ ls
STDIN.e1160 STDIN.o1160
[user@hnode sge]$ rm STDIN.*

Key: rm - remove file; * - wildcard that matches any sequence of characters

Now we'll create a simple bash script called ex1.sge:

#$ -cwd
echo hello world

Then we will submit the file to run on the Grid.

[user@hnode sge]$ ls
ex1.sge
[user@hnode sge]$ qsub ex1.sge
Your job 1161 ("ex1.sge") has been submitted
[user@hnode sge]$ ls
ex1.sge ex1.sge.e1161 ex1.sge.o1161
[user@hnode sge]$ cat ex1.sge.o1161
hello world
[user@hnode sge]$


Other useful options (add-ons to commands) for your submission script are:

  • -N: Overrides the default behavior of naming jobs based on the submission script name.
  • -M (email address): Sends notifications about the job to the provided email address; the -m option controls when mail is sent.
  • -m (b|e|a|s|n): Changes the mailing preferences
    • b: Mail is sent when the job begins
    • e: Mail is sent when the job ends
    • a: Mail is sent if the job is aborted or rescheduled
    • s: Mail is sent if the job gets suspended
    • n: No mail is sent
  • -pe: Specifies how jobs should be distributed to multiple nodes. The various parallel environments are discussed in detail in the Going Parallel section.
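For example, a submission script combining several of these options might look like the following sketch (the job name and email address are placeholders, not real values):

```shell
#$ -cwd
#$ -N my_analysis            # run under the name "my_analysis" (placeholder)
#$ -M someone@example.edu    # where to send mail (placeholder address)
#$ -m ea                     # mail when the job ends or is aborted
echo "Job started"
```

Because the #$ lines are ordinary comments to bash, this file is still a valid shell script.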

Gathering Info on Running Jobs

Let’s assume we have a job which will take a little longer, a couple of minutes. We may want to check on that job to make sure it is running OK and not crashing. We might also want to see how long it has been running, or whether it has started at all. To gather information on a running job, use the qstat command. We will create a new simple job, ex2.sge, which takes 30 seconds to run.


#$ -cwd
echo "Starting up"
sleep 30
echo "Shutting down"


[user@hnode sge]$ ls
ex1.sge ex1.sge.e1161 ex1.sge.o1161 ex2.sge
[user@hnode sge]$ qsub ex2.sge
Your job 1162 ("ex2.sge") has been submitted
[user@hnode sge]$ qstat
job-ID  prior    name     user  state  submit/start at       queue                     slots  ja-task-ID
1162    0.55500  ex2.sge  user  r      07/17/2007 11:30:02   CTW.q@compute-0-3.local   1


Here we can see the ID that was given to us when we submitted the job (for easy identification), the name of the submission script, the user, the state (r = running; qw = queued, waiting), and the number of processors (slots) required for this job. We can also see which specific queue and server it is running on; in this case the user is part of the CTW project, so the job runs in the CTW queue, on node compute-0-3.

Also notice the CTW queue: CTW is this user's project-specific testing queue. Most people will use either Common.q or a queue specific to their research group; in any case, the queuing system automatically selects the queue that will best serve you.

In this example we can see a job that is still in the queue, waiting to run:

[user@hnode sge]$ qstat
job-ID    prior    name    user    state    submit/start    at    queue
1163      0.00000  ex2.sge user    qw       07/17/2007    11:33:28


We can get a fuller listing of all the queues/nodes, and what is running on each, by adding the -f option: qstat -f. We can of course see our own running job; in this case it landed on compute-0-13.

[user@hnode sge]$ qstat -f
queuename                qtype  used/tot.   load_avg   arch          states
CTW.q@compute-0-0.local  BIP    0/4         0.47       lx24-amd64
CTW.q@compute-0-1.local  BIP    0/4         3.00       lx24-amd64
CTW.q@compute-0-12.local BIP    0/4         -NA-       -NA-         au
CTW.q@compute-0-13.local BIP    1/4         0.00       lx24-amd64
   1164 0.55500 ex2.sge  user     r     07/17/2007     11:34:32     1
CTW.q@compute-0-2.local  BIP    0/4         0.52       lx24-amd64


If for some reason a job should fail to run, and you want to know more about why it failed, the qstat -j command (followed by the job's ID) will show you more information about your job, including helpful error messages. If you have a job which continually fails to run, it is best to email the output of qstat -j to the sysadmin, along with the working directory and the command which was used to start the job.

[user@hnode sge]$ qsub ex2.sge
Your job 1165 ("ex2.sge") has been submitted
[user@hnode sge]$ qstat
job-ID    prior    name    user    state    submit/start    at            queue
1165      0.00000  ex2.sge user    qw       07/17/2007      11:40:19
[user@hnode sge]$ qstat -j 1165
job_number:                1165
exec_file:                 job_scripts/1165
submission_time:           Tue Jul 17 11:40:19 2007
owner:                     user
uid:                       501
group:                     user
gid:                       501
sge_o_home:                /home/user
sge_o_log_name:            user
sge_o_path:                /share/apps/sge/bin/lx24-amd64:/usr/pbs/bin:/usr/pbs/sbin:/home/user/bin:/usr/kerberos/bin:/usr/java/jdk1.5.0_07/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/opt/Bio/ncbi/bin:/opt/Bio/mpiblast/bin/:/opt/Bio/hmmer/bin:/opt/Bio/Emboss/bin:/opt/Bio/clustalw/bin:/opt/Bio/t_coffee/bin:/opt/Bio/phylip/exe:/opt/Bio/mrbayes:/opt/Bio/fasta:/opt/Bio/glimmer/bin://opt/Bio/glimmer/scripts:/opt/Bio/gromacs/bin:/opt/eclipse:/opt/ganglia/bin:/opt/ganglia/sbin:/opt/maven/bin:/opt/openmpi/bin/:/opt/rocks/bin:/opt/rocks/sbin:/opt/intel/fce/9.1.036/bin/:/usr/pbs/bin/:/opt/intel/fce/9.1.036/bin/:/sbin:/home/user/bin
sge_o_shell:               /bin/bash
sge_o_workdir:             /home/user/scratch/sge
sge_o_host:                hnode
account:                   sge
cwd:                       /home/user/scratch/sge
path_aliases:              /tmp_mnt/ * * /
mail_list:                 user@hnode.local
notify:                    FALSE
job_name:                  ex2.sge
jobshare:                  0
script_file:               ex2.sge
project:                   CTW.proj
scheduling info:           queue instance "all.q@compute-1-29.local" dropped because it is temporarily not available


As you can see, qstat -j gives you a wealth of information about the job in question.

Cleaning Up Bad Jobs

Once in a while you may submit a job that you didn't really want to submit. At any time while it is in the queue or while it is running, you can destroy that job with the qdel command.

[user@hnode sge]$ qsub ex2.sge
Your job 1166 ("ex2.sge") has been submitted
[user@hnode sge]$ qstat
job-ID    prior    name    user    state    submit/start    at    queue
1166      0.00000  ex2.sge user    qw       07/17/2007      11:48:50
[user@hnode sge]$ qdel 1166
user has deleted job 1166
[user@hnode sge]$ qstat
[user@hnode sge]$


Note of course that you can only remove your own jobs. Only the system administrator can delete jobs that belong to other people.

If you have submitted a lot of jobs, and you want to remove every job submitted under your name, the following shell commands can be paired with the SGE binaries to help automate the cleanup:

  • qstat -u user | sed 1,2d | awk '{print $1}'
    • Get a list of jobs owned by the given username (qstat)
    • Remove the two header lines (sed)
    • Print only the Job ID column (awk)
  • for x in $(qstat -u user | sed 1,2d | awk '{print $1}'); do qdel $x; done
    • For each of those Job IDs (for $())
    • Remove it from the queue (qdel)

[user@hnode sge]$ for x in $(seq 1 5); do qsub ex2.sge; done
Your job 1167 ("ex2.sge") has been submitted
Your job 1168 ("ex2.sge") has been submitted
Your job 1169 ("ex2.sge") has been submitted
Your job 1170 ("ex2.sge") has been submitted
Your job 1171 ("ex2.sge") has been submitted
[user@hnode sge]$ qstat -u user
job-ID    prior    name    user    state    submit/start    at    queue
1167      0.00000  ex2.sge user    qw       07/17/2007      11:51:37
1168      0.00000  ex2.sge user    qw       07/17/2007      11:51:37
1169      0.00000  ex2.sge user    qw       07/17/2007      11:51:37
1170      0.00000  ex2.sge user    qw       07/17/2007      11:51:37
1171      0.00000  ex2.sge user    qw       07/17/2007      11:51:37
[user@hnode sge]$ qstat -u user | sed 1,2d | awk '{print $1}'
1167
1168
1169
1170
1171
[user@hnode sge]$ for x in $(qstat -u user | sed 1,2d | awk '{print $1}'); do qdel $x; done
user has registered the job 1167 for deletion
user has registered the job 1168 for deletion
user has registered the job 1169 for deletion
user has registered the job 1170 for deletion
user has registered the job 1171 for deletion
[user@hnode sge]$ qstat
[user@hnode sge]$

Going Parallel

One of the big goals of a cluster is the ability to run one job with many different CPUs or nodes working together. Such a program is called a parallel program. All queuing software packages include support for running jobs which require multiple CPUs or nodes, and in the case of SGE this is called a "Parallel Environment". Parallel Environments (PEs) can be used for systems like MPI, or for your own custom parallel jobs. What follows is a description of the various PEs. One thing that needs to be made clear is how to specify the parallel environment: in the run script, add the option -pe <pe_name> <n>, where <pe_name> is the identifier of the PE and <n> is the number of slots requested.
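As a concrete sketch, a script requesting four slots from a PE named mpich (the name and count here are illustrative; available PE names vary by site) could look like:

```shell
#$ -cwd
#$ -pe mpich 4    # request 4 slots from the "mpich" PE (illustrative values)
# SGE sets $NSLOTS to the number of slots actually granted.
echo "Running with ${NSLOTS:-unset} slots"
```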


This environment is most similar to running on a multi-core machine. The number specified after it is the number of cores to allocate on a single machine. The job will not begin until a machine with the specified number of cores becomes available, meaning that a job can potentially never run if you don't have access to an appropriate machine.


This environment is for allocating mpich (an MPI variant) jobs. mpich uses a fill-and-spill allocation, meaning that all available slots on a machine are taken before the rest spill over to another node. So if 5 slots were requested on several dual-slot machines, the end result would be two full machines and one with only one slot taken; the first machine would have both slots taken before any other machine was queried.


The orte environment is used for OpenMPI jobs. Like mpich, it uses fill-and-spill allocation. The OpenMPI compiler is located at /usr/lib64/openmpi/1.3.2-gcc/bin/mpiCC and the associated mpirun command is /usr/lib64/openmpi/1.3.2-gcc/bin/mpirun.


This environment is identical to orte, except that instead of fill-and-spill it uses a round-robin scheme to fill slots: slots are filled one per machine until all machines have one slot filled, then the next slot on each machine is filled, until all requested slots are provided.

It is important that when mpirun is called, it is passed -np $NSLOTS as an argument. This ensures that MPI internally uses the correct number of slots to run your program.
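Putting the pieces together, a minimal OpenMPI submission script might look like this sketch (the program name and slot count are placeholders; the mpirun path is the one given above):

```shell
#$ -cwd
#$ -pe orte 8    # 8 slots from the orte PE (illustrative count)
/usr/lib64/openmpi/1.3.2-gcc/bin/mpirun -np $NSLOTS ./my_mpi_program
```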


Diagnosing Crashes

Sometimes, a job will crash. There are scores of reasons why this can happen, but a few common culprits can be easily diagnosed and avoided, such as a job run against a file or directory that doesn’t exist. The qstat command, with the right options, can give information about crashes: the letter "E" in a job's state indicates an error, and qstat -j shows a more complete report of the error.

Silent Crashes

Usually a job will fail without the queue being aware of any problems. The queue will simply see the job script terminate and will remove the job from the queue. In this case you can find more information about what happened by looking in the output and error output files.

So if I run a job from the file job.sge, and it gets job number 12345, then there will be two files created, job.sge.o12345 and job.sge.e12345. Both may have information relevant to the failure of the job, but most common errors will end up in job.sge.e12345.
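A quick way to triage a directory full of finished jobs is to print every non-empty STDERR file (plain bash; the job.sge.e12345-style names follow the convention described above):

```shell
# Print the name and contents of each non-empty error file
# in the current directory; empty ones are skipped.
for f in *.e*; do
    [ -s "$f" ] && { echo "== $f =="; cat "$f"; }
done
```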

Example 1

Assume we want to run the following script. However, the file (an example file) does not exist.


#$ -cwd



bash: No such file or directory

Example 2

Same situation, but this time changing directory to a non-existent directory.


#$ -cwd

cd foobar/


/share/apps/sge/unr_research/spool/compute-0-3/job_scripts/5411023: line 4: cd: foobar/: No such file or directory

Verbose Crashes

Some job crashes will be detectable by the queuing system. In this case, it will leave the job in the queue and set its status to Eqw, for "Error, queue-waiting". We can get more information about the job by using the qstat command with the -j flag.

[user@hnode sge]$ qstat -u '*'
job-ID    prior    name    user    state    submit/start    at    queue
#######   0.00000  ******  ******  Eqw      03/17/2008      15:31:51


[user@hnode sge]$ qstat -j #######
job_number:                 539####
exec_file:                  job_scripts/539####
submission_time:            Mon Mar 17 15:31:51 2008
owner:                      ******
uid:                        ###
group:                      ******
gid:                        ###
sge_o_home:                 /home/*****
sge_o_log_name:             *****
sge_o_path:                 /share/apps/sge/bin/lx24-amd64:/usr/kerberos/bin:/usr/java/jdk1.5.0_07/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/opt/Bio/ncbi/bin:/opt/Bio/mpiblast/bin/:/opt/Bio/hmmer/bin:/opt/Bio/Emboss/bin:/opt/Bio/clustalw/bin:/opt/Bio/t_coffee/bin:/opt/Bio/phylip/exe:/opt/Bio/mrbayes:/opt/Bio/fasta:/opt/Bio/glimmer/bin://opt/Bio/glimmer/scripts:/opt/Bio/gromacs/bin:/opt/eclipse:/opt/ganglia/bin:/opt/ganglia/sbin:/opt/maven/bin:/opt/openmpi/bin/:/opt/rocks/bin:/opt/rocks/sbin:/opt/intel/fce/9.1.036/bin/:/usr/pbs/bin/:/share/apps/hex/bin:/home/*****/bin
sge_o_shell:                /bin/bash
sge_o_workdir:              /scratch/*****/sc031708/2NCD/output/00000317_run
sge_o_host:                 hnode
account:                    sge
cwd:                        /scratch/*******/sc031708/2NCD/output/00000317_run
path_aliases:               /tmp_mnt/ * * /
mail_list:                  ******@hnode.local
notify:                     FALSE
job_name:                   *********.job
jobshare:                   0
script_file:                *********.job
error reason 1:             03/18/2008 10:07:01 [564:1473]: error: can't chdir to /scratch/******/sc031708/2NCD/output/00000317_
scheduling info:            queue instance "Benchmark.q@compute-20-0.local" dropped because it is temporarily not available
                            queue instance "Benchmark.q@compute-20-11.local" dropped because it is temporarily not available
                            queue instance "Benchmark.q@compute-20-3.local" dropped because it is temporarily not available
                            queue instance "Benchmark.q@compute-21-6.local" dropped because it is temporarily not available

Now we can see from the "error reason 1:" line what went wrong. In this case, the queue reports that it is trying to run from a non-existent directory. There are various things that could have caused this, so there is still some investigation to be done, but now we know where to start.