<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.umiacs.umd.edu/cbcb/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mbaney</id>
	<title>Cbcb - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.umiacs.umd.edu/cbcb/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mbaney"/>
	<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/cbcb/index.php/Special:Contributions/Mbaney"/>
	<updated>2026-04-12T19:03:40Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.7</generator>
	<entry>
		<id>https://wiki.umiacs.umd.edu/cbcb/index.php?title=Torque&amp;diff=9068</id>
		<title>Torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/cbcb/index.php?title=Torque&amp;diff=9068"/>
		<updated>2024-06-12T13:15:04Z</updated>

		<summary type="html">&lt;p&gt;Mbaney: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Refer to the [https://wiki.umiacs.umd.edu/umiacs/index.php/Nexus/CBCB UMIACS SLURM wiki] instead of this page for current information.&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Mbaney</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/cbcb/index.php?title=CBCB_Software_Modules&amp;diff=9056</id>
		<title>CBCB Software Modules</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/cbcb/index.php?title=CBCB_Software_Modules&amp;diff=9056"/>
		<updated>2021-10-15T18:43:29Z</updated>

		<summary type="html">&lt;p&gt;Mbaney: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Starting in the Spring of 2015, communal CBCB Bioinformatics Software has been installed using GNU Modules.&lt;br /&gt;
== Common Modules ==&lt;br /&gt;
&lt;br /&gt;
CBCB Software modules are already configured for interactive shells on Red Hat 7 machines - no additional setup is required. The module files are installed in the following location:&lt;br /&gt;
 /cbcb/sw/RedHat-7-x86_64/common/modules/release/latest&lt;br /&gt;
&lt;br /&gt;
To see what modules are available:&lt;br /&gt;
 bash$ module avail 2&amp;gt;&amp;amp;1 | less&lt;br /&gt;
or&lt;br /&gt;
 tcsh&amp;gt; module avail |&amp;amp; less&lt;br /&gt;
&lt;br /&gt;
To add a module to your environment, use &amp;lt;code&amp;gt;module add&amp;lt;/code&amp;gt;:&lt;br /&gt;
 $ module add samtools/0.1.19&lt;br /&gt;
&lt;br /&gt;
Note that you can also specify the software name without the version:&lt;br /&gt;
 $ module add samtools&lt;br /&gt;
&lt;br /&gt;
Now samtools has been added to your environment:&lt;br /&gt;
 $ which samtools&lt;br /&gt;
 /cbcb/sw/RedHat-7-x86_64/common/local/samtools/0.1.19/bin/samtools&lt;br /&gt;
&lt;br /&gt;
== User Modules ==&lt;br /&gt;
&lt;br /&gt;
We have created some scripts to assist you with installing software and modules to your user directory in &amp;lt;code&amp;gt;/cbcb/sw&amp;lt;/code&amp;gt;. We recommend using these scripts because they make it easier to share installed software with other CBCB users.&lt;br /&gt;
&lt;br /&gt;
Run this initialization script to set up your directory structure:&lt;br /&gt;
 $ /cbcb/sw/RedHat-7-x86_64/common/scripts/init_cbcb_sw_user.sh&lt;br /&gt;
&lt;br /&gt;
This will create a directory for you:&lt;br /&gt;
 $ tree /cbcb/sw/RedHat-7-x86_64/users/&amp;lt;username&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/cbcb/sw/RedHat-7-x86_64/users/&amp;lt;username&amp;gt;&lt;br /&gt;
├── local&lt;br /&gt;
├── modules&lt;br /&gt;
│   └── &amp;lt;username&amp;gt;&lt;br /&gt;
│       └── env&lt;br /&gt;
├── module_template&lt;br /&gt;
├── README&lt;br /&gt;
├── scripts&lt;br /&gt;
│   ├── copy_module_template.sh&lt;br /&gt;
│   ├── init_install_vars.sh&lt;br /&gt;
│   └── install_package.sh&lt;br /&gt;
└── src&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Use the &amp;lt;code&amp;gt;src&amp;lt;/code&amp;gt; directory for storing and compiling source code.&lt;br /&gt;
* Use the &amp;lt;code&amp;gt;local&amp;lt;/code&amp;gt; directory as the installation prefix.&lt;br /&gt;
* Use the &amp;lt;code&amp;gt;modules&amp;lt;/code&amp;gt; directory to store your modulefiles.&lt;br /&gt;
&lt;br /&gt;
See [[#Installing Software]] below.&lt;br /&gt;
&lt;br /&gt;
To make use of your personal module file directory, add the following to your &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;:&lt;br /&gt;
 module use /cbcb/sw/RedHat-7-x86_64/users/&amp;lt;username&amp;gt;/modules&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Installing Software ===&lt;br /&gt;
&lt;br /&gt;
Use the &amp;lt;code&amp;gt;install_package.sh&amp;lt;/code&amp;gt; script to automatically compile software that uses &amp;lt;code&amp;gt;./configure; make; make install&amp;lt;/code&amp;gt;. The advantages of using this install script are that it will:&lt;br /&gt;
* use the standardized directory structure for personal software installation, and&lt;br /&gt;
* &#039;&#039;&#039;&#039;&#039;automatically&#039;&#039;&#039;&#039;&#039; create a module file for the software in your &amp;lt;code&amp;gt;modules&amp;lt;/code&amp;gt; directory!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For full details, read the &amp;lt;code&amp;gt;README&amp;lt;/code&amp;gt; placed in your personal directory (&amp;lt;code&amp;gt;/cbcb/sw/RedHat-7-x86_64/users/&amp;lt;username&amp;gt;&amp;lt;/code&amp;gt;). It&#039;s also available in the [https://gitlab.umiacs.umd.edu/cbcb/cbcb-sw/blob/master/templates/user_readme.md  cbcb-sw Gitlab repository].&lt;br /&gt;
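&lt;br /&gt;
The directory convention the script follows can be pictured with a small sketch. This is only a hypothetical illustration: &amp;lt;code&amp;gt;/tmp/cbcb-sw-demo&amp;lt;/code&amp;gt; and samtools/0.1.19 stand in for your real user directory and package.&lt;br /&gt;

```shell
# Hypothetical sketch of the layout install_package.sh targets:
# sources under src/, installs under local/PKG/VERSION, and one
# modulefile per version under modules/.  All paths are stand-ins.
USERSW=/tmp/cbcb-sw-demo   # stands in for /cbcb/sw/RedHat-7-x86_64/users/USERNAME
mkdir -p "$USERSW/src/samtools-0.1.19"        # unpack and build sources here
mkdir -p "$USERSW/local/samtools/0.1.19/bin"  # configure --prefix points here
mkdir -p "$USERSW/modules/samtools"           # modulefile named after the version
ls "$USERSW/local/samtools"                   # prints: 0.1.19
```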
&lt;br /&gt;
=== Listing All Modules ===&lt;br /&gt;
&lt;br /&gt;
The command &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; only lists modulefiles that appear in directories that have been added to your &amp;lt;code&amp;gt;$MODULEPATH&amp;lt;/code&amp;gt; environment variable (either by hand or via the &amp;lt;code&amp;gt;module use&amp;lt;/code&amp;gt; command).&lt;br /&gt;
To see all modulefiles that are available for your use in both the Common Modules and the User Modules:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat /cbcb/sw/RedHat-7-x86_64/common/all_modules.txt&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
or see [http://cbcb.umd.edu/~lmendelo/cbcb_modules/all_modules.txt here].&lt;br /&gt;
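&lt;br /&gt;
Under the hood, &amp;lt;code&amp;gt;module use&amp;lt;/code&amp;gt; simply prepends a directory to the colon-separated &amp;lt;code&amp;gt;$MODULEPATH&amp;lt;/code&amp;gt; list that &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; searches. A minimal sketch of that behavior (the &amp;lt;code&amp;gt;/tmp/mymodules&amp;lt;/code&amp;gt; directory is hypothetical):&lt;br /&gt;

```shell
# What 'module use DIR' amounts to: prepend DIR to the search path.
# The personal directory below is a hypothetical stand-in.
MODULEPATH=/cbcb/sw/RedHat-7-x86_64/common/modules/release/latest
MODULEPATH=/tmp/mymodules:$MODULEPATH
echo "$MODULEPATH"
# prints: /tmp/mymodules:/cbcb/sw/RedHat-7-x86_64/common/modules/release/latest
```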
&lt;br /&gt;
== Setup for non-interactive shells ==&lt;br /&gt;
&lt;br /&gt;
Modules are already configured for interactive shells, but to use modules with non-interactive shells, add the following to your ~/.bashrc:&lt;br /&gt;
 source /usr/share/Modules/init/bash&lt;br /&gt;
 source /etc/profile.d/ummodules.sh&lt;br /&gt;
&lt;br /&gt;
For more information, see the [https://wiki.umiacs.umd.edu/umiacs/index.php/Modules UMIACS wiki].&lt;br /&gt;
&lt;br /&gt;
== GNU Modules Cheatsheet ==&lt;br /&gt;
&lt;br /&gt;
Ask for help:&lt;br /&gt;
 module --help&lt;br /&gt;
&lt;br /&gt;
List available modules:&lt;br /&gt;
 $ module avail&lt;br /&gt;
&lt;br /&gt;
Read the description of a module:&lt;br /&gt;
 $ module whatis blast&lt;br /&gt;
&lt;br /&gt;
Read the help text for a module:&lt;br /&gt;
 $ module help blast&lt;br /&gt;
&lt;br /&gt;
Add a module to your environment:&lt;br /&gt;
 $ module add blast&lt;br /&gt;
&lt;br /&gt;
Add a specific version to your environment:&lt;br /&gt;
 $ module add blast/2.2.31&lt;br /&gt;
&lt;br /&gt;
List loaded modules:&lt;br /&gt;
 $ module list&lt;br /&gt;
&lt;br /&gt;
Remove a module from your environment:&lt;br /&gt;
 $ module rm blast&lt;br /&gt;
&lt;br /&gt;
Remove all modules from your environment:&lt;br /&gt;
 $ module purge&lt;br /&gt;
&lt;br /&gt;
Add a path to search for available modules:&lt;br /&gt;
 $ module use /cbcb/sw/RedHat-7-x86_64/users/lmendelo/modules&lt;br /&gt;
&lt;br /&gt;
See what changes a module makes to your environment:&lt;br /&gt;
 $ module show Python2/common/2.7.9&lt;br /&gt;
&lt;br /&gt;
== Maintainers ==&lt;br /&gt;
&lt;br /&gt;
=== Creating a new release ===&lt;br /&gt;
&lt;br /&gt;
=== Upgrading R ===&lt;br /&gt;
&lt;br /&gt;
There are a couple of scripts that should make upgrading R easier.&lt;br /&gt;
Suppose you are installing a new version of R and its libraries,&lt;br /&gt;
say version 3.2.1. A set of commands might include:&lt;br /&gt;
&lt;br /&gt;
 &amp;gt; cd $MOD/R/common&lt;br /&gt;
 &amp;gt; cp 3.2.0 3.2.1&lt;br /&gt;
&lt;br /&gt;
Now edit 3.2.1 to include the new version and change anything of interest.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt; mkdir $STOW/Rext/3.2.1&lt;br /&gt;
 &amp;gt; module add R/common/3.2.1&lt;br /&gt;
&lt;br /&gt;
Assuming that you edited the $MOD 3.2.1 file to tell it to load R version 3.2.1, then&lt;br /&gt;
$(which R) will show the new version and the module command will set the R_LIBS&lt;br /&gt;
variable to a new, empty directory.&lt;br /&gt;
 &lt;br /&gt;
 &amp;gt; cd $STOW/R/common&lt;br /&gt;
 &amp;gt; ./update_R.sh 3.2.0_list&lt;br /&gt;
&lt;br /&gt;
This little shell script should call Bioconductor&#039;s biocLite() once for each&lt;br /&gt;
R library which was installed in 3.2.0.&lt;br /&gt;
&lt;br /&gt;
(Adapted from /cbcb/sw/RedHat-7-x86_64/common/local/Rext/README.md)&lt;br /&gt;
&lt;br /&gt;
== Contact ==&lt;br /&gt;
&lt;br /&gt;
Brought to you by:&lt;br /&gt;
&lt;br /&gt;
* [mailto:abelew@umiacs.umd.edu Trey Belew]&lt;br /&gt;
* [mailto:keith@umiacs.umd.edu Keith Hughitt]&lt;br /&gt;
* [mailto:schrist1@umd.edu Steve Christensen]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== See Also ==&lt;br /&gt;
* [https://wiki.umiacs.umd.edu/umiacs/index.php/Modules UMIACS wiki]&lt;br /&gt;
* [https://gitlab.umiacs.umd.edu/cbcb/cbcb-sw Repository on GitLab]&lt;br /&gt;
* [https://docs.google.com/presentation/d/1UgKtnjcqHlpLZU79hGXfkgvNtse-ksMQ_gJMFCVLKRw/edit?usp=sharing CBCB Software slides on Google Drive]&lt;/div&gt;</summary>
		<author><name>Mbaney</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/cbcb/index.php?title=Torque&amp;diff=9050</id>
		<title>Torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/cbcb/index.php?title=Torque&amp;diff=9050"/>
		<updated>2018-11-16T20:35:34Z</updated>

		<summary type="html">&lt;p&gt;Mbaney: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Refer to the [https://wiki.umiacs.umd.edu/cbcb-private/index.php/Slurm private Slurm wiki] or the [https://wiki.umiacs.umd.edu/umiacs/index.php/SLURM UMIACS SLURM wiki] instead of this page for current information.&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Mbaney</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/cbcb/index.php?title=Torque&amp;diff=9039</id>
		<title>Torque</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/cbcb/index.php?title=Torque&amp;diff=9039"/>
		<updated>2017-06-06T17:36:57Z</updated>

		<summary type="html">&lt;p&gt;Mbaney: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Getting Started==&lt;br /&gt;
&lt;br /&gt;
Torque is a resource manager that interacts with another program, called Maui, which provides the scheduling for the cluster.  To get started, you will need to ensure that your [https://wiki.umiacs.umd.edu/umiacs/index.php/SSH#SSH_Keys_.28and_Passwordless_SSH.29 SSH keys] are set up for password-less SSH.  In our Torque environments this is critical for delivering the error and output of your jobs back to the host they were submitted from. Please note that [https://wiki.umiacs.umd.edu/umiacs/index.php/Fairshare Fair Share] is enabled in this setup, with a historical scope of 12 hours.&lt;br /&gt;
&lt;br /&gt;
The hosts that you can submit from are any CBCB Workstation and the &amp;lt;tt&amp;gt;ibissub00.umiacs.umd.edu&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;ibissub01.umiacs.umd.edu&amp;lt;/tt&amp;gt; nodes.&lt;br /&gt;
&lt;br /&gt;
Now that you have that set up, here are the queues that are available to users. Use &amp;lt;code&amp;gt;qstat -Q -f&amp;lt;/code&amp;gt; to see the resource limits for each queue.&lt;br /&gt;
&lt;br /&gt;
=== Red Hat 7 Queues ===&lt;br /&gt;
&lt;br /&gt;
* default - default memory is 3GB (max 4GB), default walltime is 1 hour (max 1 hour), allows up to 16 jobs per user concurrently&lt;br /&gt;
* shell - interactive jobs only - default memory is 2GB (max 4GB), default walltime is 12 hours (max 2 weeks), allows up to 4 jobs per user concurrently - restricted to nodes with &#039;&#039;&#039;shell&#039;&#039;&#039;* property&lt;br /&gt;
* workstation - default memory is 4GB (max 47GB), default walltime is 8 hours (max 1 week), allows up to 4 jobs per user concurrently&lt;br /&gt;
* throughput - no interactive jobs - default memory is 4GB (max 36GB), default walltime is 4 hours (max 18 hours), allows up to 125 jobs per user concurrently - restricted to nodes with &#039;&#039;&#039;ibis&#039;&#039;&#039;* property&lt;br /&gt;
* high_throughput - no interactive jobs - default memory is 4GB (max 8GB), default walltime is 3 hours (max 6 hours), allows up to 300 jobs per user concurrently - restricted to nodes with &#039;&#039;&#039;ibis&#039;&#039;&#039;* property&lt;br /&gt;
* long - no interactive jobs - default memory is 12GB (max 12GB), default walltime is 8 hours (max 1 week), allows up to 16 jobs per user concurrently&lt;br /&gt;
* large - no interactive jobs - default memory is 32GB (max 120GB), default walltime is 24 hours (max 11 days), allows up to 3 jobs per user concurrently&lt;br /&gt;
* xlarge - default memory is 100GB (max is unlimited), default walltime is 1 week (max 3 weeks), allows 1 job per user at a time&lt;br /&gt;
** The xlarge queue is restricted to members of the group cbcbtorque. If you need to run large jobs, please send mail to staff@umiacs.umd.edu&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt;You can list the nodes that have a specific property by running &amp;quot;pbsnodes :property&amp;quot;, where property is the property you want to match.&lt;br /&gt;
&lt;br /&gt;
===qsub===&lt;br /&gt;
&lt;br /&gt;
qsub is how you submit jobs to a Torque cluster.  A job is a shell script that is given on STDIN or as a file on the command line.  The -l (lower case L) option allows you to specify resource requests for your job submission. While your jobs will not always be penalized for using more or fewer resources than you request, it is very important to request resources as accurately as possible, because Torque relies on those requests to know how many resources each machine has available when new jobs are scheduled. If your job uses more resources than it requested, another job may be scheduled on the same machine, which can run the machine out of resources, cause segfaults, and eventually bring the machine down; likewise, if you request more resources than you need, you will slow down other users&#039; jobs because Torque may think a machine is at capacity when it actually is not.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To specify the queue that you would like to submit to, use the -q option,&lt;br /&gt;
&lt;br /&gt;
  qsub -q workstation&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Use these options with the -l (lower case L) option to request resources:&lt;br /&gt;
&lt;br /&gt;
* ncpus=4&lt;br /&gt;
* mem=32GB&lt;br /&gt;
* walltime=12:00:00&lt;br /&gt;
&lt;br /&gt;
For a full list of job submission arguments, see [http://docs.adaptivecomputing.com/torque/4-2-6/help.htm#topics/2-jobs/jobSubmission.htm Torque Job Submission Arguments].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As an example, to run the perl script myscript.pl on 4 CPUs with 128GB of memory for 12 hours, you could run the following:&lt;br /&gt;
&lt;br /&gt;
  qsub -q large -l ncpus=4,mem=128GB,walltime=12:00:00 myscript.pl&lt;br /&gt;
&lt;br /&gt;
Note that the large queue was used in the above example because 128GB is more memory than the max allowed in all of the other queues. By default, all of the other queues reserve approximately the maximum memory allowed for that queue, but you may set a lower reservation if you know you will not need the full amount.&lt;br /&gt;
&lt;br /&gt;
Once you have submitted your job for execution, you will get back an identifier of the form:&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;JOBID&amp;gt;.&amp;lt;PBSSERVER&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can use that &amp;lt;JOBID&amp;gt; to delete or find your job later if there is a problem.&lt;br /&gt;
&lt;br /&gt;
When a job finishes, Torque/PBS deposits the standard output and standard error as&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;jobname&amp;gt;.o&amp;lt;number&amp;gt; &lt;br /&gt;
* &amp;lt;jobname&amp;gt;.e&amp;lt;number&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;jobname&amp;gt; is the name of the script you submitted (or STDIN if it came from qsub&#039;s standard in), and &amp;lt;number&amp;gt; is the leading number in the job id. For example, submitting myscript.pl as job 135.cbcbtorque produces myscript.pl.o135 and myscript.pl.e135.&lt;br /&gt;
&lt;br /&gt;
====Interactive Jobs====&lt;br /&gt;
&lt;br /&gt;
Interactive jobs allow you to schedule interactive shell access on Torque-scheduled compute nodes. You can get an interactive session with the -I (upper case i) option,&lt;br /&gt;
&lt;br /&gt;
  qsub -I &lt;br /&gt;
&lt;br /&gt;
Please note that only the &amp;quot;workstation&amp;quot;, &amp;quot;shell&amp;quot;, and &amp;quot;default&amp;quot; queues allow interactive jobs. If you require larger resource allocations than the queue defaults, the -l (lower case L) flag still applies.&lt;br /&gt;
&lt;br /&gt;
====Array Jobs====&lt;br /&gt;
&lt;br /&gt;
Array jobs let you submit the same script multiple times, each with a different setting for the environment variable &amp;lt;code&amp;gt;PBS_ARRAYID&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 qsub -q throughput -t 0-999 my_script.sh&lt;br /&gt;
&lt;br /&gt;
Torque will run 1000 instances of &amp;lt;code&amp;gt;my_script.sh&amp;lt;/code&amp;gt; with the environment variable &amp;lt;code&amp;gt;PBS_ARRAYID&amp;lt;/code&amp;gt; set to the range of values specified by the &amp;lt;code&amp;gt;-t&amp;lt;/code&amp;gt; argument. In this case &amp;lt;code&amp;gt;my_script.sh&amp;lt;/code&amp;gt; will be executed once with &amp;lt;code&amp;gt;PBS_ARRAYID=0&amp;lt;/code&amp;gt;, again with  &amp;lt;code&amp;gt;PBS_ARRAYID=1&amp;lt;/code&amp;gt;, etc.&lt;br /&gt;
&lt;br /&gt;
You can also specify comma-separated values for &amp;lt;code&amp;gt;PBS_ARRAYID&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 qsub -q throughput -t 0,3,9  my_script.sh&lt;br /&gt;
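&lt;br /&gt;
The effect of the &amp;lt;code&amp;gt;-t&amp;lt;/code&amp;gt; flag can be simulated locally. This is only a hypothetical illustration of the semantics, not something Torque itself runs:&lt;br /&gt;

```shell
# Local simulation of a 3-element array job: Torque runs the same
# script once per index, exporting PBS_ARRAYID into its environment.
for i in 0 1 2; do
  PBS_ARRAYID=$i sh -c 'echo "processing chunk $PBS_ARRAYID"'
done
# prints: processing chunk 0, processing chunk 1, processing chunk 2
```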
&lt;br /&gt;
===qstat===&lt;br /&gt;
&lt;br /&gt;
qstat displays any jobs that are in the queue for your Torque cluster.  It is normally run without any arguments; if it returns nothing, then there is nothing running in the Torque cluster.&lt;br /&gt;
&lt;br /&gt;
Here is an example of what qstat output looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
$ qstat&lt;br /&gt;
Job id                    Name             User            Time Use S Queue&lt;br /&gt;
------------------------- ---------------- --------------- -------- - -----&lt;br /&gt;
135.cbcbtorque             STDIN            tgray26         00:00:00 R workstation&lt;br /&gt;
136.cbcbtorque             STDIN            tgray26         00:00:00 R default        &lt;br /&gt;
137.cbcbtorque             STDIN            tgray26         00:00:00 R default        &lt;br /&gt;
138.cbcbtorque             STDIN            tgray26                0 Q workstation&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For full information about the default settings and maximum resource limits for a queue, use &amp;lt;code&amp;gt;qstat -Q -f&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
$ qstat -Q -f throughput&lt;br /&gt;
Queue: throughput&lt;br /&gt;
    queue_type = Execution&lt;br /&gt;
    total_jobs = 0&lt;br /&gt;
    state_count = Transit:0 Queued:0 Held:0 Waiting:0 Running:0 Exiting:0 Complete:0&lt;br /&gt;
    resources_max.mem = 36gb&lt;br /&gt;
    resources_max.nodect = 1&lt;br /&gt;
    resources_max.walltime = 18:00:00&lt;br /&gt;
    resources_default.mem = 4gb&lt;br /&gt;
    resources_default.walltime = 04:00:00&lt;br /&gt;
    mtime = 1424395185&lt;br /&gt;
    disallowed_types = interactive&lt;br /&gt;
    resources_assigned.mem = 0b&lt;br /&gt;
    resources_assigned.ncpus = 0&lt;br /&gt;
    resources_assigned.nodect = 0&lt;br /&gt;
    max_user_run = 125&lt;br /&gt;
    enabled = True&lt;br /&gt;
    started = True&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===qdel===&lt;br /&gt;
&lt;br /&gt;
You can remove a running or stalled job with the qdel command.  It requires that you give it a &amp;lt;JOBID&amp;gt; that can be found by running qstat.&lt;br /&gt;
&lt;br /&gt;
===pbsnodes===&lt;br /&gt;
&lt;br /&gt;
To find out what resources and nodes are available in the Torque cluster, run pbsnodes.  It returns a detailed list of the nodes and their current status.&lt;br /&gt;
&lt;br /&gt;
For example,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pbsnodes&lt;br /&gt;
redbud.umiacs.umd.edu&lt;br /&gt;
     state = free&lt;br /&gt;
     np = 32&lt;br /&gt;
     ntype = cluster&lt;br /&gt;
     status = rectime=1343744604,varattr=,jobs=,state=free,netload=384208495,gres=,loadave=0.02,ncpus=64,&lt;br /&gt;
              physmem=528633432kb,availmem=530365780kb,totmem=530730576kb,idletime=489899,nusers=0,nsessions=? 0,&lt;br /&gt;
              sessions=? 0,uname=Linux redbud.umiacs.umd.edu 2.6.18-308.11.1.el5 #1 SMP Fri Jun 15 15:41:53 EDT 2012 x86_64,opsys=linux&lt;br /&gt;
     gpus = 0&lt;br /&gt;
&lt;br /&gt;
beech.umiacs.umd.edu&lt;br /&gt;
     state = free&lt;br /&gt;
     np = 2&lt;br /&gt;
     ntype = cluster&lt;br /&gt;
     status = rectime=1343744577,varattr=,jobs=,state=free,netload=425438230,gres=,loadave=0.00,ncpus=2,&lt;br /&gt;
              physmem=7154944kb,availmem=8960412kb,totmem=9252088kb,idletime=49,nusers=0,nsessions=? 0,&lt;br /&gt;
              sessions=? 0,uname=Linux beech.umiacs.umd.edu 2.6.18-308.11.1.el5 #1 SMP Fri Jun 15 15:41:53 EDT 2012 x86_64,opsys=linux&lt;br /&gt;
     gpus = 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Using CBCB Modules with Torque==&lt;br /&gt;
&lt;br /&gt;
To use [[CBCB Software Modules]] with Torque, you will need to add these lines to your &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 . /usr/share/Modules/init/bash&lt;br /&gt;
 . /etc/profile.d/ummodules.sh&lt;br /&gt;
 &lt;br /&gt;
==Host Monitoring==&lt;br /&gt;
http://ganglia.umiacs.umd.edu/ganglia/?c=cbcb_compute&amp;amp;m=load_one&amp;amp;r=hour&amp;amp;s=by%20name&amp;amp;hc=4&amp;amp;mc=2&lt;/div&gt;</summary>
		<author><name>Mbaney</name></author>
	</entry>
</feed>