<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.umiacs.umd.edu/umiacs/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Dkontyko</id>
	<title>UMIACS - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.umiacs.umd.edu/umiacs/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Dkontyko"/>
	<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php/Special:Contributions/Dkontyko"/>
	<updated>2026-05-09T21:09:20Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.7</generator>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Iribe/ConferenceRooms/Moderated&amp;diff=8539</id>
		<title>Iribe/ConferenceRooms/Moderated</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Iribe/ConferenceRooms/Moderated&amp;diff=8539"/>
		<updated>2019-09-06T13:17:18Z</updated>

		<summary type="html">&lt;p&gt;Dkontyko: added room name column&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These signature [[Iribe/ConferenceRooms | rooms]] have a primary moderator and a backup moderator.  The touch panels will not allow walk-up reservations for these rooms; all reservations need to go through the moderation process.  You may [[Iribe/ConferenceRooms/Reserve | create a reservation]] by adding the room in Google Calendar.  The moderator will be notified and can approve your request.&lt;br /&gt;
&lt;br /&gt;
Instructions on reserving a room are [[Iribe/ConferenceRooms/Reserve | here]].&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Room&lt;br /&gt;
! Name&lt;br /&gt;
! Occupancy&lt;br /&gt;
! Primary Moderator&lt;br /&gt;
! Backup&lt;br /&gt;
|-&lt;br /&gt;
| IRB-1127&lt;br /&gt;
|&lt;br /&gt;
| 12&lt;br /&gt;
| Sharron McElroy&lt;br /&gt;
| Regis Boykin&lt;br /&gt;
|-&lt;br /&gt;
| IRB-2137&lt;br /&gt;
|&lt;br /&gt;
| 12&lt;br /&gt;
| Sharron McElroy&lt;br /&gt;
| Regis Boykin&lt;br /&gt;
|-&lt;br /&gt;
| IRB-3137&lt;br /&gt;
|&lt;br /&gt;
| 24&lt;br /&gt;
| Elizabeth Hontz&lt;br /&gt;
| Danae Johnson&lt;br /&gt;
|-&lt;br /&gt;
| IRB-3256&lt;br /&gt;
|&lt;br /&gt;
| 8&lt;br /&gt;
| Barbara Lewis&lt;br /&gt;
| Danae Johnson&lt;br /&gt;
|-&lt;br /&gt;
| IRB-4105&lt;br /&gt;
|&lt;br /&gt;
| 48&lt;br /&gt;
| Sharron McElroy&lt;br /&gt;
| Elizabeth Hontz&lt;br /&gt;
|-&lt;br /&gt;
| IRB-4107&lt;br /&gt;
|&lt;br /&gt;
| 20&lt;br /&gt;
| Sharron McElroy&lt;br /&gt;
| Elizabeth Hontz&lt;br /&gt;
|-&lt;br /&gt;
| IRB-4109&lt;br /&gt;
|&lt;br /&gt;
| 20&lt;br /&gt;
| Sharron McElroy&lt;br /&gt;
| Elizabeth Hontz&lt;br /&gt;
|-&lt;br /&gt;
| IRB-4137&lt;br /&gt;
|&lt;br /&gt;
| 12&lt;br /&gt;
| Elizabeth Hontz&lt;br /&gt;
| Danae Johnson&lt;br /&gt;
|-&lt;br /&gt;
| IRB-4237&lt;br /&gt;
|&lt;br /&gt;
| 12&lt;br /&gt;
| Janice Perrone&lt;br /&gt;
| Danae Johnson&lt;br /&gt;
|-&lt;br /&gt;
| IRB-5105&lt;br /&gt;
|&lt;br /&gt;
| 24&lt;br /&gt;
| Sharron McElroy&lt;br /&gt;
| Danae Johnson&lt;br /&gt;
|-&lt;br /&gt;
| IRB-5137&lt;br /&gt;
|&lt;br /&gt;
| 18&lt;br /&gt;
| Sharron McElroy&lt;br /&gt;
| Danae Johnson&lt;br /&gt;
|-&lt;br /&gt;
| IRB-5165&lt;br /&gt;
|&lt;br /&gt;
| 16&lt;br /&gt;
| Sharron McElroy&lt;br /&gt;
| Danae Johnson&lt;br /&gt;
|-&lt;br /&gt;
| IRB-5237&lt;br /&gt;
|&lt;br /&gt;
| 12&lt;br /&gt;
| Dana Purcell&lt;br /&gt;
| Danae Johnson&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Large Seminar Capabilities ===&lt;br /&gt;
&#039;&#039;&#039;3137&#039;&#039;&#039; and &#039;&#039;&#039;4105&#039;&#039;&#039; are the two large seminar rooms.&lt;br /&gt;
&lt;br /&gt;
* Dual Display (Projector and LCD)&lt;br /&gt;
* Dual camera conferencing via room PC&lt;br /&gt;
* Laptop presentation via HDMI or Mersive Solstice&lt;br /&gt;
* Blu-ray playback, tuner&lt;br /&gt;
* Lectern&lt;br /&gt;
&lt;br /&gt;
===Small Seminar and Large Conference Room Capabilities ===&lt;br /&gt;
Every other room on this page has the following setup.&lt;br /&gt;
&lt;br /&gt;
* Single Display (LCD)&lt;br /&gt;
* Single camera conferencing via room PC&lt;br /&gt;
* Laptop presentation via HDMI or Mersive Solstice&lt;/div&gt;</summary>
		<author><name>Dkontyko</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM/JobSubmission&amp;diff=8521</id>
		<title>SLURM/JobSubmission</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM/JobSubmission&amp;diff=8521"/>
		<updated>2019-08-19T18:58:50Z</updated>

		<summary type="html">&lt;p&gt;Dkontyko: /* Common srun arguments */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Job Submission=&lt;br /&gt;
&lt;br /&gt;
SLURM offers a variety of ways to run jobs. It is important to understand the different options available and how to request the resources required for a job in order for it to run successfully. All job submission should be done from submit nodes; any computational code should be run in a job allocation on compute nodes. The following commands outline how to allocate resources on the compute nodes and submit processes to be run on the allocated nodes.&lt;br /&gt;
&lt;br /&gt;
==srun==&lt;br /&gt;
&amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; is the command used to run a process on the compute nodes in the cluster. You pass it a command (which could be a script); the command is run on a compute node, and &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; returns when it finishes. &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; accepts many command line options to specify the resources required by the command passed to it. Some common command line arguments are listed below; full documentation of all available options is in the man page for &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt;, which can be accessed by running &amp;lt;code&amp;gt;man srun&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tgray26@opensub01:srun --mem=100mb --time=1:00:00 bash -c &#039;echo &amp;quot;Hello World from&amp;quot; `hostname`&#039;&lt;br /&gt;
Hello World from openlab06.umiacs.umd.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It is important to understand that &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; is an interactive command. By default, input to &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; is broadcast to all compute nodes running your process and output from the compute nodes is redirected to &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt;. This behavior can be changed; however, &#039;&#039;&#039;srun will always wait for the command passed to finish before exiting, so if you start a long-running process and end your terminal session, your process will stop running on the compute nodes and your job will end&#039;&#039;&#039;. To run a non-interactive submission that will remain running after you log out, you will need to wrap your &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; commands in a batch script and submit it with [[#sbatch | sbatch]].&lt;br /&gt;
===Common srun arguments===&lt;br /&gt;
* &amp;lt;code&amp;gt;--mem=1gb&amp;lt;/code&amp;gt; &#039;&#039;if no unit is given MB is assumed&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;--nodes=2&amp;lt;/code&amp;gt; &#039;&#039;if passed to srun, the given command will be run concurrently on each node&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;--qos=dpart&amp;lt;/code&amp;gt; &#039;&#039;to see the available QOS options on a cluster, run&#039;&#039; &amp;lt;code&amp;gt;show_qos&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;--time=hh:mm:ss&amp;lt;/code&amp;gt; &#039;&#039;time needed to run your job&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;--job-name=helloWorld&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;--output filename&amp;lt;/code&amp;gt; &#039;&#039;file to redirect stdout to&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;--error filename&amp;lt;/code&amp;gt; &#039;&#039;file to redirect stderr to&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;--partition $PNAME&amp;lt;/code&amp;gt; &#039;&#039;request job run in the $PNAME partition&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;--ntasks 2&amp;lt;/code&amp;gt; &#039;&#039;request 2 &amp;quot;tasks&amp;quot; which map to cores on a CPU, if passed to srun the given command will be run concurrently on each core&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Interactive Shell Sessions===&lt;br /&gt;
An interactive shell session on a compute node can be useful for debugging or developing code that isn&#039;t ready to be run as a batch job. To get an interactive shell on a node, use &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; to invoke a shell:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tgray26@opensub01:srun --pty --mem 1gb --time=01:00:00 bash&lt;br /&gt;
tgray26@openlab06:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Please do not leave interactive shells running for long periods of time when you are not working. This blocks resources from being used by everyone else.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==salloc==&lt;br /&gt;
The salloc command can also be used to request resources be allocated without needing a batch script. Running salloc with a list of resources will allocate the resources you requested, create a job, and drop you into a subshell with the environment variables necessary to run commands in the newly created job allocation. When your time is up or you exit the subshell, your job allocation will be relinquished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tgray26@opensub00:salloc -N 1 --mem=2gb --time=01:00:00&lt;br /&gt;
salloc: Granted job allocation 159&lt;br /&gt;
tgray26@opensub00:srun /usr/bin/hostname&lt;br /&gt;
openlab00.umiacs.umd.edu&lt;br /&gt;
tgray26@opensub00:exit&lt;br /&gt;
exit&lt;br /&gt;
salloc: Relinquishing job allocation 159&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Please note that any commands not invoked with srun will be run locally on the submit node. Please be careful when using salloc.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==sbatch==&lt;br /&gt;
The sbatch command allows you to write a batch script to be submitted and run non-interactively on the compute nodes. To run a simple Hello World command on the compute nodes, you could write a file, helloWorld.sh, with the following contents:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
srun bash -c &#039;echo Hello World from `hostname`&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then you need to submit the script with sbatch and request resources:&lt;br /&gt;
&amp;lt;pre&amp;gt;tgray26@opensub00:sbatch --mem=1gb --time=1:00:00 helloWorld.sh&lt;br /&gt;
Submitted batch job 121&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
SLURM will return a job number that you can use to check the status of your job with squeue:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tgray26@opensub00:squeue&lt;br /&gt;
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
               121     dpart helloWor  tgray26  R       0:01      2 openlab[00-01]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
====Advanced Batch Scripts====&lt;br /&gt;
You can also write a batch script with all of your resources/options defined in the script itself. This is useful for jobs that need to be run 10s/100s/1000s of times. You can then handle any necessary environment setup and run commands on the resources you requested by invoking commands with srun. The srun commands can also be more complex and be told to only use portions of your entire job allocation; each of these distinct srun commands makes up one &amp;quot;job step&amp;quot;. The batch script will be run on the first node allocated as part of your job allocation and each job step will be run on whatever resources you tell it to. In the following example I have a batch job that will request 2 nodes in the cluster, then I load a specific version of Python into my environment and submit two job steps, each one using one node. Since srun blocks until the command finishes, I use the &#039;&amp;amp;&#039; operator to background the process so that both job steps can run at once; however, this means that I then need to use the wait command to block processing until all background processes have finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
# Lines that begin with #SBATCH specify commands to be used by SLURM for scheduling&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=helloWorld                                   # sets the job name&lt;br /&gt;
#SBATCH --output helloWorld.out.%j                              # indicates a file to redirect STDOUT to; %j is the jobid &lt;br /&gt;
#SBATCH --error helloWorld.out.%j                               # indicates a file to redirect STDERR to; %j is the jobid&lt;br /&gt;
#SBATCH --time=00:05:00                                         # how long you think your job will take to complete; format=hh:mm:ss&lt;br /&gt;
#SBATCH --qos=dpart                                             # set QOS, this will determine what resources can be requested&lt;br /&gt;
#SBATCH --nodes=2                                               # number of nodes to allocate for your job&lt;br /&gt;
#SBATCH --ntasks=4                                              # request 4 cpu cores be reserved for your job in total&lt;br /&gt;
#SBATCH --ntasks-per-node=2                                     # request 2 cpu cores be reserved per node&lt;br /&gt;
#SBATCH --mem 1gb                                               # memory required by job; if unit is not specified MB will be assumed&lt;br /&gt;
&lt;br /&gt;
module load Python/2.7.9                                        # run any commands necessary to setup your environment&lt;br /&gt;
&lt;br /&gt;
srun -N 1 --mem=512mb bash -c &amp;quot;hostname; python --version&amp;quot; &amp;amp;    # use srun to invoke commands within your job; using an &#039;&amp;amp;&#039;&lt;br /&gt;
srun -N 1 --mem=512mb bash -c &amp;quot;hostname; python --version&amp;quot; &amp;amp;    # will background the process allowing them to run concurrently&lt;br /&gt;
wait                                                            # wait for any background processes to complete&lt;br /&gt;
&lt;br /&gt;
# once the end of the batch script is reached your job allocation will be revoked&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another useful thing to know is that you can pass additional arguments into your sbatch scripts on the command line and reference them as &amp;lt;code&amp;gt;${1}&amp;lt;/code&amp;gt; for the first argument and so on.&lt;br /&gt;
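For example, a minimal sketch (the script name, argument, and echoed message here are hypothetical, not from an actual cluster):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
# args.sh - ${1} expands to the first argument given after the script name&lt;br /&gt;
srun bash -c &amp;quot;echo processing ${1}&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
which could be submitted as &amp;lt;code&amp;gt;sbatch --mem=1gb --time=1:00:00 args.sh input0.txt&amp;lt;/code&amp;gt;.&lt;br /&gt;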
&lt;br /&gt;
====More Examples====&lt;br /&gt;
&lt;br /&gt;
* [[SLURM/ArrayJobs]]&lt;br /&gt;
&lt;br /&gt;
===scancel===&lt;br /&gt;
The scancel command can be used to cancel job allocations or job steps that are no longer needed. It can be passed individual job IDs or an option to delete all of your jobs or jobs that meet certain criteria.&lt;br /&gt;
*&amp;lt;code&amp;gt;scancel 255&amp;lt;/code&amp;gt;     &#039;&#039;cancel job 255&#039;&#039;&lt;br /&gt;
*&amp;lt;code&amp;gt;scancel 255.3&amp;lt;/code&amp;gt;     &#039;&#039;cancel job step 3 of job 255&#039;&#039;&lt;br /&gt;
*&amp;lt;code&amp;gt;scancel --user tgray26 --partition dpart&amp;lt;/code&amp;gt;    &#039;&#039;cancel all jobs for tgray26 in the dpart partition&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Identifying Resources and Features=&lt;br /&gt;
The sinfo command can show you additional features of nodes in the cluster, but you need to ask it to show some non-default fields using a command like this:&lt;br /&gt;
&amp;lt;code&amp;gt;sinfo -o &amp;quot;%15N %10c %10m  %25f %10G&amp;quot;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sinfo -o &amp;quot;%40N %8c %8m  %20f %25G&amp;quot;&lt;br /&gt;
NODELIST                                 CPUS     MEMORY    AVAIL_FEATURES       GRES&lt;br /&gt;
openlab[30-33]                           64       257759    Opteron,6274         (null)&lt;br /&gt;
openlab[00-07]                           8        7822      Opteron,2354         (null)&lt;br /&gt;
openlab[10-11,13-18,20-23,25,27-29]      16       23939     Xeon,x5560           (null)&lt;br /&gt;
openlab08                                32       128720    Xeon,E5-2690         gpu:k20:2&lt;br /&gt;
openlab09                                32       128722    Xeon,E5-2690         gpu:m40:1,gpu:k20:2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also identify further specific information about a node using [https://wiki.umiacs.umd.edu/umiacs/index.php/SLURM/ClusterStatus#scontrol scontrol].&lt;br /&gt;
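For example, to see the full record for a single node from the listing above:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tgray26@opensub00:scontrol show node openlab08&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;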
&lt;br /&gt;
=Requesting GPUs=&lt;br /&gt;
If you need to do processing on a GPU, you will need to request that your job have access to GPUs just as you need to request processors or cpu cores. You will also need to make sure that you submit your job to the correct partition since nodes with GPUs are often put into their own partition to prevent the nodes from being tied up by jobs that don&#039;t utilize GPUs. In SLURM, GPUs are considered &amp;quot;generic resources&amp;quot;, also known as GRES. To request some number of GPUs be reserved/available for your job you can use the flag &amp;lt;code&amp;gt;--gres=gpu:2&amp;lt;/code&amp;gt;, or if there are multiple types of GPUs available in the cluster and you need a specific type, you can provide the type option to the gres flag: &amp;lt;code&amp;gt;--gres=gpu:k20:1&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tgray26@opensub01:srun --pty --partition gpu --qos=gpu --gres=gpu:2 nvidia-smi&lt;br /&gt;
Wed Jul 13 15:33:18 2016&lt;br /&gt;
+------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 361.28     Driver Version: 361.28         |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla K20c          Off  | 0000:03:00.0     Off |                    0 |&lt;br /&gt;
| 30%   24C    P0    48W / 225W |     11MiB /  4799MiB |      0%      Default |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
|   1  Tesla K20c          Off  | 0000:84:00.0     Off |                    0 |&lt;br /&gt;
| 30%   23C    P0    52W / 225W |     11MiB /  4799MiB |     93%      Default |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                       GPU Memory |&lt;br /&gt;
|  GPU       PID  Type  Process name                               Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|  No running processes found                                                 |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Please note that your job will only be able to see/access the GPUs you requested. If you only need 1 GPU, please request only 1 GPU and the other one will be left available for other users:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tgray26@opensub01:srun --pty --partition gpu --qos=gpu --gres=gpu:k20:1 nvidia-smi&lt;br /&gt;
Wed Jul 13 15:31:29 2016&lt;br /&gt;
+------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 361.28     Driver Version: 361.28         |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla K20c          Off  | 0000:03:00.0     Off |                    0 |&lt;br /&gt;
| 30%   24C    P0    50W / 225W |     11MiB /  4799MiB |     92%      Default |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                       GPU Memory |&lt;br /&gt;
|  GPU       PID  Type  Process name                               Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|  No running processes found                                                 |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The &amp;lt;code&amp;gt;--gres&amp;lt;/code&amp;gt; flag may also be passed to [[#sbatch | sbatch]] and [[#salloc | salloc]] rather than directly to [[#srun | srun]].&lt;br /&gt;
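For example, the single-GPU request above could be expressed in a batch script as follows (a sketch mirroring the srun flags shown earlier):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH --partition=gpu&lt;br /&gt;
#SBATCH --qos=gpu&lt;br /&gt;
#SBATCH --gres=gpu:k20:1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;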
&lt;br /&gt;
=MPI example=&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/usr/bin/bash &lt;br /&gt;
#SBATCH --job-name=mpi_test # Job name &lt;br /&gt;
#SBATCH --nodes=4 # Number of nodes &lt;br /&gt;
#SBATCH --ntasks=8 # Number of MPI ranks &lt;br /&gt;
#SBATCH --ntasks-per-node=2 # Number of MPI ranks per node &lt;br /&gt;
#SBATCH --ntasks-per-socket=1 # Number of tasks per processor socket on the node &lt;br /&gt;
#SBATCH --time=00:30:00 # Time limit hrs:min:sec &lt;br /&gt;
&lt;br /&gt;
module load mpi &lt;br /&gt;
&lt;br /&gt;
srun --mpi=openmpi /nfshomes/derek/testing/mpi/a.out &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dkontyko</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM/JobSubmission&amp;diff=8520</id>
		<title>SLURM/JobSubmission</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM/JobSubmission&amp;diff=8520"/>
		<updated>2019-08-19T18:18:00Z</updated>

		<summary type="html">&lt;p&gt;Dkontyko: /* Common srun arguments */ adding qos option list command&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Job Submission=&lt;br /&gt;
&lt;br /&gt;
SLURM offers a variety of ways to run jobs. It is important to understand the different options available and how to request the resources required for a job in order for it to run successfully. All job submission should be done from submit nodes; any computational code should be run in a job allocation on compute nodes. The following commands outline how to allocate resources on the compute nodes and submit processes to be run on the allocated nodes.&lt;br /&gt;
&lt;br /&gt;
==srun==&lt;br /&gt;
&amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; is the command used to run a process on the compute nodes in the cluster. You pass it a command (which could be a script); the command is run on a compute node, and &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; returns when it finishes. &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; accepts many command line options to specify the resources required by the command passed to it. Some common command line arguments are listed below; full documentation of all available options is in the man page for &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt;, which can be accessed by running &amp;lt;code&amp;gt;man srun&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tgray26@opensub01:srun --mem=100mb --time=1:00:00 bash -c &#039;echo &amp;quot;Hello World from&amp;quot; `hostname`&#039;&lt;br /&gt;
Hello World from openlab06.umiacs.umd.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It is important to understand that &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; is an interactive command. By default, input to &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; is broadcast to all compute nodes running your process and output from the compute nodes is redirected to &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt;. This behavior can be changed; however, &#039;&#039;&#039;srun will always wait for the command passed to finish before exiting, so if you start a long-running process and end your terminal session, your process will stop running on the compute nodes and your job will end&#039;&#039;&#039;. To run a non-interactive submission that will remain running after you log out, you will need to wrap your &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; commands in a batch script and submit it with [[#sbatch | sbatch]].&lt;br /&gt;
===Common srun arguments===&lt;br /&gt;
* &amp;lt;code&amp;gt;--mem=1gb&amp;lt;/code&amp;gt; &#039;&#039;if no unit is given MB is assumed&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;--nodes=2&amp;lt;/code&amp;gt; &#039;&#039;if passed to srun, the given command will be run concurrently on each node&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;--qos=dpart&amp;lt;/code&amp;gt; &#039;&#039;to see the available QOS options on a cluster, run&#039;&#039; &amp;lt;code&amp;gt;sacctmgr list qos&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;--time=hh:mm:ss&amp;lt;/code&amp;gt; &#039;&#039;time needed to run your job&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;--job-name=helloWorld&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;--output filename&amp;lt;/code&amp;gt; &#039;&#039;file to redirect stdout to&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;--error filename&amp;lt;/code&amp;gt; &#039;&#039;file to redirect stderr to&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;--partition $PNAME&amp;lt;/code&amp;gt; &#039;&#039;request job run in the $PNAME partition&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;--ntasks 2&amp;lt;/code&amp;gt; &#039;&#039;request 2 &amp;quot;tasks&amp;quot; which map to cores on a CPU, if passed to srun the given command will be run concurrently on each core&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Interactive Shell Sessions===&lt;br /&gt;
An interactive shell session on a compute node can be useful for debugging or developing code that isn&#039;t ready to be run as a batch job. To get an interactive shell on a node, use &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; to invoke a shell:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tgray26@opensub01:srun --pty --mem 1gb --time=01:00:00 bash&lt;br /&gt;
tgray26@openlab06:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Please do not leave interactive shells running for long periods of time when you are not working. This blocks resources from being used by everyone else.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==salloc==&lt;br /&gt;
The salloc command can also be used to request resources be allocated without needing a batch script. Running salloc with a list of resources will allocate the resources you requested, create a job, and drop you into a subshell with the environment variables necessary to run commands in the newly created job allocation. When your time is up or you exit the subshell, your job allocation will be relinquished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tgray26@opensub00:salloc -N 1 --mem=2gb --time=01:00:00&lt;br /&gt;
salloc: Granted job allocation 159&lt;br /&gt;
tgray26@opensub00:srun /usr/bin/hostname&lt;br /&gt;
openlab00.umiacs.umd.edu&lt;br /&gt;
tgray26@opensub00:exit&lt;br /&gt;
exit&lt;br /&gt;
salloc: Relinquishing job allocation 159&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Please note that any commands not invoked with srun will be run locally on the submit node. Please be careful when using salloc.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==sbatch==&lt;br /&gt;
The sbatch command allows you to write a batch script to be submitted and run non-interactively on the compute nodes. To run a simple Hello World command on the compute nodes, you could write a file, helloWorld.sh, with the following contents:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
srun bash -c &#039;echo Hello World from `hostname`&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then you need to submit the script with sbatch and request resources:&lt;br /&gt;
&amp;lt;pre&amp;gt;tgray26@opensub00:sbatch --mem=1gb --time=1:00:00 helloWorld.sh&lt;br /&gt;
Submitted batch job 121&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
SLURM will return a job number that you can use to check the status of your job with squeue:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tgray26@opensub00:squeue&lt;br /&gt;
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
               121     dpart helloWor  tgray26  R       0:01      2 openlab[00-01]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
====Advanced Batch Scripts====&lt;br /&gt;
You can also write a batch script with all of your resources/options defined in the script itself. This is useful for jobs that need to be run 10s/100s/1000s of times. You can then handle any necessary environment setup and run commands on the resources you requested by invoking commands with srun. The srun commands can also be more complex and be told to only use portions of your entire job allocation; each of these distinct srun commands makes up one &amp;quot;job step&amp;quot;. The batch script will be run on the first node allocated as part of your job allocation and each job step will be run on whatever resources you tell it to. In the following example I have a batch job that will request 2 nodes in the cluster, then I load a specific version of Python into my environment and submit two job steps, each one using one node. Since srun blocks until the command finishes, I use the &#039;&amp;amp;&#039; operator to background the process so that both job steps can run at once; however, this means that I then need to use the wait command to block processing until all background processes have finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
# Lines that begin with #SBATCH specify commands to be used by SLURM for scheduling&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=helloWorld                                   # sets the job name&lt;br /&gt;
#SBATCH --output helloWorld.out.%j                              # indicates a file to redirect STDOUT to; %j is the jobid &lt;br /&gt;
#SBATCH --error helloWorld.out.%j                               # indicates a file to redirect STDERR to; %j is the jobid&lt;br /&gt;
#SBATCH --time=00:05:00                                         # how long you think your job will take to complete; format=hh:mm:ss&lt;br /&gt;
#SBATCH --qos=dpart                                             # set QOS, this will determine what resources can be requested&lt;br /&gt;
#SBATCH --nodes=2                                               # number of nodes to allocate for your job&lt;br /&gt;
#SBATCH --ntasks=4                                              # request 4 cpu cores be reserved for your job in total&lt;br /&gt;
#SBATCH --ntasks-per-node=2                                     # request 2 cpu cores be reserved per node&lt;br /&gt;
#SBATCH --mem 1gb                                               # memory required by job; if unit is not specified MB will be assumed&lt;br /&gt;
&lt;br /&gt;
module load Python/2.7.9                                        # run any commands necessary to setup your environment&lt;br /&gt;
&lt;br /&gt;
srun -N 1 --mem=512mb bash -c &amp;quot;hostname; python --version&amp;quot; &amp;amp;    # use srun to invoke commands within your job; using an &#039;&amp;amp;&#039;&lt;br /&gt;
srun -N 1 --mem=512mb bash -c &amp;quot;hostname; python --version&amp;quot; &amp;amp;    # will background the process allowing them to run concurrently&lt;br /&gt;
wait                                                            # wait for any background processes to complete&lt;br /&gt;
&lt;br /&gt;
# once the end of the batch script is reached your job allocation will be revoked&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another useful thing to know is that you can pass additional arguments into your sbatch scripts on the command line and reference them as &amp;lt;code&amp;gt;${1}&amp;lt;/code&amp;gt; for the first argument and so on.&lt;br /&gt;
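For example, a minimal sketch (the script name, argument, and echoed message here are hypothetical, not from an actual cluster):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
# args.sh - ${1} expands to the first argument given after the script name&lt;br /&gt;
srun bash -c &amp;quot;echo processing ${1}&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
which could be submitted as &amp;lt;code&amp;gt;sbatch --mem=1gb --time=1:00:00 args.sh input0.txt&amp;lt;/code&amp;gt;.&lt;br /&gt;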
&lt;br /&gt;
====More Examples====&lt;br /&gt;
&lt;br /&gt;
* [[SLURM/ArrayJobs]]&lt;br /&gt;
&lt;br /&gt;
===scancel===&lt;br /&gt;
The scancel command can be used to cancel job allocations or job steps that are no longer needed. It can be passed individual job IDs or an option to delete all of your jobs or jobs that meet certain criteria.&lt;br /&gt;
*&amp;lt;code&amp;gt;scancel 255&amp;lt;/code&amp;gt;     &#039;&#039;cancel job 255&#039;&#039;&lt;br /&gt;
*&amp;lt;code&amp;gt;scancel 255.3&amp;lt;/code&amp;gt;     &#039;&#039;cancel job step 3 of job 255&#039;&#039;&lt;br /&gt;
*&amp;lt;code&amp;gt;scancel --user tgray26 --partition dpart&amp;lt;/code&amp;gt;    &#039;&#039;cancel all jobs for tgray26 in the dpart partition&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Identifying Resources and Features=&lt;br /&gt;
The sinfo command can show you additional features of nodes in the cluster, but you need to ask it to show some non-default fields using a command like this:&lt;br /&gt;
&amp;lt;code&amp;gt;sinfo -o &amp;quot;%15N %10c %10m  %25f %10G&amp;quot;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sinfo -o &amp;quot;%40N %8c %8m  %20f %25G&amp;quot;&lt;br /&gt;
NODELIST                                 CPUS     MEMORY    AVAIL_FEATURES       GRES&lt;br /&gt;
openlab[30-33]                           64       257759    Opteron,6274         (null)&lt;br /&gt;
openlab[00-07]                           8        7822      Opteron,2354         (null)&lt;br /&gt;
openlab[10-11,13-18,20-23,25,27-29]      16       23939     Xeon,x5560           (null)&lt;br /&gt;
openlab08                                32       128720    Xeon,E5-2690         gpu:k20:2&lt;br /&gt;
openlab09                                32       128722    Xeon,E5-2690         gpu:m40:1,gpu:k20:2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also identify further specific information about a node using [https://wiki.umiacs.umd.edu/umiacs/index.php/SLURM/ClusterStatus#scontrol scontrol].&lt;br /&gt;
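For example, to see the full record for a single node from the listing above:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tgray26@opensub00:scontrol show node openlab08&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;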
&lt;br /&gt;
=Requesting GPUs=&lt;br /&gt;
If you need to do processing on a GPU, you will need to request that your job have access to GPUs just as you need to request processors or cpu cores. You will also need to make sure that you submit your job to the correct partition since nodes with GPUs are often put into their own partition to prevent the nodes from being tied up by jobs that don&#039;t utilize GPUs. In SLURM, GPUs are considered &amp;quot;generic resources&amp;quot;, also known as GRES. To request some number of GPUs be reserved/available for your job you can use the flag &amp;lt;code&amp;gt;--gres=gpu:2&amp;lt;/code&amp;gt;, or if there are multiple types of GPUs available in the cluster and you need a specific type, you can provide the type option to the gres flag: &amp;lt;code&amp;gt;--gres=gpu:k20:1&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tgray26@opensub01:srun --pty --partition gpu --qos=gpu --gres=gpu:2 nvidia-smi&lt;br /&gt;
Wed Jul 13 15:33:18 2016&lt;br /&gt;
+------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 361.28     Driver Version: 361.28         |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla K20c          Off  | 0000:03:00.0     Off |                    0 |&lt;br /&gt;
| 30%   24C    P0    48W / 225W |     11MiB /  4799MiB |      0%      Default |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
|   1  Tesla K20c          Off  | 0000:84:00.0     Off |                    0 |&lt;br /&gt;
| 30%   23C    P0    52W / 225W |     11MiB /  4799MiB |     93%      Default |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                       GPU Memory |&lt;br /&gt;
|  GPU       PID  Type  Process name                               Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|  No running processes found                                                 |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Please note that your job will only be able to see/access the GPUs you requested. If you only need 1 GPU, please request only 1 GPU and the other one will be left available for other users:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tgray26@opensub01:srun --pty --partition gpu --qos=gpu --gres=gpu:k20:1 nvidia-smi&lt;br /&gt;
Wed Jul 13 15:31:29 2016&lt;br /&gt;
+------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 361.28     Driver Version: 361.28         |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla K20c          Off  | 0000:03:00.0     Off |                    0 |&lt;br /&gt;
| 30%   24C    P0    50W / 225W |     11MiB /  4799MiB |     92%      Default |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                       GPU Memory |&lt;br /&gt;
|  GPU       PID  Type  Process name                               Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|  No running processes found                                                 |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The &amp;lt;code&amp;gt;--gres&amp;lt;/code&amp;gt; flag may also be passed to [[#sbatch | sbatch]] and [[#salloc | salloc]] rather than directly to [[#srun | srun]].&lt;br /&gt;
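For example, the single-GPU request above could be expressed in a batch script as follows (a sketch mirroring the srun flags shown earlier):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH --partition=gpu&lt;br /&gt;
#SBATCH --qos=gpu&lt;br /&gt;
#SBATCH --gres=gpu:k20:1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;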
&lt;br /&gt;
=MPI example=&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/usr/bin/bash &lt;br /&gt;
#SBATCH --job-name=mpi_test # Job name &lt;br /&gt;
#SBATCH --nodes=4 # Number of nodes &lt;br /&gt;
#SBATCH --ntasks=8 # Number of MPI ranks &lt;br /&gt;
#SBATCH --ntasks-per-node=2 # Number of MPI ranks per node &lt;br /&gt;
#SBATCH --ntasks-per-socket=1 # Number of tasks per processor socket on the node &lt;br /&gt;
#SBATCH --time=00:30:00 # Time limit hrs:min:sec &lt;br /&gt;
&lt;br /&gt;
module load mpi &lt;br /&gt;
&lt;br /&gt;
srun --mpi=openmpi /nfshomes/derek/testing/mpi/a.out &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dkontyko</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM/JobSubmission&amp;diff=8440</id>
		<title>SLURM/JobSubmission</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM/JobSubmission&amp;diff=8440"/>
		<updated>2019-07-09T18:32:30Z</updated>

		<summary type="html">&lt;p&gt;Dkontyko: /* Requesting GPUs */ Added the qos flag for the gpus&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Job Submission=&lt;br /&gt;
&lt;br /&gt;
SLURM offers a variety of ways to run jobs. It is important to understand the different options available and how to request the resources required for a job in order for it to run successfully. All job submission should be done from submit nodes; any computational code should be run in a job allocation on compute nodes. The following commands outline how to allocate resources on the compute nodes and submit processes to be run on the allocated nodes.&lt;br /&gt;
&lt;br /&gt;
==srun==&lt;br /&gt;
&amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; is the command used to run a process on the compute nodes in the cluster. You pass it a command (which could be a script); the command is run on a compute node, and &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; returns when it finishes. &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; accepts many command line options to specify the resources required by the command passed to it. Some common command line arguments are listed below; full documentation of all available options is in the man page for &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt;, which can be accessed by running &amp;lt;code&amp;gt;man srun&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tgray26@opensub01:srun --mem=100mb --time=1:00:00 bash -c &#039;echo &amp;quot;Hello World from&amp;quot; `hostname`&#039;&lt;br /&gt;
Hello World from openlab06.umiacs.umd.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It is important to understand that &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; is an interactive command. By default, input to &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; is broadcast to all compute nodes running your process and output from the compute nodes is redirected to &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt;. This behavior can be changed; however, &#039;&#039;&#039;srun will always wait for the command passed to finish before exiting, so if you start a long-running process and end your terminal session, your process will stop running on the compute nodes and your job will end&#039;&#039;&#039;. To run a non-interactive submission that will remain running after you log out, you will need to wrap your &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; commands in a batch script and submit it with [[#sbatch | sbatch]].&lt;br /&gt;
===Common srun arguments===&lt;br /&gt;
* &amp;lt;code&amp;gt;--mem=1gb&amp;lt;/code&amp;gt; &#039;&#039;if no unit is given MB is assumed&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;--nodes=2&amp;lt;/code&amp;gt; &#039;&#039;if passed to srun, the given command will be run concurrently on each node&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;--qos=dpart&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;--time=hh:mm:ss&amp;lt;/code&amp;gt; &#039;&#039;time needed to run your job&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;--job-name=helloWorld&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;--output filename&amp;lt;/code&amp;gt; &#039;&#039;file to redirect stdout to&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;--error filename&amp;lt;/code&amp;gt; &#039;&#039;file to redirect stderr to&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;--partition $PNAME&amp;lt;/code&amp;gt; &#039;&#039;request job run in the $PNAME partition&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;--ntasks 2&amp;lt;/code&amp;gt; &#039;&#039;request 2 &amp;quot;tasks&amp;quot; which map to cores on a CPU, if passed to srun the given command will be run concurrently on each core&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Interactive Shell Sessions===&lt;br /&gt;
An interactive shell session on a compute node can be useful for debugging or developing code that isn&#039;t ready to be run as a batch job. To get an interactive shell on a node, use &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; to invoke a shell:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tgray26@opensub01:srun --pty --mem 1gb --time=01:00:00 bash&lt;br /&gt;
tgray26@openlab06:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Please do not leave interactive shells running for long periods of time when you are not working. This blocks resources from being used by everyone else.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==salloc==&lt;br /&gt;
The salloc command can also be used to request resources be allocated without needing a batch script. Running salloc with a list of resources will allocate the resources you requested, create a job, and drop you into a subshell with the environment variables necessary to run commands in the newly created job allocation. When your time is up or you exit the subshell, your job allocation will be relinquished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tgray26@opensub00:salloc -N 1 --mem=2gb --time=01:00:00&lt;br /&gt;
salloc: Granted job allocation 159&lt;br /&gt;
tgray26@opensub00:srun /usr/bin/hostname&lt;br /&gt;
openlab00.umiacs.umd.edu&lt;br /&gt;
tgray26@opensub00:exit&lt;br /&gt;
exit&lt;br /&gt;
salloc: Relinquishing job allocation 159&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Please note that any commands not invoked with srun will be run locally on the submit node. Please be careful when using salloc.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==sbatch==&lt;br /&gt;
The sbatch command allows you to write a batch script to be submitted and run non-interactively on the compute nodes. To run a simple Hello World command on the compute nodes, you could write a file, helloWorld.sh, with the following contents:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
srun bash -c &#039;echo Hello World from `hostname`&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then you need to submit the script with sbatch and request resources:&lt;br /&gt;
&amp;lt;pre&amp;gt;tgray26@opensub00:sbatch --mem=1gb --time=1:00:00 helloWorld.sh&lt;br /&gt;
Submitted batch job 121&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
SLURM will return a job number that you can use to check the status of your job with squeue:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tgray26@opensub00:squeue&lt;br /&gt;
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
               121     dpart helloWor  tgray26  R       0:01      2 openlab[00-01]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
====Advanced Batch Scripts====&lt;br /&gt;
You can also write a batch script with all of your resources/options defined in the script itself. This is useful for jobs that need to be run 10s/100s/1000s of times. You can then handle any necessary environment setup and run commands on the resources you requested by invoking commands with srun. The srun commands can also be more complex and be told to only use portions of your entire job allocation; each of these distinct srun commands makes up one &amp;quot;job step&amp;quot;. The batch script will be run on the first node allocated as part of your job allocation and each job step will be run on whatever resources you tell it to. In the following example I have a batch job that will request 2 nodes in the cluster, then I load a specific version of Python into my environment and submit two job steps, each one using one node. Since srun blocks until the command finishes, I use the &#039;&amp;amp;&#039; operator to background the process so that both job steps can run at once; however, this means that I then need to use the wait command to block processing until all background processes have finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
# Lines that begin with #SBATCH specify commands to be used by SLURM for scheduling&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=helloWorld                                   # sets the job name&lt;br /&gt;
#SBATCH --output helloWorld.out.%j                              # indicates a file to redirect STDOUT to; %j is the jobid &lt;br /&gt;
#SBATCH --error helloWorld.out.%j                               # indicates a file to redirect STDERR to; %j is the jobid&lt;br /&gt;
#SBATCH --time=00:05:00                                         # how long you think your job will take to complete; format=hh:mm:ss&lt;br /&gt;
#SBATCH --qos=dpart                                             # set QOS, this will determine what resources can be requested&lt;br /&gt;
#SBATCH --nodes=2                                               # number of nodes to allocate for your job&lt;br /&gt;
#SBATCH --ntasks=4                                              # request 4 cpu cores be reserved for your job in total&lt;br /&gt;
#SBATCH --ntasks-per-node=2                                     # request 2 cpu cores be reserved per node&lt;br /&gt;
#SBATCH --mem 1gb                                               # memory required by job; if unit is not specified MB will be assumed&lt;br /&gt;
&lt;br /&gt;
module load Python/2.7.9                                        # run any commands necessary to setup your environment&lt;br /&gt;
&lt;br /&gt;
srun -N 1 --mem=512mb bash -c &amp;quot;hostname; python --version&amp;quot; &amp;amp;    # use srun to invoke commands within your job; using an &#039;&amp;amp;&#039;&lt;br /&gt;
srun -N 1 --mem=512mb bash -c &amp;quot;hostname; python --version&amp;quot; &amp;amp;    # will background the process allowing them to run concurrently&lt;br /&gt;
wait                                                            # wait for any background processes to complete&lt;br /&gt;
&lt;br /&gt;
# once the end of the batch script is reached your job allocation will be revoked&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another useful thing to know is that you can pass additional arguments into your sbatch scripts on the command line and reference them as &amp;lt;code&amp;gt;${1}&amp;lt;/code&amp;gt; for the first argument and so on.&lt;br /&gt;
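For example, a minimal sketch (the script name, argument, and echoed message here are hypothetical, not from an actual cluster):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
# args.sh - ${1} expands to the first argument given after the script name&lt;br /&gt;
srun bash -c &amp;quot;echo processing ${1}&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
which could be submitted as &amp;lt;code&amp;gt;sbatch --mem=1gb --time=1:00:00 args.sh input0.txt&amp;lt;/code&amp;gt;.&lt;br /&gt;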
&lt;br /&gt;
====More Examples====&lt;br /&gt;
&lt;br /&gt;
* [[SLURM/ArrayJobs]]&lt;br /&gt;
&lt;br /&gt;
===scancel===&lt;br /&gt;
The scancel command can be used to cancel job allocations or job steps that are no longer needed. It can be passed individual job IDs or an option to delete all of your jobs or jobs that meet certain criteria.&lt;br /&gt;
*&amp;lt;code&amp;gt;scancel 255&amp;lt;/code&amp;gt;     &#039;&#039;cancel job 255&#039;&#039;&lt;br /&gt;
*&amp;lt;code&amp;gt;scancel 255.3&amp;lt;/code&amp;gt;     &#039;&#039;cancel job step 3 of job 255&#039;&#039;&lt;br /&gt;
*&amp;lt;code&amp;gt;scancel --user tgray26 --partition dpart&amp;lt;/code&amp;gt;    &#039;&#039;cancel all jobs for tgray26 in the dpart partition&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Identifying Resources and Features=&lt;br /&gt;
The sinfo command can show you additional features of nodes in the cluster, but you need to ask it to show some non-default fields using a command like this:&lt;br /&gt;
&amp;lt;code&amp;gt;sinfo -o &amp;quot;%15N %10c %10m  %25f %10G&amp;quot;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sinfo -o &amp;quot;%40N %8c %8m  %20f %25G&amp;quot;&lt;br /&gt;
NODELIST                                 CPUS     MEMORY    AVAIL_FEATURES       GRES&lt;br /&gt;
openlab[30-33]                           64       257759    Opteron,6274         (null)&lt;br /&gt;
openlab[00-07]                           8        7822      Opteron,2354         (null)&lt;br /&gt;
openlab[10-11,13-18,20-23,25,27-29]      16       23939     Xeon,x5560           (null)&lt;br /&gt;
openlab08                                32       128720    Xeon,E5-2690         gpu:k20:2&lt;br /&gt;
openlab09                                32       128722    Xeon,E5-2690         gpu:m40:1,gpu:k20:2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also identify further specific information about a node using [https://wiki.umiacs.umd.edu/umiacs/index.php/SLURM/ClusterStatus#scontrol scontrol].&lt;br /&gt;
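For example, to see the full record for a single node from the listing above:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tgray26@opensub00:scontrol show node openlab08&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;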
&lt;br /&gt;
=Requesting GPUs=&lt;br /&gt;
If you need to do processing on a GPU, you will need to request that your job have access to GPUs just as you need to request processors or cpu cores. You will also need to make sure that you submit your job to the correct partition since nodes with GPUs are often put into their own partition to prevent the nodes from being tied up by jobs that don&#039;t utilize GPUs. In SLURM, GPUs are considered &amp;quot;generic resources&amp;quot;, also known as GRES. To request some number of GPUs be reserved/available for your job you can use the flag &amp;lt;code&amp;gt;--gres=gpu:2&amp;lt;/code&amp;gt;, or if there are multiple types of GPUs available in the cluster and you need a specific type, you can provide the type option to the gres flag: &amp;lt;code&amp;gt;--gres=gpu:k20:1&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tgray26@opensub01:srun --pty --partition gpu --qos=gpu --gres=gpu:2 nvidia-smi&lt;br /&gt;
Wed Jul 13 15:33:18 2016&lt;br /&gt;
+------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 361.28     Driver Version: 361.28         |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla K20c          Off  | 0000:03:00.0     Off |                    0 |&lt;br /&gt;
| 30%   24C    P0    48W / 225W |     11MiB /  4799MiB |      0%      Default |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
|   1  Tesla K20c          Off  | 0000:84:00.0     Off |                    0 |&lt;br /&gt;
| 30%   23C    P0    52W / 225W |     11MiB /  4799MiB |     93%      Default |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                       GPU Memory |&lt;br /&gt;
|  GPU       PID  Type  Process name                               Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|  No running processes found                                                 |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Please note that your job will only be able to see/access the GPUs you requested. If you only need 1 GPU, please request only 1 GPU and the other one will be left available for other users:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tgray26@opensub01:srun --pty --partition gpu --qos=gpu --gres=gpu:k20:1 nvidia-smi&lt;br /&gt;
Wed Jul 13 15:31:29 2016&lt;br /&gt;
+------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 361.28     Driver Version: 361.28         |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla K20c          Off  | 0000:03:00.0     Off |                    0 |&lt;br /&gt;
| 30%   24C    P0    50W / 225W |     11MiB /  4799MiB |     92%      Default |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                       GPU Memory |&lt;br /&gt;
|  GPU       PID  Type  Process name                               Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|  No running processes found                                                 |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The &amp;lt;code&amp;gt;--gres&amp;lt;/code&amp;gt; flag may also be passed to [[#sbatch | sbatch]] and [[#salloc | salloc]] rather than directly to [[#srun | srun]].&lt;br /&gt;
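For example, the single-GPU request above could be expressed in a batch script as follows (a sketch mirroring the srun flags shown earlier):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH --partition=gpu&lt;br /&gt;
#SBATCH --qos=gpu&lt;br /&gt;
#SBATCH --gres=gpu:k20:1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;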
&lt;br /&gt;
=MPI example=&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/usr/bin/bash &lt;br /&gt;
#SBATCH --job-name=mpi_test # Job name &lt;br /&gt;
#SBATCH --nodes=4 # Number of nodes &lt;br /&gt;
#SBATCH --ntasks=8 # Number of MPI ranks &lt;br /&gt;
#SBATCH --ntasks-per-node=2 # Number of MPI ranks per node &lt;br /&gt;
#SBATCH --ntasks-per-socket=1 # Number of tasks per processor socket on the node &lt;br /&gt;
#SBATCH --time=00:30:00 # Time limit hrs:min:sec &lt;br /&gt;
&lt;br /&gt;
module load mpi &lt;br /&gt;
&lt;br /&gt;
srun --mpi=openmpi /nfshomes/derek/testing/mpi/a.out &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dkontyko</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=WebSpace&amp;diff=8434</id>
		<title>WebSpace</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=WebSpace&amp;diff=8434"/>
		<updated>2019-06-25T14:44:34Z</updated>

		<summary type="html">&lt;p&gt;Dkontyko: /* Personal Web Space */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;UMIACS provides web space hosting for research/lab pages and user pages.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Hosting websites in UMIACS Object Store &#039;&#039;(preferred method)&#039;&#039;&#039;&#039;&#039; ==&lt;br /&gt;
Please refer to the [https://obj.umiacs.umd.edu/obj/help UMIACS Object Store Help Page] for details on hosting a website in the UMIACS Object Store. This is currently our most updated and reliable method for hosting websites.&lt;br /&gt;
&lt;br /&gt;
==Main Website and Lab Pages==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;http://www.umiacs.umd.edu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Users can access the main website and lab sites for editing in two ways:&lt;br /&gt;
* From &amp;lt;b&amp;gt;Unix&amp;lt;/b&amp;gt; as /fs/www, which can be remotely accessed via [[SFTP]] to a supported Unix host (e.g. [[OpenLAB]])&lt;br /&gt;
* From &amp;lt;b&amp;gt;Windows&amp;lt;/b&amp;gt; as \\fluidfs.ad.umiacs.umd.edu\www-umiacs, which can be remotely accessed via the same file share over the [[VPN]]&lt;br /&gt;
&lt;br /&gt;
Faculty members and authorized users can modify their own public profiles on the main UMIACS homepage. For instructions, see [[ContentManagement]].&lt;br /&gt;
&lt;br /&gt;
==Personal Web Space==&lt;br /&gt;
&lt;br /&gt;
Your personal website URL at UMIACS is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;http://www.umiacs.umd.edu/~username&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &#039;&#039;&#039;username&#039;&#039;&#039; is your UMIACS username. Users can set this page to redirect to any page of their choice by setting the &#039;&#039;&#039;Home Page&#039;&#039;&#039; attribute in their UMIACS [https://intranet.umiacs.umd.edu/directory/info/ directory entry].&lt;br /&gt;
&lt;br /&gt;
In general, large datasets related to a lab&#039;s research should go into that lab&#039;s web tree, not an individual user&#039;s. Remember that a user&#039;s webpage is not permanently maintained once the user leaves UMIACS.&lt;br /&gt;
&lt;br /&gt;
UMIACS currently supports two ways of hosting a personal website within our network: the Object Store and the OPENLab file space.&lt;br /&gt;
&lt;br /&gt;
===UMIACS Object Store===&lt;br /&gt;
&lt;br /&gt;
This is the preferred method of hosting a personal website at UMIACS. Please see the [https://obj.umiacs.umd.edu/obj/help UMIACS Object Store (OBJ) Help Page] for more information on creating a website within OBJ. Once you create your website in OBJ, you will need to set your directory &#039;&#039;&#039;Home Page&#039;&#039;&#039; to the bucket&#039;s URL (the URL that ends in &amp;lt;code&amp;gt;umiacs.io&amp;lt;/code&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
===OPENLab File Space===&lt;br /&gt;
&lt;br /&gt;
This is primarily a legacy method for users who already have their websites configured this way. If you believe that your circumstances require your personal website to be hosted on this file space, please contact the [[HelpDesk | Help Desk]]. (This does not affect existing users who already have websites hosted on the OPENLab file space.)&lt;br /&gt;
&lt;br /&gt;
You will need to set your directory &#039;&#039;&#039;Home Page&#039;&#039;&#039; attribute to &amp;lt;code&amp;gt;http://users.umiacs.umd.edu/~username&amp;lt;/code&amp;gt;, where &#039;&#039;&#039;username&#039;&#039;&#039; is your UMIACS username (similar to your personal URL above). You can access your website for editing in two ways (an SFTP sketch follows the list):&lt;br /&gt;
&lt;br /&gt;
* From &amp;lt;b&amp;gt;Unix&amp;lt;/b&amp;gt; as /fs/www-users/username, which can be remotely accessed via [[SFTP]] to a supported UNIX host (e.g. [[OpenLAB]]).&lt;br /&gt;
* From &amp;lt;b&amp;gt;Windows&amp;lt;/b&amp;gt; as \\fluidfs.ad.umiacs.umd.edu\www-users\username, which can be remotely accessed via the same file share over the [[VPN]].&lt;br /&gt;
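&lt;br /&gt;
For example, a minimal upload session over SFTP might look like this (a sketch; the file name &amp;lt;tt&amp;gt;index.html&amp;lt;/tt&amp;gt; is hypothetical):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sftp username@openlab.umiacs.umd.edu&lt;br /&gt;
sftp&amp;gt; cd /fs/www-users/username&lt;br /&gt;
sftp&amp;gt; put index.html&lt;br /&gt;
sftp&amp;gt; quit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;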
&lt;br /&gt;
==Adding A Password Protected Folder To Your Web Space==&lt;br /&gt;
&lt;br /&gt;
1) Create the directory you want to password protect, or &amp;lt;tt&amp;gt;cd&amp;lt;/tt&amp;gt; into it if it already exists.&lt;br /&gt;
&lt;br /&gt;
2) In that directory, create a file called &#039;&#039;.htaccess&#039;&#039; (&amp;lt;tt&amp;gt;vi .htaccess&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
3) In the file you just created, type the following lines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AuthUserFile /your/directory/here/.htpasswd&lt;br /&gt;
AuthName &amp;quot;Secure Document&amp;quot;&lt;br /&gt;
AuthType Basic&lt;br /&gt;
require user username&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you were going to protect the &amp;lt;tt&amp;gt;/fs/www-users/username/private&amp;lt;/tt&amp;gt; directory and you want the required username to be &amp;lt;tt&amp;gt;class239&amp;lt;/tt&amp;gt;, then your file would look like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AuthUserFile /fs/www-users/username/private/.htpasswd&lt;br /&gt;
AuthName &amp;quot;Secure Document&amp;quot;&lt;br /&gt;
AuthType Basic&lt;br /&gt;
require user class239&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4) Create a file called &#039;&#039;.htpasswd&#039;&#039; in the same directory as &#039;&#039;.htaccess&#039;&#039;. Create this file by running &amp;lt;tt&amp;gt;htpasswd -c .htpasswd &#039;&#039;username&#039;&#039;&amp;lt;/tt&amp;gt; in the directory to be protected.&lt;br /&gt;
&lt;br /&gt;
In the example above, the username is &amp;lt;tt&amp;gt;class239&amp;lt;/tt&amp;gt;, so you would type &amp;lt;tt&amp;gt;htpasswd -c .htpasswd class239&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You will be prompted to enter the password you want. The &#039;&#039;.htpasswd&#039;&#039; file will be created in the current directory and will contain an encrypted version of the password.&lt;br /&gt;
&lt;br /&gt;
To change the username later, edit the &#039;&#039;.htaccess&#039;&#039; file and update the &amp;lt;tt&amp;gt;require user&amp;lt;/tt&amp;gt; line. To change the password, rerun the command from step 4 and enter the new password at the prompt.&lt;br /&gt;
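&lt;br /&gt;
Alternatively, re-running &amp;lt;tt&amp;gt;htpasswd&amp;lt;/tt&amp;gt; without &amp;lt;tt&amp;gt;-c&amp;lt;/tt&amp;gt; updates the existing file in place rather than recreating it, which is safer once the file contains more than one user:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
htpasswd .htpasswd class239&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;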
&lt;br /&gt;
==Restricting Content based on IP address==&lt;br /&gt;
It is possible to make pages in your webspace accessible only to clients connecting from certain IP addresses. To accomplish this, cd into the directory you wish to restrict and edit your &#039;&#039;.htaccess&#039;&#039; or &#039;&#039;httpd.conf&#039;&#039; file. The example below shows how to make content viewable only to clients connecting from the UMD network in Apache 2.2.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;white-space: pre-wrap; &lt;br /&gt;
white-space: -moz-pre-wrap; &lt;br /&gt;
white-space: -pre-wrap; &lt;br /&gt;
white-space: -o-pre-wrap; &lt;br /&gt;
word-wrap: break-word;&amp;quot;&amp;gt;SetEnvIF X-Forwarded-For &amp;quot;^128\.8\.\d+\.\d+$&amp;quot; UMD_NETWORK&lt;br /&gt;
SetEnvIF X-Forwarded-For &amp;quot;^129\.2\.\d+\.\d+$&amp;quot; UMD_NETWORK&lt;br /&gt;
SetEnvIF X-Forwarded-For &amp;quot;^192\.168\.\d+\.\d+$&amp;quot; UMD_NETWORK&lt;br /&gt;
SetEnvIF X-Forwarded-For &amp;quot;^206\.196\.(?:1[6-9][0-9]|2[0-5][0-9])\.\d+$&amp;quot; UMD_NETWORK&lt;br /&gt;
SetEnvIF X-Forwarded-For &amp;quot;^10\.\d+\.\d+\.\d+$&amp;quot; UMD_NETWORK&lt;br /&gt;
Order Deny,Allow&lt;br /&gt;
Deny from all&lt;br /&gt;
Allow from env=UMD_NETWORK&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
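&lt;br /&gt;
As a quick check of the rules above, you can send requests with a spoofed header from the command line (a sketch; the URL is hypothetical). Note that if a proxy in front of the server appends the real client address to the header, the anchored patterns will no longer match a spoofed value:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Expect 200 if the page exists and has no other restrictions&lt;br /&gt;
curl -s -o /dev/null -w &amp;quot;%{http_code}\n&amp;quot; -H &amp;quot;X-Forwarded-For: 128.8.1.2&amp;quot; http://www.umiacs.umd.edu/~username/private/&lt;br /&gt;
&lt;br /&gt;
# Expect 403: no UMD_NETWORK pattern matches&lt;br /&gt;
curl -s -o /dev/null -w &amp;quot;%{http_code}\n&amp;quot; -H &amp;quot;X-Forwarded-For: 8.8.8.8&amp;quot; http://www.umiacs.umd.edu/~username/private/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;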
&lt;br /&gt;
The SetEnvIf directive sets an environment variable when the specified request attribute matches the provided regular expression. In this example, requests whose forwarded address falls within UMD&#039;s IP space are tagged with UMD_NETWORK; all other traffic to the example directory is denied. See the following pages for a more in-depth explanation of the directives used.&lt;br /&gt;
&lt;br /&gt;
[https://httpd.apache.org/docs/2.2/howto/htaccess.html .htaccess], [https://httpd.apache.org/docs/2.2/mod/mod_setenvif.html#setenvif SetEnvIf], [https://httpd.apache.org/docs/2.2/mod/mod_authz_host.html#order Order], [https://httpd.apache.org/docs/2.2/mod/mod_authz_host.html#deny Deny], [https://httpd.apache.org/docs/2.2/mod/mod_authz_host.html#allow Allow]&lt;/div&gt;</summary>
		<author><name>Dkontyko</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=WebSpace&amp;diff=8433</id>
		<title>WebSpace</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=WebSpace&amp;diff=8433"/>
		<updated>2019-06-25T14:42:08Z</updated>

		<summary type="html">&lt;p&gt;Dkontyko: /* Personal Web Space */ substantial rewrite to this section, clarifying our preferred method of hosting a personal website and more clearly describing the process for setting one&amp;#039;s home directory&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;UMIACS provides web space hosting for research/lab pages and user pages.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Hosting websites in UMIACS Object Store &#039;&#039;(preferred method)&#039;&#039;&#039;&#039;&#039; ==&lt;br /&gt;
Please refer to the [https://obj.umiacs.umd.edu/obj/help UMIACS Object Store Help Page] for details on hosting a website in the UMIACS Object Store. This is currently our most updated and reliable method for hosting websites.&lt;br /&gt;
&lt;br /&gt;
==Main Website and Lab Pages==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;http://www.umiacs.umd.edu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Users can access the main website and lab sites for editing in two ways:&lt;br /&gt;
* From &amp;lt;b&amp;gt;Unix&amp;lt;/b&amp;gt; as /fs/www, which can be remotely accessed via [[SFTP]] to a supported Unix host (e.g. [[OpenLAB]])&lt;br /&gt;
* From &amp;lt;b&amp;gt;Windows&amp;lt;/b&amp;gt; as \\fluidfs.ad.umiacs.umd.edu\www-umiacs, which can be remotely accessed via the same file share over the [[VPN]]&lt;br /&gt;
&lt;br /&gt;
Faculty members and authorized users can modify their own public profiles on the main UMIACS homepage. For instructions, see [[ContentManagement]].&lt;br /&gt;
&lt;br /&gt;
==Personal Web Space==&lt;br /&gt;
&lt;br /&gt;
Your personal website URL at UMIACS is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;http://www.umiacs.umd.edu/~username&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &#039;&#039;&#039;username&#039;&#039;&#039; is your UMIACS username. Users can set this page to redirect to any page of their choice by setting the &#039;&#039;&#039;Home Page&#039;&#039;&#039; attribute in their UMIACS [https://intranet.umiacs.umd.edu/directory/info/ directory entry].&lt;br /&gt;
&lt;br /&gt;
In general, large datasets related to a lab&#039;s research should go into that lab&#039;s web tree, not an individual user&#039;s. Remember that a user&#039;s webpage is not permanently maintained once the user leaves UMIACS.&lt;br /&gt;
&lt;br /&gt;
UMIACS currently supports two ways of hosting a personal website within our network: the Object Store and the OPENLab file space.&lt;br /&gt;
&lt;br /&gt;
===UMIACS Object Store===&lt;br /&gt;
&lt;br /&gt;
This is the preferred method of hosting a personal website at UMIACS. Please see the [https://obj.umiacs.umd.edu/obj/help UMIACS Object Store (OBJ) Help Page] for more information on creating a website within OBJ. Once you create your website in OBJ, you will need to set your directory &#039;&#039;&#039;Home Page&#039;&#039;&#039; to the bucket&#039;s URL (the URL that ends in &amp;lt;code&amp;gt;umiacs.io&amp;lt;/code&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
===OPENLab File Space===&lt;br /&gt;
&lt;br /&gt;
This is primarily a legacy method for users who already have their websites configured this way. If you believe that your circumstances require your personal website to be hosted on this file space, please contact the [[HelpDesk | Help Desk]]. (This does not affect existing users who already have websites hosted on the OPENLab file space.)&lt;br /&gt;
&lt;br /&gt;
You will need to set your directory &#039;&#039;&#039;Home Page&#039;&#039;&#039; attribute to &amp;lt;code&amp;gt;http://users.umiacs.umd.edu/~username&amp;lt;/code&amp;gt;, where &#039;&#039;&#039;username&#039;&#039;&#039; is your UMIACS username (similar to your personal URL above). You can access your website for editing in two ways:&lt;br /&gt;
&lt;br /&gt;
* From &amp;lt;b&amp;gt;Unix&amp;lt;/b&amp;gt; as /fs/www-users/username, which can be remotely accessed via [[SFTP]] to a supported UNIX host (e.g. [[OpenLAB]]).&lt;br /&gt;
* From &amp;lt;b&amp;gt;Windows&amp;lt;/b&amp;gt; as \\fluidfs.ad.umiacs.umd.edu\www-users\username, which can be remotely accessed via the same file share over the [[VPN]].&lt;br /&gt;
&lt;br /&gt;
==Adding A Password Protected Folder To Your Web Space==&lt;br /&gt;
&lt;br /&gt;
1) Create the directory you want to password protect, or &amp;lt;tt&amp;gt;cd&amp;lt;/tt&amp;gt; into it if it already exists.&lt;br /&gt;
&lt;br /&gt;
2) In that directory, create a file called &#039;&#039;.htaccess&#039;&#039; (&amp;lt;tt&amp;gt;vi .htaccess&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
3) In the file you just created, type the following lines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AuthUserFile /your/directory/here/.htpasswd&lt;br /&gt;
AuthName &amp;quot;Secure Document&amp;quot;&lt;br /&gt;
AuthType Basic&lt;br /&gt;
require user username&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you were going to protect the &amp;lt;tt&amp;gt;/fs/www-users/username/private&amp;lt;/tt&amp;gt; directory and you want the required username to be &amp;lt;tt&amp;gt;class239&amp;lt;/tt&amp;gt;, then your file would look like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AuthUserFile /fs/www-users/username/private/.htpasswd&lt;br /&gt;
AuthName &amp;quot;Secure Document&amp;quot;&lt;br /&gt;
AuthType Basic&lt;br /&gt;
require user class239&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4) Create a file called &#039;&#039;.htpasswd&#039;&#039; in the same directory as &#039;&#039;.htaccess&#039;&#039;. Create this file by running &amp;lt;tt&amp;gt;htpasswd -c .htpasswd &#039;&#039;username&#039;&#039;&amp;lt;/tt&amp;gt; in the directory to be protected.&lt;br /&gt;
&lt;br /&gt;
In the example above, the username is &amp;lt;tt&amp;gt;class239&amp;lt;/tt&amp;gt;, so you would type &amp;lt;tt&amp;gt;htpasswd -c .htpasswd class239&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You will be prompted to enter the password you want. The &#039;&#039;.htpasswd&#039;&#039; file will be created in the current directory and will contain an encrypted version of the password.&lt;br /&gt;
&lt;br /&gt;
To change the username later, edit the &#039;&#039;.htaccess&#039;&#039; file and update the &amp;lt;tt&amp;gt;require user&amp;lt;/tt&amp;gt; line. To change the password, rerun the command from step 4 and enter the new password at the prompt.&lt;br /&gt;
&lt;br /&gt;
==Restricting Content based on IP address==&lt;br /&gt;
It is possible to make pages in your webspace accessible only to clients connecting from certain IP addresses. To accomplish this, cd into the directory you wish to restrict and edit your &#039;&#039;.htaccess&#039;&#039; or &#039;&#039;httpd.conf&#039;&#039; file. The example below shows how to make content viewable only to clients connecting from the UMD network in Apache 2.2.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;white-space: pre-wrap; &lt;br /&gt;
white-space: -moz-pre-wrap; &lt;br /&gt;
white-space: -pre-wrap; &lt;br /&gt;
white-space: -o-pre-wrap; &lt;br /&gt;
word-wrap: break-word;&amp;quot;&amp;gt;SetEnvIF X-Forwarded-For &amp;quot;^128\.8\.\d+\.\d+$&amp;quot; UMD_NETWORK&lt;br /&gt;
SetEnvIF X-Forwarded-For &amp;quot;^129\.2\.\d+\.\d+$&amp;quot; UMD_NETWORK&lt;br /&gt;
SetEnvIF X-Forwarded-For &amp;quot;^192\.168\.\d+\.\d+$&amp;quot; UMD_NETWORK&lt;br /&gt;
SetEnvIF X-Forwarded-For &amp;quot;^206\.196\.(?:1[6-9][0-9]|2[0-5][0-9])\.\d+$&amp;quot; UMD_NETWORK&lt;br /&gt;
SetEnvIF X-Forwarded-For &amp;quot;^10\.\d+\.\d+\.\d+$&amp;quot; UMD_NETWORK&lt;br /&gt;
Order Deny,Allow&lt;br /&gt;
Deny from all&lt;br /&gt;
Allow from env=UMD_NETWORK&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The SetEnvIf directive sets an environment variable when the specified request attribute matches the provided regular expression. In this example, requests whose forwarded address falls within UMD&#039;s IP space are tagged with UMD_NETWORK; all other traffic to the example directory is denied. See the following pages for a more in-depth explanation of the directives used.&lt;br /&gt;
&lt;br /&gt;
[https://httpd.apache.org/docs/2.2/howto/htaccess.html .htaccess], [https://httpd.apache.org/docs/2.2/mod/mod_setenvif.html#setenvif SetEnvIf], [https://httpd.apache.org/docs/2.2/mod/mod_authz_host.html#order Order], [https://httpd.apache.org/docs/2.2/mod/mod_authz_host.html#deny Deny], [https://httpd.apache.org/docs/2.2/mod/mod_authz_host.html#allow Allow]&lt;/div&gt;</summary>
		<author><name>Dkontyko</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=WebSpace&amp;diff=8432</id>
		<title>WebSpace</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=WebSpace&amp;diff=8432"/>
		<updated>2019-06-25T14:33:07Z</updated>

		<summary type="html">&lt;p&gt;Dkontyko: /* Personal Web Space */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;UMIACS provides web space hosting for research/lab pages and user pages.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Hosting websites in UMIACS Object Store &#039;&#039;(preferred method)&#039;&#039;&#039;&#039;&#039; ==&lt;br /&gt;
Please refer to the [https://obj.umiacs.umd.edu/obj/help UMIACS Object Store Help Page] for details on hosting a website in the UMIACS Object Store. This is currently our most updated and reliable method for hosting websites.&lt;br /&gt;
&lt;br /&gt;
==Main Website and Lab Pages==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;http://www.umiacs.umd.edu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Users can access the main website and lab sites for editing in two ways:&lt;br /&gt;
* From &amp;lt;b&amp;gt;Unix&amp;lt;/b&amp;gt; as /fs/www, which can be remotely accessed via [[SFTP]] to a supported Unix host (e.g. [[OpenLAB]])&lt;br /&gt;
* From &amp;lt;b&amp;gt;Windows&amp;lt;/b&amp;gt; as \\fluidfs.ad.umiacs.umd.edu\www-umiacs, which can be remotely accessed via the same file share over the [[VPN]]&lt;br /&gt;
&lt;br /&gt;
Faculty members and authorized users can modify their own public profiles on the main UMIACS homepage. For instructions, see [[ContentManagement]].&lt;br /&gt;
&lt;br /&gt;
==Personal Web Space==&lt;br /&gt;
&lt;br /&gt;
Your personal website URL at UMIACS is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;http://www.umiacs.umd.edu/~username&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &#039;&#039;&#039;username&#039;&#039;&#039; is your UMIACS username.&lt;br /&gt;
&lt;br /&gt;
Users can set this page to redirect to any page of their choice by setting the &#039;&#039;&#039;Home Page&#039;&#039;&#039; attribute in their UMIACS [https://intranet.umiacs.umd.edu/directory/info/ directory entry].&lt;br /&gt;
&lt;br /&gt;
UMIACS currently supports multiple ways of hosting a personal website.&lt;br /&gt;
&lt;br /&gt;
===UMIACS Object Store===&lt;br /&gt;
&lt;br /&gt;
This is the preferred method of hosting a personal website at UMIACS. Please see the [https://obj.umiacs.umd.edu/obj/help UMIACS Object Store (OBJ) Help Page] for more information on creating a website within OBJ. Once you create your website in OBJ, you will need to set your directory &#039;&#039;&#039;Home Page&#039;&#039;&#039; to the bucket&#039;s URL (the URL that ends in &amp;lt;code&amp;gt;umiacs.io&amp;lt;/code&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
===OPENLab File Space===&lt;br /&gt;
&lt;br /&gt;
This is primarily a legacy method for users who already have their websites configured this way. If you believe that your circumstances require your personal website to be hosted on this file space, please contact the [[HelpDesk | Help Desk]].&lt;br /&gt;
&lt;br /&gt;
Users can access their website for editing in two ways:&lt;br /&gt;
&lt;br /&gt;
* From &amp;lt;b&amp;gt;Unix&amp;lt;/b&amp;gt; as /fs/www-users/username, which can be remotely accessed via [[SFTP]] to a supported UNIX host (e.g. [[OpenLAB]])&lt;br /&gt;
* From &amp;lt;b&amp;gt;Windows&amp;lt;/b&amp;gt; as \\fluidfs.ad.umiacs.umd.edu\www-users\username, which can be remotely accessed via the same file share over the [[VPN]]&lt;br /&gt;
&lt;br /&gt;
In general, large datasets related to a lab&#039;s research should go into that lab&#039;s web tree, not an individual user&#039;s. Remember that a user&#039;s webpage is not permanently maintained once the user leaves UMIACS.&lt;br /&gt;
&lt;br /&gt;
==Adding A Password Protected Folder To Your Web Space==&lt;br /&gt;
&lt;br /&gt;
1) Create the directory you want to password protect, or &amp;lt;tt&amp;gt;cd&amp;lt;/tt&amp;gt; into it if it already exists.&lt;br /&gt;
&lt;br /&gt;
2) In that directory, create a file called &#039;&#039;.htaccess&#039;&#039; (&amp;lt;tt&amp;gt;vi .htaccess&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
3) In the file you just created, type the following lines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AuthUserFile /your/directory/here/.htpasswd&lt;br /&gt;
AuthName &amp;quot;Secure Document&amp;quot;&lt;br /&gt;
AuthType Basic&lt;br /&gt;
require user username&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you were going to protect the &amp;lt;tt&amp;gt;/fs/www-users/username/private&amp;lt;/tt&amp;gt; directory and you want the required username to be &amp;lt;tt&amp;gt;class239&amp;lt;/tt&amp;gt;, then your file would look like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AuthUserFile /fs/www-users/username/private/.htpasswd&lt;br /&gt;
AuthName &amp;quot;Secure Document&amp;quot;&lt;br /&gt;
AuthType Basic&lt;br /&gt;
require user class239&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4) Create a file called &#039;&#039;.htpasswd&#039;&#039; in the same directory as &#039;&#039;.htaccess&#039;&#039;. Create this file by running &amp;lt;tt&amp;gt;htpasswd -c .htpasswd &#039;&#039;username&#039;&#039;&amp;lt;/tt&amp;gt; in the directory to be protected.&lt;br /&gt;
&lt;br /&gt;
In the example above, the username is &amp;lt;tt&amp;gt;class239&amp;lt;/tt&amp;gt;, so you would type &amp;lt;tt&amp;gt;htpasswd -c .htpasswd class239&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You will be prompted to enter the password you want. The &#039;&#039;.htpasswd&#039;&#039; file will be created in the current directory and will contain an encrypted version of the password.&lt;br /&gt;
&lt;br /&gt;
To change the username later, edit the &#039;&#039;.htaccess&#039;&#039; file and update the &amp;lt;tt&amp;gt;require user&amp;lt;/tt&amp;gt; line. To change the password, rerun the command from step 4 and enter the new password at the prompt.&lt;br /&gt;
&lt;br /&gt;
==Restricting Content based on IP address==&lt;br /&gt;
It is possible to make pages in your webspace accessible only to clients connecting from certain IP addresses. To accomplish this, cd into the directory you wish to restrict and edit your &#039;&#039;.htaccess&#039;&#039; or &#039;&#039;httpd.conf&#039;&#039; file. The example below shows how to make content viewable only to clients connecting from the UMD network in Apache 2.2.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;white-space: pre-wrap; &lt;br /&gt;
white-space: -moz-pre-wrap; &lt;br /&gt;
white-space: -pre-wrap; &lt;br /&gt;
white-space: -o-pre-wrap; &lt;br /&gt;
word-wrap: break-word;&amp;quot;&amp;gt;SetEnvIF X-Forwarded-For &amp;quot;^128\.8\.\d+\.\d+$&amp;quot; UMD_NETWORK&lt;br /&gt;
SetEnvIF X-Forwarded-For &amp;quot;^129\.2\.\d+\.\d+$&amp;quot; UMD_NETWORK&lt;br /&gt;
SetEnvIF X-Forwarded-For &amp;quot;^192\.168\.\d+\.\d+$&amp;quot; UMD_NETWORK&lt;br /&gt;
SetEnvIF X-Forwarded-For &amp;quot;^206\.196\.(?:1[6-9][0-9]|2[0-5][0-9])\.\d+$&amp;quot; UMD_NETWORK&lt;br /&gt;
SetEnvIF X-Forwarded-For &amp;quot;^10\.\d+\.\d+\.\d+$&amp;quot; UMD_NETWORK&lt;br /&gt;
Order Deny,Allow&lt;br /&gt;
Deny from all&lt;br /&gt;
Allow from env=UMD_NETWORK&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The SetEnvIf directive sets an environment variable when the specified request attribute matches the provided regular expression. In this example, requests whose forwarded address falls within UMD&#039;s IP space are tagged with UMD_NETWORK; all other traffic to the example directory is denied. See the following pages for a more in-depth explanation of the directives used.&lt;br /&gt;
&lt;br /&gt;
[https://httpd.apache.org/docs/2.2/howto/htaccess.html .htaccess], [https://httpd.apache.org/docs/2.2/mod/mod_setenvif.html#setenvif SetEnvIf], [https://httpd.apache.org/docs/2.2/mod/mod_authz_host.html#order Order], [https://httpd.apache.org/docs/2.2/mod/mod_authz_host.html#deny Deny], [https://httpd.apache.org/docs/2.2/mod/mod_authz_host.html#allow Allow]&lt;/div&gt;</summary>
		<author><name>Dkontyko</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=NASUsers&amp;diff=8431</id>
		<title>NASUsers</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=NASUsers&amp;diff=8431"/>
		<updated>2019-06-25T14:12:53Z</updated>

		<summary type="html">&lt;p&gt;Dkontyko: /* Web Pages */ Redirecting this section to the WebSpace page to remove incorrect and redundant information&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Web Pages===&lt;br /&gt;
&lt;br /&gt;
Please see [[WebSpace#Personal%20Web%20Space | Personal Web Space]].&lt;br /&gt;
&lt;br /&gt;
===Personal FTP Sites for Distributing Data===&lt;br /&gt;
&lt;br /&gt;
Your ftp site is online at&lt;br /&gt;
  &lt;br /&gt;
  ftp://ftp.umiacs.umd.edu/pub/username&lt;br /&gt;
&lt;br /&gt;
On any supported UNIX workstation, you can access your ftp site as&lt;br /&gt;
  &lt;br /&gt;
  /fs/ftp-umiacs/pub/username&lt;br /&gt;
&lt;br /&gt;
Windows users can map it as a network drive from&lt;br /&gt;
  &lt;br /&gt;
  \\fluidfs.ad.umiacs.umd.edu\ftp-umiacs\pub&lt;br /&gt;
&lt;br /&gt;
You can also upload files using [[FTP]], [[SFTP]], and [[SCP]] through openlab.umiacs.umd.edu.&lt;br /&gt;
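&lt;br /&gt;
For example, a single file can be uploaded with [[SCP]] in one command (a sketch; the file name is hypothetical):&lt;br /&gt;
&lt;br /&gt;
  scp mydata.tar.gz username@openlab.umiacs.umd.edu:/fs/ftp-umiacs/pub/username/&lt;br /&gt;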
&lt;br /&gt;
Please note that anyone with an internet connection can log in and download these files, so please do not use your ftp site to store confidential data.&lt;br /&gt;
&lt;br /&gt;
This file system has regular backups with our [[TSM]] service and has [[Snapshots]] for easy user restores.&lt;br /&gt;
&lt;br /&gt;
===Usage Guidelines===&lt;br /&gt;
&lt;br /&gt;
Personal NAS storage is configured to be highly available, but modest in both size and usage. Please store large or heavily accessed data sets in a dedicated project storage directory that is tuned for your application.&lt;br /&gt;
&lt;br /&gt;
Please avoid storing shared project data in personal storage allocations. Separating project data from personal data will simplify administration and data management for both researchers and staff.&lt;/div&gt;</summary>
		<author><name>Dkontyko</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=CUPS&amp;diff=8430</id>
		<title>CUPS</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=CUPS&amp;diff=8430"/>
		<updated>2019-06-25T00:47:18Z</updated>

		<summary type="html">&lt;p&gt;Dkontyko: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;UMIACS has replaced the LPRng printing system on our UNIX systems with the industry-standard [http://www.cups.org CUPS] (&#039;&#039;&#039;Common Unix Printing System&#039;&#039;&#039;). This provides us with better support for printers and their specific options.&lt;br /&gt;
&lt;br /&gt;
Printing is only available from UMIACS Networks or when attached to the UMIACS VPN.&lt;br /&gt;
&lt;br /&gt;
A list of printers can be found using:&lt;br /&gt;
* http://print.umiacs.umd.edu/printers (available only on UMIACS Networks)&lt;br /&gt;
* &#039;&#039;&#039;lpstat -p&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can see documentation about printing from the command line on our RHEL/Ubuntu hosts at &lt;br /&gt;
* http://www.cups.org/documentation.php/doc-1.4/options.html&lt;br /&gt;
&lt;br /&gt;
==Changes from the old LPRng print system==&lt;br /&gt;
===Duplexing===&lt;br /&gt;
To duplex, specify &amp;lt;code&amp;gt;-o sides=two-sided-long-edge&amp;lt;/code&amp;gt; instead of &amp;lt;code&amp;gt;-Zduplex&amp;lt;/code&amp;gt; when submitting your job; &amp;lt;code&amp;gt;-o sides=two-sided-short-edge&amp;lt;/code&amp;gt; can be used for the other duplex orientation. To specify defaults or create a personal instance of a print queue, please see this [http://www.cups.org/documentation.php/doc-1.4/options.html#WITHOPTIONS documentation].&lt;br /&gt;
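&lt;br /&gt;
For example, a one-off duplex job and a per-queue personal default might look like this (a sketch; the file name and PRINTER_QUEUE are placeholders):&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;code&amp;gt;lp -d PRINTER_QUEUE -o sides=two-sided-long-edge file.pdf&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;code&amp;gt;lpoptions -p PRINTER_QUEUE -o sides=two-sided-long-edge&amp;lt;/code&amp;gt;&lt;br /&gt;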
&lt;br /&gt;
===Commands===&lt;br /&gt;
The &#039;&#039;&#039;lpr&#039;&#039;&#039; command is still available, although the &#039;&#039;&#039;lp&#039;&#039;&#039; command is the main supported command and has more functionality. We encourage you to use the &#039;&#039;&#039;lp&#039;&#039;&#039; (SysV) commands.&lt;br /&gt;
===Discrete Queues===&lt;br /&gt;
The [[CUPS]] and [[WindowsPrinting]] systems are now discrete. A misbehaving job submitted through one print system can still cause the other print system&#039;s queue to stall. Please contact staff for assistance if you run into a problem.&lt;br /&gt;
===Banner Pages===&lt;br /&gt;
All queues are no-banner queues by default. If you would like to change this behavior for your account, you can do so for each PRINTER_QUEUE by running&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;code&amp;gt;lpoptions -o job-sheets=standard PRINTER_QUEUE&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dkontyko</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=CUPS&amp;diff=8429</id>
		<title>CUPS</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=CUPS&amp;diff=8429"/>
		<updated>2019-06-25T00:47:09Z</updated>

		<summary type="html">&lt;p&gt;Dkontyko: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
UMIACS has replaced the LPRng printing system on our UNIX systems with the industry-standard [http://www.cups.org CUPS] (&#039;&#039;&#039;Common Unix Printing System&#039;&#039;&#039;). This provides us with better support for printers and their specific options.&lt;br /&gt;
&lt;br /&gt;
Printing is only available from UMIACS Networks or when attached to the UMIACS VPN.&lt;br /&gt;
&lt;br /&gt;
A list of printers can be found using:&lt;br /&gt;
* http://print.umiacs.umd.edu/printers (available only on UMIACS Networks)&lt;br /&gt;
* &#039;&#039;&#039;lpstat -p&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can see documentation about printing from the command line on our RHEL/Ubuntu hosts at &lt;br /&gt;
* http://www.cups.org/documentation.php/doc-1.4/options.html&lt;br /&gt;
&lt;br /&gt;
==Changes from the old LPRng print system==&lt;br /&gt;
===Duplexing===&lt;br /&gt;
To duplex, specify &amp;lt;code&amp;gt;-o sides=two-sided-long-edge&amp;lt;/code&amp;gt; instead of &amp;lt;code&amp;gt;-Zduplex&amp;lt;/code&amp;gt; when submitting your job; &amp;lt;code&amp;gt;-o sides=two-sided-short-edge&amp;lt;/code&amp;gt; can be used for the other duplex orientation. To specify defaults or create a personal instance of a print queue, please see this [http://www.cups.org/documentation.php/doc-1.4/options.html#WITHOPTIONS documentation].&lt;br /&gt;
&lt;br /&gt;
===Commands===&lt;br /&gt;
The &#039;&#039;&#039;lpr&#039;&#039;&#039; command is still available, although the &#039;&#039;&#039;lp&#039;&#039;&#039; command is the main supported command and has more functionality. We encourage you to use the &#039;&#039;&#039;lp&#039;&#039;&#039; (SysV) commands.&lt;br /&gt;
===Discrete Queues===&lt;br /&gt;
The [[CUPS]] and [[WindowsPrinting]] systems are now discrete. A misbehaving job submitted through one print system can still cause the other print system&#039;s queue to stall. Please contact staff for assistance if you run into a problem.&lt;br /&gt;
===Banner Pages===&lt;br /&gt;
All queues are no-banner queues by default. If you would like to change this behavior for your account, you can do so for each PRINTER_QUEUE by running&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;code&amp;gt;lpoptions -o job-sheets=standard PRINTER_QUEUE&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dkontyko</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=RHEL&amp;diff=8427</id>
		<title>RHEL</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=RHEL&amp;diff=8427"/>
		<updated>2019-06-25T00:32:28Z</updated>

		<summary type="html">&lt;p&gt;Dkontyko: Redirect until this page is properly built out&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[RHEL7]]&lt;/div&gt;</summary>
		<author><name>Dkontyko</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=ConferenceRooms&amp;diff=8401</id>
		<title>ConferenceRooms</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=ConferenceRooms&amp;diff=8401"/>
		<updated>2019-06-21T21:29:57Z</updated>

		<summary type="html">&lt;p&gt;Dkontyko: Redirected page to Iribe/ConferenceRooms&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Iribe/ConferenceRooms]]&lt;/div&gt;</summary>
		<author><name>Dkontyko</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Ubuntu&amp;diff=8385</id>
		<title>Ubuntu</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Ubuntu&amp;diff=8385"/>
		<updated>2019-06-12T17:33:56Z</updated>

		<summary type="html">&lt;p&gt;Dkontyko: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To provide a more up-to-date desktop Linux experience, UMIACS supports Ubuntu LTS (long-term support) releases. We will only be supporting LTS releases, as a large amount of work goes into testing the auto-installer and supporting the more cutting-edge Ubuntu releases.&lt;br /&gt;
&lt;br /&gt;
Previous Ubuntu LTS (Long Term Support) releases were supported for 3 years on the desktop and 5 years on the server. Starting with Ubuntu 12.04 LTS, LTS releases are supported for 5 years on both the desktop and the server.&lt;br /&gt;
* &amp;lt;b&amp;gt;Ubuntu 14.04 LTS&amp;lt;/b&amp;gt; (Trusty) - End of life date: April 2019&lt;br /&gt;
&lt;br /&gt;
===Features===&lt;br /&gt;
* Users have the ability to install software from blessed repositories without staff intervention.&lt;br /&gt;
* More bleeding edge desktop software experience than our RHEL offerings.&lt;br /&gt;
&lt;br /&gt;
===Software===&lt;br /&gt;
Besides being able to install software from the blessed Ubuntu universes, we provide only our common binary software tree on Ubuntu LTS releases. This is analogous to the traditional software found in &amp;lt;tt&amp;gt;/opt&amp;lt;/tt&amp;gt;, but is now found in &amp;lt;tt&amp;gt;/opt/common&amp;lt;/tt&amp;gt;. The software includes Matlab, Mathematica, compilers, etc. Starting with Ubuntu 14.04 LTS, you can use [[Modules]] to easily load these paths and relevant libraries into your environment. Also, please see [[Ubuntu/SoftwareCenter | Ubuntu Software Center]].&lt;br /&gt;
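&lt;br /&gt;
For example, loading one of these packages into your environment might look like this (a sketch; the exact module names available may differ):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module avail          # list software available from the common tree&lt;br /&gt;
module load matlab    # add Matlab to your environment&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;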
&lt;br /&gt;
===Data Storage===&lt;br /&gt;
Please see the UNIX section in our [[LocalDataStorage]] article.&lt;/div&gt;</summary>
		<author><name>Dkontyko</name></author>
	</entry>
</feed>