<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.umiacs.umd.edu/umiacs/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Wrichman</id>
	<title>UMIACS - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.umiacs.umd.edu/umiacs/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Wrichman"/>
	<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php/Special:Contributions/Wrichman"/>
	<updated>2026-05-09T18:41:30Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.7</generator>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM&amp;diff=9520</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM&amp;diff=9520"/>
		<updated>2020-12-18T17:21:03Z</updated>

		<summary type="html">&lt;p&gt;Wrichman: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Simple Linux Utility for Resource Management (SLURM)=&lt;br /&gt;
SLURM is an open-source workload manager designed for Linux clusters of all sizes. It provides three key functions. First, it allocates exclusive or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (typically a parallel job) on a set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.&lt;br /&gt;
&lt;br /&gt;
==Documentation==&lt;br /&gt;
:[[SLURM/JobSubmission | Submitting Jobs]]&lt;br /&gt;
:[[SLURM/JobStatus | Checking Job Status]]&lt;br /&gt;
:[[SLURM/ClusterStatus | Checking Cluster Status]]&lt;br /&gt;
:[http://slurm.schedmd.com/documentation.html Official Documentation]&lt;br /&gt;
:[http://slurm.schedmd.com/faq.html FAQ]&lt;br /&gt;
&lt;br /&gt;
==Commands==&lt;br /&gt;
Below are some of the common commands used in Slurm. Further information on how to use these commands is found in the documentation linked above. To see all flags available for a command, please check the command&#039;s manual by using &amp;lt;code&amp;gt;man $COMMAND&amp;lt;/code&amp;gt; on the command line.&lt;br /&gt;
&lt;br /&gt;
====srun====&lt;br /&gt;
srun runs a parallel job on a cluster managed by Slurm.  If necessary, it will first create a resource allocation in which to run the parallel job.&lt;br /&gt;
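For example, a minimal invocation might look like the following (a sketch; the resource values are placeholders, and your cluster may additionally require partition, account, or QOS options):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# run the hostname command as a single task with 1G of memory for up to 5 minutes&lt;br /&gt;
srun --ntasks=1 --mem=1G --time=00:05:00 hostname&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;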
&lt;br /&gt;
====salloc====&lt;br /&gt;
salloc allocates a Slurm job allocation, which is a set of resources (nodes), possibly with some set of constraints (e.g. number of processors per node).  When salloc successfully obtains the requested allocation, it then runs the command specified by the user.  Finally, when the user specified command is complete, salloc relinquishes the job allocation.  If no command is specified, salloc runs the user&#039;s default shell.&lt;br /&gt;
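For example, a minimal interactive allocation might look like the following (a sketch; the resource values are placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# request one task for up to one hour; with no command given, salloc starts your default shell&lt;br /&gt;
salloc --ntasks=1 --time=01:00:00&lt;br /&gt;
# inside the allocation, launch job steps with srun&lt;br /&gt;
srun hostname&lt;br /&gt;
# exiting the shell relinquishes the allocation&lt;br /&gt;
exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;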
&lt;br /&gt;
====sbatch====&lt;br /&gt;
sbatch submits a batch script to Slurm.  The batch script may be given to sbatch through a file name on the command line, or if no file name is specified, sbatch will read in a script from standard input.  The batch script may contain options preceded with &amp;quot;#SBATCH&amp;quot; before any executable commands in the script.&lt;br /&gt;
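A minimal batch script might look like the following (a sketch; the job name, resource values, and output file name are placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=myjob&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --mem=2G&lt;br /&gt;
#SBATCH --time=00:10:00&lt;br /&gt;
#SBATCH --output=myjob.%j.out&lt;br /&gt;
&lt;br /&gt;
hostname&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You would then submit it with &amp;lt;code&amp;gt;sbatch myjob.sh&amp;lt;/code&amp;gt;.&lt;br /&gt;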
&lt;br /&gt;
====squeue====&lt;br /&gt;
squeue views job and job step information for jobs managed by Slurm.&lt;br /&gt;
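For example (the job ID is a placeholder):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
squeue --user=$USER    # list your jobs&lt;br /&gt;
squeue --job 12345     # show a specific job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;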
&lt;br /&gt;
====scancel====&lt;br /&gt;
scancel signals or cancels jobs, job arrays, or job steps.  An arbitrary number of jobs or job steps may be signaled using job specification filters or a space separated list of specific job and/or job step IDs.&lt;br /&gt;
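For example (the job ID is a placeholder):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scancel 12345            # cancel a single job&lt;br /&gt;
scancel --user=$USER     # cancel all of your jobs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;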
&lt;br /&gt;
====sacct====&lt;br /&gt;
sacct displays job accounting data stored in the job accounting log file or Slurm database in a variety of forms for your analysis.  The sacct command displays information on jobs, job steps, status, and exitcodes by default.  You can tailor the output with the use of the --format= option to specify the fields to be shown.&lt;br /&gt;
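For example (the job ID and field list are placeholders; see &amp;lt;code&amp;gt;man sacct&amp;lt;/code&amp;gt; for the full list of fields):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sacct -j 12345 --format=JobID,JobName,State,ExitCode,Elapsed,MaxRSS&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;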
&lt;br /&gt;
====sstat====&lt;br /&gt;
sstat displays job status information for your analysis.  The sstat command displays information pertaining to CPU, Task, Node, Resident Set Size (RSS) and Virtual Memory (VM).  You can tailor the output with the use of the --fields= option to specify the fields to be shown.&lt;br /&gt;
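For example (the job ID is a placeholder; note that sstat only reports on currently running jobs):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sstat --jobs=12345.batch --fields=JobID,MaxRSS,MaxVMSize,AveCPU&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;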
&lt;br /&gt;
==Modules==&lt;br /&gt;
If you are trying to use [[Modules | GNU Modules]] in a Slurm job, please read the section of our [[Modules]] documentation on [[Modules#Modules_in_Non-Interactive_Shell_Sessions | non-interactive shell sessions]].&lt;br /&gt;
&lt;br /&gt;
==Running Jupyter Notebook on a Compute Node==&lt;br /&gt;
The steps to run a Jupyter Notebook from a compute node are listed below.&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;h6&amp;gt;Setting up Python Virtual Environment&amp;lt;/h6&amp;gt;&lt;br /&gt;
In order to set up your python virtual environment, you&#039;ll first want to follow the steps listed [https://wiki.umiacs.umd.edu/umiacs/index.php/PythonVirtualEnv here] to create a Python virtual environment on the compute node you are assigned. Then, activate it using the steps listed [https://wiki.umiacs.umd.edu/umiacs/index.php/PythonVirtualEnv#Activating_the_VirtualEnv here]. Next, install Jupyter using pip by following the steps [https://jupyter.readthedocs.io/en/latest/install/notebook-classic.html#alternative-for-experienced-python-users-installing-jupyter-with-pip here].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h6&amp;gt; Running Jupyter Notebook &amp;lt;/h6&amp;gt;&lt;br /&gt;
After you&#039;ve set up the python virtual environment, run the following commands on the compute node you are assigned:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --port=8889 --ip=0.0.0.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This will start running the notebook on port 8889. &amp;lt;b&amp;gt;Note:&amp;lt;/b&amp;gt; You must keep this shell window open to be able to connect.&lt;br /&gt;
Then, on your local machine, run &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -N -f -L localhost:8888:$(NODENAME):8889 $(USERNAME)@$(SUBMISSIONNODE).umiacs.umd.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
This will tunnel port 8889 on the compute node to port 8888 on your local machine, using $(SUBMISSIONNODE) as an intermediate host. Make sure to replace $(NODENAME) with the name of the compute node you are assigned, $(USERNAME) with your username, and $(SUBMISSIONNODE) with the name of the submission node you want to use, for example &amp;lt;code&amp;gt;esloate@clipsub00.umiacs.umd.edu&amp;lt;/code&amp;gt; (a fully filled-in command is shown below). If the ports mentioned above are not working, you can replace them with other ports of your choice; just make sure to also change the port in the command you ran on the compute node. You can then open a web browser and go to &amp;lt;code&amp;gt;localhost:8888&amp;lt;/code&amp;gt; to access the notebook. &amp;lt;b&amp;gt;Note:&amp;lt;/b&amp;gt; You must be on a machine connected to the UMIACS network or connected to our [[Network/VPN | VPN]] in order to access the Jupyter notebook.&lt;br /&gt;
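For instance, a fully filled-in tunnel command might look like the following (the compute node name here is purely hypothetical; substitute the node you were actually assigned):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# mynode00 is a hypothetical compute node name used only for illustration&lt;br /&gt;
ssh -N -f -L localhost:8888:mynode00.umiacs.umd.edu:8889 esloate@clipsub00.umiacs.umd.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;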
&lt;br /&gt;
&lt;br /&gt;
=Quick Guide to translate PBS/Torque to SLURM=&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+User commands&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
!PBS/Torque&lt;br /&gt;
!SLURM&lt;br /&gt;
|-&lt;br /&gt;
!Job submission&lt;br /&gt;
|qsub [filename]&lt;br /&gt;
|sbatch [filename]&lt;br /&gt;
|-&lt;br /&gt;
!Job deletion&lt;br /&gt;
|qdel [job_id]&lt;br /&gt;
|scancel [job_id]&lt;br /&gt;
|-&lt;br /&gt;
!Job status (by job)&lt;br /&gt;
|qstat [job_id] &lt;br /&gt;
|squeue --job [job_id]&lt;br /&gt;
|-&lt;br /&gt;
!Full job status (by job)&lt;br /&gt;
|qstat -f [job_id]&lt;br /&gt;
|scontrol show job [job_id]&lt;br /&gt;
|-&lt;br /&gt;
!Job status (by user)&lt;br /&gt;
|qstat -u [username]&lt;br /&gt;
|squeue --user=[username]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Environment variables&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
!PBS/Torque  &lt;br /&gt;
!SLURM&lt;br /&gt;
|-&lt;br /&gt;
!Job ID&lt;br /&gt;
|$PBS_JOBID&lt;br /&gt;
|$SLURM_JOBID&lt;br /&gt;
|-&lt;br /&gt;
!Submit Directory&lt;br /&gt;
|$PBS_O_WORKDIR&lt;br /&gt;
|$SLURM_SUBMIT_DIR&lt;br /&gt;
|-&lt;br /&gt;
!Node List &lt;br /&gt;
|$PBS_NODEFILE&lt;br /&gt;
|$SLURM_JOB_NODELIST&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Job specification&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
!PBS/Torque  &lt;br /&gt;
!SLURM&lt;br /&gt;
|-&lt;br /&gt;
!Script directive&lt;br /&gt;
|#PBS&lt;br /&gt;
|#SBATCH&lt;br /&gt;
|-&lt;br /&gt;
!Job Name &lt;br /&gt;
| -N [name]&lt;br /&gt;
| --job-name=[name] OR -J [name]&lt;br /&gt;
|-&lt;br /&gt;
!Node Count&lt;br /&gt;
| -l nodes=[count]&lt;br /&gt;
| --nodes=[min[-max]] OR -N [min[-max]]&lt;br /&gt;
|-&lt;br /&gt;
!CPU Count&lt;br /&gt;
| -l ppn=[count]&lt;br /&gt;
| --ntasks-per-node=[count]&lt;br /&gt;
|-&lt;br /&gt;
!CPUs Per Task&lt;br /&gt;
|&lt;br /&gt;
| --cpus-per-task=[count]&lt;br /&gt;
|-&lt;br /&gt;
!Memory Size&lt;br /&gt;
| -l mem=[MB] &lt;br /&gt;
| --mem=[MB] OR --mem-per-cpu=[MB]&lt;br /&gt;
|-&lt;br /&gt;
!Wall Clock Limit&lt;br /&gt;
| -l walltime=[hh:mm:ss]&lt;br /&gt;
| --time=[min] OR --time=[days-hh:mm:ss]&lt;br /&gt;
|-&lt;br /&gt;
!Node Properties&lt;br /&gt;
| -l nodes=4:ppn=8:[property]&lt;br /&gt;
| --constraint=[list]&lt;br /&gt;
|-&lt;br /&gt;
!Standard Output File&lt;br /&gt;
| -o [file_name]&lt;br /&gt;
| --output=[file_name] OR -o [file_name]&lt;br /&gt;
|-&lt;br /&gt;
!Standard Error File&lt;br /&gt;
| -e [file_name]&lt;br /&gt;
| --error=[file_name] OR -e [file_name]&lt;br /&gt;
|-&lt;br /&gt;
!Combine stdout/stderr&lt;br /&gt;
| -j oe (both to stdout)&lt;br /&gt;
|(Default if you don&#039;t specify --error)&lt;br /&gt;
|-&lt;br /&gt;
!Job Arrays&lt;br /&gt;
| -t [array_spec]&lt;br /&gt;
| --array=[array_spec] OR -a [array_spec]&lt;br /&gt;
|-&lt;br /&gt;
!Delay Job Start&lt;br /&gt;
| -a [time]&lt;br /&gt;
| --begin=[time]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Wrichman</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM&amp;diff=9518</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM&amp;diff=9518"/>
		<updated>2020-12-15T22:27:52Z</updated>

		<summary type="html">&lt;p&gt;Wrichman: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Simple Linux Utility for Resource Management (SLURM)=&lt;br /&gt;
SLURM is an open-source workload manager designed for Linux clusters of all sizes. It provides three key functions. First, it allocates exclusive or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (typically a parallel job) on a set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.&lt;br /&gt;
&lt;br /&gt;
==Documentation==&lt;br /&gt;
:[[SLURM/JobSubmission | Submitting Jobs]]&lt;br /&gt;
:[[SLURM/JobStatus | Checking Job Status]]&lt;br /&gt;
:[[SLURM/ClusterStatus | Checking Cluster Status]]&lt;br /&gt;
:[http://slurm.schedmd.com/documentation.html Official Documentation]&lt;br /&gt;
:[http://slurm.schedmd.com/faq.html FAQ]&lt;br /&gt;
&lt;br /&gt;
==Commands==&lt;br /&gt;
Below are some of the common commands used in Slurm. Further information on how to use these commands is found in the documentation linked above. To see all flags available for a command, please check the command&#039;s manual by using &amp;lt;code&amp;gt;man $COMMAND&amp;lt;/code&amp;gt; on the command line.&lt;br /&gt;
&lt;br /&gt;
====srun====&lt;br /&gt;
srun runs a parallel job on a cluster managed by Slurm.  If necessary, it will first create a resource allocation in which to run the parallel job.&lt;br /&gt;
&lt;br /&gt;
====salloc====&lt;br /&gt;
salloc allocates a Slurm job allocation, which is a set of resources (nodes), possibly with some set of constraints (e.g. number of processors per node).  When salloc successfully obtains the requested allocation, it then runs the command specified by the user.  Finally, when the user specified command is complete, salloc relinquishes the job allocation.  If no command is specified, salloc runs the user&#039;s default shell.&lt;br /&gt;
&lt;br /&gt;
====sbatch====&lt;br /&gt;
sbatch submits a batch script to Slurm.  The batch script may be given to sbatch through a file name on the command line, or if no file name is specified, sbatch will read in a script from standard input.  The batch script may contain options preceded with &amp;quot;#SBATCH&amp;quot; before any executable commands in the script.&lt;br /&gt;
&lt;br /&gt;
====squeue====&lt;br /&gt;
squeue views job and job step information for jobs managed by Slurm.&lt;br /&gt;
&lt;br /&gt;
====scancel====&lt;br /&gt;
scancel signals or cancels jobs, job arrays, or job steps.  An arbitrary number of jobs or job steps may be signaled using job specification filters or a space separated list of specific job and/or job step IDs.&lt;br /&gt;
&lt;br /&gt;
====sacct====&lt;br /&gt;
sacct displays job accounting data stored in the job accounting log file or Slurm database in a variety of forms for your analysis.  The sacct command displays information on jobs, job steps, status, and exitcodes by default.  You can tailor the output with the use of the --format= option to specify the fields to be shown.&lt;br /&gt;
&lt;br /&gt;
====sstat====&lt;br /&gt;
sstat displays job status information for your analysis.  The sstat command displays information pertaining to CPU, Task, Node, Resident Set Size (RSS) and Virtual Memory (VM).  You can tailor the output with the use of the --fields= option to specify the fields to be shown.&lt;br /&gt;
&lt;br /&gt;
==Modules==&lt;br /&gt;
If you are trying to use [[Modules | GNU Modules]] in a Slurm job, please read the section of our [[Modules]] documentation on [[Modules#Modules_in_Non-Interactive_Shell_Sessions | non-interactive shell sessions]].&lt;br /&gt;
&lt;br /&gt;
==Running Jupyter Notebook on a Compute Node==&lt;br /&gt;
The steps to run a Jupyter Notebook from a compute node are listed below.&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;h6&amp;gt;Setting up Python Virtual Environment&amp;lt;/h6&amp;gt;&lt;br /&gt;
In order to set up your python virtual environment, you&#039;ll first want to follow the steps listed [https://wiki.umiacs.umd.edu/umiacs/index.php/PythonVirtualEnv here] to create a Python virtual environment on the compute node you are assigned. Then, activate it using the steps listed [https://wiki.umiacs.umd.edu/umiacs/index.php/PythonVirtualEnv#Activating_the_VirtualEnv here]. Next, install Jupyter using pip by following the steps [https://jupyter.readthedocs.io/en/latest/install/notebook-classic.html#alternative-for-experienced-python-users-installing-jupyter-with-pip here].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h6&amp;gt; Running Jupyter Notebook &amp;lt;/h6&amp;gt;&lt;br /&gt;
After you&#039;ve set up the python virtual environment, run the following commands on the compute node you are assigned:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --port=8889 --ip=0.0.0.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This will start running the notebook on port 8889. &amp;lt;b&amp;gt;Note:&amp;lt;/b&amp;gt; You must keep this shell window open to be able to connect.&lt;br /&gt;
Then, on your local machine, run &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -N -f -L localhost:8888:$(NODENAME):8889 $(USERNAME)@$(CLUSTERNAME).umiacs.umd.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
This will tunnel port 8889 on the compute node to port 8888 on your local machine, using the $(CLUSTERNAME) node as an intermediate host. Make sure to replace $(NODENAME) with the name of the compute node you are assigned, $(USERNAME) with your username, and $(CLUSTERNAME) with the name of the compute cluster you are using, for example &amp;lt;code&amp;gt;esloate@clipsub00.umiacs.umd.edu&amp;lt;/code&amp;gt;. If the ports mentioned above are not working, you can replace them with other ports of your choice; just make sure to also change the port in the command you ran on the compute node. You can then open a web browser and go to &amp;lt;code&amp;gt;localhost:8888&amp;lt;/code&amp;gt; to access the notebook. &amp;lt;b&amp;gt;Note:&amp;lt;/b&amp;gt; You must be on a machine connected to the UMIACS network or connected to our [[Network/VPN | VPN]] in order to access the Jupyter notebook.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Quick Guide to translate PBS/Torque to SLURM=&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+User commands&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
!PBS/Torque&lt;br /&gt;
!SLURM&lt;br /&gt;
|-&lt;br /&gt;
!Job submission&lt;br /&gt;
|qsub [filename]&lt;br /&gt;
|sbatch [filename]&lt;br /&gt;
|-&lt;br /&gt;
!Job deletion&lt;br /&gt;
|qdel [job_id]&lt;br /&gt;
|scancel [job_id]&lt;br /&gt;
|-&lt;br /&gt;
!Job status (by job)&lt;br /&gt;
|qstat [job_id] &lt;br /&gt;
|squeue --job [job_id]&lt;br /&gt;
|-&lt;br /&gt;
!Full job status (by job)&lt;br /&gt;
|qstat -f [job_id]&lt;br /&gt;
|scontrol show job [job_id]&lt;br /&gt;
|-&lt;br /&gt;
!Job status (by user)&lt;br /&gt;
|qstat -u [username]&lt;br /&gt;
|squeue --user=[username]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Environment variables&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
!PBS/Torque  &lt;br /&gt;
!SLURM&lt;br /&gt;
|-&lt;br /&gt;
!Job ID&lt;br /&gt;
|$PBS_JOBID&lt;br /&gt;
|$SLURM_JOBID&lt;br /&gt;
|-&lt;br /&gt;
!Submit Directory&lt;br /&gt;
|$PBS_O_WORKDIR&lt;br /&gt;
|$SLURM_SUBMIT_DIR&lt;br /&gt;
|-&lt;br /&gt;
!Node List &lt;br /&gt;
|$PBS_NODEFILE&lt;br /&gt;
|$SLURM_JOB_NODELIST&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Job specification&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
!PBS/Torque  &lt;br /&gt;
!SLURM&lt;br /&gt;
|-&lt;br /&gt;
!Script directive&lt;br /&gt;
|#PBS&lt;br /&gt;
|#SBATCH&lt;br /&gt;
|-&lt;br /&gt;
!Job Name &lt;br /&gt;
| -N [name]&lt;br /&gt;
| --job-name=[name] OR -J [name]&lt;br /&gt;
|-&lt;br /&gt;
!Node Count&lt;br /&gt;
| -l nodes=[count]&lt;br /&gt;
| --nodes=[min[-max]] OR -N [min[-max]]&lt;br /&gt;
|-&lt;br /&gt;
!CPU Count&lt;br /&gt;
| -l ppn=[count]&lt;br /&gt;
| --ntasks-per-node=[count]&lt;br /&gt;
|-&lt;br /&gt;
!CPUs Per Task&lt;br /&gt;
|&lt;br /&gt;
| --cpus-per-task=[count]&lt;br /&gt;
|-&lt;br /&gt;
!Memory Size&lt;br /&gt;
| -l mem=[MB] &lt;br /&gt;
| --mem=[MB] OR --mem-per-cpu=[MB]&lt;br /&gt;
|-&lt;br /&gt;
!Wall Clock Limit&lt;br /&gt;
| -l walltime=[hh:mm:ss]&lt;br /&gt;
| --time=[min] OR --time=[days-hh:mm:ss]&lt;br /&gt;
|-&lt;br /&gt;
!Node Properties&lt;br /&gt;
| -l nodes=4:ppn=8:[property]&lt;br /&gt;
| --constraint=[list]&lt;br /&gt;
|-&lt;br /&gt;
!Standard Output File&lt;br /&gt;
| -o [file_name]&lt;br /&gt;
| --output=[file_name] OR -o [file_name]&lt;br /&gt;
|-&lt;br /&gt;
!Standard Error File&lt;br /&gt;
| -e [file_name]&lt;br /&gt;
| --error=[file_name] OR -e [file_name]&lt;br /&gt;
|-&lt;br /&gt;
!Combine stdout/stderr&lt;br /&gt;
| -j oe (both to stdout)&lt;br /&gt;
|(Default if you don&#039;t specify --error)&lt;br /&gt;
|-&lt;br /&gt;
!Job Arrays&lt;br /&gt;
| -t [array_spec]&lt;br /&gt;
| --array=[array_spec] OR -a [array_spec]&lt;br /&gt;
|-&lt;br /&gt;
!Delay Job Start&lt;br /&gt;
| -a [time]&lt;br /&gt;
| --begin=[time]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Wrichman</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM&amp;diff=9517</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM&amp;diff=9517"/>
		<updated>2020-12-15T22:24:09Z</updated>

		<summary type="html">&lt;p&gt;Wrichman: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Simple Linux Utility for Resource Management (SLURM)=&lt;br /&gt;
SLURM is an open-source workload manager designed for Linux clusters of all sizes. It provides three key functions. First, it allocates exclusive or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (typically a parallel job) on a set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.&lt;br /&gt;
&lt;br /&gt;
==Documentation==&lt;br /&gt;
:[[SLURM/JobSubmission | Submitting Jobs]]&lt;br /&gt;
:[[SLURM/JobStatus | Checking Job Status]]&lt;br /&gt;
:[[SLURM/ClusterStatus | Checking Cluster Status]]&lt;br /&gt;
:[http://slurm.schedmd.com/documentation.html Official Documentation]&lt;br /&gt;
:[http://slurm.schedmd.com/faq.html FAQ]&lt;br /&gt;
&lt;br /&gt;
==Commands==&lt;br /&gt;
Below are some of the common commands used in Slurm. Further information on how to use these commands is found in the documentation linked above. To see all flags available for a command, please check the command&#039;s manual by using &amp;lt;code&amp;gt;man $COMMAND&amp;lt;/code&amp;gt; on the command line.&lt;br /&gt;
&lt;br /&gt;
====srun====&lt;br /&gt;
srun runs a parallel job on a cluster managed by Slurm.  If necessary, it will first create a resource allocation in which to run the parallel job.&lt;br /&gt;
&lt;br /&gt;
====salloc====&lt;br /&gt;
salloc allocates a Slurm job allocation, which is a set of resources (nodes), possibly with some set of constraints (e.g. number of processors per node).  When salloc successfully obtains the requested allocation, it then runs the command specified by the user.  Finally, when the user specified command is complete, salloc relinquishes the job allocation.  If no command is specified, salloc runs the user&#039;s default shell.&lt;br /&gt;
&lt;br /&gt;
====sbatch====&lt;br /&gt;
sbatch submits a batch script to Slurm.  The batch script may be given to sbatch through a file name on the command line, or if no file name is specified, sbatch will read in a script from standard input.  The batch script may contain options preceded with &amp;quot;#SBATCH&amp;quot; before any executable commands in the script.&lt;br /&gt;
&lt;br /&gt;
====squeue====&lt;br /&gt;
squeue views job and job step information for jobs managed by Slurm.&lt;br /&gt;
&lt;br /&gt;
====scancel====&lt;br /&gt;
scancel signals or cancels jobs, job arrays, or job steps.  An arbitrary number of jobs or job steps may be signaled using job specification filters or a space separated list of specific job and/or job step IDs.&lt;br /&gt;
&lt;br /&gt;
====sacct====&lt;br /&gt;
sacct displays job accounting data stored in the job accounting log file or Slurm database in a variety of forms for your analysis.  The sacct command displays information on jobs, job steps, status, and exitcodes by default.  You can tailor the output with the use of the --format= option to specify the fields to be shown.&lt;br /&gt;
&lt;br /&gt;
====sstat====&lt;br /&gt;
sstat displays job status information for your analysis.  The sstat command displays information pertaining to CPU, Task, Node, Resident Set Size (RSS) and Virtual Memory (VM).  You can tailor the output with the use of the --fields= option to specify the fields to be shown.&lt;br /&gt;
&lt;br /&gt;
==Modules==&lt;br /&gt;
If you are trying to use [[Modules | GNU Modules]] in a Slurm job, please read the section of our [[Modules]] documentation on [[Modules#Modules_in_Non-Interactive_Shell_Sessions | non-interactive shell sessions]].&lt;br /&gt;
&lt;br /&gt;
==Running Jupyter Notebook on a Compute Node==&lt;br /&gt;
The steps to run a Jupyter Notebook from a compute node are listed below.&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;h6&amp;gt;Setting up Python Virtual Environment&amp;lt;/h6&amp;gt;&lt;br /&gt;
In order to set up your python virtual environment, you&#039;ll first want to follow the steps listed [https://wiki.umiacs.umd.edu/umiacs/index.php/PythonVirtualEnv here] to create a Python virtual environment on the compute node you are assigned. Then, activate it using the steps listed [https://wiki.umiacs.umd.edu/umiacs/index.php/PythonVirtualEnv#Activating_the_VirtualEnv here]. Next, install Jupyter using pip by following the steps [https://jupyter.readthedocs.io/en/latest/install/notebook-classic.html#alternative-for-experienced-python-users-installing-jupyter-with-pip here].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h6&amp;gt; Running Jupyter Notebook &amp;lt;/h6&amp;gt;&lt;br /&gt;
After you&#039;ve set up the python virtual environment, run the following commands on the compute node you are assigned:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --port=8889 --ip=0.0.0.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This will start running the notebook on port 8889. &amp;lt;b&amp;gt;Note:&amp;lt;/b&amp;gt; You must keep this shell window open to be able to connect.&lt;br /&gt;
Then, on your local machine, run &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -N -f -L localhost:8888:$(NODENAME):8889 $(USERNAME)@$(CLUSTERNAME).umiacs.umd.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
This will tunnel port 8889 on the compute node to port 8888 on your local machine, using the $(CLUSTERNAME) node as an intermediate host. Make sure to replace $(NODENAME) with the name of the compute node you are assigned, $(USERNAME) with your username, and $(CLUSTERNAME) with the name of the compute cluster you are using, for example &amp;lt;code&amp;gt;esloate@clipsub00.umiacs.umd.edu&amp;lt;/code&amp;gt;. If the ports mentioned above are not working, you can replace them with other ports of your choice; just make sure to also change the port in the command you ran on the compute node. You can then open a web browser and go to &amp;lt;code&amp;gt;localhost:8888&amp;lt;/code&amp;gt; to access the notebook. &amp;lt;b&amp;gt;Note:&amp;lt;/b&amp;gt; You must be on a machine connected to the UMIACS network or connected to our [[Network/VPN | VPN]] in order to access the Jupyter notebook.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Quick Guide to translate PBS/Torque to SLURM=&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+User commands&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
!PBS/Torque&lt;br /&gt;
!SLURM&lt;br /&gt;
|-&lt;br /&gt;
!Job submission&lt;br /&gt;
|qsub [filename]&lt;br /&gt;
|sbatch [filename]&lt;br /&gt;
|-&lt;br /&gt;
!Job deletion&lt;br /&gt;
|qdel [job_id]&lt;br /&gt;
|scancel [job_id]&lt;br /&gt;
|-&lt;br /&gt;
!Job status (by job)&lt;br /&gt;
|qstat [job_id] &lt;br /&gt;
|squeue --job [job_id]&lt;br /&gt;
|-&lt;br /&gt;
!Full job status (by job)&lt;br /&gt;
|qstat -f [job_id]&lt;br /&gt;
|scontrol show job [job_id]&lt;br /&gt;
|-&lt;br /&gt;
!Job status (by user)&lt;br /&gt;
|qstat -u [username]&lt;br /&gt;
|squeue --user=[username]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Environment variables&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
!PBS/Torque  &lt;br /&gt;
!SLURM&lt;br /&gt;
|-&lt;br /&gt;
!Job ID&lt;br /&gt;
|$PBS_JOBID&lt;br /&gt;
|$SLURM_JOBID&lt;br /&gt;
|-&lt;br /&gt;
!Submit Directory&lt;br /&gt;
|$PBS_O_WORKDIR&lt;br /&gt;
|$SLURM_SUBMIT_DIR&lt;br /&gt;
|-&lt;br /&gt;
!Node List &lt;br /&gt;
|$PBS_NODEFILE&lt;br /&gt;
|$SLURM_JOB_NODELIST&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Job specification&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
!PBS/Torque  &lt;br /&gt;
!SLURM&lt;br /&gt;
|-&lt;br /&gt;
!Script directive&lt;br /&gt;
|#PBS&lt;br /&gt;
|#SBATCH&lt;br /&gt;
|-&lt;br /&gt;
!Job Name &lt;br /&gt;
| -N [name]&lt;br /&gt;
| --job-name=[name] OR -J [name]&lt;br /&gt;
|-&lt;br /&gt;
!Node Count&lt;br /&gt;
| -l nodes=[count]&lt;br /&gt;
| --nodes=[min[-max]] OR -N [min[-max]]&lt;br /&gt;
|-&lt;br /&gt;
!CPU Count&lt;br /&gt;
| -l ppn=[count]&lt;br /&gt;
| --ntasks-per-node=[count]&lt;br /&gt;
|-&lt;br /&gt;
!CPUs Per Task&lt;br /&gt;
|&lt;br /&gt;
| --cpus-per-task=[count]&lt;br /&gt;
|-&lt;br /&gt;
!Memory Size&lt;br /&gt;
| -l mem=[MB] &lt;br /&gt;
| --mem=[MB] OR --mem-per-cpu=[MB]&lt;br /&gt;
|-&lt;br /&gt;
!Wall Clock Limit&lt;br /&gt;
| -l walltime=[hh:mm:ss]&lt;br /&gt;
| --time=[min] OR --time=[days-hh:mm:ss]&lt;br /&gt;
|-&lt;br /&gt;
!Node Properties&lt;br /&gt;
| -l nodes=4:ppn=8:[property]&lt;br /&gt;
| --constraint=[list]&lt;br /&gt;
|-&lt;br /&gt;
!Standard Output File&lt;br /&gt;
| -o [file_name]&lt;br /&gt;
| --output=[file_name] OR -o [file_name]&lt;br /&gt;
|-&lt;br /&gt;
!Standard Error File&lt;br /&gt;
| -e [file_name]&lt;br /&gt;
| --error=[file_name] OR -e [file_name]&lt;br /&gt;
|-&lt;br /&gt;
!Combine stdout/stderr&lt;br /&gt;
| -j oe (both to stdout)&lt;br /&gt;
|(Default if you don&#039;t specify --error)&lt;br /&gt;
|-&lt;br /&gt;
!Job Arrays&lt;br /&gt;
| -t [array_spec]&lt;br /&gt;
| --array=[array_spec] OR -a [array_spec]&lt;br /&gt;
|-&lt;br /&gt;
!Delay Job Start&lt;br /&gt;
| -a [time]&lt;br /&gt;
| --begin=[time]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Wrichman</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM&amp;diff=9514</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM&amp;diff=9514"/>
		<updated>2020-12-11T15:39:00Z</updated>

		<summary type="html">&lt;p&gt;Wrichman: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Simple Linux Utility for Resource Management (SLURM)=&lt;br /&gt;
SLURM is an open-source workload manager designed for Linux clusters of all sizes. It provides three key functions. First, it allocates exclusive or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (typically a parallel job) on a set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.&lt;br /&gt;
&lt;br /&gt;
==Documentation==&lt;br /&gt;
:[[SLURM/JobSubmission | Submitting Jobs]]&lt;br /&gt;
:[[SLURM/JobStatus | Checking Job Status]]&lt;br /&gt;
:[[SLURM/ClusterStatus | Checking Cluster Status]]&lt;br /&gt;
:[http://slurm.schedmd.com/documentation.html Official Documentation]&lt;br /&gt;
:[http://slurm.schedmd.com/faq.html FAQ]&lt;br /&gt;
&lt;br /&gt;
==Commands==&lt;br /&gt;
Below are some of the common commands used in Slurm. Further information on how to use these commands is found in the documentation linked above. To see all flags available for a command, please check the command&#039;s manual by using &amp;lt;code&amp;gt;man $COMMAND&amp;lt;/code&amp;gt; on the command line.&lt;br /&gt;
&lt;br /&gt;
====srun====&lt;br /&gt;
srun runs a parallel job on a cluster managed by Slurm.  If necessary, it will first create a resource allocation in which to run the parallel job.&lt;br /&gt;
&lt;br /&gt;
====salloc====&lt;br /&gt;
salloc allocates a Slurm job allocation, which is a set of resources (nodes), possibly with some set of constraints (e.g. number of processors per node).  When salloc successfully obtains the requested allocation, it then runs the command specified by the user.  Finally, when the user specified command is complete, salloc relinquishes the job allocation.  If no command is specified, salloc runs the user&#039;s default shell.&lt;br /&gt;
&lt;br /&gt;
====sbatch====&lt;br /&gt;
sbatch submits a batch script to Slurm.  The batch script may be given to sbatch through a file name on the command line, or if no file name is specified, sbatch will read in a script from standard input.  The batch script may contain options preceded with &amp;quot;#SBATCH&amp;quot; before any executable commands in the script.&lt;br /&gt;
&lt;br /&gt;
====squeue====&lt;br /&gt;
squeue views job and job step information for jobs managed by Slurm.&lt;br /&gt;
&lt;br /&gt;
====scancel====&lt;br /&gt;
scancel signals or cancels jobs, job arrays, or job steps.  An arbitrary number of jobs or job steps may be signaled using job specification filters or a space separated list of specific job and/or job step IDs.&lt;br /&gt;
&lt;br /&gt;
====sacct====&lt;br /&gt;
sacct displays job accounting data stored in the job accounting log file or Slurm database in a variety of forms for your analysis.  The sacct command displays information on jobs, job steps, status, and exitcodes by default.  You can tailor the output with the use of the --format= option to specify the fields to be shown.&lt;br /&gt;
&lt;br /&gt;
====sstat====&lt;br /&gt;
sstat displays job status information for your analysis.  The sstat command displays information pertaining to CPU, Task, Node, Resident Set Size (RSS) and Virtual Memory (VM).  You can tailor the output with the use of the --fields= option to specify the fields to be shown.&lt;br /&gt;
&lt;br /&gt;
==Modules==&lt;br /&gt;
If you are trying to use [[Modules | GNU Modules]] in a Slurm job, please read the section of our [[Modules]] documentation on [[Modules#Modules_in_Non-Interactive_Shell_Sessions | non-interactive shell sessions]].&lt;br /&gt;
&lt;br /&gt;
==Running Jupyter Notebook on a Compute Node==&lt;br /&gt;
The steps to run a Jupyter Notebook from a compute node are listed below.&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;h6&amp;gt;Setting up Python Virtual Environment&amp;lt;/h6&amp;gt;&lt;br /&gt;
In order to set up your python virtual environment, you&#039;ll first want to follow the steps listed [https://wiki.umiacs.umd.edu/umiacs/index.php/PythonVirtualEnv here] to create a Python virtual environment on the compute node you are assigned. Then, activate it using the steps listed [https://wiki.umiacs.umd.edu/umiacs/index.php/PythonVirtualEnv#Activating_the_VirtualEnv here]. Next, install Jupyter using pip by following the steps [https://jupyter.readthedocs.io/en/latest/install/notebook-classic.html#alternative-for-experienced-python-users-installing-jupyter-with-pip here].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h6&amp;gt; Running Jupyter Notebook &amp;lt;/h6&amp;gt;&lt;br /&gt;
After you&#039;ve set up the python virtual environment, run the following commands on the compute node you are assigned:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --port=8889 --ip=0.0.0.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This will start running the notebook on port 8889. &amp;lt;b&amp;gt;Note:&amp;lt;/b&amp;gt; You must keep this shell window open to be able to connect.&lt;br /&gt;
Then, on your local machine, run &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -N -f -L localhost:8888:$(NODENAME):8889 $(USERNAME)@$(CLUSTERNAME)sub00.umiacs.umd.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
This will tunnel port 8889 on the compute node to port 8888 on your local machine, using the $(CLUSTERNAME)sub00 node as an intermediate host. Make sure to replace $(NODENAME) with the name of the compute node you are assigned, $(USERNAME) with your username, and $(CLUSTERNAME) with the name of the compute cluster you are using, for example &amp;lt;code&amp;gt;esloate@clipsub00.umiacs.umd.edu&amp;lt;/code&amp;gt;. If the ports mentioned above are not working, you can replace them with other ports of your choice; just make sure to also change the port in the command you ran on the compute node. You can then open a web browser and go to &amp;lt;code&amp;gt;localhost:8888&amp;lt;/code&amp;gt; to access the notebook. &amp;lt;b&amp;gt;Note:&amp;lt;/b&amp;gt; You must be on a machine connected to the UMIACS network or connected to our [[Network/VPN | VPN]] in order to access the Jupyter notebook.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Quick Guide to translate PBS/Torque to SLURM=&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+User commands&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
!PBS/Torque&lt;br /&gt;
!SLURM&lt;br /&gt;
|-&lt;br /&gt;
!Job submission&lt;br /&gt;
|qsub [filename]&lt;br /&gt;
|sbatch [filename]&lt;br /&gt;
|-&lt;br /&gt;
!Job deletion&lt;br /&gt;
|qdel [job_id]&lt;br /&gt;
|scancel [job_id]&lt;br /&gt;
|-&lt;br /&gt;
!Job status (by job)&lt;br /&gt;
|qstat [job_id] &lt;br /&gt;
|squeue --job [job_id]&lt;br /&gt;
|-&lt;br /&gt;
!Full job status (by job)&lt;br /&gt;
|qstat -f [job_id]&lt;br /&gt;
|scontrol show job [job_id]&lt;br /&gt;
|-&lt;br /&gt;
!Job status (by user)&lt;br /&gt;
|qstat -u [username]&lt;br /&gt;
|squeue --user=[username]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Environment variables&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
!PBS/Torque  &lt;br /&gt;
!SLURM&lt;br /&gt;
|-&lt;br /&gt;
!Job ID&lt;br /&gt;
|$PBS_JOBID&lt;br /&gt;
|$SLURM_JOBID&lt;br /&gt;
|-&lt;br /&gt;
!Submit Directory&lt;br /&gt;
|$PBS_O_WORKDIR&lt;br /&gt;
|$SLURM_SUBMIT_DIR&lt;br /&gt;
|-&lt;br /&gt;
!Node List &lt;br /&gt;
|$PBS_NODEFILE&lt;br /&gt;
|$SLURM_JOB_NODELIST&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Job specification&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
!PBS/Torque  &lt;br /&gt;
!SLURM&lt;br /&gt;
|-&lt;br /&gt;
!Script directive&lt;br /&gt;
|#PBS&lt;br /&gt;
|#SBATCH&lt;br /&gt;
|-&lt;br /&gt;
!Job Name &lt;br /&gt;
| -N [name]&lt;br /&gt;
| --job-name=[name] OR -J [name]&lt;br /&gt;
|-&lt;br /&gt;
!Node Count&lt;br /&gt;
| -l nodes=[count]&lt;br /&gt;
| --nodes=[min[-max]] OR -N [min[-max]]&lt;br /&gt;
|-&lt;br /&gt;
!CPU Count&lt;br /&gt;
| -l ppn=[count]&lt;br /&gt;
| --ntasks-per-node=[count]&lt;br /&gt;
|-&lt;br /&gt;
!CPUs Per Task&lt;br /&gt;
|&lt;br /&gt;
| --cpus-per-task=[count]&lt;br /&gt;
|-&lt;br /&gt;
!Memory Size&lt;br /&gt;
| -l mem=[MB] &lt;br /&gt;
| --mem=[MB] OR --mem-per-cpu=[MB]&lt;br /&gt;
|-&lt;br /&gt;
!Wall Clock Limit&lt;br /&gt;
| -l walltime=[hh:mm:ss]&lt;br /&gt;
| --time=[min] OR --time=[days-hh:mm:ss]&lt;br /&gt;
|-&lt;br /&gt;
!Node Properties&lt;br /&gt;
| -l nodes=4:ppn=8:[property]&lt;br /&gt;
| --constraint=[list]&lt;br /&gt;
|-&lt;br /&gt;
!Standard Output File&lt;br /&gt;
| -o [file_name]&lt;br /&gt;
| --output=[file_name] OR -o [file_name]&lt;br /&gt;
|-&lt;br /&gt;
!Standard Error File&lt;br /&gt;
| -e [file_name]&lt;br /&gt;
| --error=[file_name] OR -e [file_name]&lt;br /&gt;
|-&lt;br /&gt;
!Combine stdout/stderr&lt;br /&gt;
| -j oe (both to stdout)&lt;br /&gt;
|(Default if you don&#039;t specify --error)&lt;br /&gt;
|-&lt;br /&gt;
!Job Arrays&lt;br /&gt;
| -t [array_spec]&lt;br /&gt;
| --array=[array_spec] OR -a [array_spec]&lt;br /&gt;
|-&lt;br /&gt;
!Delay Job Start&lt;br /&gt;
| -a [time]&lt;br /&gt;
| --begin=[time]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Wrichman</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM&amp;diff=9513</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM&amp;diff=9513"/>
		<updated>2020-12-11T15:36:39Z</updated>

		<summary type="html">&lt;p&gt;Wrichman: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Simple Linux Utility for Resource Management (SLURM)=&lt;br /&gt;
SLURM is an open-source workload manager designed for Linux clusters of all sizes. It provides three key functions. First, it allocates exclusive or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (typically a parallel job) on a set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.&lt;br /&gt;
&lt;br /&gt;
==Documentation==&lt;br /&gt;
:[[SLURM/JobSubmission | Submitting Jobs]]&lt;br /&gt;
:[[SLURM/JobStatus | Checking Job Status]]&lt;br /&gt;
:[[SLURM/ClusterStatus | Checking Cluster Status]]&lt;br /&gt;
:[http://slurm.schedmd.com/documentation.html Official Documentation]&lt;br /&gt;
:[http://slurm.schedmd.com/faq.html FAQ]&lt;br /&gt;
&lt;br /&gt;
==Commands==&lt;br /&gt;
Below are some of the common commands used in Slurm. Further information on how to use these commands is found in the documentation linked above. To see all flags available for a command, please check the command&#039;s manual by using &amp;lt;code&amp;gt;man $COMMAND&amp;lt;/code&amp;gt; on the command line.&lt;br /&gt;
&lt;br /&gt;
====srun====&lt;br /&gt;
srun runs a parallel job on a cluster managed by Slurm.  If necessary, it will first create a resource allocation in which to run the parallel job.&lt;br /&gt;
&lt;br /&gt;
====salloc====&lt;br /&gt;
salloc allocates a Slurm job allocation, which is a set of resources (nodes), possibly with some set of constraints (e.g. number of processors per node).  When salloc successfully obtains the requested allocation, it then runs the command specified by the user.  Finally, when the user specified command is complete, salloc relinquishes the job allocation.  If no command is specified, salloc runs the user&#039;s default shell.&lt;br /&gt;
&lt;br /&gt;
====sbatch====&lt;br /&gt;
sbatch submits a batch script to Slurm.  The batch script may be given to sbatch through a file name on the command line, or if no file name is specified, sbatch will read in a script from standard input.  The batch script may contain options preceded with &amp;quot;#SBATCH&amp;quot; before any executable commands in the script.&lt;br /&gt;
&lt;br /&gt;
====squeue====&lt;br /&gt;
squeue views job and job step information for jobs managed by Slurm.&lt;br /&gt;
&lt;br /&gt;
====scancel====&lt;br /&gt;
scancel signals or cancels jobs, job arrays, or job steps.  An arbitrary number of jobs or job steps may be signaled using job specification filters or a space separated list of specific job and/or job step IDs.&lt;br /&gt;
&lt;br /&gt;
====sacct====&lt;br /&gt;
sacct displays job accounting data stored in the job accounting log file or Slurm database in a variety of forms for your analysis.  The sacct command displays information on jobs, job steps, status, and exitcodes by default.  You can tailor the output with the use of the --format= option to specify the fields to be shown.&lt;br /&gt;
&lt;br /&gt;
====sstat====&lt;br /&gt;
sstat displays job status information for your analysis.  The sstat command displays information pertaining to CPU, Task, Node, Resident Set Size (RSS) and Virtual Memory (VM).  You can tailor the output with the use of the --fields= option to specify the fields to be shown.&lt;br /&gt;
&lt;br /&gt;
==Modules==&lt;br /&gt;
If you are trying to use [[Modules | GNU Modules]] in a Slurm job, please read the section of our [[Modules]] documentation on [[Modules#Modules_in_Non-Interactive_Shell_Sessions | non-interactive shell sessions]].&lt;br /&gt;
&lt;br /&gt;
==Running Jupyter Notebook on a Compute Node==&lt;br /&gt;
The steps to run a Jupyter Notebook from a compute node are listed below.&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;h6&amp;gt;Setting up Python Virtual Environment&amp;lt;/h6&amp;gt;&lt;br /&gt;
In order to set up your python virtual environment, you&#039;ll first want to follow the steps listed [https://wiki.umiacs.umd.edu/umiacs/index.php/PythonVirtualEnv here] to create a Python virtual environment on the compute node you are assigned. Then, activate it using the steps listed [https://wiki.umiacs.umd.edu/umiacs/index.php/PythonVirtualEnv#Activating_the_VirtualEnv here]. Next, install Jupyter using pip by following the steps [https://jupyter.readthedocs.io/en/latest/install/notebook-classic.html#alternative-for-experienced-python-users-installing-jupyter-with-pip here].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h6&amp;gt; Running Jupyter Notebook &amp;lt;/h6&amp;gt;&lt;br /&gt;
After you&#039;ve set up the python virtual environment, run the following commands on the compute node you are assigned:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --port=8889 --ip=0.0.0.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This will start running the notebook on port 8889. &amp;lt;b&amp;gt;Note:&amp;lt;/b&amp;gt; You must keep this shell window open to be able to connect.&lt;br /&gt;
Then, on your local machine, run &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -N -f -L localhost:8888:$(NODENAME):8889 $(USERNAME)@$(CLUSTERNAME)sub00.umiacs.umd.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
This will tunnel port 8889 on the compute node to port 8888 on your local machine, using the $(CLUSTERNAME)sub00 node as an intermediate host. Make sure to replace $(NODENAME) with the name of the compute node you are assigned, $(USERNAME) with your username, and $(CLUSTERNAME) with the name of the compute cluster you are using, for example esloate@clipsub00.umiacs.umd.edu. If the ports mentioned above are not working, you can replace them with other ports of your choice; just make sure to also change the port in the command you ran on the compute node. You can then open a web browser and go to &amp;lt;code&amp;gt;localhost:8888&amp;lt;/code&amp;gt; to access the notebook. &amp;lt;b&amp;gt;Note:&amp;lt;/b&amp;gt; You must be on a machine connected to the UMIACS network or connected to our [[Network/VPN | VPN]] in order to access the Jupyter notebook.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Quick Guide to translate PBS/Torque to SLURM=&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+User commands&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
!PBS/Torque&lt;br /&gt;
!SLURM&lt;br /&gt;
|-&lt;br /&gt;
!Job submission&lt;br /&gt;
|qsub [filename]&lt;br /&gt;
|sbatch [filename]&lt;br /&gt;
|-&lt;br /&gt;
!Job deletion&lt;br /&gt;
|qdel [job_id]&lt;br /&gt;
|scancel [job_id]&lt;br /&gt;
|-&lt;br /&gt;
!Job status (by job)&lt;br /&gt;
|qstat [job_id] &lt;br /&gt;
|squeue --job [job_id]&lt;br /&gt;
|-&lt;br /&gt;
!Full job status (by job)&lt;br /&gt;
|qstat -f [job_id]&lt;br /&gt;
|scontrol show job [job_id]&lt;br /&gt;
|-&lt;br /&gt;
!Job status (by user)&lt;br /&gt;
|qstat -u [username]&lt;br /&gt;
|squeue --user=[username]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Environment variables&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
!PBS/Torque  &lt;br /&gt;
!SLURM&lt;br /&gt;
|-&lt;br /&gt;
!Job ID&lt;br /&gt;
|$PBS_JOBID&lt;br /&gt;
|$SLURM_JOBID&lt;br /&gt;
|-&lt;br /&gt;
!Submit Directory&lt;br /&gt;
|$PBS_O_WORKDIR&lt;br /&gt;
|$SLURM_SUBMIT_DIR&lt;br /&gt;
|-&lt;br /&gt;
!Node List &lt;br /&gt;
|$PBS_NODEFILE&lt;br /&gt;
|$SLURM_JOB_NODELIST&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Job specification&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
!PBS/Torque  &lt;br /&gt;
!SLURM&lt;br /&gt;
|-&lt;br /&gt;
!Script directive&lt;br /&gt;
|#PBS&lt;br /&gt;
|#SBATCH&lt;br /&gt;
|-&lt;br /&gt;
!Job Name &lt;br /&gt;
| -N [name]&lt;br /&gt;
| --job-name=[name] OR -J [name]&lt;br /&gt;
|-&lt;br /&gt;
!Node Count&lt;br /&gt;
| -l nodes=[count]&lt;br /&gt;
| --nodes=[min[-max]] OR -N [min[-max]]&lt;br /&gt;
|-&lt;br /&gt;
!CPU Count&lt;br /&gt;
| -l ppn=[count]&lt;br /&gt;
| --ntasks-per-node=[count]&lt;br /&gt;
|-&lt;br /&gt;
!CPUs Per Task&lt;br /&gt;
|&lt;br /&gt;
| --cpus-per-task=[count]&lt;br /&gt;
|-&lt;br /&gt;
!Memory Size&lt;br /&gt;
| -l mem=[MB] &lt;br /&gt;
| --mem=[MB] OR --mem-per-cpu=[MB]&lt;br /&gt;
|-&lt;br /&gt;
!Wall Clock Limit&lt;br /&gt;
| -l walltime=[hh:mm:ss]&lt;br /&gt;
| --time=[min] OR --time=[days-hh:mm:ss]&lt;br /&gt;
|-&lt;br /&gt;
!Node Properties&lt;br /&gt;
| -l nodes=4:ppn=8:[property]&lt;br /&gt;
| --constraint=[list]&lt;br /&gt;
|-&lt;br /&gt;
!Standard Output File&lt;br /&gt;
| -o [file_name]&lt;br /&gt;
| --output=[file_name] OR -o [file_name]&lt;br /&gt;
|-&lt;br /&gt;
!Standard Error File&lt;br /&gt;
| -e [file_name]&lt;br /&gt;
| --error=[file_name] OR -e [file_name]&lt;br /&gt;
|-&lt;br /&gt;
!Combine stdout/stderr&lt;br /&gt;
| -j oe (both to stdout)&lt;br /&gt;
|(Default if you don&#039;t specify --error)&lt;br /&gt;
|-&lt;br /&gt;
!Job Arrays&lt;br /&gt;
| -t [array_spec]&lt;br /&gt;
| --array=[array_spec] OR -a [array_spec]&lt;br /&gt;
|-&lt;br /&gt;
!Delay Job Start&lt;br /&gt;
| -a [time]&lt;br /&gt;
| --begin=[time]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Wrichman</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM&amp;diff=9512</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM&amp;diff=9512"/>
		<updated>2020-12-09T18:23:30Z</updated>

		<summary type="html">&lt;p&gt;Wrichman: Added information about using Jupyter Notebook on a compute node&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Simple Linux Utility for Resource Management (SLURM)=&lt;br /&gt;
SLURM is an open-source workload manager designed for Linux clusters of all sizes. It provides three key functions. First, it allocates exclusive or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (typically a parallel job) on a set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.&lt;br /&gt;
&lt;br /&gt;
==Documentation==&lt;br /&gt;
:[[SLURM/JobSubmission | Submitting Jobs]]&lt;br /&gt;
:[[SLURM/JobStatus | Checking Job Status]]&lt;br /&gt;
:[[SLURM/ClusterStatus | Checking Cluster Status]]&lt;br /&gt;
:[http://slurm.schedmd.com/documentation.html Official Documentation]&lt;br /&gt;
:[http://slurm.schedmd.com/faq.html FAQ]&lt;br /&gt;
&lt;br /&gt;
==Commands==&lt;br /&gt;
Below are some of the common commands used in Slurm. Further information on how to use these commands is found in the documentation linked above. To see all flags available for a command, please check the command&#039;s manual by using &amp;lt;code&amp;gt;man $COMMAND&amp;lt;/code&amp;gt; on the command line.&lt;br /&gt;
&lt;br /&gt;
====srun====&lt;br /&gt;
srun runs a parallel job on a cluster managed by Slurm.  If necessary, it will first create a resource allocation in which to run the parallel job.&lt;br /&gt;
&lt;br /&gt;
====salloc====&lt;br /&gt;
salloc allocates a Slurm job allocation, which is a set of resources (nodes), possibly with some set of constraints (e.g. number of processors per node).  When salloc successfully obtains the requested allocation, it then runs the command specified by the user.  Finally, when the user specified command is complete, salloc relinquishes the job allocation.  If no command is specified, salloc runs the user&#039;s default shell.&lt;br /&gt;
&lt;br /&gt;
====sbatch====&lt;br /&gt;
sbatch submits a batch script to Slurm.  The batch script may be given to sbatch through a file name on the command line, or if no file name is specified, sbatch will read in a script from standard input.  The batch script may contain options preceded with &amp;quot;#SBATCH&amp;quot; before any executable commands in the script.&lt;br /&gt;
&lt;br /&gt;
====squeue====&lt;br /&gt;
squeue views job and job step information for jobs managed by Slurm.&lt;br /&gt;
&lt;br /&gt;
====scancel====&lt;br /&gt;
scancel signals or cancels jobs, job arrays, or job steps.  An arbitrary number of jobs or job steps may be signaled using job specification filters or a space separated list of specific job and/or job step IDs.&lt;br /&gt;
&lt;br /&gt;
====sacct====&lt;br /&gt;
sacct displays job accounting data stored in the job accounting log file or Slurm database in a variety of forms for your analysis.  The sacct command displays information on jobs, job steps, status, and exitcodes by default.  You can tailor the output with the use of the --format= option to specify the fields to be shown.&lt;br /&gt;
&lt;br /&gt;
====sstat====&lt;br /&gt;
sstat displays job status information for your analysis.  The sstat command displays information pertaining to CPU, Task, Node, Resident Set Size (RSS) and Virtual Memory (VM).  You can tailor the output with the use of the --fields= option to specify the fields to be shown.&lt;br /&gt;
&lt;br /&gt;
==Modules==&lt;br /&gt;
If you are trying to use [[Modules | GNU Modules]] in a Slurm job, please read the section of our [[Modules]] documentation on [[Modules#Modules_in_Non-Interactive_Shell_Sessions | non-interactive shell sessions]].&lt;br /&gt;
&lt;br /&gt;
==Running Jupyter Notebook on a Compute Node==&lt;br /&gt;
In order to run Jupyter Notebook on a compute node, you first need to set up a python virtual environment.&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;h6&amp;gt;Setting up Python Virtual Environment&amp;lt;/h6&amp;gt;&lt;br /&gt;
In order to set up your python virtual environment, you&#039;ll first want to follow the steps listed [https://wiki.umiacs.umd.edu/umiacs/index.php/PythonVirtualEnv here] to create a Python virtual environment on the compute node you are assigned. Then, activate it using the steps listed [https://wiki.umiacs.umd.edu/umiacs/index.php/PythonVirtualEnv#Activating_the_VirtualEnv here]. Next, install Jupyter using pip by following the steps [https://jupyter.readthedocs.io/en/latest/install/notebook-classic.html#alternative-for-experienced-python-users-installing-jupyter-with-pip here].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h6&amp;gt; Running Jupyter Notebook &amp;lt;/h6&amp;gt;&lt;br /&gt;
After you&#039;ve set up the python virtual environment, run the following commands on the compute node you are assigned:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --port=8889 --ip=0.0.0.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This will start running the notebook on port 8889. &amp;lt;b&amp;gt;Note:&amp;lt;/b&amp;gt; You must keep this shell window open to be able to connect.&lt;br /&gt;
Then, on your local machine, run &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -N -f -L localhost:8888:$(NODENAME):8889 $(USERNAME)@$(CLUSTERNAME)sub00.umiacs.umd.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
This will tunnel port 8889 on the compute node to port 8888 on your local machine, using the $(CLUSTERNAME)sub00 node as an intermediate host. Make sure to replace $(NODENAME) with the name of the compute node you are assigned, $(USERNAME) with your username, and $(CLUSTERNAME) with the name of the compute cluster you are using, for example esloate@clipsub00.umiacs.umd.edu. If the ports mentioned above are not working, you can replace them with other ports of your choice; just make sure to also change the port in the command you ran on the compute node. You can then open a web browser and go to &amp;lt;code&amp;gt;localhost:8888&amp;lt;/code&amp;gt; to access the notebook. &amp;lt;b&amp;gt;Note:&amp;lt;/b&amp;gt; You must be on a machine connected to the UMIACS network or connected to our [[Network/VPN | VPN]] in order to access the Jupyter notebook.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Quick Guide to translate PBS/Torque to SLURM=&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+User commands&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
!PBS/Torque&lt;br /&gt;
!SLURM&lt;br /&gt;
|-&lt;br /&gt;
!Job submission&lt;br /&gt;
|qsub [filename]&lt;br /&gt;
|sbatch [filename]&lt;br /&gt;
|-&lt;br /&gt;
!Job deletion&lt;br /&gt;
|qdel [job_id]&lt;br /&gt;
|scancel [job_id]&lt;br /&gt;
|-&lt;br /&gt;
!Job status (by job)&lt;br /&gt;
|qstat [job_id] &lt;br /&gt;
|squeue --job [job_id]&lt;br /&gt;
|-&lt;br /&gt;
!Full job status (by job)&lt;br /&gt;
|qstat -f [job_id]&lt;br /&gt;
|scontrol show job [job_id]&lt;br /&gt;
|-&lt;br /&gt;
!Job status (by user)&lt;br /&gt;
|qstat -u [username]&lt;br /&gt;
|squeue --user=[username]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Environment variables&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
!PBS/Torque  &lt;br /&gt;
!SLURM&lt;br /&gt;
|-&lt;br /&gt;
!Job ID&lt;br /&gt;
|$PBS_JOBID&lt;br /&gt;
|$SLURM_JOBID&lt;br /&gt;
|-&lt;br /&gt;
!Submit Directory&lt;br /&gt;
|$PBS_O_WORKDIR&lt;br /&gt;
|$SLURM_SUBMIT_DIR&lt;br /&gt;
|-&lt;br /&gt;
!Node List &lt;br /&gt;
|$PBS_NODEFILE&lt;br /&gt;
|$SLURM_JOB_NODELIST&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Job specification&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
!PBS/Torque  &lt;br /&gt;
!SLURM&lt;br /&gt;
|-&lt;br /&gt;
!Script directive&lt;br /&gt;
|#PBS&lt;br /&gt;
|#SBATCH&lt;br /&gt;
|-&lt;br /&gt;
!Job Name &lt;br /&gt;
| -N [name]&lt;br /&gt;
| --job-name=[name] OR -J [name]&lt;br /&gt;
|-&lt;br /&gt;
!Node Count&lt;br /&gt;
| -l nodes=[count]&lt;br /&gt;
| --nodes=[min[-max]] OR -N [min[-max]]&lt;br /&gt;
|-&lt;br /&gt;
!CPU Count&lt;br /&gt;
| -l ppn=[count]&lt;br /&gt;
| --ntasks-per-node=[count]&lt;br /&gt;
|-&lt;br /&gt;
!CPUs Per Task&lt;br /&gt;
|&lt;br /&gt;
| --cpus-per-task=[count]&lt;br /&gt;
|-&lt;br /&gt;
!Memory Size&lt;br /&gt;
| -l mem=[MB] &lt;br /&gt;
| --mem=[MB] OR --mem-per-cpu=[MB]&lt;br /&gt;
|-&lt;br /&gt;
!Wall Clock Limit&lt;br /&gt;
| -l walltime=[hh:mm:ss]&lt;br /&gt;
| --time=[min] OR --time=[days-hh:mm:ss]&lt;br /&gt;
|-&lt;br /&gt;
!Node Properties&lt;br /&gt;
| -l nodes=4:ppn=8:[property]&lt;br /&gt;
| --constraint=[list]&lt;br /&gt;
|-&lt;br /&gt;
!Standard Output File&lt;br /&gt;
| -o [file_name]&lt;br /&gt;
| --output=[file_name] OR -o [file_name]&lt;br /&gt;
|-&lt;br /&gt;
!Standard Error File&lt;br /&gt;
| -e [file_name]&lt;br /&gt;
| --error=[file_name] OR -e [file_name]&lt;br /&gt;
|-&lt;br /&gt;
!Combine stdout/stderr&lt;br /&gt;
| -j oe (both to stdout)&lt;br /&gt;
|(Default if you don&#039;t specify --error)&lt;br /&gt;
|-&lt;br /&gt;
!Job Arrays&lt;br /&gt;
| -t [array_spec]&lt;br /&gt;
| --array=[array_spec] OR -a [array_spec]&lt;br /&gt;
|-&lt;br /&gt;
!Delay Job Start&lt;br /&gt;
| -a [time]&lt;br /&gt;
| --begin=[time]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Wrichman</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=CrowdStrike&amp;diff=9505</id>
		<title>CrowdStrike</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=CrowdStrike&amp;diff=9505"/>
		<updated>2020-12-02T16:53:50Z</updated>

		<summary type="html">&lt;p&gt;Wrichman: Updated antivirus tools and included information about online vs offline scans&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Free malware/virus removal tools ==&lt;br /&gt;
For non-UMIACS supported systems, there are several virus and malware protection and removal tools available. This page lists and describes some of them. &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;b&amp;gt;Note:&amp;lt;/b&amp;gt; these are just a few of the many available anti-virus removal tools. One can always search the internet for a well-reputed anti-virus program and use that instead.&lt;br /&gt;
&lt;br /&gt;
== Online vs Offline Scanners ==&lt;br /&gt;
There are two types of malware/virus removal tools: online and offline. The main difference is that offline scans work outside of your computer&#039;s operating system, which provides an alternate means for detecting and mitigating threats that may use various tactics to hide from your installed antivirus. Online scanners, on the other hand, can detect most problems and are generally easier to use, with better interfaces and simpler setup than their offline counterparts. The best approach is therefore to try an online scanner first and see whether it resolves the issue. If it does not, try an offline scanner. Below are some links to various online and offline scanners.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Online Scanners&amp;lt;/h3&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
*&amp;lt;li&amp;gt; [http://www.f-secure.com/en/web/home_global/online-scanner F-Secure Online Scan]&amp;lt;/li&amp;gt;&lt;br /&gt;
The online scanner helps to get rid of viruses and spyware that may cause problems on your PC.&lt;br /&gt;
*&amp;lt;li&amp;gt; [https://www.malwarebytes.org/mwb-download/ Malwarebytes]&amp;lt;/li&amp;gt;&lt;br /&gt;
This free version is limited, but can also perform malware and virus scans.&lt;br /&gt;
*&amp;lt;li&amp;gt; [http://www.superantispyware.com/superantispyware.html SUPERAntiSpyware]&amp;lt;/li&amp;gt;&lt;br /&gt;
This free version is also limited but performs multiple different malware scans such as installed or internet spyware/adware.&lt;br /&gt;
*&amp;lt;li&amp;gt; [http://www.kaspersky.com/antivirus-removal-tool?form=1 Kaspersky virus removal tool]&amp;lt;/li&amp;gt;&lt;br /&gt;
This provides a virus scan with updated virus definitions.&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Offline/Bootable Scanners&amp;lt;/h3&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
* &amp;lt;li&amp;gt;[http://support.kaspersky.com/viruses/rescuedisk#downloads Kaspersky Rescue Disk] &amp;lt;/li&amp;gt;&lt;br /&gt;
This is an offline scan that works with all operating systems.&lt;br /&gt;
*&amp;lt;li&amp;gt;[https://support.microsoft.com/en-us/windows/help-protect-my-pc-with-microsoft-defender-offline-9306d528-64bf-4668-5b80-ff533f183d6c Microsoft Defender Offline]&amp;lt;/li&amp;gt;&lt;br /&gt;
This offline scan only works with machines using Windows 7, 8.1, or 10.&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==UMIACS-Supported Machines==&lt;br /&gt;
All UMIACS-supported machines have F-Secure Client Security installed. Follow the steps below to run a manual scan using F-Secure on a UMIACS-supported machine.&lt;br /&gt;
&amp;lt;h3&amp;gt; How to run a manual F-Secure Scan &amp;lt;/h3&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Open F-Secure Client Security by searching it in the search bar.&lt;br /&gt;
&amp;lt;br&amp;gt;[[Image:FSecureWindowsSearch.png|500px|]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; This should open a pop-up window. If it doesn&#039;t, try clicking the up arrow in the bottom right and selecting the F-Secure icon. &amp;lt;br&amp;gt; [[Image:FSecureMenu.png|500px|]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Click on the Scan option on the right side of the window. This should open a pop-up with a list of options.&lt;br /&gt;
&amp;lt;br&amp;gt; [[Image:FSecureIconMenu.png|500px|]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Click either virus and spyware scan or full computer scan &lt;br /&gt;
&amp;lt;ul&amp;gt;&amp;lt;li&amp;gt;Virus and Spyware Scan: Searches common places where malware can be found. This makes it faster, but it may miss some hidden malware.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Full Computer Scan: Scans all internal/external hard drives for malware. This makes it more thorough than the malware scan, but it can take a long time to complete.&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; After choosing an option, another screen should pop up that shows the progress of the scan. &lt;br /&gt;
&amp;lt;br&amp;gt; [[Image:FSecureScanInProgress.png|500px|]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Once the scan is complete, it will provide a report stating whether any malware was found. If it did find malware, then it will show you the number of files it removed. &lt;br /&gt;
&amp;lt;br&amp;gt; [[Image:FSecureScanComplete.png|500px|]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Hit the finish button to close the scan window and complete the scan. &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wrichman</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=CrowdStrike&amp;diff=9504</id>
		<title>CrowdStrike</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=CrowdStrike&amp;diff=9504"/>
		<updated>2020-12-02T16:05:03Z</updated>

		<summary type="html">&lt;p&gt;Wrichman: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Free malware/virus removal tools ==&lt;br /&gt;
For non-UMIACS supported systems, there are several virus and malware protection and removal tools available. This page lists and describes some of them.&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; F-Secure&lt;br /&gt;
* [http://www.f-secure.com/en/web/labs_global/removal-tools/-/carousel/view/142 Bootable Scan]&lt;br /&gt;
If your computer no longer starts due to malware corrupting the operating system, or you suspect the security software has been compromised, this bootable cd can securely boot up the computer and check the programs installed. The Rescue CD can also be used for more advanced repair and data recovery operations.&lt;br /&gt;
* [http://www.f-secure.com/en/web/home_global/online-scanner Online Scan]&amp;lt;/li&amp;gt;&lt;br /&gt;
Similar to the bootable scan, the online scanner helps to get rid of viruses and spyware that may cause problems on your PC.&lt;br /&gt;
&amp;lt;li&amp;gt; [https://www.malwarebytes.org/mwb-download/ Malwarebytes]&amp;lt;/li&amp;gt;&lt;br /&gt;
This free version is limited, but can also perform malware and virus scans.&lt;br /&gt;
&amp;lt;li&amp;gt; [http://www.superantispyware.com/superantispyware.html SUPERAntiSpyware]&amp;lt;/li&amp;gt;&lt;br /&gt;
This free version is also limited but performs multiple different malware scans such as installed or internet spyware/adware.&lt;br /&gt;
&amp;lt;li&amp;gt; [http://www.kaspersky.com/antivirus-removal-tool?form=1 Kaspersky virus removal tool]&amp;lt;/li&amp;gt;&lt;br /&gt;
This provides a virus scan with updated virus definitions.&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Other &amp;quot;offline&amp;quot; (bootable) Scanners ==&lt;br /&gt;
Offline/bootable scanners work without loading your operating system, and can provide an alternate means for detecting and mitigating threats that may use various tactics to hide from your favorite installed AV suite.&lt;br /&gt;
* [http://support.kaspersky.com/viruses/rescuedisk#downloads Kaspersky Rescue Disk]&lt;br /&gt;
* [http://www.bitdefender.com/support/how-to-create-a-bitdefender-rescue-cd-627.html BitDefender Rescue CD]&lt;br /&gt;
* [https://www.f-secure.com/en/web/labs_global/rescue-cd F-Secure Rescue CD]&lt;br /&gt;
&lt;br /&gt;
== How to run a manual F-Secure Scan ==&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Open F-Secure Client Security by searching it in the windows search bar. Note: only UMIACS-supported machines will have this version of F-Secure Client Security.&lt;br /&gt;
&amp;lt;br&amp;gt;[[Image:FSecureWindowsSearch.png|500px|]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; This should open a pop-up window. If it doesn&#039;t, try clicking the up arrow in the bottom right and selecting the F-Secure icon. &amp;lt;br&amp;gt; [[Image:FSecureMenu.png|500px|]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Click on the Scan option on the right side of the window. This should make a pop up with a list of options.&lt;br /&gt;
&amp;lt;br&amp;gt; [[Image:FSecureIconMenu.png|500px|]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Click either virus and spyware scan or full computer scan &lt;br /&gt;
&amp;lt;ul&amp;gt;&amp;lt;li&amp;gt;Virus and Spyware Scan: Searches common places where malware can be found. This makes it faster, but it may miss some hidden malware.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Full Computer Scan: Scans all internal/external hard drives for malware. This makes it more thorough than the malware scan, but it can take a long time to complete.&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; After choosing an option, another screen should pop up that shows the progress of the scan. &lt;br /&gt;
&amp;lt;br&amp;gt; [[Image:FSecureScanInProgress.png|500px|]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Once the scan is complete, it will provide a report stating whether any malware was found. If it did find malware, then it will show you the number of files it removed. &lt;br /&gt;
&amp;lt;br&amp;gt; [[Image:FSecureScanComplete.png|500px|]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Hit the finish button to close the scan window and complete the scan. &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wrichman</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Snapshots&amp;diff=9502</id>
		<title>Snapshots</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Snapshots&amp;diff=9502"/>
		<updated>2020-11-25T18:52:17Z</updated>

		<summary type="html">&lt;p&gt;Wrichman: Adjusted Isilon filer example&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Snapshots are an implementation of [http://en.wikipedia.org/wiki/Copy-on-write copy-on-write] that allows a file system to quickly take a point-in-time copy of itself and provide access to the data through a .snapshot directory. Snapshots provide a fast, user-accessible way to recover data that has been accidentally deleted or corrupted within a recent time window -- rather than having to retrieve the data from comparatively slow tape backups. They also help to span the time gap between full backups.&lt;br /&gt;
&lt;br /&gt;
We provide [[Snapshots]] on our ZFS, [[Snapshots:FluidFS | FluidFS]], and Isilon filers to certain file systems. If you are ever unsure if a particular volume has Snapshots enabled, please contact the [[HelpDesk | Help Desk]].&lt;br /&gt;
&lt;br /&gt;
==Snapshot Retention Policy==&lt;br /&gt;
Our core file systems in the department are on a 4 hour snapshot cycle.  &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Snapshot Name&lt;br /&gt;
!Retention Length&lt;br /&gt;
!When is it taken?&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Hourly&#039;&#039;&#039;&lt;br /&gt;
|24-32 hours&lt;br /&gt;
|Every day 12am, 4am, 8am, 12pm, 4pm&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Daily&#039;&#039;&#039;&lt;br /&gt;
|2 days&lt;br /&gt;
|Every day 8pm or 12am&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Weekly&#039;&#039;&#039;&lt;br /&gt;
|1 week&lt;br /&gt;
|Every Saturday 8pm or Sunday 12am&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
In other words, we retain up to either 6 or 8 hourly snapshots, 2 daily snapshots and 1 weekly snapshot. Hourly snapshots may be superseded by daily snapshots, and daily snapshots may be superseded by the weekly snapshot.&lt;br /&gt;
&lt;br /&gt;
==Snapshot Restoring==&lt;br /&gt;
&lt;br /&gt;
If you have deleted a file by mistake and you need to get it back, you can use the snapshots directory to recopy the file.&lt;br /&gt;
&lt;br /&gt;
This directory can typically be found in your home directory. It generally will not be visible, even when viewing hidden directories.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;It will be either&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
* .snapshots for the [[Snapshots:FluidFS | FluidFS]] filer&lt;br /&gt;
* .zfs/snapshot for the ZFS filer&lt;br /&gt;
* .snapshot for the Isilon filer&lt;br /&gt;
&lt;br /&gt;
The inside of one of these will look something like this on a FluidFS filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~$ pwd&lt;br /&gt;
/nfshomes/sattwood&lt;br /&gt;
sattwood@zaphod:~$ cd .snapshots&lt;br /&gt;
sattwood@zaphod:~/.snapshots$ ls&lt;br /&gt;
daily_2018_06_14__20_00   hourly_2018_06_14__12_00  hourly_2018_06_15__04_00  weekly_2018_06_09__20_00&lt;br /&gt;
daily_2018_06_15__20_00   hourly_2018_06_14__16_00  hourly_2018_06_15__08_00  &lt;br /&gt;
hourly_2018_06_14__08_00  hourly_2018_06_14__00_00  hourly_2018_06_15__12_00  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Or this, on a ZFS filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@swirl:~$ pwd&lt;br /&gt;
/nmhomes/sattwood&lt;br /&gt;
sattwood@swirl:~$ cd .zfs/snapshot&lt;br /&gt;
sattwood@swirl:~/.zfs/snapshot$ ls&lt;br /&gt;
zfs-auto-snap_daily-2018-06-13-01h00   zfs-auto-snap_hourly-2018-05-28-00h00&lt;br /&gt;
zfs-auto-snap_daily-2018-06-14-01h00   zfs-auto-snap_hourly-2018-05-29-00h00&lt;br /&gt;
zfs-auto-snap_daily-2018-06-15-01h00   zfs-auto-snap_hourly-2018-06-03-16h00&lt;br /&gt;
zfs-auto-snap_hourly-2018-05-24-00h00  zfs-auto-snap_hourly-2018-06-10-08h00&lt;br /&gt;
zfs-auto-snap_hourly-2018-05-25-00h00  zfs-auto-snap_hourly-2018-06-15-12h00&lt;br /&gt;
zfs-auto-snap_hourly-2018-05-26-00h00  zfs-auto-snap_weekly-2018-06-09-03h00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Or this, on an Isilon filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@coffee:~$ pwd&lt;br /&gt;
/nfshomes/sattwood&lt;br /&gt;
sattwood@coffee:~$ cd .snapshot&lt;br /&gt;
sattwood@coffee:~/.snapshot$ ls&lt;br /&gt;
nfshomes_2018-06-14_00:00  nfshomes_2018-06-15_04:00&lt;br /&gt;
nfshomes_2018-06-14_16:00  nfshomes_2018-06-15_08:00&lt;br /&gt;
nfshomes_2018-06-14_20:00  nfshomes_2018-06-15_12:00&lt;br /&gt;
nfshomes_2018-06-15_00:00  Weekly_nfshomes_2018-06-10_00:00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
For an example of file restoration, please see [[Snapshots:Example | this page]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Snapshots]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Wrichman</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Snapshots&amp;diff=9501</id>
		<title>Snapshots</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Snapshots&amp;diff=9501"/>
		<updated>2020-11-25T18:48:17Z</updated>

		<summary type="html">&lt;p&gt;Wrichman: Fixed Isilon filer details&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Snapshots are an implementation of [http://en.wikipedia.org/wiki/Copy-on-write copy-on-write] that allows a file system to quickly take a point-in-time copy of itself and provide access to the data through a .snapshot directory. Snapshots provide a fast, user-accessible way to recover data that has been accidentally deleted or corrupted within a recent time window -- rather than having to retrieve the data from comparatively slow tape backups. They also help to span the time gap between full backups.&lt;br /&gt;
&lt;br /&gt;
We provide [[Snapshots]] on our ZFS, [[Snapshots:FluidFS | FluidFS]], and Isilon filers to certain file systems. If you are ever unsure if a particular volume has Snapshots enabled, please contact the [[HelpDesk | Help Desk]].&lt;br /&gt;
&lt;br /&gt;
==Snapshot Retention Policy==&lt;br /&gt;
Our core file systems in the department are on a 4 hour snapshot cycle.  &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Snapshot Name&lt;br /&gt;
!Retention Length&lt;br /&gt;
!When is it taken?&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Hourly&#039;&#039;&#039;&lt;br /&gt;
|24-32 hours&lt;br /&gt;
|Every day 12am, 4am, 8am, 12pm, 4pm&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Daily&#039;&#039;&#039;&lt;br /&gt;
|2 days&lt;br /&gt;
|Every day 8pm or 12am&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Weekly&#039;&#039;&#039;&lt;br /&gt;
|1 week&lt;br /&gt;
|Every Saturday 8pm or Sunday 12am&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
In other words, we retain up to either 6 or 8 hourly snapshots, 2 daily snapshots and 1 weekly snapshot. Hourly snapshots may be superseded by daily snapshots, and daily snapshots may be superseded by the weekly snapshot.&lt;br /&gt;
&lt;br /&gt;
==Snapshot Restoring==&lt;br /&gt;
&lt;br /&gt;
If you have deleted a file by mistake and you need to get it back, you can use the snapshots directory to recopy the file.&lt;br /&gt;
&lt;br /&gt;
This directory can typically be found in your home directory. It generally will not be visible, even when viewing hidden directories.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;It will be either&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
* .snapshots for the [[Snapshots:FluidFS | FluidFS]] filer&lt;br /&gt;
* .zfs/snapshot for the ZFS filer&lt;br /&gt;
* .snapshot for the Isilon filer&lt;br /&gt;
&lt;br /&gt;
The inside of one of these will look something like this on a FluidFS filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~$ pwd&lt;br /&gt;
/nfshomes/sattwood&lt;br /&gt;
sattwood@zaphod:~$ cd .snapshots&lt;br /&gt;
sattwood@zaphod:~/.snapshots$ ls&lt;br /&gt;
daily_2018_06_14__20_00   hourly_2018_06_14__12_00  hourly_2018_06_15__04_00  weekly_2018_06_09__20_00&lt;br /&gt;
daily_2018_06_15__20_00   hourly_2018_06_14__16_00  hourly_2018_06_15__08_00  &lt;br /&gt;
hourly_2018_06_14__08_00  hourly_2018_06_14__00_00  hourly_2018_06_15__12_00  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Or this, on a ZFS filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@swirl:~$ pwd&lt;br /&gt;
/nmhomes/sattwood&lt;br /&gt;
sattwood@swirl:~$ cd .zfs/snapshot&lt;br /&gt;
sattwood@swirl:~/.zfs/snapshot$ ls&lt;br /&gt;
zfs-auto-snap_daily-2018-06-13-01h00   zfs-auto-snap_hourly-2018-05-28-00h00&lt;br /&gt;
zfs-auto-snap_daily-2018-06-14-01h00   zfs-auto-snap_hourly-2018-05-29-00h00&lt;br /&gt;
zfs-auto-snap_daily-2018-06-15-01h00   zfs-auto-snap_hourly-2018-06-03-16h00&lt;br /&gt;
zfs-auto-snap_hourly-2018-05-24-00h00  zfs-auto-snap_hourly-2018-06-10-08h00&lt;br /&gt;
zfs-auto-snap_hourly-2018-05-25-00h00  zfs-auto-snap_hourly-2018-06-15-12h00&lt;br /&gt;
zfs-auto-snap_hourly-2018-05-26-00h00  zfs-auto-snap_weekly-2018-06-09-03h00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Or this, on an Isilon filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@coffee:~$ pwd&lt;br /&gt;
/nfshomes/sattwood&lt;br /&gt;
sattwood@coffee:~$ cd .snapshot&lt;br /&gt;
sattwood@coffee:~/.snapshot$ ls&lt;br /&gt;
nfshomes_2018-06-14_00:00  nfshomes_2018-06-15_04:00&lt;br /&gt;
nfshomes_2018-06-14_16:00  nfshomes_2018-06-15_08:00&lt;br /&gt;
nfshomes_2018-06-14_20:00  nfshomes_2018-06-15_12:00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
For an example of file restoration, please see [[Snapshots:Example | this page]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Snapshots]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Wrichman</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=CrowdStrike&amp;diff=9500</id>
		<title>CrowdStrike</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=CrowdStrike&amp;diff=9500"/>
		<updated>2020-11-25T17:30:51Z</updated>

		<summary type="html">&lt;p&gt;Wrichman: Updated screenshots for instructions for a manual F-secure scan&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Free malware/virus removal tools ==&lt;br /&gt;
For non-UMIACS supported systems, there are several virus and malware protection and removal tools available. This page lists and describes some of them.&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; F-Secure&lt;br /&gt;
* [http://www.f-secure.com/en/web/labs_global/removal-tools/-/carousel/view/142 Bootable Scan]&lt;br /&gt;
If your computer no longer starts due to malware corrupting the operating system, or you suspect the security software has been compromised, this bootable cd can securely boot up the computer and check the programs installed. The Rescue CD can also be used for more advanced repair and data recovery operations.&lt;br /&gt;
* [http://www.f-secure.com/en/web/home_global/online-scanner Online Scan]&amp;lt;/li&amp;gt;&lt;br /&gt;
Similar to the bootable scan, the online scanner helps to get rid of viruses and spyware that may cause problems on your PC.&lt;br /&gt;
&amp;lt;li&amp;gt; [https://www.malwarebytes.org/mwb-download/ Malwarebytes]&amp;lt;/li&amp;gt;&lt;br /&gt;
This free version is limited, but can also perform malware and virus scans.&lt;br /&gt;
&amp;lt;li&amp;gt; [http://www.superantispyware.com/superantispyware.html SUPERAntiSpyware]&amp;lt;/li&amp;gt;&lt;br /&gt;
This free version is also limited but performs multiple different malware scans such as installed or internet spyware/adware.&lt;br /&gt;
&amp;lt;li&amp;gt; [http://www.kaspersky.com/antivirus-removal-tool?form=1 Kaspersky virus removal tool]&amp;lt;/li&amp;gt;&lt;br /&gt;
This provides a virus scan with updated virus definitions.&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Other &amp;quot;offline&amp;quot; (bootable) Scanners ==&lt;br /&gt;
Offline/bootable scanners work without loading your operating system, and can provide an alternate means for detecting and mitigating threats that may use various tactics to hide from your favorite installed AV suite.&lt;br /&gt;
* [http://support.kaspersky.com/viruses/rescuedisk#downloads Kaspersky Rescue Disk]&lt;br /&gt;
* [http://www.bitdefender.com/support/how-to-create-a-bitdefender-rescue-cd-627.html BitDefender Rescue CD]&lt;br /&gt;
* [https://www.f-secure.com/en/web/labs_global/rescue-cd F-Secure Rescue CD]&lt;br /&gt;
&lt;br /&gt;
== How to run a manual F-Secure Scan ==&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Open F-Secure Server Security by searching it in the windows search bar. &lt;br /&gt;
&amp;lt;br&amp;gt;[[Image:FSecureWindowsSearch.png|500px|]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; This should open a pop-up window. If it doesn&#039;t, try clicking the up arrow in the bottom right and selecting the F-Secure icon. &amp;lt;br&amp;gt; [[Image:FSecureMenu.png|500px|]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Click on the Scan option on the right side of the window. This should make a pop up with a list of options.&lt;br /&gt;
&amp;lt;br&amp;gt; [[Image:FSecureIconMenu.png|500px|]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Click either virus and spyware scan or full computer scan &lt;br /&gt;
&amp;lt;ul&amp;gt;&amp;lt;li&amp;gt;Virus and Spyware Scan: Searches common places where malware can be found. This makes it faster, but it may miss some hidden malware.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Full Computer Scan: Scans all internal/external hard drives for malware. This makes it more thorough than the malware scan, but it can take a long time to complete.&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; After choosing an option, another screen should pop up that shows the progress of the scan. &lt;br /&gt;
&amp;lt;br&amp;gt; [[Image:FSecureScanInProgress.png|500px|]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Once the scan is complete, it will provide a report stating whether any malware was found. If it did find malware, then it will show you the number of files it removed. &lt;br /&gt;
&amp;lt;br&amp;gt; [[Image:FSecureScanComplete.png|500px|]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Hit the finish button to close the scan window and complete the scan. &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wrichman</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=CrowdStrike&amp;diff=9494</id>
		<title>CrowdStrike</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=CrowdStrike&amp;diff=9494"/>
		<updated>2020-11-25T16:58:44Z</updated>

		<summary type="html">&lt;p&gt;Wrichman: Added instructions on how to complete a manual scan using F-Secure&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Free malware/virus removal tools ==&lt;br /&gt;
For non-UMIACS supported systems, there are several virus and malware protection and removal tools available. This page lists and describes some of them.&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; F-Secure&lt;br /&gt;
* [http://www.f-secure.com/en/web/labs_global/removal-tools/-/carousel/view/142 Bootable Scan]&lt;br /&gt;
If your computer no longer starts due to malware corrupting the operating system, or you suspect the security software has been compromised, this bootable cd can securely boot up the computer and check the programs installed. The Rescue CD can also be used for more advanced repair and data recovery operations.&lt;br /&gt;
* [http://www.f-secure.com/en/web/home_global/online-scanner Online Scan]&amp;lt;/li&amp;gt;&lt;br /&gt;
Similar to the bootable scan, the online scanner helps to get rid of viruses and spyware that may cause problems on your PC.&lt;br /&gt;
&amp;lt;li&amp;gt; [https://www.malwarebytes.org/mwb-download/ Malwarebytes]&amp;lt;/li&amp;gt;&lt;br /&gt;
This free version is limited, but can also perform malware and virus scans.&lt;br /&gt;
&amp;lt;li&amp;gt; [http://www.superantispyware.com/superantispyware.html SUPERAntiSpyware]&amp;lt;/li&amp;gt;&lt;br /&gt;
This free version is also limited but performs multiple different malware scans such as installed or internet spyware/adware.&lt;br /&gt;
&amp;lt;li&amp;gt; [http://www.kaspersky.com/antivirus-removal-tool?form=1 Kaspersky virus removal tool]&amp;lt;/li&amp;gt;&lt;br /&gt;
This provides a virus scan with updated virus definitions.&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Other &amp;quot;offline&amp;quot; (bootable) Scanners ==&lt;br /&gt;
Offline/bootable scanners work without loading your operating system, and can provide an alternate means for detecting and mitigating threats that may use various tactics to hide from your favorite installed AV suite.&lt;br /&gt;
* [http://support.kaspersky.com/viruses/rescuedisk#downloads Kaspersky Rescue Disk]&lt;br /&gt;
* [http://www.bitdefender.com/support/how-to-create-a-bitdefender-rescue-cd-627.html BitDefender Rescue CD]&lt;br /&gt;
* [https://www.f-secure.com/en/web/labs_global/rescue-cd F-Secure Rescue CD]&lt;br /&gt;
&lt;br /&gt;
== How to run a manual F-Secure Scan ==&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Open F-Secure Server Security by searching it in the windows search bar. &lt;br /&gt;
&amp;lt;br&amp;gt;[[Image:FSecureWindowsSearch.png|500px|]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; This should open a pop-up window. If it doesn&#039;t, try clicking the up arrow in the bottom left and selecting the F-Secure icon. &amp;lt;br&amp;gt; [[Image:FSecureMenu.png|500px|]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Click on the icon in the bottom left (the magnifying glass with a bug in the middle). This should make a pop up with a list of options.&lt;br /&gt;
&amp;lt;br&amp;gt; [[Image:FSecureScanMenu.png|500px|]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Click either malware scan or full computer scan &lt;br /&gt;
&amp;lt;ul&amp;gt;&amp;lt;li&amp;gt;Malware Scan: Searches common places where malware can be found. This makes it faster, but it may miss some hidden malware.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Full Computer Scan: Scans all internal/external hard drives for malware. This makes it more thorough than the malware scan, but it can take a long time to complete.&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; After choosing an option, another screen should pop up that shows the progress of the scan. &lt;br /&gt;
&amp;lt;br&amp;gt; [[Image:FSecureScanInProgress.png|500px|]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Once the scan is complete, it will either show a green check (as shown below) to indicate that no malware was found or it will show you a list of harmful items. &lt;br /&gt;
&amp;lt;br&amp;gt; [[Image:FSecureScanComplete.png|500px|]]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; If harmful items were found, then select the handle all option to start the cleaning process. &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Once it is done cleaning the files, it will show you a report stating the final results and the number of harmful items that were cleaned. &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wrichman</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Snapshots&amp;diff=9450</id>
		<title>Snapshots</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Snapshots&amp;diff=9450"/>
		<updated>2020-11-16T19:29:59Z</updated>

		<summary type="html">&lt;p&gt;Wrichman: Changed Isilon example to try and better represent what the user would see&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Snapshots are an implementation of [http://en.wikipedia.org/wiki/Copy-on-write copy-on-write] that allows a file system to quickly take a point-in-time copy of itself and provide access to the data through a .snapshot directory. Snapshots provide a fast, user-accessible way to recover data that has been accidentally deleted or corrupted within a recent time window -- rather than having to retrieve the data from comparatively slow tape backups. They also help to span the time gap between full backups.&lt;br /&gt;
&lt;br /&gt;
We provide [[Snapshots]] on our ZFS, [[Snapshots:FluidFS | FluidFS]], and Isilon filers to certain file systems. If you are ever unsure if a particular volume has Snapshots enabled, please contact the [[HelpDesk | Help Desk]].&lt;br /&gt;
&lt;br /&gt;
==Snapshot Retention Policy==&lt;br /&gt;
Our core file systems in the department are on a 4 hour snapshot cycle.  &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Snapshot Name&lt;br /&gt;
!Retention Length&lt;br /&gt;
!When is it taken?&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Hourly&#039;&#039;&#039;&lt;br /&gt;
|24-32 hours&lt;br /&gt;
|Every day 12am, 4am, 8am, 12pm, 4pm&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Daily&#039;&#039;&#039;&lt;br /&gt;
|2 days&lt;br /&gt;
|Every day 8pm or 12am&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Weekly&#039;&#039;&#039;&lt;br /&gt;
|1 week&lt;br /&gt;
|Every Saturday 8pm or Sunday 12am&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
In other words, we retain up to either 6 or 8 hourly snapshots, 2 daily snapshots and 1 weekly snapshot. Hourly snapshots may be superseded by daily snapshots, and daily snapshots may be superseded by the weekly snapshot.&lt;br /&gt;
&lt;br /&gt;
==Snapshot Restoring==&lt;br /&gt;
&lt;br /&gt;
If you have deleted a file by mistake and you need to get it back, you can use the snapshots directory to recopy the file.&lt;br /&gt;
&lt;br /&gt;
This directory can typically be found in your home directory. It generally will not be visible, even when viewing hidden directories.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;It will be either&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
* .snapshots for the [[Snapshots:FluidFS | FluidFS]] filer&lt;br /&gt;
* .zfs/snapshot for the ZFS filer&lt;br /&gt;
* ifs/.snapshot for the Isilon filer&lt;br /&gt;
&lt;br /&gt;
The inside of one of these will look something like this on a FluidFS filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~$ pwd&lt;br /&gt;
/nfshomes/sattwood&lt;br /&gt;
sattwood@zaphod:~$ cd .snapshots&lt;br /&gt;
sattwood@zaphod:~/.snapshots$ ls&lt;br /&gt;
daily_2018_06_14__20_00   hourly_2018_06_14__12_00  hourly_2018_06_15__04_00  weekly_2018_06_09__20_00&lt;br /&gt;
daily_2018_06_15__20_00   hourly_2018_06_14__16_00  hourly_2018_06_15__08_00  &lt;br /&gt;
hourly_2018_06_14__08_00  hourly_2018_06_14__00_00  hourly_2018_06_15__12_00  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Or this, on a ZFS filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@swirl:~$ pwd&lt;br /&gt;
/nmhomes/sattwood&lt;br /&gt;
sattwood@swirl:~$ cd .zfs/snapshot&lt;br /&gt;
sattwood@swirl:~/.zfs/snapshot$ ls&lt;br /&gt;
zfs-auto-snap_daily-2018-06-13-01h00   zfs-auto-snap_hourly-2018-05-28-00h00&lt;br /&gt;
zfs-auto-snap_daily-2018-06-14-01h00   zfs-auto-snap_hourly-2018-05-29-00h00&lt;br /&gt;
zfs-auto-snap_daily-2018-06-15-01h00   zfs-auto-snap_hourly-2018-06-03-16h00&lt;br /&gt;
zfs-auto-snap_hourly-2018-05-24-00h00  zfs-auto-snap_hourly-2018-06-10-08h00&lt;br /&gt;
zfs-auto-snap_hourly-2018-05-25-00h00  zfs-auto-snap_hourly-2018-06-15-12h00&lt;br /&gt;
zfs-auto-snap_hourly-2018-05-26-00h00  zfs-auto-snap_weekly-2018-06-09-03h00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Or this, on an Isilon filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@vimur:~$ pwd&lt;br /&gt;
/nfshomes/sattwood&lt;br /&gt;
sattwood@vimur:~$ cd ifs/.snapshot&lt;br /&gt;
sattwood@vimur:~/ifs/.snapshot$ ls&lt;br /&gt;
Snapshot2018Jun04   Snapshot2018Jun05  Snapshot2018Jun06  Snapshot2018Jun07&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
For an example of file restoration, please see [[Snapshots:Example | this page]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Snapshots]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Wrichman</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Snapshots:Example&amp;diff=9447</id>
		<title>Snapshots:Example</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Snapshots:Example&amp;diff=9447"/>
		<updated>2020-11-09T19:36:49Z</updated>

		<summary type="html">&lt;p&gt;Wrichman: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Note that in this example, the directory &amp;quot;.snapshots&amp;quot; could also be &amp;quot;.zfs/snapshot&amp;quot; or &amp;quot;ifs/.snapshot&amp;quot; depending on the filer serving your host. You can try to &#039;ls&#039; one and if it doesn&#039;t exist try one of the others.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Changing to my virtual environment directory.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ pwd&lt;br /&gt;
/nfshomes/sattwood/virtualenv&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
I see that I have a python file called  &#039;&#039;&#039;virtualenv.py&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ ls&lt;br /&gt;
appveyor.yml  bin               docs         MANIFEST.in  scripts    setup.py  tox.ini              virtualenv.py&lt;br /&gt;
AUTHORS.txt   CONTRIBUTING.rst  LICENSE.txt  README.rst   setup.cfg  tests     virtualenv_embedded  virtualenv_support&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ ls -lah virtualenv.py&lt;br /&gt;
-rwxrwxr-x. 1 sattwood sattwood 98K Jun 12 13:54 virtualenv.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
I will remove it from the current file system.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ rm virtualenv.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see, it is no longer there.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ ls&lt;br /&gt;
appveyor.yml  bin               docs         MANIFEST.in  scripts    setup.py  tox.ini              virtualenv_support&lt;br /&gt;
AUTHORS.txt   CONTRIBUTING.rst  LICENSE.txt  README.rst   setup.cfg  tests     virtualenv_embedded&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
I am going to go into the most recent hourly snapshot, in this case: &#039;&#039;&#039;hourly_2018_06_15__12_00&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ ls /nfshomes/sattwood/.snapshots&lt;br /&gt;
daily_2018_06_13__00_00  hourly_2018_06_14__04_00  hourly_2018_06_14__16_00  hourly_2018_06_15__08_00&lt;br /&gt;
daily_2018_06_14__00_00  hourly_2018_06_14__08_00  hourly_2018_06_14__20_00  hourly_2018_06_15__12_00&lt;br /&gt;
daily_2018_06_15__00_00  hourly_2018_06_14__12_00  hourly_2018_06_15__04_00  weekly_2018_06_09__00_00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ cd /nfshomes/sattwood/.snapshots/hourly_2018_06_15__12_00/virtualenv&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see the file is still here.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/.snapshots/hourly_2018_06_15__12_00/virtualenv$ ls&lt;br /&gt;
appveyor.yml  bin               docs         MANIFEST.in  scripts    setup.py  tox.ini              virtualenv.py&lt;br /&gt;
AUTHORS.txt   CONTRIBUTING.rst  LICENSE.txt  README.rst   setup.cfg  tests     virtualenv_embedded  virtualenv_support&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
I copy it back to the original directory.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/.snapshots/hourly_2018_06_15__12_00/virtualenv$ cp virtualenv.py /nfshomes/sattwood/virtualenv/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Change back to the original directory.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/.snapshots/hourly_2018_06_15__12_00/virtualenv$ cd /nfshomes/sattwood/virtualenv/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
And it is back.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ ls&lt;br /&gt;
appveyor.yml  bin               docs         MANIFEST.in  scripts    setup.py  tox.ini              virtualenv.py&lt;br /&gt;
AUTHORS.txt   CONTRIBUTING.rst  LICENSE.txt  README.rst   setup.cfg  tests     virtualenv_embedded  virtualenv_support&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Snapshots]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Wrichman</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Snapshots:Example&amp;diff=9446</id>
		<title>Snapshots:Example</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Snapshots:Example&amp;diff=9446"/>
		<updated>2020-11-09T19:36:09Z</updated>

		<summary type="html">&lt;p&gt;Wrichman: Fixed minor type&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Note that in this example, the directory &amp;quot;.snapshots&amp;quot; could also be &amp;quot;.zfs/snapshot&amp;quot; or &amp;quot;ifs/.snapshot&amp;quot; depending on the filer serving your host. You can try to &#039;ls&#039; one and if it doesn&#039;t exist try the other.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Changing to my virtual environment directory.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ pwd&lt;br /&gt;
/nfshomes/sattwood/virtualenv&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
I see that I have a python file called  &#039;&#039;&#039;virtualenv.py&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ ls&lt;br /&gt;
appveyor.yml  bin               docs         MANIFEST.in  scripts    setup.py  tox.ini              virtualenv.py&lt;br /&gt;
AUTHORS.txt   CONTRIBUTING.rst  LICENSE.txt  README.rst   setup.cfg  tests     virtualenv_embedded  virtualenv_support&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ ls -lah virtualenv.py&lt;br /&gt;
-rwxrwxr-x. 1 sattwood sattwood 98K Jun 12 13:54 virtualenv.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
I will remove it from the current file system.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ rm virtualenv.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see, it is no longer there.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ ls&lt;br /&gt;
appveyor.yml  bin               docs         MANIFEST.in  scripts    setup.py  tox.ini              virtualenv_support&lt;br /&gt;
AUTHORS.txt   CONTRIBUTING.rst  LICENSE.txt  README.rst   setup.cfg  tests     virtualenv_embedded&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
I am going to go into the most recent hourly snapshot, in this case: &#039;&#039;&#039;hourly_2018_06_15__12_00&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ ls /nfshomes/sattwood/.snapshots&lt;br /&gt;
daily_2018_06_13__00_00  hourly_2018_06_14__04_00  hourly_2018_06_14__16_00  hourly_2018_06_15__08_00&lt;br /&gt;
daily_2018_06_14__00_00  hourly_2018_06_14__08_00  hourly_2018_06_14__20_00  hourly_2018_06_15__12_00&lt;br /&gt;
daily_2018_06_15__00_00  hourly_2018_06_14__12_00  hourly_2018_06_15__04_00  weekly_2018_06_09__00_00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ cd /nfshomes/sattwood/.snapshots/hourly_2018_06_15__12_00/virtualenv&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see the file is still here.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/.snapshots/hourly_2018_06_15__12_00/virtualenv$ ls&lt;br /&gt;
appveyor.yml  bin               docs         MANIFEST.in  scripts    setup.py  tox.ini              virtualenv.py&lt;br /&gt;
AUTHORS.txt   CONTRIBUTING.rst  LICENSE.txt  README.rst   setup.cfg  tests     virtualenv_embedded  virtualenv_support&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
I copy it back to the original directory.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/.snapshots/hourly_2018_06_15__12_00/virtualenv$ cp virtualenv.py /nfshomes/sattwood/virtualenv/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Change back to the original directory.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/.snapshots/hourly_2018_06_15__12_00/virtualenv$ cd /nfshomes/sattwood/virtualenv/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
And it is back.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ ls&lt;br /&gt;
appveyor.yml  bin               docs         MANIFEST.in  scripts    setup.py  tox.ini              virtualenv.py&lt;br /&gt;
AUTHORS.txt   CONTRIBUTING.rst  LICENSE.txt  README.rst   setup.cfg  tests     virtualenv_embedded  virtualenv_support&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Snapshots]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Wrichman</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Snapshots:Example&amp;diff=9445</id>
		<title>Snapshots:Example</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Snapshots:Example&amp;diff=9445"/>
		<updated>2020-11-09T19:35:50Z</updated>

		<summary type="html">&lt;p&gt;Wrichman: Added details on Isilon filer&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Note that in this example, the directory &amp;quot;.snapshots&amp;quot; could also be &amp;quot;.zfs/snapshot&amp;quot; or &amp;quot;ifs/.snapshot/&amp;quot; depending on the filer serving your host. You can try to &#039;ls&#039; one and if it doesn&#039;t exist try the other.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Changing to my virtual environment directory.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ pwd&lt;br /&gt;
/nfshomes/sattwood/virtualenv&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
I see that I have a python file called  &#039;&#039;&#039;virtualenv.py&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ ls&lt;br /&gt;
appveyor.yml  bin               docs         MANIFEST.in  scripts    setup.py  tox.ini              virtualenv.py&lt;br /&gt;
AUTHORS.txt   CONTRIBUTING.rst  LICENSE.txt  README.rst   setup.cfg  tests     virtualenv_embedded  virtualenv_support&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ ls -lah virtualenv.py&lt;br /&gt;
-rwxrwxr-x. 1 sattwood sattwood 98K Jun 12 13:54 virtualenv.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
I will remove it from the current file system.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ rm virtualenv.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see, it is no longer there.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ ls&lt;br /&gt;
appveyor.yml  bin               docs         MANIFEST.in  scripts    setup.py  tox.ini              virtualenv_support&lt;br /&gt;
AUTHORS.txt   CONTRIBUTING.rst  LICENSE.txt  README.rst   setup.cfg  tests     virtualenv_embedded&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
I am going to go into the most recent hourly snapshot, in this case: &#039;&#039;&#039;hourly_2018_06_15__12_00&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ ls /nfshomes/sattwood/.snapshots&lt;br /&gt;
daily_2018_06_13__00_00  hourly_2018_06_14__04_00  hourly_2018_06_14__16_00  hourly_2018_06_15__08_00&lt;br /&gt;
daily_2018_06_14__00_00  hourly_2018_06_14__08_00  hourly_2018_06_14__20_00  hourly_2018_06_15__12_00&lt;br /&gt;
daily_2018_06_15__00_00  hourly_2018_06_14__12_00  hourly_2018_06_15__04_00  weekly_2018_06_09__00_00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ cd /nfshomes/sattwood/.snapshots/hourly_2018_06_15__12_00/virtualenv&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see the file is still here.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/.snapshots/hourly_2018_06_15__12_00/virtualenv$ ls&lt;br /&gt;
appveyor.yml  bin               docs         MANIFEST.in  scripts    setup.py  tox.ini              virtualenv.py&lt;br /&gt;
AUTHORS.txt   CONTRIBUTING.rst  LICENSE.txt  README.rst   setup.cfg  tests     virtualenv_embedded  virtualenv_support&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
I copy it back to the original directory.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/.snapshots/hourly_2018_06_15__12_00/virtualenv$ cp virtualenv.py /nfshomes/sattwood/virtualenv/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Change back to the original directory.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/.snapshots/hourly_2018_06_15__12_00/virtualenv$ cd /nfshomes/sattwood/virtualenv/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
And it is back.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~/virtualenv$ ls&lt;br /&gt;
appveyor.yml  bin               docs         MANIFEST.in  scripts    setup.py  tox.ini              virtualenv.py&lt;br /&gt;
AUTHORS.txt   CONTRIBUTING.rst  LICENSE.txt  README.rst   setup.cfg  tests     virtualenv_embedded  virtualenv_support&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Snapshots]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Wrichman</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Snapshots&amp;diff=9444</id>
		<title>Snapshots</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Snapshots&amp;diff=9444"/>
		<updated>2020-11-09T19:34:17Z</updated>

		<summary type="html">&lt;p&gt;Wrichman: Fixed minor typo&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Snapshots are an implementation of [http://en.wikipedia.org/wiki/Copy-on-write copy-on-write] that allows a file system to quickly take a point-in-time copy of itself and provide access to the data through a .snapshot directory. Snapshots provide a fast, user-accessible way to recover data that has been accidentally deleted or corrupted within a recent time window -- rather than having to retrieve the data from comparatively slow tape backups. They also help to span the time gap between full backups.&lt;br /&gt;
&lt;br /&gt;
We provide [[Snapshots]] on our ZFS, [[Snapshots:FluidFS | FluidFS]], and Isilon filers to certain file systems. If you are ever unsure if a particular volume has Snapshots enabled, please contact the [[HelpDesk | Help Desk]].&lt;br /&gt;
&lt;br /&gt;
==Snapshot Retention Policy==&lt;br /&gt;
Our core file systems in the department are on a 4 hour snapshot cycle.  &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Snapshot Name&lt;br /&gt;
!Retention Length&lt;br /&gt;
!When is it taken?&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Hourly&#039;&#039;&#039;&lt;br /&gt;
|24-32 hours&lt;br /&gt;
|Every day 12am, 4am, 8am, 12pm, 4pm&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Daily&#039;&#039;&#039;&lt;br /&gt;
|2 days&lt;br /&gt;
|Every day 8pm or 12am&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Weekly&#039;&#039;&#039;&lt;br /&gt;
|1 week&lt;br /&gt;
|Every Saturday 8pm or Sunday 12am&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
In other words, we retain up to either 6 or 8 hourly snapshots, 2 daily snapshots and 1 weekly snapshot. Hourly snapshots may be superseded by daily snapshots, and daily snapshots may be superseded by the weekly snapshot.&lt;br /&gt;
&lt;br /&gt;
==Snapshot Restoring==&lt;br /&gt;
&lt;br /&gt;
If you have deleted a file by mistake and you need to get it back, you can use the snapshots directory to recopy the file.&lt;br /&gt;
&lt;br /&gt;
This directory can typically be found in your home directory. It generally will not be visible, even when viewing hidden directories.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;It will be either&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
* .snapshots for the [[Snapshots:FluidFS | FluidFS]] filer&lt;br /&gt;
* .zfs/snapshot for the ZFS filer&lt;br /&gt;
* ifs/.snapshot for the Isilon filer&lt;br /&gt;
&lt;br /&gt;
The inside of one of these will look something like this on a FluidFS filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~$ pwd&lt;br /&gt;
/nfshomes/sattwood&lt;br /&gt;
sattwood@zaphod:~$ cd .snapshots&lt;br /&gt;
sattwood@zaphod:~/.snapshots$ ls&lt;br /&gt;
daily_2018_06_14__20_00   hourly_2018_06_14__12_00  hourly_2018_06_15__04_00  weekly_2018_06_09__20_00&lt;br /&gt;
daily_2018_06_15__20_00   hourly_2018_06_14__16_00  hourly_2018_06_15__08_00  &lt;br /&gt;
hourly_2018_06_14__08_00  hourly_2018_06_14__00_00  hourly_2018_06_15__12_00  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Or this, on a ZFS filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@swirl:~$ pwd&lt;br /&gt;
/nmhomes/sattwood&lt;br /&gt;
sattwood@swirl:~$ cd .zfs/snapshot&lt;br /&gt;
sattwood@swirl:~/.zfs/snapshot$ ls&lt;br /&gt;
zfs-auto-snap_daily-2018-06-13-01h00   zfs-auto-snap_hourly-2018-05-28-00h00&lt;br /&gt;
zfs-auto-snap_daily-2018-06-14-01h00   zfs-auto-snap_hourly-2018-05-29-00h00&lt;br /&gt;
zfs-auto-snap_daily-2018-06-15-01h00   zfs-auto-snap_hourly-2018-06-03-16h00&lt;br /&gt;
zfs-auto-snap_hourly-2018-05-24-00h00  zfs-auto-snap_hourly-2018-06-10-08h00&lt;br /&gt;
zfs-auto-snap_hourly-2018-05-25-00h00  zfs-auto-snap_hourly-2018-06-15-12h00&lt;br /&gt;
zfs-auto-snap_hourly-2018-05-26-00h00  zfs-auto-snap_weekly-2018-06-09-03h00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Or this, on an Isilon filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@vimur:~$ pwd&lt;br /&gt;
/nfshomes/sattwood&lt;br /&gt;
sattwood@vimur:~$ cd ifs/.snapshot&lt;br /&gt;
sattwood@vimur:~/ifs/.snapshot$ ls&lt;br /&gt;
daily_2018_06_14__20_00   hourly_2018_06_14__12_00  hourly_2018_06_15__04_00  weekly_2018_06_09__20_00&lt;br /&gt;
daily_2018_06_15__20_00   hourly_2018_06_14__16_00  hourly_2018_06_15__08_00  &lt;br /&gt;
hourly_2018_06_14__08_00  hourly_2018_06_14__00_00  hourly_2018_06_15__12_00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
For an example of file restoration, please see [[Snapshots:Example | this page]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Snapshots]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Wrichman</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Snapshots&amp;diff=9443</id>
		<title>Snapshots</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Snapshots&amp;diff=9443"/>
		<updated>2020-11-09T19:33:08Z</updated>

		<summary type="html">&lt;p&gt;Wrichman: Added more details for an Isilon filesystem&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Snapshots are an implementation of [http://en.wikipedia.org/wiki/Copy-on-write copy-on-write] that allows a file system to quickly take a point-in-time copy of itself and provide access to the data through a .snapshot directory. Snapshots provide a fast, user-accessible way to recover data that has been accidentally deleted or corrupted within a recent time window -- rather than having to retrieve the data from comparatively slow tape backups. They also help to span the time gap between full backups.&lt;br /&gt;
&lt;br /&gt;
We provide [[Snapshots]] on our ZFS, [[Snapshots:FluidFS | FluidFS]], and Isilon filers to certain file systems. If you are ever unsure if a particular volume has Snapshots enabled, please contact the [[HelpDesk | Help Desk]].&lt;br /&gt;
&lt;br /&gt;
==Snapshot Retention Policy==&lt;br /&gt;
Our core file systems in the department are on a 4 hour snapshot cycle.  &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Snapshot Name&lt;br /&gt;
!Retention Length&lt;br /&gt;
!When is it taken?&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Hourly&#039;&#039;&#039;&lt;br /&gt;
|24-32 hours&lt;br /&gt;
|Every day 12am, 4am, 8am, 12pm, 4pm&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Daily&#039;&#039;&#039;&lt;br /&gt;
|2 days&lt;br /&gt;
|Every day 8pm or 12am&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Weekly&#039;&#039;&#039;&lt;br /&gt;
|1 week&lt;br /&gt;
|Every Saturday 8pm or Sunday 12am&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
In other words, we retain up to either 6 or 8 hourly snapshots, 2 daily snapshots and 1 weekly snapshot. Hourly snapshots may be superseded by daily snapshots, and daily snapshots may be superseded by the weekly snapshot.&lt;br /&gt;
&lt;br /&gt;
==Snapshot Restoring==&lt;br /&gt;
&lt;br /&gt;
If you have deleted a file by mistake and you need to get it back, you can use the snapshots directory to recopy the file.&lt;br /&gt;
&lt;br /&gt;
This directory can typically be found in your home directory. It generally will not be visible, even when viewing hidden directories.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;It will be either&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
* .snapshots for the [[Snapshots:FluidFS | FluidFS]] filer&lt;br /&gt;
* .zfs/snapshot for the ZFS filer&lt;br /&gt;
* ifs/.snapshot for the Isilon filer&lt;br /&gt;
&lt;br /&gt;
The inside of one of these will look something like this on a FluidFS filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@zaphod:~$ pwd&lt;br /&gt;
/nfshomes/sattwood&lt;br /&gt;
sattwood@zaphod:~$ cd .snapshots&lt;br /&gt;
sattwood@zaphod:~/.snapshots$ ls&lt;br /&gt;
daily_2018_06_14__20_00   hourly_2018_06_14__12_00  hourly_2018_06_15__04_00  weekly_2018_06_09__20_00&lt;br /&gt;
daily_2018_06_15__20_00   hourly_2018_06_14__16_00  hourly_2018_06_15__08_00  &lt;br /&gt;
hourly_2018_06_14__08_00  hourly_2018_06_14__00_00  hourly_2018_06_15__12_00  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Or this, on a ZFS filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@swirl:~$ pwd&lt;br /&gt;
/nmhomes/sattwood&lt;br /&gt;
sattwood@swirl:~$ cd .zfs/snapshot&lt;br /&gt;
sattwood@swirl:~/.zfs/snapshot$ ls&lt;br /&gt;
zfs-auto-snap_daily-2018-06-13-01h00   zfs-auto-snap_hourly-2018-05-28-00h00&lt;br /&gt;
zfs-auto-snap_daily-2018-06-14-01h00   zfs-auto-snap_hourly-2018-05-29-00h00&lt;br /&gt;
zfs-auto-snap_daily-2018-06-15-01h00   zfs-auto-snap_hourly-2018-06-03-16h00&lt;br /&gt;
zfs-auto-snap_hourly-2018-05-24-00h00  zfs-auto-snap_hourly-2018-06-10-08h00&lt;br /&gt;
zfs-auto-snap_hourly-2018-05-25-00h00  zfs-auto-snap_hourly-2018-06-15-12h00&lt;br /&gt;
zfs-auto-snap_hourly-2018-05-26-00h00  zfs-auto-snap_weekly-2018-06-09-03h00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Or this, on an Isilon filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sattwood@vimur:~$ pwd&lt;br /&gt;
/nfshomes/sattwood&lt;br /&gt;
sattwood@vimur:~$ cd ifs/.snapshot&lt;br /&gt;
sattwood@vimur:~/.snapshots$ ls&lt;br /&gt;
daily_2018_06_14__20_00   hourly_2018_06_14__12_00  hourly_2018_06_15__04_00  weekly_2018_06_09__20_00&lt;br /&gt;
daily_2018_06_15__20_00   hourly_2018_06_14__16_00  hourly_2018_06_15__08_00  &lt;br /&gt;
hourly_2018_06_14__08_00  hourly_2018_06_14__00_00  hourly_2018_06_15__12_00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
For an example of file restoration, please see [[Snapshots:Example | this page]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Snapshots]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Wrichman</name></author>
	</entry>
</feed>