https://wiki.umiacs.umd.edu/umiacs/index.php?title=Email/Barracuda&diff=10582Email/Barracuda2022-08-10T17:38:17Z<p>Jayid07: </p>
<hr />
<div>The Barracuda spam firewalls manage all inbound and outbound email traffic to provide additional virus and spam filtering capabilities.<br />
==Common Tasks==<br />
*[https://campus.barracuda.com/product/essentials/doc/3211272/barracuda-email-security-service-user-guide/#h4_4fdc73de Manage your spam quarantine.]<br />
*[https://campus.barracuda.com/product/essentials/doc/3211272/barracuda-email-security-service-user-guide/ Manage your Allow list, Block lists, and Bayesian filtering.]</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=MacOSPrinting&diff=10445MacOSPrinting2022-05-20T18:56:42Z<p>Jayid07: /* Printing Stored Jobs */</p>
<hr />
<div>__NOTOC__<br />
We support printing from user-managed macOS 10.8 or later machines. <br />
<br />
'''Note: You must be on a UMIACS network directly or connected to the [[VPN]] in order to print.'''<br />
<br />
[[Image:BigSur_Print00.png|right|100px]]<br />
=System Preferences=<br />
To start, open your System Preferences from the Dock or Applications folder. Once you have opened it, click '''Printers & Scanners'''.<br />
<br />
[[Image:BigSur_Print01.png|right|100px]]<br />
<br />
=Print & Fax=<br />
When the '''Printers & Scanners''' window appears, create a new local printer by clicking the '''+''' icon in the lower left corner of the first pane in the window.<br />
<br />
=Add Printer=<br />
This will bring up an Add Printer dialog.<br />
<br />
<br />
'''Note:''' If you just want basic printing, use the steps below. If you would like to enable all the advanced options for the printer, skip ahead to '''"Enabling Advanced Printer Features"'''.<br />
<br />
<br />
* Jump over to the '''IP''' tab<br />
* Set Protocol to '''Internet Printing Protocol - IPP'''<br />
* Set Address to '''print.umiacs.umd.edu'''<br />
* Set the Queue to printers/queue; in this example for cps432-3208 it would be '''printers/cps432-3208'''. Make sure the queue is prefixed with '''printers/'''. For clarification, the queue is typically the printer name.<br />
* Set Name to the name of the printer you are trying to use. This makes it easily identifiable in your list of printers.<br />
* The driver selection will default to '''Generic Postscript Printer'''. If you need to access the more advanced features of a queue/printer, or you were '''not able to print by choosing Generic Postscript Printer''', you will need to take extra steps; please see the '''Enabling Advanced Printer Features''' section below.<br />
* Select Add<br />
* You will be asked about enabling duplex. If you know the printer has the option, which is true for most of our printers, go ahead and enable it. Then hit OK. If you're not sure, just leave it disabled. You can always enable it after the queue is added.<br />
<br />
[[Image:AddPrinter_BigSur.png]]<br />
<br />
You should now be able to print to this printer/queue from any macOS print menu.<br />
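<br />
If you prefer the command line, the same queue can be added from Terminal with the CUPS '''lpadmin''' tool. This is a minimal sketch assuming the cps432-3208 example queue above and a CUPS version that supports IPP Everywhere (the '''-m everywhere''' option); adjust the queue name for your printer, and drop the duplex option if your printer does not support it.<br />
<pre>sudo lpadmin -p cps432-3208 -E -v ipp://print.umiacs.umd.edu/printers/cps432-3208 -m everywhere -o sides=two-sided-long-edge</pre><br />
You can confirm the queue was created with '''lpstat -p cps432-3208'''.<br />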
<br />
=Enabling Advanced Printer Features=<br />
Some printers may have features not accessible with the default drivers provided above. If this is the case, follow the guide below to identify and install the print drivers from the printer's manufacturer.<br />
<br />
<br />
'''Note:''' If you already installed the printer with generic drivers, you will need to highlight the printer and click '''-''' before attempting to add it again. If your printer does not have software drivers listed in the next steps, you may also need a system update to get the latest drivers from Apple.<br />
<br />
<br />
'''Step 1:''' Locate your printer's name and search for its make and model here: http://print.umiacs.umd.edu/printers/<br />
<br />
<br />
'''Step 2:''' Follow the '''Add Printer''' steps above until you get to '''Generic Postscript Printer'''. Select '''Software''', then '''Add'''.<br />
<br />
<br />
[[Image:SelectSoftware_BigSur.png]]<br />
<br />
<br />
'''Step 3:''' A window labeled "Printer Software" will now pop up. Scroll thru the list to find your specific make and model of printer. Click on it, and hit Ok.<br />
<br />
[[Image:PrinterSoftware_BigSur.png]]<br />
<br />
'''Step 4:''' Click Add on the next window and it should install your printer's software suite. You should see a list of options (if they are available) that looks similar to this:<br />
<br />
[[Image:InstallOptions_BigSur.png]]<br />
<br />
'''Final Step:''' Enable the options you wish to use, then click '''OK'''. You should now be able to print to this printer/queue from any macOS print menu.<br />
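<br />
To double-check which options the installed driver actually exposes, the CUPS '''lpoptions''' command can list them from Terminal. This is a small sketch assuming the cps432-3208 example queue; substitute the name you gave your printer.<br />
<pre>lpoptions -p cps432-3208 -l</pre><br />
Each line of the output shows one option (for example duplex or tray settings) along with its available choices, with the currently selected value marked.<br />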
<br />
=Printing With Stapler=<br />
For print jobs using the stapler, follow the guide below. Printers with staplers are located in Iribe, in rooms 3149, 3208, 4149, 4208, and 5208.<br />
<br />
To add/connect to a printer with stapling capabilities:<br />
<br />
<br />
'''Step 1:''' Follow the "Enabling Advanced Printer Features" guide until you reach Step 4.<br />
<br />
<br />
'''Step 2:''' In the window that appears, labeled "Setting up [your printer name]", set "HP 3-Bin Stapler/Stacker" to "Mailbox Mode".<br />
<br />
<br />
[[Image:Mailbox_Mode.png|500px|]]<br />
<br />
<br />
'''Step 3:''' Select "OK" in the bottom right corner.<br />
<br />
<br />
'''To Print:''' To complete the print job using the stapler once the printer has been added:<br />
<br />
<br />
'''Step 1:''' In Preview, when trying to print, select the "Show Details" button in the bottom left corner.<br />
<br />
<br />
[[Image:Show_Settings.png|500px|]]<br />
<br />
<br />
'''Step 2:''' In the middle right, where it says "Preview," select "Printer Features" instead of "Preview".<br />
<br />
<br />
[[Image:Printer_Features.png|500px|]]<br />
<br />
<br />
'''Step 3:''' Set "Feature Sets" to "Finishing".<br />
<br />
<br />
[[Image:Finishing.png|500px|]]<br />
<br />
<br />
'''Step 4:''' A "Staple" dropdown list will appear at the bottom of the window. Select your preferred staple option.<br />
<br />
<br />
[[Image:Staple.png|500px|]]<br />
<br />
<br />
'''Step 5:''' Press "Print" in the bottom right corner.<br />
<br />
=Printing Stored Jobs=<br />
'''Prerequisite:''' Requires [[MacOSPrinting#Enabling_Advanced_Printer_Features | advanced print features]] to be enabled.<br />
If you are printing a sensitive document and do not want the printer to print it right away, you can configure a stored job. The stored job lets you hold a print job until you enter a PIN to release the job. You can configure a stored job following the instructions below:<br />
<br />
# Go to File > Print or press Command + P to open the print menu.<br />
# Select the printer where you want to send the print job.<br />
# Click the dropdown list under Copies & Pages and select '''Job Storage'''.<br />
#:[[File:Step03.png|400px]] <br />
# From the Mode dropdown list, select '''Personal Job''' (or '''Stored Job''' if you want to share the document with others).<br />
#:[[File:Step04.png|400px]] <br />
# Leave the username at the default or specify a custom username.<br />
# Check the box '''Use PIN to Print''' and enter a 4-digit number. Hit '''Print'''.<br />
#:[[File:Step06.png|400px]]<br />
<br />
'''To print the stored jobs:'''<br />
# From the Home screen of the printer tap on '''Print from Job Storage''' (or '''Print''' > '''Print from Job Storage''').<br />
# In the '''Stored Job to Print''' screen, select the name of the folder where the job is stored. By default, the name of the folder will be your username unless you customized it earlier.<br />
# Select the name of the document and enter the PIN.<br />
# Hit '''Print'''.</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:Step06.png&diff=10441File:Step06.png2022-05-20T18:33:12Z<p>Jayid07: </p>
<hr />
<div></div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:Step04.png&diff=10440File:Step04.png2022-05-20T18:33:02Z<p>Jayid07: </p>
<hr />
<div></div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:Step03.png&diff=10439File:Step03.png2022-05-20T18:32:41Z<p>Jayid07: Stored job on macOS</p>
<hr />
<div>== Summary ==<br />
Stored job on macOS</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=HPC&diff=10436HPC2022-05-16T18:37:29Z<p>Jayid07: </p>
<hr />
<div>* [[SLURM | Slurm Scheduler Documentation]]<br />
* [[CBCB]]<br />
* [[CML]]<br />
* [[Nexus]]<br />
* [[MBRC]]<br />
* [https://wiki.umiacs.umd.edu/cfar/index.php/Vulcan Vulcan]</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=CoreServices&diff=10435CoreServices2022-05-16T18:25:19Z<p>Jayid07: </p>
<hr />
<div>{| border="0" cellpadding="5" cellspacing="0" <br />
|- style="vertical-align:top;"<br />
|style="width: 33%"|'''[[Accounts|Authentication]]:''' User accounts such as UMIACS accounts, and collaborator accounts, how to request an account, and how to reset account password. <br />
|style="width: 33%"|'''[[OSSupport|Operating Systems Support]]:''' List of operating systems supported at UMIACS.<br />
|style="width: 33%"|'''[[Email]]:''' Email offering at UMIACS and configuring the spam filter.<br />
|- style="vertical-align:top;"<br />
|style="width: 33%"|'''[[MailingLists|Mailing Lists]]:''' Mail aliases and Mailman mailing list.<br />
|style="width: 33%"|'''[[Printing]]:''' Printing guide for different operating systems and locations of UMIACS public printers.<br />
|style="width: 33%"|'''[[Backups]]:''' Instruction on how to backup personal devices using Google Drive.<br />
|- style="vertical-align:top;"<br />
|style="width: 33%"|'''[[NAS|Network Attached Storage]]:''' NFS home directories and project directories.<br />
|style="width: 33%"|'''[[LocalDataStorage|Local Data Storage]]:''' Data storage options such as local scratch, network scratch, nfshomes, etc.<br />
|style="width: 33%"|'''[[Web|Web Services]]:''' UMIACS web hosting services, i.e. hosting a personal website.<br />
|- style="vertical-align:top;"<br />
|style="width: 33%"|'''[[OBJ|UMIACS Object Store]]:''' Get started on UMIACS Object Store; learn about buckets, keys, and how to access the object store.<br />
|style="width: 33%"|'''[[RevisionControl|Revision Control (Git)]]:''' Brief intro to GitLab and Subversion (Legacy).<br />
|style="width: 33%"|'''[[Programming|Programming Languages]]:''' List of commonly used programming languages at UMIACS by the faculty and students.<br />
|- style="vertical-align:top;"<br />
|style="width: 33%"|'''[[HPC|High Performance Computing]]:''' Information regarding different clusters such as Nexus, CML, Vulcan, etc. and the SLURM Scheduler.<br />
|style="width: 33%"|'''[[VPN|Virtual Private Networking]]:''' How to install and configure pulseSecure VPN.<br />
|style="width: 33%"|'''[[MediaSanitization|Storage Device Destruction]]:''' Make a request to securely destroy data. <br />
|}</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=SecureCopy&diff=10419SecureCopy2022-04-25T17:35:47Z<p>Jayid07: /* Using SCP */</p>
<hr />
<div>Secure Copy (or [http://en.wikipedia.org/wiki/Secure_Shell SCP]) is a way of copying data between two computers using [[SSH]].<br />
<br />
==Using SCP==<br />
SCP (secure copy) is a command-line utility that allows you to securely copy files and directories between two locations. The following commands work under Red Hat Enterprise Linux, Ubuntu Linux, and macOS. <br />
<br />
This command, when run from a terminal, will copy the file "source_file0.txt" from the local machine to the home directory of nexusstaff00 and give it the name "target_file0.txt".<br />
[jayid07@dedsec ~]$ scp source_file.txt jayid07@nexusstaff00:target_file.txt<br />
This command, when run from a terminal, will copy the file "source_file1.txt" from the user's NFS homedirectory on nexustaff00 into the current local directory and give it the name "target_file1.txt".<br />
[jayid07@dedsec ~]$ scp jayid07@nexusstaff00.umiacs.umd.edu:source_file1.txt target_file1.txt<br />
<br />
Note how the syntax of scp is very similar to that of the UNIX command cp with the addition of a hostname and username.<br />
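<br />
Whole directories can be copied the same way with scp's '''-r''' (recursive) flag. This is a minimal sketch assuming a local directory named "project_data" (the directory name is just an example):<br />
 [jayid07@dedsec ~]$ scp -r project_data jayid07@nexusstaff00.umiacs.umd.edu:project_data<br />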
<br />
For UMIACS supported Windows hosts, WinSCP (available for download [http://winscp.net/eng/download.php here]) is already installed.<br />
<br />
==Further Information==<br />
* [http://www.openssh.org/ OpenSSH]<br />
* [http://winscp.net WinSCP]</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=SecureShellTunneling&diff=10417SecureShellTunneling2022-04-25T17:11:31Z<p>Jayid07: /* OpenSSH */</p>
<hr />
<div>==Port Forwarding==<br />
<br />
===OpenSSH===<br />
<br />
Use the following when you want to forward a specific local port to a port on a remote host from Linux or macOS systems; a short usage check follows the examples below.<br />
<ul><br />
<li>This example will create a local port 9999 that will be forwarded to the remote host webserver.umiacs.umd.edu and its port 8000 through the host nexusclip00.umiacs.umd.edu.<br />
<pre>ssh -NfL 9999:webserver.umiacs.umd.edu:8000 nexusclip00.umiacs.umd.edu</pre> <br />
</li><br />
<br />
<li>This example will create a local port 13389 that will be forwarded to a remote host running a [[Remote Desktop |Remote Desktop (RDP)]] service, such as a Windows machine, through the host nexusclip00.umiacs.umd.edu.<br />
<pre>ssh -L 13389:my-desktop.ad.umiacs.umd.edu:3389 nexusclip00.umiacs.umd.edu</pre><br />
</li><br />
<br />
<li>The following example outlines how to use an SSH tunnel for printing to the UMIACS CUPS server.<br />
<pre>ssh $<USERNAME>@nexusclip00.umiacs.umd.edu -T -N -L 3631:print.umiacs.umd.edu:631</pre><br />
</li><br />
<li>Once the tunnel is established, you can follow the normal [[Printing | Printing]] instructions, substituting 'localhost:3631' for 'print.umiacs.umd.edu', or print via a command such as the following:<br />
<pre>lpr -H 127.0.0.1:3631 -P $<PRINTER NAME> $<FILENAME></pre><br />
</li><br />
</ul><br />
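As a quick, illustrative check of the first example above (assuming the remote service speaks HTTP), a local client pointed at the forwarded port will have its request carried through the tunnel to webserver.umiacs.umd.edu port 8000:<br />
<pre>curl http://localhost:9999/</pre><br />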
<br />
===PuTTY===<br />
<br />
Windows users can achieve the same types of tunnels using PuTTY or a similar SSH client. In PuTTY, the port forwarding configuration dialogue can be found under "Connection>SSH>Tunnels".<br />
<br />
[[Image:PuTTYWin7Tunnel.png]]<br />
<br />
This example will create a local port '''8889''' that is attached to the remote host '''clipsm301.umiacs.umd.edu''' on its port '''8000'''.<br />
<br />
==SOCKS Proxy==<br />
<br />
===OpenSSH===<br />
<br />
[[SSH]] can also tunnel all traffic coming into a certain port through a SOCKS v5 proxy. Many browsers and some operating systems can be set up to connect to this proxy so that their traffic appears to come from the host name you specify in your [[SSH]] command. <br />
<br />
<pre>ssh -ND 7777 nexusclip00.umiacs.umd.edu</pre><br />
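<br />
As an illustrative test (the destination URL is just an example), a single request can be sent through the SOCKS proxy from the command line without reconfiguring a browser:<br />
<pre>curl --socks5-hostname 127.0.0.1:7777 https://www.umiacs.umd.edu/</pre><br />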
<br />
Please note: when you configure proxy settings for a browser (or your whole operating system) all the traffic for that browser (or the OS) will be sent through the proxy. This can have performance implications.<br />
<br />
===PuTTY===<br />
<br />
Windows users can tunnel traffic coming into a certain port through a SOCKS v5 proxy by using PuTTY or a similar SSH client. Many browsers and some operating systems can be set up to connect to this proxy so that their traffic appears to come from the host name you specify.<br />
<br />
In PuTTY under '''Sessions''' set the Host Name:<br />
<br />
[[Image:Putty1.png]]<br />
<br />
Then under <code>Connection > SSH > Tunnels</code> enter a port number and set the type of forwarding to "Dynamic" and press add:<br />
<br />
[[Image:Putty2.png]]<br />
<br />
Click Open and log into the host.<br />
As long as this PuTTY window is open and you are logged in, you can use the SOCKS proxy.<br />
<br />
Please note that when you configure proxy settings for a browser or your whole operating system, all the traffic for that browser or your OS will be sent through the proxy. This can have performance implications.<br />
<br />
=== Example: SOCKS proxy, Browser configuration ===<br />
[[Image:FF_Proxy_1.png|thumb|]] [[Image:FF_Proxy_2.png|thumb|]]<br />
There are too many variations here to cover them all, but they all follow the same general pattern and the following example should be generally applicable. We'll use Firefox for this example. Screenshots are from FF 37.0.2.<br />
<br />
* Under <code> Preferences > Advanced > Network > Connection > Settings... </code> <br />
* choose <code>Manual proxy configuration</code>,<br />
* enter <code>127.0.0.1</code> for the proxy<br />
* enter the port you chose earlier for dynamic forwarding (7777 in the example above).<br />
* Check <code>Use this proxy server for all protocols</code>, and then click OK. <br />
<br />
'''NOTE:''' this will continue to send all browser traffic through your SSH tunnel until the configuration is reverted. The SSH connection must be established for traffic to pass through to the destination network. Firefox has a very useful plugin called "FoxyProxy" that allows conditional proxies to be set up, if you're interested in adding some intelligence/complexity to your proxy configuration.</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=SecureShellTunneling&diff=10416SecureShellTunneling2022-04-25T17:11:01Z<p>Jayid07: /* Port Forwarding */</p>
<hr />
<div>==Port Forwarding==<br />
<br />
===OpenSSH===<br />
<br />
Use the following when you want to forward a specific local port to a port on a remote host from Linux or macOS systems.<br />
<ul><br />
<li>This example will create a local port 9999 that will be forwarded to the remote host webserver.umiacs.umd.edu and its port 8000 through the host nexusclip00.umiacs.umd.edu.<br />
<pre>ssh -NfL 9999:webserver.umiacs.umd.edu:8000 nexusclip00.umiacs.umd.edu</pre> <br />
</li><br />
<br />
<li>This example will create a local port 13389 that will be forwarded to a remote host running a [[Remote Desktop |Remote Desktop (RDP)]] service, such as a Windows machine, through the host nexusclip00.umiacs.umd.edu.<br />
<pre>ssh -L 13389:my-desktop.ad.umiacs.umd.edu:3389 nexusclip00.umiacs.umd.edu</pre><br />
</li><br />
<br />
<li>The following example outlines how to use an SSH tunnel for printing to the UMIACS CUPS server.<br />
<pre>ssh $<USERNAME>@nexusclip00.umiacs.umd.edu -T -N -L 3631:print.umiacs.umd.edu:631</pre><br />
</li><br />
<li>Once the tunnel is established, you can follow the normal [[Printing | Printing]] instructions, substituting 'localhost:3631' for 'print.umiacs.umd.edu', or print via a command such as the following:<br />
<pre>lpr -H 127.0.0.1:3631 -P $<PRINTER NAME> $<FILENAME></pre><br />
</li><br />
</ul><br />
<br />
===PuTTY===<br />
<br />
Windows users can achieve the same types of tunnels using PuTTY or a similar SSH client. In PuTTY, the port forwarding configuration dialogue can be found under "Connection>SSH>Tunnels".<br />
<br />
[[Image:PuTTYWin7Tunnel.png]]<br />
<br />
This example will create a local port '''8889''' that is attached to the remote host '''clipsm301.umiacs.umd.edu''' on its port '''8000'''.<br />
<br />
==SOCKS Proxy==<br />
<br />
===OpenSSH===<br />
<br />
[[SSH]] can also tunnel all traffic coming into a certain port through a SOCKS v5 proxy. Many browsers and some operating systems can be set up to connect to this proxy so that their traffic appears to come from the host name you specify in your [[SSH]] command. <br />
<br />
<pre>ssh -ND 7777 openlab.umiacs.umd.edu</pre><br />
<br />
Please note: when you configure proxy settings for a browser (or your whole operating system) all the traffic for that browser (or the OS) will be sent through the proxy. This can have performance implications.<br />
<br />
===PuTTY===<br />
<br />
Windows users can tunnel traffic coming into a certain port through a SOCKS v5 proxy by using PuTTY or a similar SSH client. Many browsers and some operating systems can be set up to connect to this proxy so that their traffic appears to come from the host name you specify.<br />
<br />
In PuTTY under '''Sessions''' set the Host Name:<br />
<br />
[[Image:Putty1.png]]<br />
<br />
Then under <code>Connection > SSH > Tunnels</code> enter a port number and set the type of forwarding to "Dynamic" and press add:<br />
<br />
[[Image:Putty2.png]]<br />
<br />
Click Open and log into the host.<br />
As long as this PuTTY window is open and you are logged in, you can use the SOCKS proxy.<br />
<br />
Please note that when you configure proxy settings for a browser or your whole operating system, all the traffic for that browser or your OS will be sent through the proxy. This can have performance implications.<br />
<br />
=== Example: SOCKS proxy, Browser configuration ===<br />
[[Image:FF_Proxy_1.png|thumb|]] [[Image:FF_Proxy_2.png|thumb|]]<br />
There are too many variations here to cover them all, but they all follow the same general pattern and the following example should be generally applicable. We'll use Firefox for this example. Screenshots are from FF 37.0.2.<br />
<br />
* Under <code> Preferences > Advanced > Network > Connection > Settings... </code> <br />
* choose <code>Manual proxy configuration</code>,<br />
* enter <code>127.0.0.1</code> for the proxy<br />
* enter the port you chose earlier for dynamic forwarding (7777 in the example above).<br />
* Check <code>Use this proxy server for all protocols</code>, and then click OK. <br />
<br />
'''NOTE:''' this will continue to send all browser traffic through your SSH tunnel until the configuration is reverted. The SSH connection must be established for traffic to pass through to the destination network. Firefox has a very useful plugin called "FoxyProxy" that allows conditional proxies to be set up, if you're interested in adding some intelligence/complexity to your proxy configuration.</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=NASUsers&diff=10415NASUsers2022-04-22T23:49:21Z<p>Jayid07: </p>
<hr />
<div>{{Note|'''This service is currently being deprecated.'''}}<br />
===Web Pages===<br />
<br />
Please see [[WebSpace#Personal%20Web%20Space | Personal Web Space]].<br />
<br />
===Personal FTP Sites for Distributing Data===<br />
<br />
Your ftp site is online at<br />
<br />
ftp://ftp.umiacs.umd.edu/pub/username<br />
<br />
On any supported UNIX workstation, you can access your ftp site as<br />
<br />
/fs/ftp/pub/username<br />
<br />
Windows users can map it as a network drive from<br />
<br />
\\fluidfs.ad.umiacs.umd.edu\ftp-umiacs\pub<br />
<br />
Please note that anyone with an internet connection can log in and download these files, so please do not use your ftp site to store confidential data.<br />
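<br />
For example, a file placed in your ftp directory (the filename here is just a placeholder) could be retrieved by anyone with a standard command-line client:<br />
<br />
 wget ftp://ftp.umiacs.umd.edu/pub/username/example.txt<br />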
<br />
This file system has regular backups with our [[TSM]] service and has [[Snapshots]] for easy user restores.<br />
<br />
===Usage Guidelines===<br />
<br />
Personal NAS is configured to be highly available and modest in both size and usage. Please store large or heavily accessed datasets in a dedicated project storage directory that is tuned for your application.<br />
<br />
Please avoid storing shared project data in personal storage allocations. Separating project data from personal data will simplify administration and data management for both researchers and staff.</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=NASUsers&diff=10414NASUsers2022-04-22T23:48:57Z<p>Jayid07: </p>
<hr />
<div>{{Note|'''This service is currently being deprecated'''}}<br />
===Web Pages===<br />
<br />
Please see [[WebSpace#Personal%20Web%20Space | Personal Web Space]].<br />
<br />
===Personal FTP Sites for Distributing Data===<br />
<br />
Your ftp site is online at<br />
<br />
ftp://ftp.umiacs.umd.edu/pub/username<br />
<br />
On any supported UNIX workstation, you can access your ftp site as<br />
<br />
/fs/ftp/pub/username<br />
<br />
Windows users can map it as a network drive from<br />
<br />
\\fluidfs.ad.umiacs.umd.edu\ftp-umiacs\pub<br />
<br />
Please note that anyone with an internet connection can log in and download these files, so please do not use your ftp site to store confidential data.<br />
<br />
This file system has regular backups with our [[TSM]] service and has [[Snapshots]] for easy user restores.<br />
<br />
===Usage Guidelines===<br />
<br />
Personal NAS is configured to be highly available and modest in both size and usage. Please store large or heavily accessed datasets in a dedicated project storage directory that is tuned for your application.<br />
<br />
Please avoid storing shared project data in personal storage allocations. Separating project data from personal data will simplify administration and data management for both researchers and staff.</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM/JobSubmission&diff=10413SLURM/JobSubmission2022-04-22T23:47:15Z<p>Jayid07: /* Requesting GPUs */</p>
<hr />
<div>=Job Submission=<br />
SLURM offers a variety of ways to run jobs. It is important to understand the different options available and how to request the resources required for a job in order for it to run successfully. All job submission should be done from submit nodes; any computational code should be run in a job allocation on compute nodes. The following commands outline how to allocate resources on the compute nodes and submit processes to be run on the allocated nodes.<br />
<br />
Please note that the hard maximum number of jobs that the SLURM scheduler can handle is 10000. It is best to limit your number of submitted jobs at any given time to less than half this amount in the case that another user also wants to submit a large number of jobs.<br />
<br />
'''An important notice: computational jobs run on submission nodes will be terminated. Please use the compute nodes for that purpose.'''<br />
<br />
==srun==<br />
<code>srun</code> is the command used to run a process on the compute nodes in the cluster. It works by passing it a command (this could be a script) which will be run on a compute node and then <code>srun</code> will return. <code>srun</code> accepts many command line options to specify the resources required by the command passed to it. Some common command line arguments are listed below and full documentation of all available options is available in the man page for <code>srun</code>, which can be accessed by running <code>man srun</code>.<br />
<br />
<pre><br />
username@nexuscml01:srun --qos=dpart --mem=100mb --time=1:00:00 bash -c 'echo "Hello World from" `hostname`'<br />
Hello World from tron33.umiacs.umd.edu<br />
</pre><br />
<br />
It is important to understand that <code>srun</code> is an interactive command. By default input to <code>srun</code> is broadcast to all compute nodes running your process and output from the compute nodes is redirected to <code>srun</code>. This behavior can be changed; however, '''srun will always wait for the command passed to finish before exiting, so if you start a long running process and end your terminal session, your process will stop running on the compute nodes and your job will end'''. To run a non-interactive submission that will remain running after you logout, you will need to wrap your <code>srun</code> commands in a batch script and submit it with [[#sbatch | sbatch]].<br />
<br />
===Common srun arguments===<br />
* <code>--mem=1gb</code> ''if no unit is given MB is assumed''<br />
* <code>--nodes=2</code> ''if passed to srun, the given command will be run concurrently on each node''<br />
* <code>--qos=dpart</code> ''to see the available QOS options on a cluster, run'' <code>show_qos</code><br />
* <code>--time=hh:mm:ss</code> ''time needed to run your job''<br />
* <code>--job-name=helloWorld</code><br />
* <code>--output=filename</code> ''file to redirect stdout to''<br />
* <code>--error=filename</code> ''file to redirect stderr''<br />
* <code>--partition=$PNAME</code> ''request job run in the $PNAME partition''<br />
* <code>--ntasks=2</code> ''request 2 "tasks" which map to cores on a CPU, if passed to srun the given command will be run concurrently on each core''<br />
* <code>--account=accountname</code> ''use qos specific to an account''<br />
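<br />
As an illustrative sketch combining several of the arguments above (the job name and output filename are arbitrary placeholders), a single invocation might look like:<br />
<pre><br />
username@nexuscml01:srun --partition=dpart --qos=dpart --ntasks=2 --mem=1gb --time=00:10:00 --job-name=helloWorld --output=hello.out hostname<br />
</pre><br />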
<br />
===Interactive Shell Sessions===<br />
An interactive shell session on a compute node can be useful for debugging or developing code that isn't ready to be run as a batch job. To get an interactive shell on a node, use <code>srun</code> to invoke a shell:<br />
<pre><br />
username@nexuscml00:srun --pty --qos=dpart --mem 1gb --time=01:00:00 bash<br />
username@tron33:<br />
</pre><br />
'''Please do not leave interactive shells running for long periods of time when you are not working. This blocks resources from being used by everyone else.'''<br />
<br />
==salloc==<br />
The salloc command can also be used to request resources be allocated without needing a batch script. Running salloc with a list of resources will allocate the resources you requested, create a job, and drop you into a subshell with the environment variables necessary to run commands in the newly created job allocation. When your time is up or you exit the subshell, your job allocation will be relinquished.<br />
<br />
<pre><br />
username@nexuscml00:salloc --qos=dpart -N 1 --mem=2gb --time=01:00:00<br />
salloc: Granted job allocation 159<br />
username@nexuscml00:srun /usr/bin/hostname<br />
tron33.umiacs.umd.edu<br />
username@nexuscml00:exit<br />
exit<br />
salloc: Relinquishing job allocation 159<br />
</pre><br />
<br />
'''Please note that any commands not invoked with srun will be run locally on the submit node. Please be careful when using salloc.'''<br />
<br />
==sbatch==<br />
The sbatch command allows you to write a batch script to be submitted and run non-interactively on the compute nodes. To run a simple Hello World command on the compute nodes you could write a file, helloWorld.sh with the following contents:<br />
<br />
<pre><br />
#!/bin/bash<br />
<br />
srun bash -c 'echo Hello World from `hostname`'<br />
</pre><br />
<br />
Then you need to submit the script with sbatch and request resources:<br />
<br />
<pre><br />
username@nexuscml00:sbatch --qos=dpart --mem=1gb --time=1:00:00 helloWorld.sh<br />
Submitted batch job 121<br />
</pre><br />
<br />
SLURM will return a job number that you can use to check the status of your job with squeue:<br />
<br />
<pre><br />
username@nexuscml00:squeue<br />
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)<br />
121 dpart helloWor username R 0:01 2 tron[32-33]<br />
</pre><br />
<br />
====Advanced Batch Scripts====<br />
You can also write a batch script with all of your resources/options defined in the script itself. This is useful for jobs that need to be run 10s/100s/1000s of times. You can then handle any necessary environment setup and run commands on the resources you requested by invoking commands with srun. The srun commands can also be more complex and be told to only use portions of your entire job allocation; each of these distinct srun commands makes up one "job step". The batch script will be run on the first node allocated as part of your job allocation, and each job step will be run on whatever resources you tell it to use. In the following example, we have a batch job that will request 2 nodes in the cluster. We then load a specific version of Python into the environment and submit two job steps, each one using one node. Since srun blocks until the command finishes, we use the '&' operator to background the processes so that both job steps can run at once; however, this means that we then need to use the wait command to block processing until all background processes have finished.<br />
<br />
<pre><br />
#!/bin/bash<br />
<br />
# Lines that begin with #SBATCH specify commands to be used by SLURM for scheduling<br />
<br />
#SBATCH --job-name=helloWorld # sets the job name<br />
#SBATCH --output=helloWorld.out.%j # indicates a file to redirect STDOUT to; %j is the jobid. Must be set to a file instead of a directory or else submission will fail.<br />
#SBATCH --error=helloWorld.out.%j # indicates a file to redirect STDERR to; %j is the jobid. Must be set to a file instead of a directory or else submission will fail.<br />
#SBATCH --time=00:05:00 # how long you think your job will take to complete; format=hh:mm:ss<br />
#SBATCH --qos=dpart # set QOS, this will determine what resources can be requested<br />
#SBATCH --nodes=2 # number of nodes to allocate for your job<br />
#SBATCH --ntasks=4                              # request 4 cpu cores be reserved for your job in total<br />
#SBATCH --ntasks-per-node=2 # request 2 cpu cores be reserved per node<br />
#SBATCH --mem=1gb # memory required by job; if unit is not specified MB will be assumed<br />
<br />
module load Python/2.7.9 # run any commands necessary to setup your environment<br />
<br />
srun -N 1 --mem=512mb bash -c "hostname; python --version" & # use srun to invoke commands within your job; using an '&'<br />
srun -N 1 --mem=512mb bash -c "hostname; python --version" & # will background the process allowing them to run concurrently<br />
wait # wait for any background processes to complete<br />
<br />
# once the end of the batch script is reached your job allocation will be revoked<br />
</pre><br />
<br />
Another useful thing to know is that you can pass additional arguments into your sbatch scripts on the command line and reference them as <code>${1}</code> for the first argument and so on.<br />
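<br />
For instance, a minimal sketch (the script name, argument value, and echoed message are hypothetical):<br />
<pre><br />
#!/bin/bash<br />
#SBATCH --qos=dpart<br />
#SBATCH --mem=100mb<br />
#SBATCH --time=00:05:00<br />
<br />
srun echo "Processing argument: ${1}"<br />
</pre><br />
Submitting this as <code>sbatch echoArg.sh myInput</code> passes "myInput" to the script, so the job's output file will contain "Processing argument: myInput".<br />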
<br />
====More Examples====<br />
* [[SLURM/ArrayJobs]]<br />
<br />
===scancel===<br />
The scancel command can be used to cancel job allocations or job steps that are no longer needed. It can be passed individual job IDs or an option to delete all of your jobs or jobs that meet certain criteria.<br />
*<code>scancel 255</code> ''cancel job 255''<br />
*<code>scancel 255.3</code> ''cancel job step 3 of job 255''<br />
*<code>scancel --user username --partition=dpart</code> ''cancel all jobs for username in the dpart partition''<br />
<br />
=Identifying Resources and Features=<br />
The sinfo command can show you additional features of nodes in the cluster, but you need to ask it to show some non-default options using a command such as<br />
<code>sinfo -o "%15N %10c %10m %25f %10G"</code>.<br />
<br />
<pre><br />
$ sinfo -o "%40N %8c %8m %20f %25G"<br />
NODELIST CPUS MEMORY AVAIL_FEATURES GRES<br />
gammagpu[01-03] 32 257546 rhel8,Zen,EPYC-7313 gpu:rtxa5000:8<br />
tron[00-05] 32 257538 rhel8,AMD,EPYC-7302 gpu:rtxa6000:8<br />
tron[06-09,12-15,21] 16 128525 rhel8,AMD,EPYC-7302P gpu:rtxa4000:4<br />
tron[10-11,16-20,34] 16 128525 rhel8,Zen,EPYC-7313P gpu:rtxa4000:4<br />
tron[46-53] 48 257544 rhel8,Zen,EPYC-7352 gpu:rtxa5000:8<br />
tron[22-33,35-45] 16 128525 rhel8,AMD,EPYC-7302 gpu:rtxa4000:4<br />
</pre><br />
<br />
There is also a prewritten alias <code>show_nodes</code> on all of our SLURM computing clusters that shows each node's name, number of CPUs, memory, processor type (as AVAIL_FEATURES), GRES, State, and partitions that can submit to it. <br />
<br />
You can identify further specific information about a node using [https://wiki.umiacs.umd.edu/umiacs/index.php/SLURM/ClusterStatus#scontrol scontrol] with various flags.<br />
<br />
=Requesting GPUs=<br />
If you need to do processing on a GPU, you will need to request that your job have access to GPUs just as you need to request processors or CPU cores. You will also need to make sure that you submit your job to the correct partition since nodes with GPUs are often put into their own partition to prevent the nodes from being tied up by jobs that don't utilize GPUs. In SLURM, GPUs are considered "generic resources" also known as GRES. To request some number of GPUs be reserved/available for your job you can use the flag <code>--gres=gpu:2</code> or if there are multiple types of GPUs available in the cluster and you need a specific type, you can provide the type option to the gres flag e.g. <code>--gres=gpu:k20:1</code><br />
<br />
<pre><br />
username@nexuscml00:srun --pty --partition=gpu --qos=gpu --gres=gpu:2 nvidia-smi<br />
Wed Jul 13 15:33:18 2016<br />
+------------------------------------------------------+<br />
| NVIDIA-SMI 361.28 Driver Version: 361.28 |<br />
|-------------------------------+----------------------+----------------------+<br />
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |<br />
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |<br />
|===============================+======================+======================|<br />
| 0 Tesla K20c Off | 0000:03:00.0 Off | 0 |<br />
| 30% 24C P0 48W / 225W | 11MiB / 4799MiB | 0% Default |<br />
+-------------------------------+----------------------+----------------------+<br />
| 1 Tesla K20c Off | 0000:84:00.0 Off | 0 |<br />
| 30% 23C P0 52W / 225W | 11MiB / 4799MiB | 93% Default |<br />
+-------------------------------+----------------------+----------------------+<br />
<br />
+-----------------------------------------------------------------------------+<br />
| Processes: GPU Memory |<br />
| GPU PID Type Process name Usage |<br />
|=============================================================================|<br />
| No running processes found |<br />
+-----------------------------------------------------------------------------+<br />
</pre><br />
<br />
Please note that your job will only be able to see/access the GPUs you requested. If you only need 1 GPU, please request only 1 GPU and the other one will be left available for other users:<br />
<br />
<pre><br />
username@nexuscml00:srun --pty --partition=gpu --qos=gpu --gres=gpu:k20:1 nvidia-smi<br />
Wed Jul 13 15:31:29 2016<br />
+------------------------------------------------------+<br />
| NVIDIA-SMI 361.28 Driver Version: 361.28 |<br />
|-------------------------------+----------------------+----------------------+<br />
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |<br />
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |<br />
|===============================+======================+======================|<br />
| 0 Tesla K20c Off | 0000:03:00.0 Off | 0 |<br />
| 30% 24C P0 50W / 225W | 11MiB / 4799MiB | 92% Default |<br />
+-------------------------------+----------------------+----------------------+<br />
<br />
+-----------------------------------------------------------------------------+<br />
| Processes: GPU Memory |<br />
| GPU PID Type Process name Usage |<br />
|=============================================================================|<br />
| No running processes found |<br />
+-----------------------------------------------------------------------------+<br />
</pre><br />
<br />
As with all other flags, the <code>--gres</code> flag may also be passed to [[#sbatch | sbatch]] and [[#salloc | salloc]] rather than directly to [[#srun | srun]].<br />
<br />
=MPI example=<br />
<pre><br />
#!/usr/bin/bash <br />
#SBATCH --job-name=mpi_test # Job name <br />
#SBATCH --nodes=4 # Number of nodes <br />
#SBATCH --ntasks=8 # Number of MPI ranks <br />
#SBATCH --ntasks-per-node=2 # Number of MPI ranks per node <br />
#SBATCH --ntasks-per-socket=1 # Number of tasks per processor socket on the node <br />
#SBATCH --time=00:30:00 # Time limit hrs:min:sec <br />
<br />
module load mpi <br />
<br />
srun --mpi=openmpi /nfshomes/username/testing/mpi/a.out <br />
</pre></div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM/JobSubmission&diff=10412SLURM/JobSubmission2022-04-22T23:46:47Z<p>Jayid07: /* Identifying Resources and Features */</p>
<hr />
<div>=Job Submission=<br />
SLURM offers a variety of ways to run jobs. It is important to understand the different options available and how to request the resources required for a job in order for it to run successfully. All job submission should be done from submit nodes; any computational code should be run in a job allocation on compute nodes. The following commands outline how to allocate resources on the compute nodes and submit processes to be run on the allocated nodes.<br />
<br />
Please note that the hard maximum number of jobs that the SLURM scheduler can handle is 10000. It is best to limit your number of submitted jobs at any given time to less than half this amount in the case that another user also wants to submit a large number of jobs.<br />
<br />
'''An important notice: computational jobs run on submission nodes will be terminated. Please use the compute nodes for that purpose.'''<br />
<br />
==srun==<br />
<code>srun</code> is the command used to run a process on the compute nodes in the cluster. It works by passing it a command (this could be a script) which will be run on a compute node and then <code>srun</code> will return. <code>srun</code> accepts many command line options to specify the resources required by the command passed to it. Some common command line arguments are listed below and full documentation of all available options is available in the man page for <code>srun</code>, which can be accessed by running <code>man srun</code>.<br />
<br />
<pre><br />
username@nexuscml01:srun --qos=dpart --mem=100mb --time=1:00:00 bash -c 'echo "Hello World from" `hostname`'<br />
Hello World from tron33.umiacs.umd.edu<br />
</pre><br />
<br />
It is important to understand that <code>srun</code> is an interactive command. By default input to <code>srun</code> is broadcast to all compute nodes running your process and output from the compute nodes is redirected to <code>srun</code>. This behavior can be changed; however, '''srun will always wait for the command passed to finish before exiting, so if you start a long running process and end your terminal session, your process will stop running on the compute nodes and your job will end'''. To run a non-interactive submission that will remain running after you logout, you will need to wrap your <code>srun</code> commands in a batch script and submit it with [[#sbatch | sbatch]].<br />
<br />
===Common srun arguments===<br />
* <code>--mem=1gb</code> ''if no unit is given MB is assumed''<br />
* <code>--nodes=2</code> ''if passed to srun, the given command will be run concurrently on each node''<br />
* <code>--qos=dpart</code> ''to see the available QOS options on a cluster, run'' <code>show_qos</code><br />
* <code>--time=hh:mm:ss</code> ''time needed to run your job''<br />
* <code>--job-name=helloWorld</code><br />
* <code>--output=filename</code> ''file to redirect stdout to''<br />
* <code>--error=filename</code> ''file to redirect stderr''<br />
* <code>--partition=$PNAME</code> ''request job run in the $PNAME partition''<br />
* <code>--ntasks=2</code> ''request 2 "tasks" which map to cores on a CPU, if passed to srun the given command will be run concurrently on each core''<br />
* <code>--account=accountname</code> ''use qos specific to an account''<br />
<br />
===Interactive Shell Sessions===<br />
An interactive shell session on a compute node can be useful for debugging or developing code that isn't ready to be run as a batch job. To get an interactive shell on a node, use <code>srun</code> to invoke a shell:<br />
<pre><br />
username@nexuscml00:srun --pty --qos=dpart --mem 1gb --time=01:00:00 bash<br />
username@tron33:<br />
</pre><br />
'''Please do not leave interactive shells running for long periods of time when you are not working. This blocks resources from being used by everyone else.'''<br />
<br />
==salloc==<br />
The salloc command can also be used to request resources be allocated without needing a batch script. Running salloc with a list of resources will allocate the resources you requested, create a job, and drop you into a subshell with the environment variables necessary to run commands in the newly created job allocation. When your time is up or you exit the subshell, your job allocation will be relinquished.<br />
<br />
<pre><br />
username@nexuscml00:salloc --qos=dpart -N 1 --mem=2gb --time=01:00:00<br />
salloc: Granted job allocation 159<br />
username@nexuscml00:srun /usr/bin/hostname<br />
tron33.umiacs.umd.edu<br />
username@nexuscml00:exit<br />
exit<br />
salloc: Relinquishing job allocation 159<br />
</pre><br />
<br />
'''Please note that any commands not invoked with srun will be run locally on the submit node. Please be careful when using salloc.'''<br />
<br />
==sbatch==<br />
The sbatch command allows you to write a batch script to be submitted and run non-interactively on the compute nodes. To run a simple Hello World command on the compute nodes you could write a file, helloWorld.sh with the following contents:<br />
<br />
<pre><br />
#!/bin/bash<br />
<br />
srun bash -c 'echo Hello World from `hostname`'<br />
</pre><br />
<br />
Then you need to submit the script with sbatch and request resources:<br />
<br />
<pre><br />
username@nexuscml00:sbatch --qos=dpart --mem=1gb --time=1:00:00 helloWorld.sh<br />
Submitted batch job 121<br />
</pre><br />
<br />
SLURM will return a job number that you can use to check the status of your job with squeue:<br />
<br />
<pre><br />
username@nexuscml00:squeue<br />
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)<br />
121 dpart helloWor username R 0:01 2 tron[32-33]<br />
</pre><br />
<br />
====Advanced Batch Scripts====<br />
You can also write a batch script with all of your resources/options defined in the script itself. This is useful for jobs that need to be run 10s/100s/1000s of times. You can then handle any necessary environment setup and run commands on the resources you requested by invoking commands with srun. The srun commands can also be more complex and be told to only use portions of your entire job allocation; each of these distinct srun commands makes up one "job step". The batch script will be run on the first node allocated as part of your job allocation, and each job step will be run on whatever resources you tell it to use. In the following example, we have a batch job that will request 2 nodes in the cluster. We then load a specific version of Python into the environment and submit two job steps, each one using one node. Since srun blocks until the command finishes, we use the '&' operator to background the processes so that both job steps can run at once; however, this means that we then need to use the wait command to block processing until all background processes have finished.<br />
<br />
<pre><br />
#!/bin/bash<br />
<br />
# Lines that begin with #SBATCH specify commands to be used by SLURM for scheduling<br />
<br />
#SBATCH --job-name=helloWorld # sets the job name<br />
#SBATCH --output=helloWorld.out.%j # indicates a file to redirect STDOUT to; %j is the jobid. Must be set to a file instead of a directory or else submission will fail.<br />
#SBATCH --error=helloWorld.out.%j # indicates a file to redirect STDERR to; %j is the jobid. Must be set to a file instead of a directory or else submission will fail.<br />
#SBATCH --time=00:05:00 # how long you think your job will take to complete; format=hh:mm:ss<br />
#SBATCH --qos=dpart # set QOS, this will determine what resources can be requested<br />
#SBATCH --nodes=2 # number of nodes to allocate for your job<br />
#SBATCH --ntasks=4                              # request 4 cpu cores be reserved for your job in total<br />
#SBATCH --ntasks-per-node=2 # request 2 cpu cores be reserved per node<br />
#SBATCH --mem=1gb # memory required by job; if unit is not specified MB will be assumed<br />
<br />
module load Python/2.7.9 # run any commands necessary to setup your environment<br />
<br />
srun -N 1 --mem=512mb bash -c "hostname; python --version" & # use srun to invoke commands within your job; using an '&'<br />
srun -N 1 --mem=512mb bash -c "hostname; python --version" & # will background the process allowing them to run concurrently<br />
wait # wait for any background processes to complete<br />
<br />
# once the end of the batch script is reached your job allocation will be revoked<br />
</pre><br />
<br />
Another useful thing to know is that you can pass additional arguments into your sbatch scripts on the command line and reference them as <code>${1}</code> for the first argument and so on.<br />
<br />
====More Examples====<br />
* [[SLURM/ArrayJobs]]<br />
<br />
===scancel===<br />
The scancel command can be used to cancel job allocations or job steps that are no longer needed. It can be passed individual job IDs or an option to delete all of your jobs or jobs that meet certain criteria.<br />
*<code>scancel 255</code> ''cancel job 255''<br />
*<code>scancel 255.3</code> ''cancel job step 3 of job 255''<br />
*<code>scancel --user username --partition=dpart</code> ''cancel all jobs for username in the dpart partition''<br />
<br />
=Identifying Resources and Features=<br />
The sinfo command can show you additional features of nodes in the cluster, but you need to ask it to show some non-default options using a command such as<br />
<code>sinfo -o "%15N %10c %10m %25f %10G"</code>.<br />
<br />
<pre><br />
$ sinfo -o "%40N %8c %8m %20f %25G"<br />
NODELIST CPUS MEMORY AVAIL_FEATURES GRES<br />
gammagpu[01-03] 32 257546 rhel8,Zen,EPYC-7313 gpu:rtxa5000:8<br />
tron[00-05] 32 257538 rhel8,AMD,EPYC-7302 gpu:rtxa6000:8<br />
tron[06-09,12-15,21] 16 128525 rhel8,AMD,EPYC-7302P gpu:rtxa4000:4<br />
tron[10-11,16-20,34] 16 128525 rhel8,Zen,EPYC-7313P gpu:rtxa4000:4<br />
tron[46-53] 48 257544 rhel8,Zen,EPYC-7352 gpu:rtxa5000:8<br />
tron[22-33,35-45] 16 128525 rhel8,AMD,EPYC-7302 gpu:rtxa4000:4<br />
</pre><br />
<br />
There is also a prewritten alias <code>show_nodes</code> on all of our SLURM computing clusters that shows each node's name, number of CPUs, memory, processor type (as AVAIL_FEATURES), GRES, State, and partitions that can submit to it. <br />
<br />
You can identify further specific information about a node using [https://wiki.umiacs.umd.edu/umiacs/index.php/SLURM/ClusterStatus#scontrol scontrol] with various flags.<br />
<br />
=Requesting GPUs=<br />
If you need to do processing on a GPU, you will need to request that your job have access to GPUs just as you need to request processors or CPU cores. You will also need to make sure that you submit your job to the correct partition since nodes with GPUs are often put into their own partition to prevent the nodes from being tied up by jobs that don't utilize GPUs. In SLURM, GPUs are considered "generic resources" also known as GRES. To request some number of GPUs be reserved/available for your job you can use the flag <code>--gres=gpu:2</code> or if there are multiple types of GPUs available in the cluster and you need a specific type, you can provide the type option to the gres flag e.g. <code>--gres=gpu:k20:1</code><br />
<br />
<pre><br />
username@opensub02:srun --pty --partition=gpu --qos=gpu --gres=gpu:2 nvidia-smi<br />
Wed Jul 13 15:33:18 2016<br />
+------------------------------------------------------+<br />
| NVIDIA-SMI 361.28 Driver Version: 361.28 |<br />
|-------------------------------+----------------------+----------------------+<br />
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |<br />
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |<br />
|===============================+======================+======================|<br />
| 0 Tesla K20c Off | 0000:03:00.0 Off | 0 |<br />
| 30% 24C P0 48W / 225W | 11MiB / 4799MiB | 0% Default |<br />
+-------------------------------+----------------------+----------------------+<br />
| 1 Tesla K20c Off | 0000:84:00.0 Off | 0 |<br />
| 30% 23C P0 52W / 225W | 11MiB / 4799MiB | 93% Default |<br />
+-------------------------------+----------------------+----------------------+<br />
<br />
+-----------------------------------------------------------------------------+<br />
| Processes: GPU Memory |<br />
| GPU PID Type Process name Usage |<br />
|=============================================================================|<br />
| No running processes found |<br />
+-----------------------------------------------------------------------------+<br />
</pre><br />
<br />
Please note that your job will only be able to see/access the GPUs you requested. If you only need 1 GPU, please request only 1 GPU and the other one will be left available for other users:<br />
<br />
<pre><br />
username@opensub02:srun --pty --partition=gpu --qos=gpu --gres=gpu:k20:1 nvidia-smi<br />
Wed Jul 13 15:31:29 2016<br />
+------------------------------------------------------+<br />
| NVIDIA-SMI 361.28 Driver Version: 361.28 |<br />
|-------------------------------+----------------------+----------------------+<br />
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |<br />
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |<br />
|===============================+======================+======================|<br />
| 0 Tesla K20c Off | 0000:03:00.0 Off | 0 |<br />
| 30% 24C P0 50W / 225W | 11MiB / 4799MiB | 92% Default |<br />
+-------------------------------+----------------------+----------------------+<br />
<br />
+-----------------------------------------------------------------------------+<br />
| Processes: GPU Memory |<br />
| GPU PID Type Process name Usage |<br />
|=============================================================================|<br />
| No running processes found |<br />
+-----------------------------------------------------------------------------+<br />
</pre><br />
<br />
As with all other flags, the <code>--gres</code> flag may also be passed to [[#sbatch | sbatch]] and [[#salloc | salloc]] rather than directly to [[#srun | srun]].<br />
<br />
=MPI example=<br />
<pre><br />
#!/usr/bin/bash <br />
#SBATCH --job-name=mpi_test # Job name <br />
#SBATCH --nodes=4 # Number of nodes <br />
#SBATCH --ntasks=8 # Number of MPI ranks <br />
#SBATCH --ntasks-per-node=2 # Number of MPI ranks per node <br />
#SBATCH --ntasks-per-socket=1 # Number of tasks per processor socket on the node <br />
#SBATCH --time=00:30:00 # Time limit hrs:min:sec <br />
<br />
module load mpi <br />
<br />
srun --mpi=openmpi /nfshomes/username/testing/mpi/a.out <br />
</pre></div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM/JobSubmission&diff=10411SLURM/JobSubmission2022-04-22T23:45:41Z<p>Jayid07: /* sbatch */</p>
<hr />
<div>=Job Submission=<br />
SLURM offers a variety of ways to run jobs. It is important to understand the different options available and how to request the resources required for a job in order for it to run successfully. All job submission should be done from submit nodes; any computational code should be run in a job allocation on compute nodes. The following commands outline how to allocate resources on the compute nodes and submit processes to be run on the allocated nodes.<br />
<br />
Please note that the hard maximum number of jobs that the SLURM scheduler can handle is 10000. It is best to limit your number of submitted jobs at any given time to less than half this amount in the case that another user also wants to submit a large number of jobs.<br />
<br />
'''An important notice: computational jobs run on submission nodes will be terminated. Please use the compute nodes for that purpose.'''<br />
<br />
==srun==<br />
<code>srun</code> is the command used to run a process on the compute nodes in the cluster. It works by passing it a command (this could be a script) which will be run on a compute node and then <code>srun</code> will return. <code>srun</code> accepts many command line options to specify the resources required by the command passed to it. Some common command line arguments are listed below and full documentation of all available options is available in the man page for <code>srun</code>, which can be accessed by running <code>man srun</code>.<br />
<br />
<pre><br />
username@nexuscml01:srun --qos=dpart --mem=100mb --time=1:00:00 bash -c 'echo "Hello World from" `hostname`'<br />
Hello World from tron33.umiacs.umd.edu<br />
</pre><br />
<br />
It is important to understand that <code>srun</code> is an interactive command. By default input to <code>srun</code> is broadcast to all compute nodes running your process and output from the compute nodes is redirected to <code>srun</code>. This behavior can be changed; however, '''srun will always wait for the command passed to finish before exiting, so if you start a long running process and end your terminal session, your process will stop running on the compute nodes and your job will end'''. To run a non-interactive submission that will remain running after you logout, you will need to wrap your <code>srun</code> commands in a batch script and submit it with [[#sbatch | sbatch]].<br />
<br />
===Common srun arguments===<br />
* <code>--mem=1gb</code> ''if no unit is given MB is assumed''<br />
* <code>--nodes=2</code> ''if passed to srun, the given command will be run concurrently on each node''<br />
* <code>--qos=dpart</code> ''to see the available QOS options on a cluster, run'' <code>show_qos</code><br />
* <code>--time=hh:mm:ss</code> ''time needed to run your job''<br />
* <code>--job-name=helloWorld</code><br />
* <code>--output=filename</code> ''file to redirect stdout to''<br />
* <code>--error=filename</code> ''file to redirect stderr''<br />
* <code>--partition=$PNAME</code> ''request job run in the $PNAME partition''<br />
* <code>--ntasks=2</code> ''request 2 "tasks" which map to cores on a CPU, if passed to srun the given command will be run concurrently on each core''<br />
* <code>--account=accountname</code> ''use qos specific to an account''<br />
<br />
===Interactive Shell Sessions===<br />
An interactive shell session on a compute node can be useful for debugging or developing code that isn't ready to be run as a batch job. To get an interactive shell on a node, use <code>srun</code> to invoke a shell:<br />
<pre><br />
username@nexuscml00:srun --pty --qos=dpart --mem 1gb --time=01:00:00 bash<br />
username@tron33:<br />
</pre><br />
'''Please do not leave interactive shells running for long periods of time when you are not working. This blocks resources from being used by everyone else.'''<br />
<br />
==salloc==<br />
The salloc command can also be used to request resources be allocated without needing a batch script. Running salloc with a list of resources will allocate the resources you requested, create a job, and drop you into a subshell with the environment variables necessary to run commands in the newly created job allocation. When your time is up or you exit the subshell, your job allocation will be relinquished.<br />
<br />
<pre><br />
username@nexuscml00:salloc --qos=dpart -N 1 --mem=2gb --time=01:00:00<br />
salloc: Granted job allocation 159<br />
username@nexuscml00:srun /usr/bin/hostname<br />
tron33.umiacs.umd.edu<br />
username@nexuscml00:exit<br />
exit<br />
salloc: Relinquishing job allocation 159<br />
</pre><br />
<br />
'''Please note that any commands not invoked with srun will be run locally on the submit node. Please be careful when using salloc.'''<br />
<br />
==sbatch==<br />
The sbatch command allows you to write a batch script to be submitted and run non-interactively on the compute nodes. To run a simple Hello World command on the compute nodes you could write a file, helloWorld.sh with the following contents:<br />
<br />
<pre><br />
#!/bin/bash<br />
<br />
srun bash -c 'echo Hello World from `hostname`'<br />
</pre><br />
<br />
Then you need to submit the script with sbatch and request resources:<br />
<br />
<pre><br />
username@nexuscml00:sbatch --qos=dpart --mem=1gb --time=1:00:00 helloWorld.sh<br />
Submitted batch job 121<br />
</pre><br />
<br />
SLURM will return a job number that you can use to check the status of your job with squeue:<br />
<br />
<pre><br />
username@nexuscml00:squeue<br />
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)<br />
121 dpart helloWor username R 0:01 2 tron[32-33]<br />
</pre><br />
<br />
====Advanced Batch Scripts====<br />
You can also write a batch script with all of your resources/options defined in the script itself. This is useful for jobs that need to be run 10s/100s/1000s of times. You can then handle any necessary environment setup and run commands on the resources you requested by invoking commands with srun. The srun commands can also be more complex and be told to only use portions of your entire job allocation; each of these distinct srun commands makes up one "job step". The batch script will be run on the first node allocated as part of your job allocation, and each job step will be run on whatever resources you tell it to use. In the following example, we have a batch job that will request 2 nodes in the cluster. We then load a specific version of Python into the environment and submit two job steps, each one using one node. Since srun blocks until the command finishes, we use the '&' operator to background the processes so that both job steps can run at once; however, this means that we then need to use the wait command to block processing until all background processes have finished.<br />
<br />
<pre><br />
#!/bin/bash<br />
<br />
# Lines that begin with #SBATCH specify commands to be used by SLURM for scheduling<br />
<br />
#SBATCH --job-name=helloWorld # sets the job name<br />
#SBATCH --output=helloWorld.out.%j # indicates a file to redirect STDOUT to; %j is the jobid. Must be set to a file instead of a directory or else submission will fail.<br />
#SBATCH --error=helloWorld.out.%j # indicates a file to redirect STDERR to; %j is the jobid. Must be set to a file instead of a directory or else submission will fail.<br />
#SBATCH --time=00:05:00 # how long you think your job will take to complete; format=hh:mm:ss<br />
#SBATCH --qos=dpart # set QOS, this will determine what resources can be requested<br />
#SBATCH --nodes=2 # number of nodes to allocate for your job<br />
#SBATCH --ntasks=4                              # request 4 cpu cores be reserved for your job in total<br />
#SBATCH --ntasks-per-node=2 # request 2 cpu cores be reserved per node<br />
#SBATCH --mem=1gb # memory required by job; if unit is not specified MB will be assumed<br />
<br />
module load Python/2.7.9 # run any commands necessary to setup your environment<br />
<br />
srun -N 1 --mem=512mb bash -c "hostname; python --version" & # use srun to invoke commands within your job; using an '&'<br />
srun -N 1 --mem=512mb bash -c "hostname; python --version" & # will background the process allowing them to run concurrently<br />
wait # wait for any background processes to complete<br />
<br />
# once the end of the batch script is reached your job allocation will be revoked<br />
</pre><br />
<br />
Another useful thing to know is that you can pass additional arguments into your sbatch scripts on the command line and reference them as <code>${1}</code> for the first argument and so on.<br />
<br />
====More Examples====<br />
* [[SLURM/ArrayJobs]]<br />
<br />
===scancel===<br />
The scancel command can be used to cancel job allocations or job steps that are no longer needed. It can be passed individual job IDs or an option to delete all of your jobs or jobs that meet certain criteria.<br />
*<code>scancel 255</code> ''cancel job 255''<br />
*<code>scancel 255.3</code> ''cancel job step 3 of job 255''<br />
*<code>scancel --user username --partition=dpart</code> ''cancel all jobs for username in the dpart partition''<br />
<br />
=Identifying Resources and Features=<br />
The sinfo command can show you additional features of nodes in the cluster, but you need to ask it to show some non-default options using a command such as<br />
<code>sinfo -o "%15N %10c %10m %25f %10G"</code>.<br />
<br />
<pre><br />
$ sinfo -o "%40N %8c %8m %20f %25G"<br />
NODELIST CPUS MEMORY AVAIL_FEATURES GRES<br />
openlab08 32 128718 Xeon,E5-2690,rhel7 gpu:m40:1,gpu:k20:2<br />
thalesgpu[00,07-08] 32 257588 rhel8 gpu:teslak80:2<br />
thalesgpu01 32 257588 rhel8 gpu:teslak40m:2<br />
thalesgpu[02-03,05-06] 40 257557+ rhel8 gpu:titanX:4<br />
thalesgpu09 88 515588 rhel8 gpu:gtx1080ti:4<br />
openlab[20-23,25,27-28] 8+ 23937 Xeon,x5560,rhel7 (null)<br />
openlab[31-33] 64 257757 Opteron,6274,rhel7 (null)<br />
openlab[39-48,50,52-61] 16 23936+ Xeon,E5530,rhel7 (null)<br />
rinzler00 48 128253 AMD,EPYC-7402,rhel8 (null)<br />
thalesgpu04 40 257557 rhel8 gpu:titanXp:4<br />
thalesgpu10 40 515635 rhel8 gpu:m40:2<br />
</pre><br />
<br />
There is also a prewritten alias <code>show_nodes</code> on all of our SLURM computing clusters that shows each node's name, number of CPUs, memory, processor type (as AVAIL_FEATURES), GRES, State, and partitions that can submit to it. <br />
<br />
You can identify further specific information about a node using [https://wiki.umiacs.umd.edu/umiacs/index.php/SLURM/ClusterStatus#scontrol scontrol] with various flags.<br />
<br />
=Requesting GPUs=<br />
If you need to do processing on a GPU, you will need to request that your job have access to GPUs just as you need to request processors or CPU cores. You will also need to make sure that you submit your job to the correct partition since nodes with GPUs are often put into their own partition to prevent the nodes from being tied up by jobs that don't utilize GPUs. In SLURM, GPUs are considered "generic resources" also known as GRES. To request some number of GPUs be reserved/available for your job you can use the flag <code>--gres=gpu:2</code> or if there are multiple types of GPUs available in the cluster and you need a specific type, you can provide the type option to the gres flag e.g. <code>--gres=gpu:k20:1</code><br />
<br />
<pre><br />
username@opensub02:srun --pty --partition=gpu --qos=gpu --gres=gpu:2 nvidia-smi<br />
Wed Jul 13 15:33:18 2016<br />
+------------------------------------------------------+<br />
| NVIDIA-SMI 361.28 Driver Version: 361.28 |<br />
|-------------------------------+----------------------+----------------------+<br />
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |<br />
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |<br />
|===============================+======================+======================|<br />
| 0 Tesla K20c Off | 0000:03:00.0 Off | 0 |<br />
| 30% 24C P0 48W / 225W | 11MiB / 4799MiB | 0% Default |<br />
+-------------------------------+----------------------+----------------------+<br />
| 1 Tesla K20c Off | 0000:84:00.0 Off | 0 |<br />
| 30% 23C P0 52W / 225W | 11MiB / 4799MiB | 93% Default |<br />
+-------------------------------+----------------------+----------------------+<br />
<br />
+-----------------------------------------------------------------------------+<br />
| Processes: GPU Memory |<br />
| GPU PID Type Process name Usage |<br />
|=============================================================================|<br />
| No running processes found |<br />
+-----------------------------------------------------------------------------+<br />
</pre><br />
<br />
Please note that your job will only be able to see/access the GPUs you requested. If you only need 1 GPU, please request only 1 GPU and the other one will be left available for other users:<br />
<br />
<pre><br />
username@opensub02:srun --pty --partition=gpu --qos=gpu --gres=gpu:k20:1 nvidia-smi<br />
Wed Jul 13 15:31:29 2016<br />
+------------------------------------------------------+<br />
| NVIDIA-SMI 361.28 Driver Version: 361.28 |<br />
|-------------------------------+----------------------+----------------------+<br />
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |<br />
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |<br />
|===============================+======================+======================|<br />
| 0 Tesla K20c Off | 0000:03:00.0 Off | 0 |<br />
| 30% 24C P0 50W / 225W | 11MiB / 4799MiB | 92% Default |<br />
+-------------------------------+----------------------+----------------------+<br />
<br />
+-----------------------------------------------------------------------------+<br />
| Processes: GPU Memory |<br />
| GPU PID Type Process name Usage |<br />
|=============================================================================|<br />
| No running processes found |<br />
+-----------------------------------------------------------------------------+<br />
</pre><br />
<br />
As with all other flags, the <code>--gres</code> flag may also be passed to [[#sbatch | sbatch]] and [[#salloc | salloc]] rather than directly to [[#srun | srun]].<br />
<br />
=MPI example=<br />
<pre><br />
#!/usr/bin/bash <br />
#SBATCH --job-name=mpi_test # Job name <br />
#SBATCH --nodes=4 # Number of nodes <br />
#SBATCH --ntasks=8 # Number of MPI ranks <br />
#SBATCH --ntasks-per-node=2 # Number of MPI ranks per node <br />
#SBATCH --ntasks-per-socket=1 # Number of tasks per processor socket on the node <br />
#SBATCH --time=00:30:00 # Time limit hrs:min:sec <br />
<br />
module load mpi <br />
<br />
srun --mpi=openmpi /nfshomes/username/testing/mpi/a.out <br />
</pre></div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM/JobSubmission&diff=10410SLURM/JobSubmission2022-04-22T23:45:14Z<p>Jayid07: /* salloc */</p>
<hr />
<div>=Job Submission=<br />
SLURM offers a variety of ways to run jobs. It is important to understand the different options available and how to request the resources required for a job in order for it to run successfully. All job submission should be done from submit nodes; any computational code should be run in a job allocation on compute nodes. The following commands outline how to allocate resources on the compute nodes and submit processes to be run on the allocated nodes.<br />
<br />
Please note that the hard maximum number of jobs that the SLURM scheduler can handle is 10000. It is best to limit your number of submitted jobs at any given time to less than half this amount in the case that another user also wants to submit a large number of jobs.<br />
<br />
'''An important notice: computational jobs run on submission nodes will be terminated. Please use the compute nodes for that purpose.'''<br />
<br />
==srun==<br />
<code>srun</code> is the command used to run a process on the compute nodes in the cluster. You pass it a command (which could be a script); the command is run on a compute node, and <code>srun</code> returns once it finishes. <code>srun</code> accepts many command line options to specify the resources required by the command passed to it. Some common command line arguments are listed below, and full documentation of all available options is available in the man page for <code>srun</code>, which can be accessed by running <code>man srun</code>.<br />
<br />
<pre><br />
username@nexuscml01:srun --qos=dpart --mem=100mb --time=1:00:00 bash -c 'echo "Hello World from" `hostname`'<br />
Hello World from tron33.umiacs.umd.edu<br />
</pre><br />
<br />
It is important to understand that <code>srun</code> is an interactive command. By default input to <code>srun</code> is broadcast to all compute nodes running your process and output from the compute nodes is redirected to <code>srun</code>. This behavior can be changed; however, '''srun will always wait for the command passed to finish before exiting, so if you start a long running process and end your terminal session, your process will stop running on the compute nodes and your job will end'''. To run a non-interactive submission that will remain running after you logout, you will need to wrap your <code>srun</code> commands in a batch script and submit it with [[#sbatch | sbatch]].<br />
<br />
===Common srun arguments===<br />
* <code>--mem=1gb</code> ''if no unit is given MB is assumed''<br />
* <code>--nodes=2</code> ''if passed to srun, the given command will be run concurrently on each node (see the example after this list)''<br />
* <code>--qos=dpart</code> ''to see the available QOS options on a cluster, run'' <code>show_qos</code><br />
* <code>--time=hh:mm:ss</code> ''time needed to run your job''<br />
* <code>--job-name=helloWorld</code><br />
* <code>--output=filename</code> ''file to redirect stdout to''<br />
* <code>--error=filename</code> ''file to redirect stderr to''<br />
* <code>--partition=$PNAME</code> ''request job run in the $PNAME partition''<br />
* <code>--ntasks=2</code> ''request 2 "tasks" which map to cores on a CPU, if passed to srun the given command will be run concurrently on each core''<br />
* <code>--account=accountname</code> ''use qos specific to an account''<br />
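<br />
As a concrete illustration of the <code>--nodes</code> behavior noted in the list above, a command like the following would run <code>hostname</code> once on each of the two allocated nodes (the node names in the output are just illustrative; you will get whichever nodes the scheduler assigns):<br />
<pre><br />
username@nexuscml01:srun --qos=dpart --nodes=2 --mem=100mb --time=00:05:00 hostname<br />
tron00.umiacs.umd.edu<br />
tron01.umiacs.umd.edu<br />
</pre><br />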
<br />
===Interactive Shell Sessions===<br />
An interactive shell session on a compute node can be useful for debugging or developing code that isn't ready to be run as a batch job. To get an interactive shell on a node, use <code>srun</code> to invoke a shell:<br />
<pre><br />
username@nexuscml00:srun --pty --qos=dpart --mem 1gb --time=01:00:00 bash<br />
username@tron33:<br />
</pre><br />
'''Please do not leave interactive shells running for long periods of time when you are not working. This blocks resources from being used by everyone else.'''<br />
<br />
==salloc==<br />
The salloc command can also be used to request resources be allocated without needing a batch script. Running salloc with a list of resources will allocate the resources you requested, create a job, and drop you into a subshell with the environment variables necessary to run commands in the newly created job allocation. When your time is up or you exit the subshell, your job allocation will be relinquished.<br />
<br />
<pre><br />
username@nexuscml00:salloc --qos=dpart -N 1 --mem=2gb --time=01:00:00<br />
salloc: Granted job allocation 159<br />
username@nexuscml00:srun /usr/bin/hostname<br />
tron33.umiacs.umd.edu<br />
username@nexuscml00:exit<br />
exit<br />
salloc: Relinquishing job allocation 159<br />
</pre><br />
<br />
'''Please note that any commands not invoked with srun will be run locally on the submit node. Please be careful when using salloc.'''<br />
<br />
==sbatch==<br />
The sbatch command allows you to write a batch script to be submitted and run non-interactively on the compute nodes. To run a simple Hello World command on the compute nodes you could write a file, helloWorld.sh with the following contents:<br />
<br />
<pre><br />
#!/bin/bash<br />
<br />
srun bash -c 'echo Hello World from `hostname`'<br />
</pre><br />
<br />
Then you need to submit the script with sbatch and request resources:<br />
<br />
<pre><br />
username@opensub02:sbatch --qos=dpart --mem=1gb --time=1:00:00 helloWorld.sh<br />
Submitted batch job 121<br />
</pre><br />
<br />
SLURM will return a job number that you can use to check the status of your job with squeue:<br />
<br />
<pre><br />
username@opensub02:squeue<br />
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)<br />
121 dpart helloWor username R 0:01 2 openlab[32-33]<br />
</pre><br />
<br />
====Advanced Batch Scripts====<br />
You can also write a batch script with all of your resources/options defined in the script itself. This is useful for jobs that need to be run 10s/100s/1000s of times. You can then handle any necessary environment setup and run commands on the resources you requested by invoking commands with srun. The srun commands can also be more complex and be told to use only portions of your entire job allocation; each of these distinct srun commands makes up one "job step". The batch script will be run on the first node allocated as part of your job allocation, and each job step will be run on whatever resources you tell it to use. In the following example, we have a batch job that requests 2 nodes in the cluster. We then load a specific version of Python into the environment and submit two job steps, each one using one node. Since srun blocks until the command finishes, we use the '&' operator to background each process so that both job steps can run at once; however, this means that we then need to use the wait command to block until all background processes have finished.<br />
<br />
<pre><br />
#!/bin/bash<br />
<br />
# Lines that begin with #SBATCH specify commands to be used by SLURM for scheduling<br />
<br />
#SBATCH --job-name=helloWorld # sets the job name<br />
#SBATCH --output=helloWorld.out.%j # indicates a file to redirect STDOUT to; %j is the jobid. Must be set to a file instead of a directory or else submission will fail.<br />
#SBATCH --error=helloWorld.out.%j # indicates a file to redirect STDERR to; %j is the jobid. Must be set to a file instead of a directory or else submission will fail.<br />
#SBATCH --time=00:05:00 # how long you think your job will take to complete; format=hh:mm:ss<br />
#SBATCH --qos=dpart # set QOS, this will determine what resources can be requested<br />
#SBATCH --nodes=2 # number of nodes to allocate for your job<br />
#SBATCH --ntasks=4 # request 4 cpu cores be reserved for your job in total <br />
#SBATCH --ntasks-per-node=2 # request 2 cpu cores be reserved per node<br />
#SBATCH --mem=1gb # memory required by job; if unit is not specified MB will be assumed<br />
<br />
module load Python/2.7.9 # run any commands necessary to setup your environment<br />
<br />
srun -N 1 --mem=512mb bash -c "hostname; python --version" & # use srun to invoke commands within your job; using an '&'<br />
srun -N 1 --mem=512mb bash -c "hostname; python --version" & # will background the process allowing them to run concurrently<br />
wait # wait for any background processes to complete<br />
<br />
# once the end of the batch script is reached your job allocation will be revoked<br />
</pre><br />
<br />
Another useful thing to know is that you can pass additional arguments to your sbatch script on the command line and reference them inside the script as <code>${1}</code> for the first argument, <code>${2}</code> for the second, and so on, as in the sketch below.<br />
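<br />
A minimal sketch of this (the script contents and the argument value are made up purely for illustration):<br />
<pre><br />
#!/bin/bash<br />
#SBATCH --job-name=argDemo<br />
#SBATCH --time=00:05:00<br />
#SBATCH --mem=100mb<br />
<br />
# ${1} expands to the first argument given after the script name on the sbatch command line<br />
srun bash -c "echo Running with input: ${1}"<br />
</pre><br />
Submitting it as <code>sbatch --qos=dpart argDemo.sh myDataset</code> would cause <code>${1}</code> to expand to <code>myDataset</code> inside the script.<br />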
<br />
====More Examples====<br />
* [[SLURM/ArrayJobs]]<br />
<br />
===scancel===<br />
The scancel command can be used to cancel job allocations or job steps that are no longer needed. It can be passed individual job IDs or an option to delete all of your jobs or jobs that meet certain criteria.<br />
*<code>scancel 255</code> ''cancel job 255''<br />
*<code>scancel 255.3</code> ''cancel job step 3 of job 255''<br />
*<code>scancel --user username --partition=dpart</code> ''cancel all jobs for username in the dpart partition''<br />
<br />
=Identifying Resources and Features=<br />
The sinfo command can show you additional features of nodes in the cluster, but you need to ask it to show some non-default options using a command like this:<br />
<code>sinfo -o "%15N %10c %10m %25f %10G"</code>.<br />
<br />
<pre><br />
$ sinfo -o "%40N %8c %8m %20f %25G"<br />
NODELIST CPUS MEMORY AVAIL_FEATURES GRES<br />
openlab08 32 128718 Xeon,E5-2690,rhel7 gpu:m40:1,gpu:k20:2<br />
thalesgpu[00,07-08] 32 257588 rhel8 gpu:teslak80:2<br />
thalesgpu01 32 257588 rhel8 gpu:teslak40m:2<br />
thalesgpu[02-03,05-06] 40 257557+ rhel8 gpu:titanX:4<br />
thalesgpu09 88 515588 rhel8 gpu:gtx1080ti:4<br />
openlab[20-23,25,27-28] 8+ 23937 Xeon,x5560,rhel7 (null)<br />
openlab[31-33] 64 257757 Opteron,6274,rhel7 (null)<br />
openlab[39-48,50,52-61] 16 23936+ Xeon,E5530,rhel7 (null)<br />
rinzler00 48 128253 AMD,EPYC-7402,rhel8 (null)<br />
thalesgpu04 40 257557 rhel8 gpu:titanXp:4<br />
thalesgpu10 40 515635 rhel8 gpu:m40:2<br />
</pre><br />
<br />
There is also a prewritten alias <code>show_nodes</code> on all of our SLURM computing clusters that shows each node's name, number of CPUs, memory, processor type (as AVAIL_FEATURES), GRES, State, and partitions that can submit to it. <br />
<br />
You can identify further specific information about a node using [https://wiki.umiacs.umd.edu/umiacs/index.php/SLURM/ClusterStatus#scontrol scontrol] with various flags.<br />
<br />
=Requesting GPUs=<br />
If you need to do processing on a GPU, you will need to request that your job have access to GPUs just as you need to request processors or CPU cores. You will also need to make sure that you submit your job to the correct partition since nodes with GPUs are often put into their own partition to prevent the nodes from being tied up by jobs that don't utilize GPUs. In SLURM, GPUs are considered "generic resources", also known as GRES. To request that some number of GPUs be reserved/available for your job, you can use the flag <code>--gres=gpu:2</code>. If there are multiple types of GPUs available in the cluster and you need a specific type, you can provide the type option to the gres flag, e.g. <code>--gres=gpu:k20:1</code><br />
<br />
<pre><br />
username@opensub02:srun --pty --partition=gpu --qos=gpu --gres=gpu:2 nvidia-smi<br />
Wed Jul 13 15:33:18 2016<br />
+------------------------------------------------------+<br />
| NVIDIA-SMI 361.28 Driver Version: 361.28 |<br />
|-------------------------------+----------------------+----------------------+<br />
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |<br />
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |<br />
|===============================+======================+======================|<br />
| 0 Tesla K20c Off | 0000:03:00.0 Off | 0 |<br />
| 30% 24C P0 48W / 225W | 11MiB / 4799MiB | 0% Default |<br />
+-------------------------------+----------------------+----------------------+<br />
| 1 Tesla K20c Off | 0000:84:00.0 Off | 0 |<br />
| 30% 23C P0 52W / 225W | 11MiB / 4799MiB | 93% Default |<br />
+-------------------------------+----------------------+----------------------+<br />
<br />
+-----------------------------------------------------------------------------+<br />
| Processes: GPU Memory |<br />
| GPU PID Type Process name Usage |<br />
|=============================================================================|<br />
| No running processes found |<br />
+-----------------------------------------------------------------------------+<br />
</pre><br />
<br />
Please note that your job will only be able to see/access the GPUs you requested. If you only need 1 GPU, please request only 1 GPU and the other one will be left available for other users:<br />
<br />
<pre><br />
username@opensub02:srun --pty --partition=gpu --qos=gpu --gres=gpu:k20:1 nvidia-smi<br />
Wed Jul 13 15:31:29 2016<br />
+------------------------------------------------------+<br />
| NVIDIA-SMI 361.28 Driver Version: 361.28 |<br />
|-------------------------------+----------------------+----------------------+<br />
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |<br />
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |<br />
|===============================+======================+======================|<br />
| 0 Tesla K20c Off | 0000:03:00.0 Off | 0 |<br />
| 30% 24C P0 50W / 225W | 11MiB / 4799MiB | 92% Default |<br />
+-------------------------------+----------------------+----------------------+<br />
<br />
+-----------------------------------------------------------------------------+<br />
| Processes: GPU Memory |<br />
| GPU PID Type Process name Usage |<br />
|=============================================================================|<br />
| No running processes found |<br />
+-----------------------------------------------------------------------------+<br />
</pre><br />
<br />
As with all other flags, the <code>--gres</code> flag may also be passed to [[#sbatch | sbatch]] and [[#salloc | salloc]] rather than directly to [[#srun | srun]].<br />
<br />
=MPI example=<br />
<pre><br />
#!/usr/bin/bash <br />
#SBATCH --job-name=mpi_test # Job name <br />
#SBATCH --nodes=4 # Number of nodes <br />
#SBATCH --ntasks=8 # Number of MPI ranks <br />
#SBATCH --ntasks-per-node=2 # Number of MPI ranks per node <br />
#SBATCH --ntasks-per-socket=1 # Number of tasks per processor socket on the node <br />
#SBATCH --time=00:30:00 # Time limit hrs:min:sec <br />
<br />
module load mpi <br />
<br />
srun --mpi=openmpi /nfshomes/username/testing/mpi/a.out <br />
</pre></div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM/JobSubmission&diff=10409SLURM/JobSubmission2022-04-22T23:44:43Z<p>Jayid07: /* srun */</p>
<hr />
<div>=Job Submission=<br />
SLURM offers a variety of ways to run jobs. It is important to understand the different options available and how to request the resources required for a job in order for it to run successfully. All job submission should be done from submit nodes; any computational code should be run in a job allocation on compute nodes. The following commands outline how to allocate resources on the compute nodes and submit processes to be run on the allocated nodes.<br />
<br />
Please note that the hard maximum number of jobs that the SLURM scheduler can handle is 10000. It is best to limit your number of submitted jobs at any given time to less than half this amount in the case that another user also wants to submit a large number of jobs.<br />
<br />
'''An important notice: computational jobs run on submission nodes will be terminated. Please use the compute nodes for that purpose.'''<br />
<br />
==srun==<br />
<code>srun</code> is the command used to run a process on the compute nodes in the cluster. You pass it a command (which could be a script); the command is run on a compute node, and <code>srun</code> returns once it finishes. <code>srun</code> accepts many command line options to specify the resources required by the command passed to it. Some common command line arguments are listed below, and full documentation of all available options is available in the man page for <code>srun</code>, which can be accessed by running <code>man srun</code>.<br />
<br />
<pre><br />
username@nexuscml01:srun --qos=dpart --mem=100mb --time=1:00:00 bash -c 'echo "Hello World from" `hostname`'<br />
Hello World from tron33.umiacs.umd.edu<br />
</pre><br />
<br />
It is important to understand that <code>srun</code> is an interactive command. By default input to <code>srun</code> is broadcast to all compute nodes running your process and output from the compute nodes is redirected to <code>srun</code>. This behavior can be changed; however, '''srun will always wait for the command passed to finish before exiting, so if you start a long running process and end your terminal session, your process will stop running on the compute nodes and your job will end'''. To run a non-interactive submission that will remain running after you logout, you will need to wrap your <code>srun</code> commands in a batch script and submit it with [[#sbatch | sbatch]].<br />
<br />
===Common srun arguments===<br />
* <code>--mem=1gb</code> ''if no unit is given MB is assumed''<br />
* <code>--nodes=2</code> ''if passed to srun, the given command will be run concurrently on each node''<br />
* <code>--qos=dpart</code> ''to see the available QOS options on a cluster, run'' <code>show_qos</code><br />
* <code>--time=hh:mm:ss</code> ''time needed to run your job''<br />
* <code>--job-name=helloWorld</code><br />
* <code>--output=filename</code> ''file to redirect stdout to''<br />
* <code>--error=filename</code> ''file to redirect stderr to''<br />
* <code>--partition=$PNAME</code> ''request job run in the $PNAME partition''<br />
* <code>--ntasks=2</code> ''request 2 "tasks" which map to cores on a CPU, if passed to srun the given command will be run concurrently on each core''<br />
* <code>--account=accountname</code> ''use qos specific to an account''<br />
<br />
===Interactive Shell Sessions===<br />
An interactive shell session on a compute node can be useful for debugging or developing code that isn't ready to be run as a batch job. To get an interactive shell on a node, use <code>srun</code> to invoke a shell:<br />
<pre><br />
username@nexuscml00:srun --pty --qos=dpart --mem 1gb --time=01:00:00 bash<br />
username@tron33:<br />
</pre><br />
'''Please do not leave interactive shells running for long periods of time when you are not working. This blocks resources from being used by everyone else.'''<br />
<br />
==salloc==<br />
The salloc command can also be used to request resources be allocated without needing a batch script. Running salloc with a list of resources will allocate the resources you requested, create a job, and drop you into a subshell with the environment variables necessary to run commands in the newly created job allocation. When your time is up or you exit the subshell, your job allocation will be relinquished.<br />
<br />
<pre><br />
username@opensub02:salloc --qos=dpart -N 1 --mem=2gb --time=01:00:00<br />
salloc: Granted job allocation 159<br />
username@opensub02:srun /usr/bin/hostname<br />
openlab33.umiacs.umd.edu<br />
username@opensub02:exit<br />
exit<br />
salloc: Relinquishing job allocation 159<br />
</pre><br />
<br />
'''Please note that any commands not invoked with srun will be run locally on the submit node. Please be careful when using salloc.'''<br />
<br />
==sbatch==<br />
The sbatch command allows you to write a batch script to be submitted and run non-interactively on the compute nodes. To run a simple Hello World command on the compute nodes you could write a file, helloWorld.sh with the following contents:<br />
<br />
<pre><br />
#!/bin/bash<br />
<br />
srun bash -c 'echo Hello World from `hostname`'<br />
</pre><br />
<br />
Then you need to submit the script with sbatch and request resources:<br />
<br />
<pre><br />
username@opensub02:sbatch --qos=dpart --mem=1gb --time=1:00:00 helloWorld.sh<br />
Submitted batch job 121<br />
</pre><br />
<br />
SLURM will return a job number that you can use to check the status of your job with squeue:<br />
<br />
<pre><br />
username@opensub02:squeue<br />
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)<br />
121 dpart helloWor username R 0:01 2 openlab[32-33]<br />
</pre><br />
<br />
====Advanced Batch Scripts====<br />
You can also write a batch script with all of your resources/options defined in the script itself. This is useful for jobs that need to be run 10s/100s/1000s of times. You can then handle any necessary environment setup and run commands on the resources you requested by invoking commands with srun. The srun commands can also be more complex and be told to use only portions of your entire job allocation; each of these distinct srun commands makes up one "job step". The batch script will be run on the first node allocated as part of your job allocation, and each job step will be run on whatever resources you tell it to use. In the following example, we have a batch job that requests 2 nodes in the cluster. We then load a specific version of Python into the environment and submit two job steps, each one using one node. Since srun blocks until the command finishes, we use the '&' operator to background each process so that both job steps can run at once; however, this means that we then need to use the wait command to block until all background processes have finished.<br />
<br />
<pre><br />
#!/bin/bash<br />
<br />
# Lines that begin with #SBATCH specify commands to be used by SLURM for scheduling<br />
<br />
#SBATCH --job-name=helloWorld # sets the job name<br />
#SBATCH --output=helloWorld.out.%j # indicates a file to redirect STDOUT to; %j is the jobid. Must be set to a file instead of a directory or else submission will fail.<br />
#SBATCH --error=helloWorld.out.%j # indicates a file to redirect STDERR to; %j is the jobid. Must be set to a file instead of a directory or else submission will fail.<br />
#SBATCH --time=00:05:00 # how long you think your job will take to complete; format=hh:mm:ss<br />
#SBATCH --qos=dpart # set QOS, this will determine what resources can be requested<br />
#SBATCH --nodes=2 # number of nodes to allocate for your job<br />
#SBATCH --ntasks=4 # request 4 cpu cores be reserved for your job in total <br />
#SBATCH --ntasks-per-node=2 # request 2 cpu cores be reserved per node<br />
#SBATCH --mem=1gb # memory required by job; if unit is not specified MB will be assumed<br />
<br />
module load Python/2.7.9 # run any commands necessary to setup your environment<br />
<br />
srun -N 1 --mem=512mb bash -c "hostname; python --version" & # use srun to invoke commands within your job; using an '&'<br />
srun -N 1 --mem=512mb bash -c "hostname; python --version" & # will background the process allowing them to run concurrently<br />
wait # wait for any background processes to complete<br />
<br />
# once the end of the batch script is reached your job allocation will be revoked<br />
</pre><br />
<br />
Another useful thing to know is that you can pass additional arguments into your sbatch scripts on the command line and reference them as <code>${1}</code> for the first argument and so on.<br />
<br />
====More Examples====<br />
* [[SLURM/ArrayJobs]]<br />
<br />
===scancel===<br />
The scancel command can be used to cancel job allocations or job steps that are no longer needed. It can be passed individual job IDs or an option to delete all of your jobs or jobs that meet certain criteria.<br />
*<code>scancel 255</code> ''cancel job 255''<br />
*<code>scancel 255.3</code> ''cancel job step 3 of job 255''<br />
*<code>scancel --user username --partition=dpart</code> ''cancel all jobs for username in the dpart partition''<br />
<br />
=Identifying Resources and Features=<br />
The sinfo command can show you additional features of nodes in the cluster, but you need to ask it to show some non-default options using a command like this:<br />
<code>sinfo -o "%15N %10c %10m %25f %10G"</code>.<br />
<br />
<pre><br />
$ sinfo -o "%40N %8c %8m %20f %25G"<br />
NODELIST CPUS MEMORY AVAIL_FEATURES GRES<br />
openlab08 32 128718 Xeon,E5-2690,rhel7 gpu:m40:1,gpu:k20:2<br />
thalesgpu[00,07-08] 32 257588 rhel8 gpu:teslak80:2<br />
thalesgpu01 32 257588 rhel8 gpu:teslak40m:2<br />
thalesgpu[02-03,05-06] 40 257557+ rhel8 gpu:titanX:4<br />
thalesgpu09 88 515588 rhel8 gpu:gtx1080ti:4<br />
openlab[20-23,25,27-28] 8+ 23937 Xeon,x5560,rhel7 (null)<br />
openlab[31-33] 64 257757 Opteron,6274,rhel7 (null)<br />
openlab[39-48,50,52-61] 16 23936+ Xeon,E5530,rhel7 (null)<br />
rinzler00 48 128253 AMD,EPYC-7402,rhel8 (null)<br />
thalesgpu04 40 257557 rhel8 gpu:titanXp:4<br />
thalesgpu10 40 515635 rhel8 gpu:m40:2<br />
</pre><br />
<br />
There is also a prewritten alias <code>show_nodes</code> on all of our SLURM computing clusters that shows each node's name, number of CPUs, memory, processor type (as AVAIL_FEATURES), GRES, State, and partitions that can submit to it. <br />
<br />
You can identify further specific information about a node using [https://wiki.umiacs.umd.edu/umiacs/index.php/SLURM/ClusterStatus#scontrol scontrol] with various flags.<br />
<br />
=Requesting GPUs=<br />
If you need to do processing on a GPU, you will need to request that your job have access to GPUs just as you need to request processors or CPU cores. You will also need to make sure that you submit your job to the correct partition since nodes with GPUs are often put into their own partition to prevent the nodes from being tied up by jobs that don't utilize GPUs. In SLURM, GPUs are considered "generic resources", also known as GRES. To request that some number of GPUs be reserved/available for your job, you can use the flag <code>--gres=gpu:2</code>. If there are multiple types of GPUs available in the cluster and you need a specific type, you can provide the type option to the gres flag, e.g. <code>--gres=gpu:k20:1</code><br />
<br />
<pre><br />
username@opensub02:srun --pty --partition=gpu --qos=gpu --gres=gpu:2 nvidia-smi<br />
Wed Jul 13 15:33:18 2016<br />
+------------------------------------------------------+<br />
| NVIDIA-SMI 361.28 Driver Version: 361.28 |<br />
|-------------------------------+----------------------+----------------------+<br />
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |<br />
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |<br />
|===============================+======================+======================|<br />
| 0 Tesla K20c Off | 0000:03:00.0 Off | 0 |<br />
| 30% 24C P0 48W / 225W | 11MiB / 4799MiB | 0% Default |<br />
+-------------------------------+----------------------+----------------------+<br />
| 1 Tesla K20c Off | 0000:84:00.0 Off | 0 |<br />
| 30% 23C P0 52W / 225W | 11MiB / 4799MiB | 93% Default |<br />
+-------------------------------+----------------------+----------------------+<br />
<br />
+-----------------------------------------------------------------------------+<br />
| Processes: GPU Memory |<br />
| GPU PID Type Process name Usage |<br />
|=============================================================================|<br />
| No running processes found |<br />
+-----------------------------------------------------------------------------+<br />
</pre><br />
<br />
Please note that your job will only be able to see/access the GPUs you requested. If you only need 1 GPU, please request only 1 GPU and the other one will be left available for other users:<br />
<br />
<pre><br />
username@opensub02:srun --pty --partition=gpu --qos=gpu --gres=gpu:k20:1 nvidia-smi<br />
Wed Jul 13 15:31:29 2016<br />
+------------------------------------------------------+<br />
| NVIDIA-SMI 361.28 Driver Version: 361.28 |<br />
|-------------------------------+----------------------+----------------------+<br />
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |<br />
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |<br />
|===============================+======================+======================|<br />
| 0 Tesla K20c Off | 0000:03:00.0 Off | 0 |<br />
| 30% 24C P0 50W / 225W | 11MiB / 4799MiB | 92% Default |<br />
+-------------------------------+----------------------+----------------------+<br />
<br />
+-----------------------------------------------------------------------------+<br />
| Processes: GPU Memory |<br />
| GPU PID Type Process name Usage |<br />
|=============================================================================|<br />
| No running processes found |<br />
+-----------------------------------------------------------------------------+<br />
</pre><br />
<br />
As with all other flags, the <code>--gres</code> flag may also be passed to [[#sbatch | sbatch]] and [[#salloc | salloc]] rather than directly to [[#srun | srun]].<br />
<br />
=MPI example=<br />
<pre><br />
#!/usr/bin/bash <br />
#SBATCH --job-name=mpi_test # Job name <br />
#SBATCH --nodes=4 # Number of nodes <br />
#SBATCH --ntasks=8 # Number of MPI ranks <br />
#SBATCH --ntasks-per-node=2 # Number of MPI ranks per node <br />
#SBATCH --ntasks-per-socket=1 # Number of tasks per processor socket on the node <br />
#SBATCH --time=00:30:00 # Time limit hrs:min:sec <br />
<br />
module load mpi <br />
<br />
srun --mpi=openmpi /nfshomes/username/testing/mpi/a.out <br />
</pre></div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM/JobStatus&diff=10408SLURM/JobStatus2022-04-22T23:43:22Z<p>Jayid07: /* sacct */</p>
<hr />
<div>=Job Status=<br />
SLURM offers a variety of tools to check the status of your jobs before, during, and after execution. When you first submit your job, SLURM should give you a job ID which represents the resources allocated to your job. Individual calls to srun will spawn job steps which can also be queried individually.<br />
<br />
==squeue==<br />
The squeue command shows job status in the queue. Helpful flags:<br />
* <code>-u username</code> to show only your jobs (replace username with your UMIACS username)<br />
* <code>--start</code> to estimate start time for a job that has not yet started and the reason why it is waiting<br />
* <code>-s</code> to show the status of individual job steps for a job (e.g. batch jobs)<br />
<br />
Examples:<br />
<pre><br />
username@nexusclip00:squeue -u username<br />
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)<br />
162 test2 helloWor username R 0:03 2 tron[00-01]<br />
</pre><br />
<br />
<pre><br />
username@nexusclip00:squeue --start -u username<br />
JOBID PARTITION NAME USER ST START_TIME NODES SCHEDNODES NODELIST(REASON)<br />
163 test2 helloWo2 username PD 2020-05-11T18:36:49 1 tron02 (Priority)<br />
</pre><br />
<br />
<pre><br />
username@nexusclip00:squeue -s -u username<br />
STEPID NAME PARTITION USER TIME NODELIST<br />
162.0 sleep test2 username 0:05 tron00<br />
162.1 sleep test2 username 0:05 tron01<br />
</pre><br />
<br />
==sstat==<br />
The sstat command shows metrics from currently running job steps. If you don't specify a job step, the lowest job step is displayed.<br />
<pre><br />
sstat --format JobID,NTasks,nodelist,MaxRSS,MaxVMSize,AveRSS,AveVMSize <$JOBID>.<$JOBSTEP><br />
</pre><br />
<pre><br />
username@nexusclip00: sstat --format JobID,NTasks,nodelist,MaxRSS,MaxVMSize,AveRSS,AveVMSize 171<br />
JobID NTasks Nodelist MaxRSS MaxVMSize AveRSS AveVMSize <br />
------------ -------- -------------------- ---------- ---------- ---------- ---------- <br />
171.0 1 tron00 0 186060K 0 107900K <br />
username@nexusclip00: sstat --format JobID,NTasks,nodelist,MaxRSS,MaxVMSize,AveRSS,AveVMSize 171.1<br />
JobID NTasks Nodelist MaxRSS MaxVMSize AveRSS AveVMSize <br />
------------ -------- -------------------- ---------- ---------- ---------- ---------- <br />
171.1 1 tron01 0 186060K 0 107900K <br />
</pre><br />
Note that if you do not have any jobsteps, sstat will return an error.<br />
<pre><br />
username@nexusclip00: sstat --format JobID,NTasks,nodelist,MaxRSS,MaxVMSize,AveRSS,AveVMSize 172<br />
JobID NTasks Nodelist MaxRSS MaxVMSize AveRSS AveVMSize <br />
------------ -------- -------------------- ---------- ---------- ---------- ----------<br />
sstat: error: no steps running for job 237<br />
</pre><br />
If you do not run any srun commands, you will not create any job steps and metrics will not be available for your job. Your batch scripts should follow this format:<br />
<pre><br />
#!/bin/bash<br />
#SBATCH ...<br />
#SBATCH ...<br />
# set environment up<br />
module load ...<br />
<br />
# launch job steps<br />
srun <command to run> # that would be step 1<br />
srun <command to run> # that would be step 2<br />
</pre><br />
<br />
==sacct==<br />
The sacct command shows metrics from past jobs.<br />
<pre><br />
username@nexusclip00:sacct<br />
JobID JobName Partition Account AllocCPUS State ExitCode <br />
------------ ---------- ---------- ---------- ---------- ---------- -------- <br />
162 helloWorld test2 staff 2 COMPLETED 0:0 <br />
162.batch batch staff 1 COMPLETED 0:0 <br />
162.0 sleep staff 1 COMPLETED 0:0 <br />
162.1 sleep staff 1 COMPLETED 0:0 <br />
163 helloWorld test2 staff 2 COMPLETED 0:0 <br />
163.batch batch staff 1 COMPLETED 0:0 <br />
163.0 sleep staff 1 COMPLETED 0:0 <br />
</pre><br />
To check one specific job, you can run something like the following (if you omit .<$JOBSTEP>, all jobsteps will be shown):<br />
<pre>sacct --format JobID,jobname,NTasks,nodelist,MaxRSS,MaxVMSize,AveRSS,AveVMSize,Elapsed -j <$JOBID>.<$JOBSTEP></pre><br />
<pre><br />
username@nexusclip00:sacct --format JobID,jobname,NTasks,nodelist,MaxRSS,MaxVMSize,AveRSS,AveVMSize,Elapsed -j 171<br />
JobID JobName NTasks NodeList MaxRSS MaxVMSize AveRSS AveVMSize Elapsed <br />
------------ ---------- -------- --------------- ---------- ---------- ---------- ---------- ---------- <br />
171 helloWorld tron[00-01] 00:00:30 <br />
171.batch batch 1 tron00 0 119784K 0 113120K 00:00:30 <br />
171.0 sleep 1 tron00 0 186060K 0 107900K 00:00:30 <br />
171.1 sleep 1 tron01 0 186060K 0 107900K 00:00:30 <br />
</pre><br />
<br />
=Job Codes=<br />
When you list the currently running jobs and your job is in <code>PD</code> (Pending), SLURM will provide some information on the reason for this in the NODELIST(REASON) column. You can use <code>scontrol show job <jobid></code> to get all the parameters for your job, which may help identify why your job is not running.<br />
<br />
<pre><br />
# squeue -u testuser<br />
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)<br />
581530 dpart bash testuser PD 0:00 1 (AssocGrpGRES)<br />
581533 dpart bash testuser PD 0:00 1 (Resources)<br />
581534 dpart bash testuser PD 0:00 1 (QOSMaxGRESPerUser)<br />
581535 scavenger bash testuser PD 0:00 1 (ReqNodeNotAvail, Reserved for maintenance)<br />
</pre><br />
<br />
Some common ones are as follows:<br />
* <code>Resources</code> - The cluster does not currently have the resources to fit your job.<br />
* <code>QOSMaxGRESPerUser</code> - The quality of service (QoS) your job is running in has a limit of resources per user. Use <code>show_qos</code> to identify the limits and then use <code>scontrol show job <jobid></code> for each of your jobs running in that QoS.<br />
* <code>AssocGrpGRES</code> - The SLURM account you are using has a limit on the resources available in total for the account. Use <code>sacctmgr show assoc account=<account_name></code> to identify the GrpTRES limit. You can see all jobs running under the account by running <code>squeue -A account_name</code> and then find out more information on each job by <code>scontrol show job <jobid></code>.<br />
* <code>ReqNodeNotAvail</code> - If you have requested a specific node and it is currently in use by another job, you can get this job code. You can also get this job code along with the note <code>Reserved for maintenance</code>, which means there is a reservation in place (often for a [[MonthlyMaintenanceWindow | maintenance window]]). You can see the current reservations by running <code>scontrol show reservation</code>. Often the culprit is that you have requested a TimeLimit that will conflict with the reservation. You can either lower your TimeLimit so that the job will complete before the reservation begins, or leave your job to wait until the reservation completes.</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM/JobStatus&diff=10407SLURM/JobStatus2022-04-22T23:42:31Z<p>Jayid07: /* sstat */</p>
<hr />
<div>=Job Status=<br />
SLURM offers a variety of tools to check the status of your jobs before, during, and after execution. When you first submit your job, SLURM should give you a job ID which represents the resources allocated to your job. Individual calls to srun will spawn job steps which can also be queried individually.<br />
<br />
==squeue==<br />
The squeue command shows job status in the queue. Helpful flags:<br />
* <code>-u username</code> to show only your jobs (replace username with your UMIACS username)<br />
* <code>--start</code> to estimate start time for a job that has not yet started and the reason why it is waiting<br />
* <code>-s</code> to show the status of individual job steps for a job (e.g. batch jobs)<br />
<br />
Examples:<br />
<pre><br />
username@nexusclip00:squeue -u username<br />
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)<br />
162 test2 helloWor username R 0:03 2 tron[00-01]<br />
</pre><br />
<br />
<pre><br />
username@nexusclip00:squeue --start -u username<br />
JOBID PARTITION NAME USER ST START_TIME NODES SCHEDNODES NODELIST(REASON)<br />
163 test2 helloWo2 username PD 2020-05-11T18:36:49 1 tron02 (Priority)<br />
</pre><br />
<br />
<pre><br />
username@nexusclip00:squeue -s -u username<br />
STEPID NAME PARTITION USER TIME NODELIST<br />
162.0 sleep test2 username 0:05 tron00<br />
162.1 sleep test2 username 0:05 tron01<br />
</pre><br />
<br />
==sstat==<br />
The sstat command shows metrics from currently running job steps. If you don't specify a job step, the lowest job step is displayed.<br />
<pre><br />
sstat --format JobID,NTasks,nodelist,MaxRSS,MaxVMSize,AveRSS,AveVMSize <$JOBID>.<$JOBSTEP><br />
</pre><br />
<pre><br />
username@nexusclip00: sstat --format JobID,NTasks,nodelist,MaxRSS,MaxVMSize,AveRSS,AveVMSize 171<br />
JobID NTasks Nodelist MaxRSS MaxVMSize AveRSS AveVMSize <br />
------------ -------- -------------------- ---------- ---------- ---------- ---------- <br />
171.0 1 tron00 0 186060K 0 107900K <br />
username@nexusclip00: sstat --format JobID,NTasks,nodelist,MaxRSS,MaxVMSize,AveRSS,AveVMSize 171.1<br />
JobID NTasks Nodelist MaxRSS MaxVMSize AveRSS AveVMSize <br />
------------ -------- -------------------- ---------- ---------- ---------- ---------- <br />
171.1 1 tron01 0 186060K 0 107900K <br />
</pre><br />
Note that if you do not have any jobsteps, sstat will return an error.<br />
<pre><br />
username@nexusclip00: sstat --format JobID,NTasks,nodelist,MaxRSS,MaxVMSize,AveRSS,AveVMSize 172<br />
JobID NTasks Nodelist MaxRSS MaxVMSize AveRSS AveVMSize <br />
------------ -------- -------------------- ---------- ---------- ---------- ----------<br />
sstat: error: no steps running for job 237<br />
</pre><br />
If you do not run any srun commands, you will not create any job steps and metrics will not be available for your job. Your batch scripts should follow this format:<br />
<pre><br />
#!/bin/bash<br />
#SBATCH ...<br />
#SBATCH ...<br />
# set environment up<br />
module load ...<br />
<br />
# launch job steps<br />
srun <command to run> # that would be step 1<br />
srun <command to run> # that would be step 2<br />
</pre><br />
<br />
==sacct==<br />
The sacct command shows metrics from past jobs.<br />
<pre><br />
username@opensub00:sacct<br />
JobID JobName Partition Account AllocCPUS State ExitCode <br />
------------ ---------- ---------- ---------- ---------- ---------- -------- <br />
162 helloWorld test2 staff 2 COMPLETED 0:0 <br />
162.batch batch staff 1 COMPLETED 0:0 <br />
162.0 sleep staff 1 COMPLETED 0:0 <br />
162.1 sleep staff 1 COMPLETED 0:0 <br />
163 helloWorld test2 staff 2 COMPLETED 0:0 <br />
163.batch batch staff 1 COMPLETED 0:0 <br />
163.0 sleep staff 1 COMPLETED 0:0 <br />
</pre><br />
To check one specific job, you can run something like the following (if you omit .<$JOBSTEP>, all jobsteps will be shown):<br />
<pre>sacct --format JobID,jobname,NTasks,nodelist,MaxRSS,MaxVMSize,AveRSS,AveVMSize,Elapsed -j <$JOBID>.<$JOBSTEP></pre><br />
<pre><br />
username@opensub00:sacct --format JobID,jobname,NTasks,nodelist,MaxRSS,MaxVMSize,AveRSS,AveVMSize,Elapsed -j 171<br />
JobID JobName NTasks NodeList MaxRSS MaxVMSize AveRSS AveVMSize Elapsed <br />
------------ ---------- -------- --------------- ---------- ---------- ---------- ---------- ---------- <br />
171 helloWorld openlab[00-01] 00:00:30 <br />
171.batch batch 1 openlab00 0 119784K 0 113120K 00:00:30 <br />
171.0 sleep 1 openlab00 0 186060K 0 107900K 00:00:30 <br />
171.1 sleep 1 openlab01 0 186060K 0 107900K 00:00:30 <br />
</pre><br />
<br />
=Job Codes=<br />
When you list the currently running jobs and your job is in <code>PD</code> (Pending), SLURM will provide some information on the reason for this in the NODELIST(REASON) column. You can use <code>scontrol show job <jobid></code> to get all the parameters for your job, which may help identify why your job is not running.<br />
<br />
<pre><br />
# squeue -u testuser<br />
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)<br />
581530 dpart bash testuser PD 0:00 1 (AssocGrpGRES)<br />
581533 dpart bash testuser PD 0:00 1 (Resources)<br />
581534 dpart bash testuser PD 0:00 1 (QOSMaxGRESPerUser)<br />
581535 scavenger bash testuser PD 0:00 1 (ReqNodeNotAvail, Reserved for maintenance)<br />
</pre><br />
<br />
Some common ones are as follows:<br />
* <code>Resources</code> - The cluster does not currently have the resources to fit your job.<br />
* <code>QOSMaxGRESPerUser</code> - The quality of service (QoS) your job is running in has a limit of resources per user. Use <code>show_qos</code> to identify the limits and then use <code>scontrol show job <jobid></code> for each of your jobs running in that QoS.<br />
* <code>AssocGrpGRES</code> - The SLURM account you are using has a limit on the resources available in total for the account. Use <code>sacctmgr show assoc account=<account_name></code> to identify the GrpTRES limit. You can see all jobs running under the account by running <code>squeue -A account_name</code> and then find out more information on each job by <code>scontrol show job <jobid></code>.<br />
* <code>ReqNodeNotAvail</code> - If you have requested a specific node and it is currently in use by another job, you can get this job code. You can also get this job code along with the note <code>Reserved for maintenance</code>, which means there is a reservation in place (often for a [[MonthlyMaintenanceWindow | maintenance window]]). You can see the current reservations by running <code>scontrol show reservation</code>. Often the culprit is that you have requested a TimeLimit that will conflict with the reservation. You can either lower your TimeLimit so that the job will complete before the reservation begins, or leave your job to wait until the reservation completes.</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM/JobStatus&diff=10406SLURM/JobStatus2022-04-22T23:41:34Z<p>Jayid07: /* squeue */</p>
<hr />
<div>=Job Status=<br />
SLURM offers a variety of tools to check the status of your jobs before, during, and after execution. When you first submit your job, SLURM should give you a job ID which represents the resources allocated to your job. Individual calls to srun will spawn job steps which can also be queried individually.<br />
<br />
==squeue==<br />
The squeue command shows job status in the queue. Helpful flags:<br />
* <code>-u username</code> to show only your jobs (replace username with your UMIACS username)<br />
* <code>--start</code> to estimate start time for a job that has not yet started and the reason why it is waiting<br />
* <code>-s</code> to show the status of individual job steps for a job (e.g. batch jobs)<br />
<br />
Examples:<br />
<pre><br />
username@nexusclip00:squeue -u username<br />
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)<br />
162 test2 helloWor username R 0:03 2 tron[00-01]<br />
</pre><br />
<br />
<pre><br />
username@nexusclip00:squeue --start -u username<br />
JOBID PARTITION NAME USER ST START_TIME NODES SCHEDNODES NODELIST(REASON)<br />
163 test2 helloWo2 username PD 2020-05-11T18:36:49 1 tron02 (Priority)<br />
</pre><br />
<br />
<pre><br />
username@nexusclip00:squeue -s -u username<br />
STEPID NAME PARTITION USER TIME NODELIST<br />
162.0 sleep test2 username 0:05 tron00<br />
162.1 sleep test2 username 0:05 tron01<br />
</pre><br />
<br />
==sstat==<br />
The sstat command shows metrics from currently running job steps. If you don't specify a job step, the lowest job step is displayed.<br />
<pre><br />
sstat --format JobID,NTasks,nodelist,MaxRSS,MaxVMSize,AveRSS,AveVMSize <$JOBID>.<$JOBSTEP><br />
</pre><br />
<pre><br />
username@opensub00: sstat --format JobID,NTasks,nodelist,MaxRSS,MaxVMSize,AveRSS,AveVMSize 171<br />
JobID NTasks Nodelist MaxRSS MaxVMSize AveRSS AveVMSize <br />
------------ -------- -------------------- ---------- ---------- ---------- ---------- <br />
171.0 1 openlab00 0 186060K 0 107900K <br />
username@opensub00: sstat --format JobID,NTasks,nodelist,MaxRSS,MaxVMSize,AveRSS,AveVMSize 171.1<br />
JobID NTasks Nodelist MaxRSS MaxVMSize AveRSS AveVMSize <br />
------------ -------- -------------------- ---------- ---------- ---------- ---------- <br />
171.1 1 openlab01 0 186060K 0 107900K <br />
</pre><br />
Note that if you do not have any jobsteps, sstat will return an error.<br />
<pre><br />
username@opensub00: sstat --format JobID,NTasks,nodelist,MaxRSS,MaxVMSize,AveRSS,AveVMSize 172<br />
JobID NTasks Nodelist MaxRSS MaxVMSize AveRSS AveVMSize <br />
------------ -------- -------------------- ---------- ---------- ---------- ----------<br />
sstat: error: no steps running for job 237<br />
</pre><br />
If you do not run any srun commands, you will not create any job steps and metrics will not be available for your job. Your batch scripts should follow this format:<br />
<pre><br />
#!/bin/bash<br />
#SBATCH ...<br />
#SBATCH ...<br />
# set environment up<br />
module load ...<br />
<br />
# launch job steps<br />
srun <command to run> # that would be step 1<br />
srun <command to run> # that would be step 2<br />
</pre><br />
<br />
==sacct==<br />
The sacct command shows metrics from past jobs.<br />
<pre><br />
username@opensub00:sacct<br />
JobID JobName Partition Account AllocCPUS State ExitCode <br />
------------ ---------- ---------- ---------- ---------- ---------- -------- <br />
162 helloWorld test2 staff 2 COMPLETED 0:0 <br />
162.batch batch staff 1 COMPLETED 0:0 <br />
162.0 sleep staff 1 COMPLETED 0:0 <br />
162.1 sleep staff 1 COMPLETED 0:0 <br />
163 helloWorld test2 staff 2 COMPLETED 0:0 <br />
163.batch batch staff 1 COMPLETED 0:0 <br />
163.0 sleep staff 1 COMPLETED 0:0 <br />
</pre><br />
To check one specific job, you can run something like the following (if you omit .<$JOBSTEP>, all jobsteps will be shown):<br />
<pre>sacct --format JobID,jobname,NTasks,nodelist,MaxRSS,MaxVMSize,AveRSS,AveVMSize,Elapsed -j <$JOBID>.<$JOBSTEP></pre><br />
<pre><br />
username@opensub00:sacct --format JobID,jobname,NTasks,nodelist,MaxRSS,MaxVMSize,AveRSS,AveVMSize,Elapsed -j 171<br />
JobID JobName NTasks NodeList MaxRSS MaxVMSize AveRSS AveVMSize Elapsed <br />
------------ ---------- -------- --------------- ---------- ---------- ---------- ---------- ---------- <br />
171 helloWorld openlab[00-01] 00:00:30 <br />
171.batch batch 1 openlab00 0 119784K 0 113120K 00:00:30 <br />
171.0 sleep 1 openlab00 0 186060K 0 107900K 00:00:30 <br />
171.1 sleep 1 openlab01 0 186060K 0 107900K 00:00:30 <br />
</pre><br />
<br />
=Job Codes=<br />
When you list the currently running jobs and your job is in <code>PD</code> (Pending), SLURM will provide some information on the reason for this in the NODELIST(REASON) column. You can use <code>scontrol show job <jobid></code> to get all the parameters for your job, which may help identify why your job is not running.<br />
<br />
<pre><br />
# squeue -u testuser<br />
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)<br />
581530 dpart bash testuser PD 0:00 1 (AssocGrpGRES)<br />
581533 dpart bash testuser PD 0:00 1 (Resources)<br />
581534 dpart bash testuser PD 0:00 1 (QOSMaxGRESPerUser)<br />
581535 scavenger bash testuser PD 0:00 1 (ReqNodeNotAvail, Reserved for maintenance)<br />
</pre><br />
<br />
Some common ones are as follows:<br />
* <code>Resources</code> - The cluster does not currently have the resources to fit your job.<br />
* <code>QOSMaxGRESPerUser</code> - The quality of service (QoS) your job is running in has a limit of resources per user. Use <code>show_qos</code> to identify the limits and then use <code>scontrol show job <jobid></code> for each of your jobs running in that QoS.<br />
* <code>AssocGrpGRES</code> - The SLURM account you are using has a limit on the resources available in total for the account. Use <code>sacctmgr show assoc account=<account_name></code> to identify the GrpTRES limit. You can see all jobs running under the account by running <code>squeue -A account_name</code> and then find out more information on each job by <code>scontrol show job <jobid></code>.<br />
* <code>ReqNodeNotAvail</code> - If you have requested a specific node and it is currently in use by another job, you can get this job code. You can also get this job code along with the note <code>Reserved for maintenance</code>, which means there is a reservation in place (often for a [[MonthlyMaintenanceWindow | maintenance window]]). You can see the current reservations by running <code>scontrol show reservation</code>. Often the culprit is that you have requested a TimeLimit that will conflict with the reservation. You can either lower your TimeLimit so that the job will complete before the reservation begins, or leave your job to wait until the reservation completes.</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=NASUsers&diff=10405NASUsers2022-04-22T23:39:21Z<p>Jayid07: /* Personal FTP Sites for Distributing Data */</p>
<hr />
<div>===Web Pages===<br />
<br />
Please see [[WebSpace#Personal%20Web%20Space | Personal Web Space]].<br />
<br />
===Personal FTP Sites for Distributing Data===<br />
<br />
Your ftp site is online at<br />
<br />
ftp://ftp.umiacs.umd.edu/pub/username<br />
<br />
On any supported UNIX workstation, you can access your ftp site as<br />
<br />
/fs/ftp/pub/username<br />
<br />
Windows users can map it as a network drive from<br />
<br />
\\fluidfs.ad.umiacs.umd.edu\ftp-umiacs\pub<br />
<br />
Please note that anyone with an internet connection can log in and download these files, so please do not use your ftp site to store confidential data.<br />
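<br />
For example, from any supported UNIX workstation you could make a file available for download with an ordinary copy (the file name here is just a placeholder):<br />
<pre><br />
cp results.tar.gz /fs/ftp/pub/username/<br />
</pre><br />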
<br />
This file system has regular backups with our [[TSM]] service and has [[Snapshots]] for easy user restores.<br />
<br />
===Usage Guidelines===<br />
<br />
Personal NAS is configured to be highly available and modest in both size and usage. Please store large or heavily accessed datasets in a dedicated project storage directory that is tuned for your application.<br />
<br />
Please avoid storing shared project data in personal storage allocations. Separating project data from personal data will simplify administration and data management for both researchers and staff.</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=WebSpace&diff=10404WebSpace2022-04-22T23:37:57Z<p>Jayid07: /* OPENLab File Space */</p>
<hr />
<div>UMIACS provides static web space hosting for research/lab pages and user pages.<br />
<br />
== '''Hosting websites in UMIACS Object Store ''(preferred method)''''' ==<br />
Please refer to the section "Hosting a Website in your Bucket" on the [https://obj.umiacs.umd.edu/obj/help UMIACS Object Store Help Page] or visit [[OBJ/WebHosting]]. This is currently our most updated and reliable method for hosting websites.<br />
<br />
==Main Website and Lab Pages==<br />
<br />
<pre>http://www.umiacs.umd.edu</pre><br />
<br />
Users can access the main website and lab sites for editing in two ways:<br />
* From <b>Unix</b> as /fs/www - and can be remotely accessed by [[SFTP]] to a supported Unix host (eg. [[OpenLAB]]).<br />
* From <b>Windows</b> as \\fluidfs.ad.umiacs.umd.edu\www-umiacs - and remotely accessed by the same file share over the [[VPN]]<br><br />
<br />
Faculty members and authorized users can modify their own public profiles on the main UMIACS homepage. For instructions, see [[ContentManagement]].<br />
<br />
==Personal Web Space==<br />
<br />
Your personal website URL at UMIACS is<br />
<br />
<pre>http://www.umiacs.umd.edu/~username</pre><br />
<br />
where '''username''' is your UMIACS username. Users can set this page to redirect to any page of their choice by setting the '''Home Page''' attribute in their UMIACS [https://intranet.umiacs.umd.edu/directory/info/ directory entry].<br />
<br />
In general, large datasets related to a lab's research should go into the specific lab's web tree, not the individual users. Remember that a user's webpage is not permanently maintained once the user leaves UMIACS.<br />
<br />
UMIACS currently supports hosting a personal website on the Object Store.<br />
<br />
===UMIACS Object Store===<br />
<br />
This is the preferred method of hosting a personal website at UMIACS. Please see the [https://obj.umiacs.umd.edu/obj/help UMIACS Object Store (OBJ) Help Page] for more information on creating a website within OBJ. Once you create your website in OBJ, you will need to set your directory '''Home Page''' to the bucket's URL (the URL that ends in <code>umiacs.io</code>).<br />
<br />
===OPENLab File Space===<br />
<br />
{{Note|'''''This service has been deprecated.'''''}}<br />
<br />
This is primarily a legacy method for users who already have their websites configured this way. If you believe that your circumstances require your personal website to be hosted on this file space, please contact the [[HelpDesk | Help Desk]]. (This does not affect existing users who already have websites hosted on the OPENLab file space.)<br />
<br />
You will need to set your directory '''Home Page''' attribute to <code>http://users.umiacs.umd.edu/~username</code>, where '''username''' is your UMIACS username (similar to your personal URL above). You can access your website for editing in two ways:<br />
<br />
* From <b>Unix</b> as /fs/www-users/username - and can be remotely accessed via [[SFTP]] to a supported UNIX host.<br />
* From <b>Windows</b> as \\fluidfs.ad.umiacs.umd.edu\www-users\username - and remotely accessed by the same file share over the [[VPN]].<br />
<br />
==Adding A Password Protected Folder To Your Web Space==<br />
{{Note|'''''This method will NOT work in the UMIACS Object Store.'''''}}<br />
<br />
1) Create the directory you want to password protect or <tt>cd</tt> into the directory you want to password protect<br />
<br />
2) Create a file called ''.htaccess'' (<tt> vi .htaccess</tt>) in the directory you wish to password protect.<br />
<br />
3) In the file you just created type the following lines <br />
<br />
<pre><br />
AuthUserFile "/your/directory/here/".htpasswd<br />
AuthName "Secure Document"<br />
AuthType Basic<br />
require user username<br />
</pre><br />
<br />
For example, if you were going to protect the <tt>/fs/www-users/username/private</tt> directory and you want the required name to be <tt>class239</tt>, then your file would look like this:<br />
<pre><br />
AuthUserFile /fs/www-users/username/private/.htpasswd<br />
AuthName "Secure Document"<br />
AuthType Basic<br />
require user class239<br />
</pre><br />
<br />
4) Create a file called ''.htpasswd'' in the same directory as ''.htaccess''. You create this file by running <tt>htpasswd -c .htpasswd ''username''</tt> in the directory to be protected.<br />
<br />
In the example above, the username is <tt>class239</tt> so you would type <tt>htpasswd -c .htpasswd class239</tt><br />
<br />
You will be prompted to enter the password you want. The ''.htpasswd'' file will be created in the current directory and will contain an encrypted version of the password.<br />
<br />
To later change the username, edit the ''.htaccess'' file and change the username. If you want to later change the password, just retype the above line in step 4 and enter the new password at the prompt.<br />
<br />
==Restricting Content based on IP address==<br />
It is possible to have pages on your webspace only accessible to clients connecting from certain IP addresses. To accomplish this, cd into the directory you wish to restrict and edit your ''.htaccess'' or ''httpd.conf'' file. The example below shows how to make content only viewable to clients connecting from the UMD wifi in Apache 2.2.<br />
<br />
<pre style="white-space: pre-wrap; <br />
white-space: -moz-pre-wrap; <br />
white-space: -pre-wrap; <br />
white-space: -o-pre-wrap; <br />
word-wrap: break-word;">SetEnvIF X-Forwarded-For "^128\.8\.\d+\.\d+$" UMD_NETWORK<br />
SetEnvIF X-Forwarded-For "^129\.2\.\d+\.\d+$" UMD_NETWORK<br />
SetEnvIF X-Forwarded-For "^192\.168\.\d+\.\d+$" UMD_NETWORK<br />
SetEnvIF X-Forwarded-For "^206\.196\.(?:1[6-9][0-9]|2[0-5][0-9])\.\d+$" UMD_NETWORK<br />
SetEnvIF X-Forwarded-For "^10\.\d+\.\d+\.\d+$" UMD_NETWORK<br />
Order Deny,Allow<br />
Deny from all<br />
Allow from env=UMD_NETWORK<br />
</pre><br />
<br />
The SetEnvIF directive sets an environment variable when the specified request attribute matches the provided regular expression. In this example, requests whose X-Forwarded-For address falls within UMD's IP space are tagged with UMD_NETWORK. Then, all traffic to the example directory is blocked unless it has the UMD_NETWORK tag. See the following pages for a more in-depth explanation of the directives used.<br />
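<br />
If the server is running Apache 2.4 or later (an assumption; the example above targets Apache 2.2), the same SetEnvIF tagging can be paired with the newer <tt>Require</tt> syntax instead of <tt>Order</tt>/<tt>Deny</tt>/<tt>Allow</tt>. A minimal sketch:<br />
<pre><br />
# Same SetEnvIF lines as in the Apache 2.2 example above, then:<br />
Require env UMD_NETWORK<br />
</pre><br />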
<br />
[https://httpd.apache.org/docs/2.2/howto/htaccess.html .htaccess], [https://httpd.apache.org/docs/2.2/mod/mod_setenvif.html#setenvif SetEnvIf], [https://httpd.apache.org/docs/2.2/mod/mod_authz_host.html#order Order], [https://httpd.apache.org/docs/2.2/mod/mod_authz_host.html#deny Deny], [https://httpd.apache.org/docs/2.2/mod/mod_authz_host.html#allow Allow]</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=LocalDataStorage&diff=10403LocalDataStorage2022-04-22T23:27:02Z<p>Jayid07: /* UNIX Remote Storage */</p>
<hr />
<div>UMIACS recommends that any and all important data be stored on a redundant, backed-up file server. However, there are a number of cases where this is not feasible.<br />
<br />
==Windows Local Storage==<br />
Windows hosts at UMIACS store user directories on their local C drives. Supported, UMIACS-managed hosts automatically back up user data on the C drive nightly using the Institute's backup system. If you have a supported, UMIACS-managed host that has other internal or external hard drives attached to it, or partitions other than C on its primary hard drive, please be aware that these drives/partitions '''are not''' backed up. Laptops and non-standard hosts are not automatically backed up and should be manually backed up by their users.<br />
<br />
==UNIX Remote Storage==<br />
We provide storage to each of our users in our UNIX offerings through the [[NFShomes]].<br />
<br />
This home directory,<br />
<br />
/nfshomes/username<br />
<br />
is backed up nightly into our [[TSM]] backup system. This volume has [[Snapshots]] enabled for easy user restores.<br />
<br />
Users are given a 20 gigabyte [[Quota]].<br />
<br />
==UNIX Local Storage==<br />
UNIX machines use redundant, backed-up network file shares for user directories. Research data is also stored on redundant, backed-up network file shares and is generally available under /fs/<br />
<br />
All UNIX machines also have local storage available for transitory use. These directories may be used to store temporary, local '''''COPIES''''' of data that is permanently stored elsewhere or as a staging point for output.<br />
<br />
These directories may not, '''''under any circumstances''''', be used as permanent storage for unique, important data. UMIACS staff cannot recover damaged or deleted data from these directories and will not be responsible for data loss if they are misused. Additionally, these volumes may have an automated cleanup routine that will delete unmodified data after some number of days. You can check the page for the specific cluster you are using for more information.<br />
<br />
Please note that '''/tmp''' in particular is at risk for data loss or corruption as that directory is regularly used by system processes and services for temporary storage.<br />
<br />
These directories include:<br />
<br />
- /tmp<br />
- /scratch0, /scratch1, ... (/scratch#)<br />
- any directory named in whole or in part "tmp", "temp", or "scratch".<br />
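<br />
As an example of treating scratch space as a staging area only (a sketch; the paths are hypothetical and should be replaced with your own), copy the input data in, run against the local copy, and copy the results back to permanent, backed-up storage:<br />
<pre><br />
# Stage a local copy of the input data<br />
rsync -a /fs/projectname/dataset/ /scratch0/username/dataset/<br />
<br />
# ... run your job against /scratch0/username/dataset ...<br />
<br />
# Copy the results back to permanent storage<br />
rsync -a /scratch0/username/results/ /fs/projectname/results/<br />
</pre><br />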
<br />
==Locally Attached Storage==<br />
Locally attached storage like USB flash drives and USB hard drives are very popular. However, these devices are significantly more vulnerable to data loss or theft than internal or networked data storage. In general, UMIACS discourages the use of locally attached storage when any other option is available. Please note that these devices are prone to high rates of failure and additional steps should be taken to ensure that the data is backed up and that critical or confidential data is not lost or stolen.<br />
<br />
==Network Scratch Storage==<br />
Some labs have network-attached storage dedicated for scratch/temporary storage. These shares are named in the same manner as local scratch or temporary storage (e.g. /fs/lab-scratch or /lab/scratch0) and are subject to the same policies as local scratch/tmp (discussed above).<br />
<br />
==UNIX Storage Commands==<br />
Below are a few different CLI commands that may prove useful for monitoring your storage usage and performance. For additional information, run <code>[command] --help</code> or <code>man [command]</code><br />
<br />
df - Shows descriptive file system information<br />
<pre><br />
Usage: df [OPTION]... [FILE]...<br />
Show information about the file system on which each FILE resides,<br />
or all file systems by default.<br />
</pre><br />
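<br />
For instance, to check the file system your home directory lives on in human-readable units (the output will vary by host):<br />
<pre><br />
$ df -h /nfshomes/username<br />
</pre><br />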
<br />
du - Shows disk usage of specific files. Use the -d flag for better depth control.<br />
<pre><br />
Usage: du [OPTION]... [FILE]...<br />
or: du [OPTION]... --files0-from=F<br />
Summarize disk usage of each FILE, recursively for directories.<br />
</pre><br />
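<br />
For example, to see which top-level directories under your home directory use the most space (<code>sort -h</code> orders the human-readable sizes):<br />
<pre><br />
$ du -h -d 1 ~ | sort -h<br />
</pre><br />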
<br />
free - Shows current memory (RAM) usage. Use the -h flag for a human-readable format.<br />
<pre><br />
Usage:<br />
free [options]<br />
</pre><br />
<br />
quota - Shows quota information; this is useful for viewing per-filesystem limits in places such as a home directory.<br />
<pre><br />
quota: Usage: quota [-guqvswim] [-l | [-Q | -A]] [-F quotaformat]<br />
quota [-qvswim] [-l | [-Q | -A]] [-F quotaformat] -u username ...<br />
quota [-qvswim] [-l | [-Q | -A]] [-F quotaformat] -g groupname ...<br />
quota [-qvswugQm] [-F quotaformat] -f filesystem ...<br />
</pre><br />
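<br />
For example, to view your own limits with sizes reported in more readable units:<br />
<pre><br />
$ quota -s<br />
</pre><br />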
<br />
iostat - Shows drive utilization, along with other I/O and CPU statistics. Pair this with the <code>watch</code> command for regular updates.<br />
<pre><br />
Usage: iostat [ options ] [ <interval> [ <count> ] ]<br />
Options are:<br />
[ -c ] [ -d ] [ -h ] [ -k | -m ] [ -N ] [ -t ] [ -V ] [ -x ] [ -y ] [ -z ]<br />
[ -j { ID | LABEL | PATH | UUID | ... } ]<br />
[ [ -T ] -g <group_name> ] [ -p [ <device> [,...] | ALL ] ]<br />
[ <device> [...] | ALL ]<br />
<br />
</pre></div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=LocalDataStorage&diff=10402LocalDataStorage2022-04-22T23:26:27Z<p>Jayid07: /* UNIX Remote Storage */</p>
<hr />
<div>UMIACS recommends that any and all important data be stored on a redundant, backed-up file server. However, there are a number of cases where this is not feasible.<br />
<br />
==Windows Local Storage==<br />
Windows hosts at UMIACS store user directories on their local C drives. Supported, UMIACS-managed hosts automatically back up user data on the C drive nightly using the Institute's backup system. If you have a supported, UMIACS-managed host that has other internal or external hard drives attached to it, or partitions other than C on its primary hard drive, please be aware that these drives/partitions '''are not''' backed up. Laptops and non-standard hosts are not automatically backed up and should be manually backed up by their users.<br />
<br />
==UNIX Remote Storage==<br />
We provide storage to each of our users in our UNIX offerings through the [[Nexus]] [[NFShomes]].<br />
<br />
This home directory,<br />
<br />
/nfshomes/username<br />
<br />
is backed up nightly into our [[TSM]] backup system. This volume has [[Snapshots]] enabled for easy user restores.<br />
<br />
Users are given a 20 gigabyte [[Quota]].<br />
<br />
==UNIX Local Storage==<br />
UNIX machines use redundant, backed-up network file shares for user directories. Research data is also stored on redundant, backed-up network file shares and is generally available under /fs/<br />
<br />
All UNIX machines also have local storage available for transitory use. These directories may be used to store temporary, local '''''COPIES''''' of data that is permanently stored elsewhere or as a staging point for output.<br />
<br />
These directories may not, '''''under any circumstances''''', be used as permanent storage for unique, important data. UMIACS staff cannot recover damaged or deleted data from these directories and will not be responsible for data loss if they are misused. Additionally, these volumes may have an automated cleanup routine that will delete unmodified data after some number of days. You can check the page for the specific cluster you are using for more information.<br />
<br />
Please note that '''/tmp''' in particular is at risk for data loss or corruption as that directory is regularly used by system processes and services for temporary storage.<br />
<br />
These directories include:<br />
<br />
- /tmp<br />
- /scratch0, /scratch1, ... (/scratch#)<br />
- any directory named in whole or in part "tmp", "temp", or "scratch".<br />
<br />
==Locally Attached Storage==<br />
Locally attached storage like USB flash drives and USB hard drives are very popular. However, these devices are significantly more vulnerable to data loss or theft than internal or networked data storage. In general, UMIACS discourages the use of locally attached storage when any other option is available. Please note that these devices are prone to high rates of failure and additional steps should be taken to ensure that the data is backed up and that critical or confidential data is not lost or stolen.<br />
<br />
==Network Scratch Storage==<br />
Some labs have network-attached storage dedicated for scratch/temporary storage. These shares are named in the same manner as local scratch or temporary storage (e.g. /fs/lab-scratch or /lab/scratch0) and are subject to the same policies as local scratch/tmp (discussed above).<br />
<br />
==UNIX Storage Commands==<br />
Below are a few different CLI commands that may prove useful for monitoring your storage usage and performance. For additional information, run <code>[command] --help</code> or <code>man [command]</code><br />
<br />
df - Shows descriptive file system information<br />
<pre><br />
Usage: df [OPTION]... [FILE]...<br />
Show information about the file system on which each FILE resides,<br />
or all file systems by default.<br />
</pre><br />
<br />
du - Shows disk usage of specific files. Use the -d flag for better depth control.<br />
<pre><br />
Usage: du [OPTION]... [FILE]...<br />
or: du [OPTION]... --files0-from=F<br />
Summarize disk usage of each FILE, recursively for directories.<br />
</pre><br />
<br />
free - Shows current memory (RAM) usage. Use the -h flag for a human-readable format.<br />
<pre><br />
Usage:<br />
free [options]<br />
</pre><br />
<br />
quota - Shows quota information; this is useful for viewing per-filesystem limits in places such as a home directory.<br />
<pre><br />
quota: Usage: quota [-guqvswim] [-l | [-Q | -A]] [-F quotaformat]<br />
quota [-qvswim] [-l | [-Q | -A]] [-F quotaformat] -u username ...<br />
quota [-qvswim] [-l | [-Q | -A]] [-F quotaformat] -g groupname ...<br />
quota [-qvswugQm] [-F quotaformat] -f filesystem ...<br />
</pre><br />
<br />
iostat - Shows drive utilization, along with other I/O and CPU statistics. Pair this with the <code>watch</code> command for regular updates.<br />
<pre><br />
Usage: iostat [ options ] [ <interval> [ <count> ] ]<br />
Options are:<br />
[ -c ] [ -d ] [ -h ] [ -k | -m ] [ -N ] [ -t ] [ -V ] [ -x ] [ -y ] [ -z ]<br />
[ -j { ID | LABEL | PATH | UUID | ... } ]<br />
[ [ -T ] -g <group_name> ] [ -p [ <device> [,...] | ALL ] ]<br />
[ <device> [...] | ALL ]<br />
<br />
</pre></div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM/ClusterStatus&diff=10401SLURM/ClusterStatus2022-04-22T23:13:03Z<p>Jayid07: /* scontrol */</p>
<hr />
<div>=Cluster Status=<br />
SLURM offers a variety of tools to check the general status of nodes/partitions in a cluster.<br />
<br />
==sinfo==<br />
The sinfo command will show you the status of partitions in the cluster. Passing the -N flag will show each node individually.<br />
<pre><br />
[username@nexuscml00 ~]$ sinfo<br />
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST<br />
gamma up infinite 3 idle gammagpu[01-03]<br />
scavenger up infinite 2 drain tron[50-51]<br />
scavenger up infinite 21 mix tron[00-01,03-15,46-49,52-53]<br />
scavenger up infinite 31 idle tron[02,16-45]<br />
tron* up 3-00:00:00 2 drain tron[50-51]<br />
tron* up 3-00:00:00 21 mix tron[00-01,03-15,46-49,52-53]<br />
tron* up 3-00:00:00 31 idle tron[02,16-45]<br />
<br />
</pre><br />
<pre><br />
[username@nexuscml00 ~]$ sinfo -N<br />
NODELIST NODES PARTITION STATE<br />
gammagpu01 1 gamma idle<br />
gammagpu02 1 gamma idle<br />
gammagpu03 1 gamma idle<br />
tron00 1 scavenger mix<br />
tron00 1 tron* mix<br />
tron01 1 scavenger mix<br />
tron01 1 tron* mix<br />
tron02 1 scavenger idle<br />
tron02 1 tron* idle<br />
tron03 1 scavenger mix<br />
tron03 1 tron* mix<br />
tron04 1 scavenger mix<br />
tron04 1 tron* mix<br />
...<br />
tron52 1 scavenger mix<br />
tron52 1 tron* mix<br />
tron53 1 scavenger mix<br />
tron53 1 tron* mix<br />
<br />
</pre><br />
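<br />
You can also limit the output to a single partition with the -p flag, or pick your own columns with the -o/--format option; for example (partition name taken from the output above):<br />
<pre><br />
$ sinfo -p tron<br />
$ sinfo -p tron -o "%P %a %l %D %t"<br />
</pre><br />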
<br />
==scontrol==<br />
The scontrol command can be used to view the status/configuration of the nodes in the cluster. If passed specific node name(s), only information about those node(s) will be displayed; otherwise, all nodes will be listed. To specify multiple nodes, separate each node name by a comma (no spaces).<br />
<pre><br />
[username@nexuscml00 ~]$ scontrol show nodes tron05,tron13<br />
NodeName=tron05 Arch=x86_64 CoresPerSocket=16<br />
CPUAlloc=28 CPUTot=32 CPULoad=47.32<br />
AvailableFeatures=rhel8,AMD,EPYC-7302<br />
ActiveFeatures=rhel8,AMD,EPYC-7302<br />
Gres=gpu:rtxa6000:8<br />
NodeAddr=tron05 NodeHostName=tron05 Version=21.08.5<br />
OS=Linux 4.18.0-348.20.1.el8_5.x86_64 #1 SMP Tue Mar 8 12:56:54 EST 2022<br />
RealMemory=257538 AllocMem=157696 FreeMem=197620 Sockets=2 Boards=1<br />
State=MIXED ThreadsPerCore=1 TmpDisk=0 Weight=100 Owner=N/A MCS_label=N/A<br />
Partitions=scavenger,tron<br />
BootTime=2022-04-21T17:40:51 SlurmdStartTime=2022-04-21T18:00:56<br />
LastBusyTime=2022-04-22T11:21:16<br />
CfgTRES=cpu=32,mem=257538M,billing=346,gres/gpu=8,gres/gpu:rtxa6000=8<br />
AllocTRES=cpu=28,mem=154G,gres/gpu=7,gres/gpu:rtxa6000=7<br />
CapWatts=n/a<br />
CurrentWatts=0 AveWatts=0<br />
ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s<br />
<br />
NodeName=tron13 Arch=x86_64 CoresPerSocket=16<br />
CPUAlloc=1 CPUTot=16 CPULoad=8.41<br />
AvailableFeatures=rhel8,AMD,EPYC-7302P<br />
ActiveFeatures=rhel8,AMD,EPYC-7302P<br />
Gres=gpu:rtxa4000:4<br />
NodeAddr=tron13 NodeHostName=tron13 Version=21.08.5<br />
OS=Linux 4.18.0-348.20.1.el8_5.x86_64 #1 SMP Tue Mar 8 12:56:54 EST 2022<br />
RealMemory=128525 AllocMem=65536 FreeMem=33463 Sockets=1 Boards=1<br />
State=MIXED ThreadsPerCore=1 TmpDisk=0 Weight=10 Owner=N/A MCS_label=N/A<br />
Partitions=scavenger,tron<br />
BootTime=2022-04-21T17:40:46 SlurmdStartTime=2022-04-21T17:54:51<br />
LastBusyTime=2022-04-22T13:04:57<br />
CfgTRES=cpu=16,mem=128525M,billing=173,gres/gpu=4,gres/gpu:rtxa4000=4<br />
AllocTRES=cpu=1,mem=64G,gres/gpu=4,gres/gpu:rtxa4000=4<br />
CapWatts=n/a<br />
CurrentWatts=0 AveWatts=0<br />
ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s<br />
</pre><br />
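<br />
scontrol can also show other object types; for example, to inspect a partition's limits and member nodes (partition name from the examples above):<br />
<pre><br />
$ scontrol show partition tron<br />
</pre><br />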
<br />
==sacctmgr==<br />
The sacctmgr command shows cluster accounting information. One helpful use is listing the available QoSes.<br />
<br />
<pre><br />
$ sacctmgr list qos format=Name,Priority,MaxWall,MaxJobsPU<br />
Name Priority MaxWall MaxJobsPU<br />
---------- ---------- ----------- ---------<br />
normal 0<br />
dpart 0 2-00:00:00 8<br />
gpu 0 08:00:00 2<br />
</pre></div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM/ClusterStatus&diff=10400SLURM/ClusterStatus2022-04-22T23:11:40Z<p>Jayid07: /* sinfo */</p>
<hr />
<div>=Cluster Status=<br />
SLURM offers a variety of tools to check the general status of nodes/partitions in a cluster.<br />
<br />
==sinfo==<br />
The sinfo command will show you the status of partitions in the cluster. Passing the -N flag will show each node individually.<br />
<pre><br />
[username@nexuscml00 ~]$ sinfo<br />
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST<br />
gamma up infinite 3 idle gammagpu[01-03]<br />
scavenger up infinite 2 drain tron[50-51]<br />
scavenger up infinite 21 mix tron[00-01,03-15,46-49,52-53]<br />
scavenger up infinite 31 idle tron[02,16-45]<br />
tron* up 3-00:00:00 2 drain tron[50-51]<br />
tron* up 3-00:00:00 21 mix tron[00-01,03-15,46-49,52-53]<br />
tron* up 3-00:00:00 31 idle tron[02,16-45]<br />
<br />
</pre><br />
<pre><br />
[username@nexuscml00 ~]$ sinfo -N<br />
NODELIST NODES PARTITION STATE<br />
gammagpu01 1 gamma idle<br />
gammagpu02 1 gamma idle<br />
gammagpu03 1 gamma idle<br />
tron00 1 scavenger mix<br />
tron00 1 tron* mix<br />
tron01 1 scavenger mix<br />
tron01 1 tron* mix<br />
tron02 1 scavenger idle<br />
tron02 1 tron* idle<br />
tron03 1 scavenger mix<br />
tron03 1 tron* mix<br />
tron04 1 scavenger mix<br />
tron04 1 tron* mix<br />
...<br />
tron52 1 scavenger mix<br />
tron52 1 tron* mix<br />
tron53 1 scavenger mix<br />
tron53 1 tron* mix<br />
<br />
</pre><br />
<br />
==scontrol==<br />
The scontrol command can be used to view the status/configuration of the nodes in the cluster. If passed specific node name(s), only information about those node(s) will be displayed; otherwise, all nodes will be listed. To specify multiple nodes, separate each node name by a comma (no spaces).<br />
<pre><br />
$ scontrol show nodes openlab00,openlab08<br />
NodeName=openlab00 Arch=x86_64 CoresPerSocket=4<br />
CPUAlloc=8 CPUErr=0 CPUTot=8 CPULoad=7.10<br />
AvailableFeatures=(null)<br />
ActiveFeatures=(null)<br />
Gres=(null)<br />
NodeAddr=openlab00 NodeHostName=openlab00 Version=16.05<br />
OS=Linux RealMemory=7822 AllocMem=7822 FreeMem=149 Sockets=2 Boards=1<br />
State=ALLOCATED ThreadsPerCore=1 TmpDisk=49975 Weight=1 Owner=N/A MCS_label=N/A<br />
BootTime=2017-01-17T14:46:59 SlurmdStartTime=2017-01-17T14:47:43<br />
CapWatts=n/a<br />
CurrentWatts=0 LowestJoules=0 ConsumedJoules=0<br />
ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s<br />
<br />
<br />
NodeName=openlab08 Arch=x86_64 CoresPerSocket=8<br />
CPUAlloc=1 CPUErr=0 CPUTot=16 CPULoad=1.19<br />
AvailableFeatures=(null)<br />
ActiveFeatures=(null)<br />
Gres=gpu:3<br />
NodeAddr=openlab08 NodeHostName=openlab08 Version=16.05<br />
OS=Linux RealMemory=128722 AllocMem=1024 FreeMem=395 Sockets=2 Boards=1<br />
State=MIXED ThreadsPerCore=1 TmpDisk=49975 Weight=1 Owner=N/A MCS_label=N/A<br />
BootTime=2016-12-22T20:26:52 SlurmdStartTime=2016-12-22T20:33:21<br />
CapWatts=n/a<br />
CurrentWatts=0 LowestJoules=0 ConsumedJoules=0<br />
ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s<br />
</pre><br />
<br />
==sacctmgr==<br />
The sacctmgr command shows cluster accounting information. One of the helpful commands is to list the available QoSes. <br />
<br />
<pre><br />
$ sacctmgr list qos format=Name,Priority,MaxWall,MaxJobsPU<br />
Name Priority MaxWall MaxJobsPU<br />
---------- ---------- ----------- ---------<br />
normal 0<br />
dpart 0 2-00:00:00 8<br />
gpu 0 08:00:00 2<br />
</pre></div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=SLURM/ClusterStatus&diff=10399SLURM/ClusterStatus2022-04-22T23:11:25Z<p>Jayid07: /* sinfo */</p>
<hr />
<div>=Cluster Status=<br />
SLURM offers a variety of tools to check the general status of nodes/partitions in a cluster.<br />
<br />
==sinfo==<br />
The sinfo command will show you the status of partitions in the cluster. Passing the -N flag will show each node individually.<br />
<pre><br />
[username@nexuscml00 ~]$ sinfo<br />
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST<br />
gamma up infinite 3 idle gammagpu[01-03]<br />
scavenger up infinite 2 drain tron[50-51]<br />
scavenger up infinite 21 mix tron[00-01,03-15,46-49,52-53]<br />
scavenger up infinite 31 idle tron[02,16-45]<br />
tron* up 3-00:00:00 2 drain tron[50-51]<br />
tron* up 3-00:00:00 21 mix tron[00-01,03-15,46-49,52-53]<br />
tron* up 3-00:00:00 31 idle tron[02,16-45]<br />
<br />
</pre><br />
<pre><br />
[jayid07@nexuscml00 ~]$ sinfo -N<br />
NODELIST NODES PARTITION STATE<br />
gammagpu01 1 gamma idle<br />
gammagpu02 1 gamma idle<br />
gammagpu03 1 gamma idle<br />
tron00 1 scavenger mix<br />
tron00 1 tron* mix<br />
tron01 1 scavenger mix<br />
tron01 1 tron* mix<br />
tron02 1 scavenger idle<br />
tron02 1 tron* idle<br />
tron03 1 scavenger mix<br />
tron03 1 tron* mix<br />
tron04 1 scavenger mix<br />
tron04 1 tron* mix<br />
...<br />
tron52 1 scavenger mix<br />
tron52 1 tron* mix<br />
tron53 1 scavenger mix<br />
tron53 1 tron* mix<br />
<br />
</pre><br />
<br />
==scontrol==<br />
The scontrol command can be used to view the status/configuration of the nodes in the cluster. If passed specific node name(s), only information about those node(s) will be displayed; otherwise, all nodes will be listed. To specify multiple nodes, separate each node name by a comma (no spaces).<br />
<pre><br />
$ scontrol show nodes openlab00,openlab08<br />
NodeName=openlab00 Arch=x86_64 CoresPerSocket=4<br />
CPUAlloc=8 CPUErr=0 CPUTot=8 CPULoad=7.10<br />
AvailableFeatures=(null)<br />
ActiveFeatures=(null)<br />
Gres=(null)<br />
NodeAddr=openlab00 NodeHostName=openlab00 Version=16.05<br />
OS=Linux RealMemory=7822 AllocMem=7822 FreeMem=149 Sockets=2 Boards=1<br />
State=ALLOCATED ThreadsPerCore=1 TmpDisk=49975 Weight=1 Owner=N/A MCS_label=N/A<br />
BootTime=2017-01-17T14:46:59 SlurmdStartTime=2017-01-17T14:47:43<br />
CapWatts=n/a<br />
CurrentWatts=0 LowestJoules=0 ConsumedJoules=0<br />
ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s<br />
<br />
<br />
NodeName=openlab08 Arch=x86_64 CoresPerSocket=8<br />
CPUAlloc=1 CPUErr=0 CPUTot=16 CPULoad=1.19<br />
AvailableFeatures=(null)<br />
ActiveFeatures=(null)<br />
Gres=gpu:3<br />
NodeAddr=openlab08 NodeHostName=openlab08 Version=16.05<br />
OS=Linux RealMemory=128722 AllocMem=1024 FreeMem=395 Sockets=2 Boards=1<br />
State=MIXED ThreadsPerCore=1 TmpDisk=49975 Weight=1 Owner=N/A MCS_label=N/A<br />
BootTime=2016-12-22T20:26:52 SlurmdStartTime=2016-12-22T20:33:21<br />
CapWatts=n/a<br />
CurrentWatts=0 LowestJoules=0 ConsumedJoules=0<br />
ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s<br />
</pre><br />
<br />
==sacctmgr==<br />
The sacctmgr command shows cluster accounting information. One helpful use is listing the available QoSes.<br />
<br />
<pre><br />
$ sacctmgr list qos format=Name,Priority,MaxWall,MaxJobsPU<br />
Name Priority MaxWall MaxJobsPU<br />
---------- ---------- ----------- ---------<br />
normal 0<br />
dpart 0 2-00:00:00 8<br />
gpu 0 08:00:00 2<br />
</pre></div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=SecureShell/MFA&diff=10398SecureShell/MFA2022-04-22T23:07:29Z<p>Jayid07: /* Wrapping up */</p>
<hr />
<div>==Overview==<br />
UMIACS will soon be rolling out multi-factor authentication requirements when using [[SSH]] to connect to our public-facing hosts to provide better account and data security. Public-facing hosts are hosts that are reachable without first establishing a connection to our [[VPN]]. '''If you first connect to our VPN, you will not need to additionally multi-factor authenticate when using SSH.''' This is because our VPN already requires multi-factor authentication to establish a connection, and all subsequent SSH attempts after connecting to our VPN will pass through the tunnel created (already within the UMIACS border).<br />
<br />
SSH has two different authentication methods that we currently support on all of our internal hosts: interactive password authentication and [[SSH/Keys | public key authentication]]. Multi-factor authentication-enabled SSH on our public-facing hosts only supports interactive password authentication, with the secondary factor coming from our [[Duo]] instance. We do not currently support public key based authentication and [[Duo]] multi-factor authentication on our public-facing hosts. Please note that unfortunately [https://en.wikipedia.org/wiki/Universal_2nd_Factor U2F] hardware tokens registered with Duo are not supported for SSH login specifically. Other hardware tokens such as a [https://www.yubico.com/products/yubikey-5-overview/ YubiKey] or [https://guide.duo.com/tokens Duo's own hardware token] will still work.<br />
<br />
==Example==<br />
The initial command or session setup for connecting to a host with multi-factor authentication enabled over SSH is the same as one that does not have it enabled. Our example for connecting to a host over SSH can be found [[SecureShell#Connecting_to_an_SSH_Server | here]]. In the below example, we are also SSH-ing to a [[Nexus]] node e.g. <code>ssh username@nexusclip00.umiacs.umd.edu</code><br />
<br />
Once you enter the command (if using a native terminal) or start the session (PuTTY or other terminal emulators), you will be presented with the following prompt:<br />
<br />
<pre><br />
Password:<br />
</pre><br />
<br />
Enter your UMIACS password here (the same as if you were using interactive password authentication to connect to an internal host). After correctly entering your password, you will be taken to the following prompt. '''Please note: The options shown here will vary depending on what/how many devices you have registered with our Duo instance.''' In this example, we have a mobile phone that has the Duo app installed, a tablet (iPad) that has the Duo app installed, a Duo hardware token, and a YubiKey all registered against our UMIACS Duo instance.<br />
<br />
<pre><br />
Password:<br />
Duo two-factor login for username<br />
<br />
Enter a passcode or select one of the following options:<br />
<br />
1. Duo Push to XXX-XXX-1234<br />
2. Duo Push to iPad (iOS)<br />
3. Phone call to XXX-XXX-1234<br />
4. SMS passcodes to XXX-XXX-1234<br />
<br />
Passcode or option (1-4):<br />
</pre><br />
(if you have a registered phone, the last 4 digits shown will be replaced with the last 4 digits of the phone number you specifically have registered)<br />
<br />
The numbered options here correspond to different methods that Duo can take to authenticate you, and are more or less identical to the options that would be presented to you via a GUI if you were attempting to sign into another of our multi-factor authentication secured services, such as our [https://intranet.umiacs.umd.edu/directory/auth/login Directory application]. You can also enter a passcode directly instead of choosing a numbered option, as described below.<br />
<br />
===Duo Push to ___===<br />
This will send a push notification to the Duo app on whichever device you chose for you to accept to proceed.<br />
<br />
<pre><br />
Passcode or option (1-4): 1<br />
<br />
Pushed a login request to your device...<br />
</pre><br />
<br />
===Phone call to XXX-XXX-XXXX===<br />
This will call your registered phone and ask you to press any key on your phone to proceed.<br />
<br />
<pre><br />
Passcode or option (1-4): 3<br />
<br />
Calling your phone...<br />
Dialing XXX-XXX-1234...<br />
</pre><br />
<br />
(After answering) <br />
<br />
<pre>Answered. Press any key on your phone to log in.</pre><br />
<br />
===SMS passcodes to XXX-XXX-XXXX===<br />
This will send a one time passcode to your registered phone via SMS and then redisplay the prompt. Type the passcode received at the new prompt (which will show the first number of the passcode sent as a hint) to proceed.<br />
<pre><br />
Passcode or option (1-4): 4<br />
<br />
New SMS passcodes sent.<br />
<br />
Duo two-factor login for username<br />
<br />
Enter a passcode or select one of the following options:<br />
<br />
1. Duo Push to XXX-XXX-1234<br />
2. Duo Push to iPad (iOS)<br />
3. Phone call to XXX-XXX-1234<br />
4. SMS passcodes to XXX-XXX-1234 (next code starts with: 1)<br />
<br />
Passcode or option (1-4): 1234567<br />
</pre><br />
<br />
===Enter passcode or tap YubiKey===<br />
In addition, you can also enter the code shown in your Duo app for UMIACS, the code shown on a registered hardware token, or tap your YubiKey to emit a code:<br />
<br />
====Code shown in Duo app or hardware token====<br />
[[File:Duo_app_code.jpg]]<br />
<br />
(if in the Duo app)<br />
<br />
<pre><br />
Enter a passcode or select one of the following options:<br />
<br />
1. Duo Push to XXX-XXX-1234<br />
2. Duo Push to iPad (iOS)<br />
3. Phone call to XXX-XXX-1234<br />
4. SMS passcodes to XXX-XXX-1234<br />
<br />
Passcode or option (1-4): 672239<br />
</pre><br />
<br />
[[File:Duo_token.png]]<br />
<br />
(if using a hardware token)<br />
<br />
<pre><br />
Enter a passcode or select one of the following options:<br />
<br />
1. Duo Push to XXX-XXX-1234<br />
2. Duo Push to iPad (iOS)<br />
3. Phone call to XXX-XXX-1234<br />
4. SMS passcodes to XXX-XXX-1234<br />
<br />
Passcode or option (1-4): 123456<br />
</pre><br />
<br />
====YubiKey tap====<br />
Simply tap the sensor on the YubiKey plugged into the device you are SSHing from; it will emit a string of characters and automatically press Enter.<br />
<br />
<pre><br />
Enter a passcode or select one of the following options:<br />
<br />
1. Duo Push to XXX-XXX-1234<br />
2. Duo Push to iPad (iOS)<br />
3. Phone call to XXX-XXX-1234<br />
4. SMS passcodes to XXX-XXX-1234<br />
<br />
Passcode or option (1-4): kffuastenhldrhfhadafdarivuntddugrvjvllddjjuget<br />
</pre><br />
<br />
==Wrapping up==<br />
After finishing your method of choice for using Duo to multi-factor authenticate, you will be logged in and can operate as normal.<br />
<pre><br />
Success. Logging you in...<br />
Last login: Wed Feb 17 12:00:00 2021 from ...<br />
[username@nexusclip01 ~]$<br />
</pre><br />
<br />
Subsequent SSH attempts from the window you have already connected via will not require multi-factor authentication, even if the host you are trying to SSH to is another public-facing host. This is because the connection now originates from within the UMIACS border rather than from the rest of the Internet.<br />
<pre><br />
[username@nexusclip01 ~]$ ssh cbcbsub00.umiacs.umd.edu<br />
username@cbcbsub00.umiacs.umd.edu's password:<br />
Last login: Wed Feb 17 11:59:00 2021 from ...<br />
[username@cbcbsub00 ~]$<br />
</pre><br />
<br />
==Considerations==<br />
Since there will now be an additional step to log in to our public-facing hosts if not using our [[VPN]], we would recommend first establishing a connection over our VPN if you anticipate needing to SSH to several different hosts or need to open several different terminal windows concurrently. As mentioned previously, you will not need an additional multi-factor authentication step when using SSH if you first connect to our VPN since it is already secured by multi-factor authentication.<br />
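<br />
If your SSH client is OpenSSH, another option is connection multiplexing: the first connection to a host multi-factor authenticates as usual, and later sessions to that same host reuse the existing connection without re-authenticating. A minimal sketch for <code>~/.ssh/config</code> (the 10 minute idle timeout is just an example; create the socket directory first with <code>mkdir -p ~/.ssh/sockets</code>):<br />
<pre><br />
Host *.umiacs.umd.edu<br />
    ControlMaster auto<br />
    ControlPath ~/.ssh/sockets/%r@%h-%p<br />
    ControlPersist 10m<br />
</pre><br />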
<br />
An alternative would be to use a [https://en.wikipedia.org/wiki/Terminal_multiplexer terminal multiplexer] such as [[Screen]] (on [[RHEL7]]) or [[Tmux]] (on RHEL8+, also available in our [[Modules | module tree]] on RHEL7) to minimize the number of times you need to multi-factor authenticate. Terminal multiplexers allow you to start several different processes out of one terminal display, and also detach from and later reattach to each of the processes.</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=SecureShell/MFA&diff=10397SecureShell/MFA2022-04-22T23:06:26Z<p>Jayid07: /* Example */</p>
<hr />
<div>==Overview==<br />
UMIACS will soon be rolling out multi-factor authentication requirements when using [[SSH]] to connect to our public-facing hosts to provide better account and data security. Public-facing hosts are hosts that are reachable without first establishing a connection to our [[VPN]]. '''If you first connect to our VPN, you will not need to additionally multi-factor authenticate when using SSH.''' This is because our VPN already requires multi-factor authentication to establish a connection, and all subsequent SSH attempts after connecting to our VPN will pass through the tunnel created (already within the UMIACS border).<br />
<br />
SSH has two different authentication methods that we currently support on all of our internal hosts: interactive password authentication and [[SSH/Keys | public key authentication]]. Multi-factor authentication-enabled SSH on our public-facing hosts only supports interactive password authentication, with the secondary factor coming from our [[Duo]] instance. We do not currently support public key based authentication and [[Duo]] multi-factor authentication on our public-facing hosts. Please note that unfortunately [https://en.wikipedia.org/wiki/Universal_2nd_Factor U2F] hardware tokens registered with Duo are not supported for SSH login specifically. Other hardware tokens such as a [https://www.yubico.com/products/yubikey-5-overview/ YubiKey] or [https://guide.duo.com/tokens Duo's own hardware token] will still work.<br />
<br />
==Example==<br />
The initial command or session setup for connecting to a host with multi-factor authentication enabled over SSH is the same as one that does not have it enabled. Our example for connecting to a host over SSH can be found [[SecureShell#Connecting_to_an_SSH_Server | here]]. In the below example, we are also SSH-ing to a [[Nexus]] node e.g. <code>ssh username@nexusclip00.umiacs.umd.edu</code><br />
<br />
Once you enter the command (if using a native terminal) or start the session (PuTTY or other terminal emulators), you will be presented with the following prompt:<br />
<br />
<pre><br />
Password:<br />
</pre><br />
<br />
Enter your UMIACS password here (the same as if you were using interactive password authentication to connect to an internal host). After correctly entering your password, you will be taken to the following prompt. '''Please note: The options shown here will vary depending on what/how many devices you have registered with our Duo instance.''' In this example, we have a mobile phone that has the Duo app installed, a tablet (iPad) that has the Duo app installed, a Duo hardware token, and a YubiKey all registered against our UMIACS Duo instance.<br />
<br />
<pre><br />
Password:<br />
Duo two-factor login for username<br />
<br />
Enter a passcode or select one of the following options:<br />
<br />
1. Duo Push to XXX-XXX-1234<br />
2. Duo Push to iPad (iOS)<br />
3. Phone call to XXX-XXX-1234<br />
4. SMS passcodes to XXX-XXX-1234<br />
<br />
Passcode or option (1-4):<br />
</pre><br />
(if you have a registered phone, the last 4 digits shown will be replaced with the last 4 digits of the phone number you specifically have registered)<br />
<br />
The numbered options here correspond to different methods that Duo can take to authenticate you, and are more or less identical to the options that would be presented to you via a GUI if you were attempting to sign into another of our multi-factor authentication secured services, such as our [https://intranet.umiacs.umd.edu/directory/auth/login Directory application]. You can also enter a passcode directly instead of choosing a numbered option, as described below.<br />
<br />
===Duo Push to ___===<br />
This will send a push notification to the Duo app on whichever device you chose for you to accept to proceed.<br />
<br />
<pre><br />
Passcode or option (1-4): 1<br />
<br />
Pushed a login request to your device...<br />
</pre><br />
<br />
===Phone call to XXX-XXX-XXXX===<br />
This will call your registered phone and ask you to press any key on your phone to proceed.<br />
<br />
<pre><br />
Passcode or option (1-4): 3<br />
<br />
Calling your phone...<br />
Dialing XXX-XXX-1234...<br />
</pre><br />
<br />
(After answering) <br />
<br />
<pre>Answered. Press any key on your phone to log in.</pre><br />
<br />
===SMS passcodes to XXX-XXX-XXXX===<br />
This will send a one time passcode to your registered phone via SMS and then redisplay the prompt. Type the passcode received at the new prompt (which will show the first number of the passcode sent as a hint) to proceed.<br />
<pre><br />
Passcode or option (1-4): 4<br />
<br />
New SMS passcodes sent.<br />
<br />
Duo two-factor login for username<br />
<br />
Enter a passcode or select one of the following options:<br />
<br />
1. Duo Push to XXX-XXX-1234<br />
2. Duo Push to iPad (iOS)<br />
3. Phone call to XXX-XXX-1234<br />
4. SMS passcodes to XXX-XXX-1234 (next code starts with: 1)<br />
<br />
Passcode or option (1-4): 1234567<br />
</pre><br />
<br />
===Enter passcode or tap YubiKey===<br />
In addition, you can also enter the code shown in your Duo app for UMIACS, the code shown on a registered hardware token, or tap your YubiKey to emit a code:<br />
<br />
====Code shown in Duo app or hardware token====<br />
[[File:Duo_app_code.jpg]]<br />
<br />
(if in the Duo app)<br />
<br />
<pre><br />
Enter a passcode or select one of the following options:<br />
<br />
1. Duo Push to XXX-XXX-1234<br />
2. Duo Push to iPad (iOS)<br />
3. Phone call to XXX-XXX-1234<br />
4. SMS passcodes to XXX-XXX-1234<br />
<br />
Passcode or option (1-4): 672239<br />
</pre><br />
<br />
[[File:Duo_token.png]]<br />
<br />
(if using a hardware token)<br />
<br />
<pre><br />
Enter a passcode or select one of the following options:<br />
<br />
1. Duo Push to XXX-XXX-1234<br />
2. Duo Push to iPad (iOS)<br />
3. Phone call to XXX-XXX-1234<br />
4. SMS passcodes to XXX-XXX-1234<br />
<br />
Passcode or option (1-4): 123456<br />
</pre><br />
<br />
====YubiKey tap====<br />
Simply tap the sensor on the YubiKey plugged into the device you are SSHing from; it will emit a string of characters and automatically press Enter.<br />
<br />
<pre><br />
Enter a passcode or select one of the following options:<br />
<br />
1. Duo Push to XXX-XXX-1234<br />
2. Duo Push to iPad (iOS)<br />
3. Phone call to XXX-XXX-1234<br />
4. SMS passcodes to XXX-XXX-1234<br />
<br />
Passcode or option (1-4): kffuastenhldrhfhadafdarivuntddugrvjvllddjjuget<br />
</pre><br />
<br />
==Wrapping up==<br />
After finishing your method of choice for using Duo to multi-factor authenticate, you will be logged in and can operate as normal.<br />
<pre><br />
Success. Logging you in...<br />
Last login: Wed Feb 17 12:00:00 2021 from ...<br />
[username@opensub02 ~]$<br />
</pre><br />
<br />
Subsequent SSH attempts from the window you have already connected via will not require multi-factor authentication, even if the host you are trying to SSH to is another public-facing host. This is because the connection now originates from within the UMIACS border rather than from the rest of the Internet.<br />
<pre><br />
[username@opensub02 ~]$ ssh cbcbsub00.umiacs.umd.edu<br />
username@cbcbsub00.umiacs.umd.edu's password:<br />
Last login: Wed Feb 17 11:59:00 2021 from ...<br />
[username@cbcbsub00 ~]$<br />
</pre><br />
<br />
==Considerations==<br />
Since there will now be an additional step to log in to our public-facing hosts if not using our [[VPN]], we would recommend first establishing a connection over our VPN if you anticipate needing to SSH to several different hosts or need to open several different terminal windows concurrently. As mentioned previously, you will not need an additional multi-factor authentication step when using SSH if you first connect to our VPN since it is already secured by multi-factor authentication.<br />
<br />
An alternative would be to use a [https://en.wikipedia.org/wiki/Terminal_multiplexer terminal multiplexer] such as [[Screen]] (on [[RHEL7]]) or [[Tmux]] (on RHEL8+, also available in our [[Modules | module tree]] on RHEL7) to minimize the number of times you need to multi-factor authenticate. Terminal multiplexers allow you to start several different processes out of one terminal display, and also detach from and later reattach to each of the processes.</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=Network/Troubleshooting&diff=10396Network/Troubleshooting2022-04-22T23:02:56Z<p>Jayid07: /* Trace Route */</p>
<hr />
<div>Occasionally network related issues may arise. The following outlines basic troubleshooting methods.<br />
<br />
==Check Network Connection==<br />
*Check that the ethernet cable is plugged in or that you are connected to a wireless network.<br />
*Check that you have an IP address.<br />
**On Windows: type 'ipconfig /all' into the command prompt<br />
**Linux/UNIX based hosts: type 'ifconfig' into a terminal<br />
<br />
==PING==<br />
PING is a network utility that sends ICMP packets to a specified host to test network connectivity.<br />
*Open up a command prompt. Type 'ping' followed by the hostname or IP address of the host you want to test connectivity to.<br />
bash$ ping google.com<br />
PING google.com (74.125.228.34) 56(84) bytes of data.<br />
64 bytes from iad23s06-in-f2.1e100.net (74.125.228.34): icmp_seq=1 ttl=52 time=4.52 ms<br />
64 bytes from iad23s06-in-f2.1e100.net (74.125.228.34): icmp_seq=2 ttl=52 time=4.17 ms<br />
64 bytes from iad23s06-in-f2.1e100.net (74.125.228.34): icmp_seq=3 ttl=52 time=4.41 ms<br />
64 bytes from iad23s06-in-f2.1e100.net (74.125.228.34): icmp_seq=4 ttl=52 time=4.04 ms<br />
*Linux/UNIX users may want to add the '-c' flag followed by a number to specify how many times to ping the destination host.<br />
<br />
If you are having trouble connecting to UMIACS services we would suggest you test against one of our publicly accessible [[Nexus]] hosts like nexuscml00.umiacs.umd.edu.<br />
<br />
<pre><br />
-bash-4.2$ ping nexuscml00.umiacs.umd.edu<br />
PING nexuscml00.umiacs.UMD.EDU (128.8.121.4) 56(84) bytes of data.<br />
64 bytes from nexuscml00.umiacs.umd.edu (128.8.121.4): icmp_seq=1 ttl=60 time=0.507 ms<br />
64 bytes from nexuscml00.umiacs.umd.edu (128.8.121.4): icmp_seq=2 ttl=60 time=0.335 ms<br />
64 bytes from nexuscml00.umiacs.umd.edu (128.8.121.4): icmp_seq=3 ttl=60 time=0.328 ms<br />
64 bytes from nexuscml00.umiacs.umd.edu (128.8.121.4): icmp_seq=4 ttl=60 time=0.379 ms<br />
</pre><br />
<br />
==Trace Route==<br />
The traceroute tool utilizes the IP protocol to track the route one's packets take through a network. Traceroute is useful when trying to figure out why a host is unreachable because it shows where the connection failed.<br />
*Windows:<br />
**In a command prompt type "tracert" followed by the hostname or IP address of the host you want to test the route to.<br />
C:\>tracert google.com<br />
Tracing route to google.com [74.125.228.38]<br />
over a maximum of 30 hops:<br />
1 <1 ms <1 ms <1 ms gw-486.umiacs.umd.edu [192.168.86.1] <br />
2 <1 ms <1 ms <1 ms umfirewall00.umiacs.umd.edu [128.8.120.17] <br />
3 13 ms <1 ms <1 ms avw1hub-gw.umiacs.umd.edu [128.8.120.1] <br />
4 1 ms <1 ms <1 ms vlan8.css-priv-r1.net.umd.edu [128.8.6.129] <br />
5 <1 ms <1 ms <1 ms gi6-1.css-core-r1.net.umd.edu [128.8.0.117] <br />
6 <1 ms <1 ms <1 ms gi3-2.css-fw-r1.net.umd.edu [128.8.0.82] <br />
7 1 ms 2 ms 2 ms 128.8.0.226 <br />
8 2 ms 2 ms 2 ms 107-0-84-29-static.hfc.comcastbusiness.net [107.0.84.29] <br />
9 3 ms 2 ms 2 ms xe-3-1-2-0-ar04.capitolhghts.md.bad.comcast.net [68.85.114.113] <br />
10 5 ms 6 ms 4 ms pos-5-7-0-0-cr01.ashburn.va.ibone.comcast.net [68.86.90.85] <br />
11 4 ms 4 ms 5 ms pos-0-2-0-0-pe01.ashburn.va.ibone.comcast.net [68.86.86.70] <br />
12 4 ms 6 ms 4 ms 75.149.231.62 <br />
13 42 ms 4 ms 4 ms 209.85.252.80 <br />
14 5 ms 5 ms 5 ms 72.14.238.175 <br />
15 4 ms 4 ms 5 ms iad23s06-in-f6.1e100.net [74.125.228.38] <br />
Trace complete.<br />
*Linux/UNIX:<br />
**In a terminal type 'traceroute' followed by the hostname or IP address of the host you want to test the route to.<br />
bash$ traceroute google.com<br />
traceroute: Warning: google.com has multiple addresses; using 74.125.228.72<br />
traceroute to google.com (74.125.228.72), 64 hops max, 52 byte packets<br />
1 10.109.160.1 (10.109.160.1) 1.329 ms 1.086 ms 1.232 ms<br />
2 129-2-129-129.wireless.umd.edu (129.2.129.129) 2.780 ms 1.924 ms 2.127 ms<br />
3 te1-6.css-core-r1.net.umd.edu (128.8.0.121) 2.514 ms 2.293 ms 1.863 ms<br />
4 gi3-2.css-fw-r1.net.umd.edu (128.8.0.82) 2.429 ms 2.548 ms 2.284 ms<br />
5 128.8.0.226 (128.8.0.226) 3.838 ms 3.465 ms 3.635 ms<br />
6 107-0-84-29-static.hfc.comcastbusiness.net (107.0.84.29) 3.941 ms 3.445 ms 3.303 ms<br />
7 xe-3-1-2-0-ar04.capitolhghts.md.bad.comcast.net (68.85.114.113) 3.522 ms 4.333 ms 3.441 ms<br />
8 pos-5-1-0-0-cr01.ashburn.va.ibone.comcast.net (68.86.90.241) 7.269 ms<br />
pos-4-12-0-0-cr01.newyork.ny.ibone.comcast.net (68.86.90.173) 9.025 ms 7.192 ms<br />
9 pos-0-4-0-0-pe01.ashburn.va.ibone.comcast.net (68.86.86.146) 5.236 ms 5.636 ms 6.229 ms<br />
10 75.149.231.62 (75.149.231.62) 5.186 ms 5.113 ms 6.725 ms<br />
11 209.85.252.46 (209.85.252.46) 6.836 ms 6.352 ms 78.371 ms<br />
12 72.14.238.247 (72.14.238.247) 7.823 ms 7.616 ms 8.331 ms<br />
13 iad23s07-in-f8.1e100.net (74.125.228.72) 10.798 ms 13.383 ms 7.540 ms<br />
<br />
Again, if you are having trouble connecting to UMIACS-based hosts, please try to tracert/traceroute to a publicly accessible [[Nexus]] host such as <code>nexuscml00.umiacs.umd.edu</code> or <code>nexuscfar00.umiacs.umd.edu</code>.<br />
<br />
==Other Things to consider==<br />
*Is your IP address manually set?<br />
**In order for networking to function properly, a machine must have a unique IP address on its network. In many cases IP addresses are assigned automatically via the DHCP protocol, but it is also possible to manually set an IP address. If your machine happens to have a manually set IP address that conflicts with that of another host, networking issues may arise.<br />
*Are you behind a firewall?<br />
**Some establishments use firewalls to restrict certain types of traffic both in and out of their networks. If you are having trouble accessing a specific service, this might be the case. To get around this you can connect to the VPN, or use SSH tunneling.<br />
*Other useful commands:<br />
**dig - performs DNS lookups and displays the answers that are returned from the name server(s) that were queried (see the example below)<br />
**host - a simple utility for performing DNS lookups<br />
**netstat (Linux/UNIX) - shows network status<br />
**route print (Windows) - shows network routing tables<br />
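<br />
For example, a quick address lookup with dig; the +short flag prints just the answer, and the address below matches the ping example earlier on this page:<br />
<pre><br />
$ dig +short nexuscml00.umiacs.umd.edu<br />
128.8.121.4<br />
</pre><br />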
<br />
==Change your default DNS server==<br />
Please see [[Network/Troubleshooting/DNS | DNS]] to change your default DNS server.</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=Network/Troubleshooting&diff=10395Network/Troubleshooting2022-04-22T23:01:55Z<p>Jayid07: /* PING */</p>
<hr />
<div>Occasionally network related issues may arise. The following outlines basic troubleshooting methods.<br />
<br />
==Check Network Connection==<br />
*Check that the ethernet cable is plugged in or that you are connected to a wireless network.<br />
*Check that you have an IP address.<br />
**On Windows: type 'ipconfig /all' into the command prompt<br />
**Linux/UNIX based hosts: type 'ifconfig' into a terminal<br />
<br />
==PING==<br />
PING is a network utility that sends ICMP packets to a specified host to test network connectivity.<br />
*Open up a command prompt. Type 'ping' followed by the hostname or IP address of the host you want to test connectivity to.<br />
bash$ ping google.com<br />
PING google.com (74.125.228.34) 56(84) bytes of data.<br />
64 bytes from iad23s06-in-f2.1e100.net (74.125.228.34): icmp_seq=1 ttl=52 time=4.52 ms<br />
64 bytes from iad23s06-in-f2.1e100.net (74.125.228.34): icmp_seq=2 ttl=52 time=4.17 ms<br />
64 bytes from iad23s06-in-f2.1e100.net (74.125.228.34): icmp_seq=3 ttl=52 time=4.41 ms<br />
64 bytes from iad23s06-in-f2.1e100.net (74.125.228.34): icmp_seq=4 ttl=52 time=4.04 ms<br />
*Linux/UNIX users may want to add the '-c' flag followed by a number to specify how many times to ping the destination host.<br />
<br />
If you are having trouble connecting to UMIACS services we would suggest you test against one of our publicly accessible [[Nexus]] hosts like nexuscml00.umiacs.umd.edu.<br />
<br />
<pre><br />
-bash-4.2$ ping nexuscml00.umiacs.umd.edu<br />
PING nexuscml00.umiacs.UMD.EDU (128.8.121.4) 56(84) bytes of data.<br />
64 bytes from nexuscml00.umiacs.umd.edu (128.8.121.4): icmp_seq=1 ttl=60 time=0.507 ms<br />
64 bytes from nexuscml00.umiacs.umd.edu (128.8.121.4): icmp_seq=2 ttl=60 time=0.335 ms<br />
64 bytes from nexuscml00.umiacs.umd.edu (128.8.121.4): icmp_seq=3 ttl=60 time=0.328 ms<br />
64 bytes from nexuscml00.umiacs.umd.edu (128.8.121.4): icmp_seq=4 ttl=60 time=0.379 ms<br />
</pre><br />
<br />
==Trace Route==<br />
The traceroute tool utilizes the IP protocol to track the route one's packets take through a network. Traceroute is useful when trying to figure out why a host is unreachable because it shows where the connection failed.<br />
*Windows:<br />
**In a command prompt type "tracert" followed by the hostname or IP address of the host you want to test the route to.<br />
C:\>tracert google.com<br />
Tracing route to google.com [74.125.228.38]<br />
over a maximum of 30 hops:<br />
1 <1 ms <1 ms <1 ms gw-486.umiacs.umd.edu [192.168.86.1] <br />
2 <1 ms <1 ms <1 ms umfirewall00.umiacs.umd.edu [128.8.120.17] <br />
3 13 ms <1 ms <1 ms avw1hub-gw.umiacs.umd.edu [128.8.120.1] <br />
4 1 ms <1 ms <1 ms vlan8.css-priv-r1.net.umd.edu [128.8.6.129] <br />
5 <1 ms <1 ms <1 ms gi6-1.css-core-r1.net.umd.edu [128.8.0.117] <br />
6 <1 ms <1 ms <1 ms gi3-2.css-fw-r1.net.umd.edu [128.8.0.82] <br />
7 1 ms 2 ms 2 ms 128.8.0.226 <br />
8 2 ms 2 ms 2 ms 107-0-84-29-static.hfc.comcastbusiness.net [107.0.84.29] <br />
9 3 ms 2 ms 2 ms xe-3-1-2-0-ar04.capitolhghts.md.bad.comcast.net [68.85.114.113] <br />
10 5 ms 6 ms 4 ms pos-5-7-0-0-cr01.ashburn.va.ibone.comcast.net [68.86.90.85] <br />
11 4 ms 4 ms 5 ms pos-0-2-0-0-pe01.ashburn.va.ibone.comcast.net [68.86.86.70] <br />
12 4 ms 6 ms 4 ms 75.149.231.62 <br />
13 42 ms 4 ms 4 ms 209.85.252.80 <br />
14 5 ms 5 ms 5 ms 72.14.238.175 <br />
15 4 ms 4 ms 5 ms iad23s06-in-f6.1e100.net [74.125.228.38] <br />
Trace complete.<br />
*Linux/UNIX:<br />
**In a terminal type 'traceroute' followed by the hostname or IP address of the host you want to test the route to.<br />
bash$ traceroute google.com<br />
traceroute: Warning: google.com has multiple addresses; using 74.125.228.72<br />
traceroute to google.com (74.125.228.72), 64 hops max, 52 byte packets<br />
1 10.109.160.1 (10.109.160.1) 1.329 ms 1.086 ms 1.232 ms<br />
2 129-2-129-129.wireless.umd.edu (129.2.129.129) 2.780 ms 1.924 ms 2.127 ms<br />
3 te1-6.css-core-r1.net.umd.edu (128.8.0.121) 2.514 ms 2.293 ms 1.863 ms<br />
4 gi3-2.css-fw-r1.net.umd.edu (128.8.0.82) 2.429 ms 2.548 ms 2.284 ms<br />
5 128.8.0.226 (128.8.0.226) 3.838 ms 3.465 ms 3.635 ms<br />
6 107-0-84-29-static.hfc.comcastbusiness.net (107.0.84.29) 3.941 ms 3.445 ms 3.303 ms<br />
7 xe-3-1-2-0-ar04.capitolhghts.md.bad.comcast.net (68.85.114.113) 3.522 ms 4.333 ms 3.441 ms<br />
8 pos-5-1-0-0-cr01.ashburn.va.ibone.comcast.net (68.86.90.241) 7.269 ms<br />
pos-4-12-0-0-cr01.newyork.ny.ibone.comcast.net (68.86.90.173) 9.025 ms 7.192 ms<br />
9 pos-0-4-0-0-pe01.ashburn.va.ibone.comcast.net (68.86.86.146) 5.236 ms 5.636 ms 6.229 ms<br />
10 75.149.231.62 (75.149.231.62) 5.186 ms 5.113 ms 6.725 ms<br />
11 209.85.252.46 (209.85.252.46) 6.836 ms 6.352 ms 78.371 ms<br />
12 72.14.238.247 (72.14.238.247) 7.823 ms 7.616 ms 8.331 ms<br />
13 iad23s07-in-f8.1e100.net (74.125.228.72) 10.798 ms 13.383 ms 7.540 ms<br />
<br />
Again, if you are having trouble connecting to UMIACS-based hosts, please try to tracert/traceroute to a publicly accessible host such as <code>openlab.umiacs.umd.edu</code>.<br />
<br />
==Other Things to consider==<br />
*Is your IP address manually set?<br />
**In order for networking to function properly, a machine must have a unique IP address on its network. In many cases IP addresses are assigned automatically via the DHCP protocol, but it is also possible to manually set an IP address. If your machine happens to have a manually set IP address that conflicts with that of another host, networking issues may arise.<br />
*Are you behind a firewall?<br />
**Some establishments use firewalls to restrict certain types of traffic both in and out of their networks. If you are having trouble accessing a specific service, this might be the case. To get around this you can connect to the VPN, or use SSH tunneling.<br />
*Other useful commands:<br />
**dig -performs DNS lookups and displays the answers that are returned from the name server(s) that were queried<br />
**host -simple utility for performing DNS lookups<br />
**netstat (linux/UNIX) -shows network status<br />
**route print (Windows) -shows network routing tables<br />
<br />
==Change your default DNS server==<br />
Please see [[Network/Troubleshooting/DNS | DNS]] to change your default DNS server.</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=Network/Troubleshooting&diff=10394Network/Troubleshooting2022-04-22T23:00:48Z<p>Jayid07: /* PING */</p>
<hr />
<div>Occasionally network related issues may arise. The following outlines basic troubleshooting methods.<br />
<br />
==Check Network Connection==<br />
*Check that the ethernet cable is plugged in or that you are connected to a wireless network.<br />
*Check that you have an IP address.<br />
**On Windows: type 'ipconfig /all' into the command prompt<br />
**Linux/UNIX based hosts: type 'ifconfig' into a terminal<br />
<br />
==PING==<br />
PING is a network utility that sends ICMP packets to a specified host to test network connectivity.<br />
*Open up a command prompt. Type 'ping' followed by the hostname or IP address of the host you want to test connectivity to.<br />
bash$ ping google.com<br />
PING google.com (74.125.228.34) 56(84) bytes of data.<br />
64 bytes from iad23s06-in-f2.1e100.net (74.125.228.34): icmp_seq=1 ttl=52 time=4.52 ms<br />
64 bytes from iad23s06-in-f2.1e100.net (74.125.228.34): icmp_seq=2 ttl=52 time=4.17 ms<br />
64 bytes from iad23s06-in-f2.1e100.net (74.125.228.34): icmp_seq=3 ttl=52 time=4.41 ms<br />
64 bytes from iad23s06-in-f2.1e100.net (74.125.228.34): icmp_seq=4 ttl=52 time=4.04 ms<br />
*Linux/UNIX users may want to add the '-c' flag followed by a number to specify how many times to ping the destination host.<br />
<br />
If you are having trouble connecting to UMIACS services we would suggest you test against one of our publicly accessible hosts like the [[Nexus]] nodes.<br />
<br />
<pre><br />
-bash-4.2$ ping nexuscml00.umiacs.umd.edu<br />
PING nexuscml00.umiacs.UMD.EDU (128.8.121.4) 56(84) bytes of data.<br />
64 bytes from nexuscml00.umiacs.umd.edu (128.8.121.4): icmp_seq=1 ttl=60 time=0.507 ms<br />
64 bytes from nexuscml00.umiacs.umd.edu (128.8.121.4): icmp_seq=2 ttl=60 time=0.335 ms<br />
64 bytes from nexuscml00.umiacs.umd.edu (128.8.121.4): icmp_seq=3 ttl=60 time=0.328 ms<br />
64 bytes from nexuscml00.umiacs.umd.edu (128.8.121.4): icmp_seq=4 ttl=60 time=0.379 ms<br />
</pre><br />
<br />
==Trace Route==<br />
The traceroute tool utilizes the IP protocol to track the route one's packets take through a network. Traceroute is useful when trying to figure out why a host is unreachable because it shows where the connection failed.<br />
*Windows:<br />
**In a command prompt type "tracert" followed by the hostname or IP address of the host you want to test the route to.<br />
C:\>tracert google.com<br />
Tracing route to google.com [74.125.228.38]<br />
over a maximum of 30 hops:<br />
1 <1 ms <1 ms <1 ms gw-486.umiacs.umd.edu [192.168.86.1] <br />
2 <1 ms <1 ms <1 ms umfirewall00.umiacs.umd.edu [128.8.120.17] <br />
3 13 ms <1 ms <1 ms avw1hub-gw.umiacs.umd.edu [128.8.120.1] <br />
4 1 ms <1 ms <1 ms vlan8.css-priv-r1.net.umd.edu [128.8.6.129] <br />
5 <1 ms <1 ms <1 ms gi6-1.css-core-r1.net.umd.edu [128.8.0.117] <br />
6 <1 ms <1 ms <1 ms gi3-2.css-fw-r1.net.umd.edu [128.8.0.82] <br />
7 1 ms 2 ms 2 ms 128.8.0.226 <br />
8 2 ms 2 ms 2 ms 107-0-84-29-static.hfc.comcastbusiness.net [107.0.84.29] <br />
9 3 ms 2 ms 2 ms xe-3-1-2-0-ar04.capitolhghts.md.bad.comcast.net [68.85.114.113] <br />
10 5 ms 6 ms 4 ms pos-5-7-0-0-cr01.ashburn.va.ibone.comcast.net [68.86.90.85] <br />
11 4 ms 4 ms 5 ms pos-0-2-0-0-pe01.ashburn.va.ibone.comcast.net [68.86.86.70] <br />
12 4 ms 6 ms 4 ms 75.149.231.62 <br />
13 42 ms 4 ms 4 ms 209.85.252.80 <br />
14 5 ms 5 ms 5 ms 72.14.238.175 <br />
15 4 ms 4 ms 5 ms iad23s06-in-f6.1e100.net [74.125.228.38] <br />
Trace complete.<br />
*Linux/UNIX:<br />
**In a terminal type 'traceroute' followed by the hostname or IP address of the host you want to test the route to.<br />
bash$ traceroute google.com<br />
traceroute: Warning: google.com has multiple addresses; using 74.125.228.72<br />
traceroute to google.com (74.125.228.72), 64 hops max, 52 byte packets<br />
1 10.109.160.1 (10.109.160.1) 1.329 ms 1.086 ms 1.232 ms<br />
2 129-2-129-129.wireless.umd.edu (129.2.129.129) 2.780 ms 1.924 ms 2.127 ms<br />
3 te1-6.css-core-r1.net.umd.edu (128.8.0.121) 2.514 ms 2.293 ms 1.863 ms<br />
4 gi3-2.css-fw-r1.net.umd.edu (128.8.0.82) 2.429 ms 2.548 ms 2.284 ms<br />
5 128.8.0.226 (128.8.0.226) 3.838 ms 3.465 ms 3.635 ms<br />
6 107-0-84-29-static.hfc.comcastbusiness.net (107.0.84.29) 3.941 ms 3.445 ms 3.303 ms<br />
7 xe-3-1-2-0-ar04.capitolhghts.md.bad.comcast.net (68.85.114.113) 3.522 ms 4.333 ms 3.441 ms<br />
8 pos-5-1-0-0-cr01.ashburn.va.ibone.comcast.net (68.86.90.241) 7.269 ms<br />
pos-4-12-0-0-cr01.newyork.ny.ibone.comcast.net (68.86.90.173) 9.025 ms 7.192 ms<br />
9 pos-0-4-0-0-pe01.ashburn.va.ibone.comcast.net (68.86.86.146) 5.236 ms 5.636 ms 6.229 ms<br />
10 75.149.231.62 (75.149.231.62) 5.186 ms 5.113 ms 6.725 ms<br />
11 209.85.252.46 (209.85.252.46) 6.836 ms 6.352 ms 78.371 ms<br />
12 72.14.238.247 (72.14.238.247) 7.823 ms 7.616 ms 8.331 ms<br />
13 iad23s07-in-f8.1e100.net (74.125.228.72) 10.798 ms 13.383 ms 7.540 ms<br />
<br />
Again, if you are having trouble connecting to UMIACS-based hosts, please try to tracert/traceroute to a publicly accessible host such as <code>openlab.umiacs.umd.edu</code>.<br />
<br />
==Other Things to consider==<br />
*Is your IP address manually set?<br />
**In order for networking to function properly, a machine must have a unique IP address on its network. In many cases IP addresses are assigned automatically via the DHCP protocol, but it is also possible to manually set an IP address. If your machine happens to have a manually set IP address that conflicts with that of another host, networking issues may arise.<br />
*Are you behind a firewall?<br />
**Some establishments use firewalls to restrict certain types of traffic both in and out of their networks. If you are having trouble accessing a specific service, this might be the case. To get around this, you can connect to the [[VPN]] or use SSH tunneling (a sketch is shown below this list).<br />
*Other useful commands (example invocations are shown below):<br />
**dig - performs DNS lookups and displays the answers returned from the name server(s) that were queried<br />
**host - a simple utility for performing DNS lookups<br />
**netstat (Linux/UNIX) - shows network status<br />
**route print (Windows) - shows network routing tables<br />
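<br />
For example, to look up a host's DNS records and to print the local routing table:<br />
<pre><br />
bash$ dig openlab.umiacs.umd.edu<br />
bash$ host openlab.umiacs.umd.edu<br />
bash$ netstat -rn<br />
</pre><br />
The dig output includes an ANSWER SECTION listing the records returned and a SERVER line showing which name server answered.<br />
<br />
As a sketch of the SSH tunneling mentioned above (the internal hostname, port numbers, and username are examples only, and this assumes you have SSH access to a publicly reachable host such as <code>openlab.umiacs.umd.edu</code>): forward local port 8080 to port 80 on a host behind the firewall, then browse to http://localhost:8080.<br />
<pre><br />
bash$ ssh -L 8080:internal-host.umiacs.umd.edu:80 username@openlab.umiacs.umd.edu<br />
</pre><br />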
<br />
==Change your default DNS server==<br />
Please see [[Network/Troubleshooting/DNS | DNS]] to change your default DNS server.</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=Services/CommonPool&diff=10393Services/CommonPool2022-04-22T22:53:43Z<p>Jayid07: /* Current Offerings */</p>
<hr />
<div>Common-Pool resources consist of various shared computing, infrastructure, and storage offerings; the current offerings are listed below.<br />
<br />
==Current Offerings==<br />
;[[Nexus | Nexus Cluster]]<br />
;Infrastructure as a service<br />
;Supported Shared Storage</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=Services&diff=10392Services2022-04-22T22:52:57Z<p>Jayid07: /* UMIACS Service Categories */</p>
<hr />
<div>UMIACS provides a wide range of services to help further the research of its members. The categories listed below serve as a general grouping; services not specifically mentioned in a category may also be available.<br />
<br />
==UMIACS Service Categories==<br />
<br />
;[[Services/Compute | Computational Resources]] <br />
: Workstations, Laptops, HPC, Virtualization<br />
;[[Services/CommonPool | Common-Pool Resources]]<br />
: Nexus Cluster<br />
;[[Services/Data | Data Storage and Backup]] <br />
: Services to assist with data management and distribution<br />
;[[Services/EMail | Electronic Mail]]<br />
: Locally hosted E-Mail services.<br />
;[[Services/EquipmentLoans | Equipment Loans]]<br />
: Laptops, Projectors, Data Storage<br />
;[[Services/Logistics | Logistics]] <br />
: Ordering, Procurement, Receiving<br />
;[[Services/OnSite | On-Site Services]]<br />
: Network Access, Printing<br />
;[[Services/Support | Technical Support]] <br />
: Hardware and OS support<br />
;[[Services/Collaboration | Web-based Collaborative Tools]] <br />
: Revision Control, Data Sharing<br />
;[[Services/Web | Web Hosting]]<br />
:User Webspace, Lab Webspace, Project Pages</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=Modules&diff=10391Modules2022-04-22T22:49:00Z<p>Jayid07: /* Adding Modules into your Environment */</p>
<hr />
<div>=GNU Modules=<br />
Many large institutions use the concept of Modules to load software into user environments. It provides a way to add (and later remove, if needed) environment variables that give access to the large set of software UMIACS offers on our Red Hat Enterprise Linux ([[RedHat]]) and [[Ubuntu]] platforms. This works by customizing your shell environment and is done automatically for the two major shell families (bash/sh, the default shell, and tcsh/csh). If you use an alternate shell, please source the appropriate script for your shell in <tt>/usr/share/Modules/init</tt>.<br />
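<br />
For example, a zsh user could source the corresponding init script from their shell startup file (this assumes the installation provides a zsh init script; check the directory above to see which shells are covered):<br />
<pre><br />
. /usr/share/Modules/init/zsh<br />
</pre><br />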
<br />
Initially, your module environment is empty; however, your MODULEPATH includes operating-system-specific modules, locally built software modules, and binary software modules (Matlab, Intel Compiler, etc.).<br />
<br />
==Available Software==<br />
To see if a piece of software is available, use the <tt>module avail</tt> command. You can give it a trailing prefix on the command line to search a subset of the available software.<br />
<br />
<pre><br />
[username@nexusstaff00~]$ module avail matlab<br />
<br />
---------------------------------------- /opt/common/.modulefiles -----------------------------------------<br />
matlab/2007b matlab/2008b matlab/2010a matlab/2011a matlab/2012a<br />
matlab/2008a matlab/2009b matlab/2010b matlab/2011b matlab/2012b<br />
</pre><br />
<br />
<pre><br />
[username@nexusstaff00~]$ module avail gcc<br />
<br />
-------------------------------------- /opt/local/stow/.modulefiles ---------------------------------------<br />
gcc/4.6.0 gcc/4.7.2(default) gcc/boost/1.53.0<br />
</pre><br />
<br />
A version may be marked (default); that version is loaded when no version is specified. Otherwise, the most recent version of the software is loaded.<br />
<br />
==Adding Modules into your Environment==<br />
You can simply add a module into your environment by using the <tt>module add <module></tt> command.<br />
<br />
<pre><br />
[username@nexusstaff00 ~]$ module add matlab<br />
</pre><br />
<br />
You can also request a specific version of the software when multiple versions are available.<br />
<br />
<pre><br />
[username@nexusstaff00 ~]$ module add cuda/5.0.35<br />
</pre><br />
<br />
==Listing Modules==<br />
You can list the currently loaded modules in your environment by using the '''list''' command.<br />
<br />
<pre><br />
[username@nexusstaff00 ~] $ module list<br />
Currently Loaded Modulefiles:<br />
1) R/3.1.2 2) kile/2.1 3) vim/7.4<br />
</pre><br />
<br />
==Showing a Module==<br />
You can show what the module is going to add to your environment (and the dependencies that will be added) with the '''show''' command.<br />
<br />
<pre><br />
[username@nexusstaff00 ~] $ module show fftw<br />
-------------------------------------------------------------------<br />
/opt/local/stow/.modulefiles/fftw/3.3.4:<br />
<br />
prepend-path PATH /opt/local/stow/fftw-3.3.4/bin<br />
prepend-path CPATH /opt/local/stow/fftw-3.3.4/include<br />
prepend-path LIBRARY_PATH /opt/local/stow/fftw-3.3.4/lib<br />
prepend-path LD_RUN_PATH /opt/local/stow/fftw-3.3.4/lib<br />
prepend-path MANPATH /opt/local/stow/fftw-3.3.4/share/man<br />
prepend-path PKG_CONFIG_PATH /opt/local/stow/fftw-3.3.4/lib/pkgconfig<br />
-------------------------------------------------------------------<br />
</pre><br />
<br />
==Removing Modules in your Environment==<br />
If you want to remove a module because it conflicts with another or because you want to clean up your environment, you can do so with the <tt>module rm <module></tt> command.<br />
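<br />
For example, to unload the Matlab module added earlier:<br />
<pre><br />
[username@nexusstaff00 ~]$ module rm matlab<br />
</pre><br />
Running <tt>module purge</tt> unloads all currently loaded modules at once.<br />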
<br />
==Using Modules in Scripts==<br />
To use modules within a shell script or interpreted language you will need to load a file from <tt>/usr/share/Modules/init</tt> into your program.<br />
<br />
===Bash===<br />
<pre><br />
. /usr/share/Modules/init/bash<br />
module add gcc<br />
</pre><br />
<br />
===Tcsh===<br />
<pre><br />
source /usr/share/Modules/init/tcsh<br />
module add gcc<br />
</pre><br />
<br />
==Modules in Non-Interactive Shell Sessions==<br />
In non-interactive shell sessions (non-login shells), the Modules configuration environment will not automatically load. This will also occur if the OS version of the compute node you are scheduled on is different from the OS version of the submission node you are submitting the job from.<br />
<br />
If you need to use Modules in non-interactive [[SLURM]] jobs, cross-OS jobs, or other similar sessions, you will need to include the following in your shell init scripts:<br />
<br />
===Bash===<br />
<pre><br />
. /usr/share/Modules/init/bash<br />
. /etc/profile.d/ummodules.sh<br />
</pre><br />
<br />
===Tcsh===<br />
<pre><br />
source /usr/share/Modules/init/tcsh<br />
source /etc/profile.d/ummodules.csh<br />
</pre><br />
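<br />
As an illustration, here is a minimal sketch of a bash batch script for a non-interactive [[SLURM]] job that sources the same files directly inside the script (an alternative to placing them in your shell init files); the job name, time limit, and the gcc module are examples only, and any partition/QOS options your cluster requires have been omitted:<br />
<pre><br />
#!/bin/bash<br />
#SBATCH --job-name=modules-example<br />
#SBATCH --time=00:05:00<br />
<br />
# Load the Modules environment in this non-interactive shell<br />
. /usr/share/Modules/init/bash<br />
. /etc/profile.d/ummodules.sh<br />
<br />
module add gcc<br />
gcc --version<br />
</pre><br />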
<br />
==Additional Help==<br />
You can type <tt>module</tt> with no arguments for a full list of commands or <tt>man module</tt> for further information.<br />
<br />
===Online Resources===<br />
*[http://modules.sourceforge.net/ Project Page (SourceForge)]<br />
*[http://modules.sourceforge.net/docs/Modules-Paper.pdf Introduction to Modules]<br />
*[http://sourceforge.net/p/modules/wiki/FAQ/ Modules FAQ]<br />
*[http://modules.sourceforge.net/docs/user-setup.pdf user-setup]</div>Jayid07https://wiki.umiacs.umd.edu/umiacs/index.php?title=NAGWareCompiler&diff=10386NAGWareCompiler2022-04-22T22:43:59Z<p>Jayid07: /* Example use of the compiler */</p>
<hr />
<div>__NOTOC__<br />
<br />
The NAGWare Fortran compiler is available under <tt>/opt/common/NAGWare_f95</tt>* on our supported RHEL7/Ubuntu hosts. You can load the binaries into your environment using [[Modules | GNU Modules]], reference the paths directly, or add them to your [[PATH | PATH]].<br />
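<br />
For example, to reference the compiler directly without loading a module, you could prepend the installation's bin directory to your PATH (the 5.1 path shown matches the example below; adjust it to the version you intend to use):<br />
<pre><br />
-bash-4.2$ export PATH=/opt/common/NAGWare_f95-5.1/bin:$PATH<br />
-bash-4.2$ which f95<br />
/opt/common/NAGWare_f95-5.1/bin/f95<br />
</pre><br />
Note that, unlike loading the module, this does not set the library, man page, or license server variables mentioned below.<br />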
<br />
==Example use of the compiler==<br />
<br />
Here is a very basic compilation and execution example of [http://en.wikipedia.org/wiki/Hello_world_program Hello World] using the NAGWare Fortran compiler. This example uses a GNU Module, available on all supported Linux hosts, to quickly load the binaries and libraries into your environment.<br />
<br />
First, in a working directory, create a file <tt>hello.f95</tt> with the following contents:<br />
<br />
PRINT *, "Hello World!" <br />
END<br />
<br />
Then, load the NAGWare module into your environment and verify that the compiler is in your path (libraries, man pages, and license server information will be loaded as well):<br />
<br />
-bash-4.2$ module load nagware<br />
-bash-4.2$ which f95<br />
/opt/common/NAGWare_f95-5.1/bin/f95<br />
<br />
Compile and then run the program:<br />
<br />
-bash-4.2$ f95 -o hello hello.f95 <br />
-bash-4.2$ ./hello <br />
Hello World!<br />
<br />
==See Also==<br />
* [http://www.nag.com/nagware/np/doc_index.asp NAG Compiler Documentation Index]<br />
* [http://www.nag.com/nagware/np.asp NAG Compiler Product Page]</div>Jayid07