Quota

Quotas in UMIACS are almost exclusively on our ONStor filers; since these do not support rpc.quotad, we use what are called tree quotas. You can see your quota by running the '''<tt>df</tt>''' command and looking for the path you are in.
==Tree Quotas==
The most prevalent style of quota management is tree quotas, which show up as the amount of space available in the file system. Use the '''<tt>df</tt>''' command to inspect either the current path (no arguments given) or a given path. [[NFShomes]] and many other file systems use this style of quota management.

For example, to show the quota on my /nfshomes/derektest home directory, I can just run '''<tt>df ~</tt>''':
<pre>
$ df ~
Filesystem           1K-blocks      Used Available Use% Mounted on
umiacsfs02:/nfshomes/derektest
                       1024000    49984    974016  5% /nfshomes/derektest
</pre>
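Note that '''<tt>df</tt>''' wraps its output onto a second line when the file system source name is long, as above. If you are reading or scripting against this output, the POSIX '''<tt>-P</tt>''' flag keeps each mount on a single line. A hypothetical rendering of the same numbers (exact column headers vary between df versions):
<pre>
$ df -P ~
Filesystem                     1024-blocks  Used Available Capacity Mounted on
umiacsfs02:/nfshomes/derektest     1024000 49984    974016       5% /nfshomes/derektest
</pre>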
==RPC Quotas==
Alternatively, we now have some file systems that support RPC quotad quotas. These are reported to the user by the '''<tt>quota</tt>''' command. Home directories mounted from our Dell FluidFS NAS support these kinds of quotas (/cliphomes, /hidhomes, /cfarhomes, /cbcbhomes).

To find your current quota, first run '''<tt>df .</tt>''' to see which file system you are currently mounted from (in this example it is <tt>fluidfs:/rama_cfarhomes/derek</tt>):
<pre>
# df .
Filesystem          1K-blocks      Used Available Use% Mounted on
fluidfs:/rama_cfarhomes/derek
                    1073741824 759351008 314390816  71% /cfarhomes/derek
</pre>
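The 1K-block counts above can be hard to read at a glance; most df implementations accept '''<tt>-h</tt>''' to print human-readable sizes instead. A hypothetical rendering of the same numbers (1073741824 1K-blocks is 1 TiB):
<pre>
$ df -h .
Filesystem                     Size  Used Avail Use% Mounted on
fluidfs:/rama_cfarhomes/derek  1.0T  725G  300G  71% /cfarhomes/derek
</pre>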
Once you know the file system, run '''<tt>quota</tt>''', and the line for that file system will list your quota information. If you see errors such as "Error while getting quota from ...", you may safely ignore them, as some of our file systems, such as Gluster, do not report quotas correctly.
<pre>
$ quota
Disk quotas for user derek (uid 2174):
    Filesystem  blocks  quota  limit  grace  files  quota  limit  grace
fluidfs:/rama_cfarhomes/derek
                337560      0 10240000              0      0      0
</pre>
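In the output above, '''blocks''' is your current usage, '''quota''' is the soft limit, and '''limit''' is the hard limit (here the soft limit is unset, so only the 10240000-block hard limit applies). As a minimal sketch, assuming the Linux quota tools' '''<tt>-w</tt>''' (no-wrap) flag and an illustrative 90% threshold, you could script a simple warning:
<pre>
#!/bin/sh
# Hypothetical sketch: warn when usage reaches 90% of a quota.
# Assumes quota's -w/--no-wrap flag so each file system stays on
# one line; the 90% threshold is illustrative, not a UMIACS policy.
quota -w | awk '
    NR > 2 {
        gsub(/\*/, "", $2)           # a "*" marks an exceeded soft limit
        lim = ($3 > 0) ? $3 : $4     # prefer the soft quota, else the hard limit
        if (lim > 0 && $2 / lim >= 0.9)
            printf "%s: %.0f%% of quota used\n", $1, $2 / lim * 100
    }'
</pre>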