Tensorflow
[https://www.tensorflow.org/ Tensorflow] is a [[Python]] deep learning package from Google. The easiest way to install it is in a [[PythonVirtualEnv | Python virtualenv]].


First load GPU modules to allow access to accelerated GPGPU training. You can find the list of Tensorflow versions and their corresponding Python/CUDA/cuDNN requirements here: [https://www.tensorflow.org/install/source#gpu]. The modules below are appropriate for tensorflow-2.16.1, the latest Tensorflow release as of April 29, 2024.


<pre>module add Python3/3.9.16 cuda/12.4.1 cudnn/v8.9.7</pre>
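If you want to sanity-check the environment before going further, you can confirm the modules are loaded and that a GPU is visible on the node. This is an optional check; <code>module list</code> and <code>nvidia-smi</code> are standard commands on GPU nodes, and their output will vary by host.

<pre>
$ module list
$ nvidia-smi
</pre>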


Next, create a virtualenv and activate it by sourcing its activate script. Note that depending on the version of Tensorflow you need, you may also need to load a module for a more recent version of Python3.


<pre>
$ python3 -m venv env
$ source env/bin/activate
(env) $
</pre>
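If you later want to leave the environment, or re-enter it from a fresh shell, you can deactivate and re-activate it from the same env directory. These are standard venv commands, not anything specific to this cluster.

<pre>
(env) $ deactivate
$ source env/bin/activate
(env) $
</pre>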


Then ensure you have a recent copy of pip in your virtualenv.


<pre>
(env) $ pip install --upgrade pip
</pre>
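If you want to be sure you are using the virtualenv's pip rather than a system-wide copy, a quick check is to see which pip resolves to; the path shown here is only illustrative and will depend on where you created the env.

<pre>
(env) $ which pip
/scratch0/username/env/bin/pip
</pre>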


Next, install the Tensorflow wheel through pip.


<pre>
(env) $ pip install --upgrade tensorflow
</pre>
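If you need a specific Tensorflow release rather than the latest (for example, to match a particular CUDA/cuDNN combination from the compatibility table linked above), you can pin the version in the pip install. The version number below is just an example; substitute the release you actually need.

<pre>
(env) $ pip install 'tensorflow==2.15.1'
</pre>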


Finally, start up a Python shell (or install IPython through pip) and import Tensorflow.


<pre>
(env)[username@hostname:/scratch0/username ] $ python
Python 3.9.16 (main, Feb 28 2023, 09:58:09)
[GCC 8.5.0 20210514 (Red Hat 8.5.0-16)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
2024-04-29 13:14:09.685077: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-04-29 13:14:09.737280: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-04-29 13:14:11.316744: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
>>> tf.__version__
'2.16.1'
</pre>
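To confirm that this build of Tensorflow can actually see a GPU (and not just import), you can list the physical GPU devices from the same shell. <code>tf.config.list_physical_devices</code> is part of the public TF 2.x API; the device list shown here is illustrative and depends on the node you are on.

<pre>
>>> tf.config.list_physical_devices('GPU')
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
</pre>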


You can then try a more rigorous test by running the following example. Note that you may need to export XLA_FLAGS in your shell, pointing it at the CUDA install that matches the module you loaded: <code>export XLA_FLAGS=--xla_gpu_cuda_data_dir=/opt/common/cuda/cuda-x.x.x</code>
<pre>
import tensorflow as tf
mnist = tf.keras.datasets.mnist

(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(512, activation=tf.nn.relu),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
</pre>


<b>To use this install after you close the shell you did the install in, you will need to add the correct [[CUDA]]/cuDNN modules, export the XLA_FLAGS variable (if needed), and activate the virtualenv with the source command. This includes any time you are submitting to [[SLURM]].</b>
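As a rough illustration of what that means for batch jobs, a minimal [[SLURM]] submission script might look like the sketch below. The job name, resource requests, environment path, and script name are all placeholders; adjust them for your account, partition, and the CUDA version you actually loaded.

<pre>
#!/bin/bash
#SBATCH --job-name=tf-example
#SBATCH --gres=gpu:1
#SBATCH --time=00:30:00

# Re-create the interactive setup inside the job
module add Python3/3.9.16 cuda/12.4.1 cudnn/v8.9.7
export XLA_FLAGS=--xla_gpu_cuda_data_dir=/opt/common/cuda/cuda-x.x.x
source /scratch0/username/env/bin/activate

python my_training_script.py
</pre>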
