Tensorflow
Latest revision as of 17:17, 29 April 2024
Tensorflow is a Python deep learning package from Google. The easiest way to install it is to build a Python virtualenv with it installed inside.
First, load the GPU modules to enable accelerated GPGPU training. You can find the list of Tensorflow versions and their corresponding Python/CUDA/cuDNN requirements here. [1] These modules are appropriate for tensorflow-2.16.1, the latest Tensorflow release as of April 29, 2024.
module add Python3/3.9.16 cuda/12.4.1 cudnn/v8.9.7
Next, create a virtualenv and activate it with the source command. Note that depending on the version of Tensorflow you need, you may also need to load a module for a more recent version of Python3.
$ python3 -m venv env
$ source env/bin/activate
(env) $
Then ensure you have a recent copy of pip in your virtualenv.
(env) $ pip install --upgrade pip
Then install the Tensorflow wheel through pip.
(env) $ pip install --upgrade tensorflow
Finally, start up a python shell (or install ipython through pip) and import Tensorflow.
(env)[username@hostname:/scratch0/username ] $ python
Python 3.9.16 (main, Feb 28 2023, 09:58:09)
[GCC 8.5.0 20210514 (Red Hat 8.5.0-16)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
2024-04-29 13:14:09.685077: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-04-29 13:14:09.737280: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-04-29 13:14:11.316744: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
>>> tf.__version__
'2.16.1'
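A successful import does not by itself prove that Tensorflow can see the GPU. As a quick sanity check, you can list the physical devices Tensorflow has detected; this is a minimal sketch, assuming the CUDA/cuDNN modules above are loaded. If the GPU list comes back empty, Tensorflow will silently fall back to CPU execution.

```python
import tensorflow as tf

# List the GPUs Tensorflow can see; an empty list means it will
# silently fall back to running on the CPU.
gpus = tf.config.list_physical_devices('GPU')
print("GPUs visible to Tensorflow:", gpus)
```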
You can then try a more rigorous test by running the following example. Note that you may need to export XLA_FLAGS in your shell: export XLA_FLAGS=--xla_gpu_cuda_data_dir=/opt/common/cuda/cuda-x.x.x
import tensorflow as tf

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
To use this install after you close the shell you set it up in, you will need to load the correct CUDA/cuDNN modules, export the XLA_FLAGS variable (if needed), and activate the virtualenv with the source command. This includes any time you are submitting jobs to SLURM.
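When submitting to SLURM, the same setup steps belong in the batch script itself. The following is a minimal sketch, not a definitive template: the job name, GPU request, time limit, and training script name are all placeholders, and the module versions should match the ones you used at install time.

```shell
#!/bin/bash
#SBATCH --job-name=tf-train   # placeholder job name
#SBATCH --gres=gpu:1          # request one GPU
#SBATCH --time=01:00:00       # placeholder time limit

# Load the same modules used when the virtualenv was built
module add Python3/3.9.16 cuda/12.4.1 cudnn/v8.9.7

# Export XLA_FLAGS if needed; fill in the CUDA version to match your module
# export XLA_FLAGS=--xla_gpu_cuda_data_dir=/opt/common/cuda/cuda-x.x.x

# Activate the virtualenv created above
source env/bin/activate

python train.py               # placeholder training script
```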