How can I check whether TensorFlow is utilising the GPU or not?
I am currently working on a machine learning project using TensorFlow, and I want to optimize GPU utilization. How can I check whether TensorFlow is properly utilizing the available GPU resources during model training?
In Python, you can check whether TensorFlow detects and uses your GPU with the following snippet, which relies on TensorFlow being built against NVIDIA's CUDA libraries.

Here is an example in Python:
import tensorflow as tf

# Check available GPUs
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # Restrict TensorFlow to only allocate GPU memory on demand
    for gpu in gpus:
        tf.config.experimental.set_memory_growth(gpu, True)

    # Create a simple TensorFlow computation to observe GPU utilization
    with tf.device('/GPU:0'):
        a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
        b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
        c = tf.matmul(a, b)

    # In TensorFlow 2.x, eager execution runs the computation immediately,
    # so no tf.compat.v1.Session is needed to get the result
    print(c)
else:
    print("No GPU found; TensorFlow is running on the CPU.")
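As a further sanity check, you can also ask TensorFlow directly whether your build was compiled with CUDA support and log which device each operation is placed on. This is a minimal sketch using only standard TensorFlow APIs; the matmul at the end is just an arbitrary example operation to trigger placement logging:

```python
import tensorflow as tf

# Log the device (CPU or GPU) each operation is placed on
tf.debugging.set_log_device_placement(True)

# True if this TensorFlow build was compiled with CUDA support
print("Built with CUDA:", tf.test.is_built_with_cuda())

# Physical GPUs visible to TensorFlow (an empty list means CPU only)
print("GPUs:", tf.config.list_physical_devices('GPU'))

# Any op executed now will log its placement, e.g. this small matmul
x = tf.matmul(tf.ones([2, 2]), tf.ones([2, 2]))
print(x)
```

While your training script runs, you can also watch live utilization from another terminal with `nvidia-smi -l 1`, which refreshes the GPU usage readout every second.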