diff --git a/contrib/machine-learning/Use_GPU_In_PyTorch.md b/contrib/machine-learning/Use_GPU_In_PyTorch.md
index 6ab15ad..2394cd6 100644
--- a/contrib/machine-learning/Use_GPU_In_PyTorch.md
+++ b/contrib/machine-learning/Use_GPU_In_PyTorch.md
@@ -22,8 +22,11 @@ To check if you've got access to a Nvidia GPU, you can run `!nvidia-smi` where t
 ```python
 !nvidia-smi
+```

-output -> /bin/bash: line 1: nvidia-smi: command not found
+#### Output
+```
+/bin/bash: line 1: nvidia-smi: command not found
 ```

-As you can see that it is showing that `command not found error` it means currently we do'nt have colab GPU access.
+As you can see, we get a `command not found` error, which means we don't currently have Colab GPU access.
@@ -40,8 +43,9 @@ Now to check again run the command
 ```python
 !nvidia-smi
+```

-output -> Fri May 31 04:01:18 2024
+#### Output
+```
+Fri May 31 04:01:18 2024
 +---------------------------------------------------------------------------------------+
 | NVIDIA-SMI 535.104.05             Driver Version: 535.104.05   CUDA Version: 12.2     |
 |-----------------------------------------+----------------------+----------------------+
@@ -61,7 +65,6 @@ output -> Fri May 31 04:01:18 2024
 |=======================================================================================|
 |  No running processes found                                                           |
 +---------------------------------------------------------------------------------------+
-
 ```

-Whoo!!! now we have GPU access
+Woohoo! Now we have GPU access.
@@ -80,8 +83,10 @@ You can test if PyTorch has access to a GPU using `torch.cuda.is_available()`.
 ```python
 # Check for GPU
 import torch
 torch.cuda.is_available()
-
-output -> True
+```
+
+#### Output
+```
+True
 ```
@@ -97,8 +102,11 @@ Let's create a device variable to store what kind of device is available.
 ```python
-device = "cuda" if torch.cuda.is_available else "cpu"
+device = "cuda" if torch.cuda.is_available() else "cpu"
 device
+```

-output -> 'cuda'
+#### Output
+```
+'cuda'
 ```
@@ -110,8 +118,11 @@ You can count the number of GPUs PyTorch has access to using `torch.cuda.device_
 ```python
 torch.cuda.device_count()
+```

-output -> 1
+#### Output
+```
+1
 ```
@@ -123,8 +134,11 @@ output -> 1
 tensor = torch.tensor([1,2,3])
-print(f"Tensor is running on the :{tensor.device}")
+print(f"Tensor is running on: {tensor.device}")
+```

-output -> Tensor is running on the :cpu
+#### Output
+```
+Tensor is running on: cpu
 ```

-Note: By default tensors run on the 'CPU'
+Note: By default, tensors are created on the CPU.
@@ -135,8 +149,11 @@ Note: By default tensors run on the 'CPU'
 tensor_on_gpu = tensor.to(device)
-print(f"Tensor is running on:{tensor_on_gpu.device}")
+print(f"Tensor is running on: {tensor_on_gpu.device}")
+```

-output -> Tensor is running on:cuda:0
+#### Output
+```
+Tensor is running on: cuda:0
 ```

-Notice the second tensor has device=`'cuda:0'`, this means it's stored on the 0th GPU available (GPUs are 0 indexed, if two GPUs were available, they'd be `'cuda:0'` and `'cuda:1'` respectively, up to `'cuda:n'`).
+Notice the second tensor has device=`'cuda:0'`. This means it's stored on the 0th available GPU (GPUs are 0-indexed; if two GPUs were available, they'd be `'cuda:0'` and `'cuda:1'`, up to `'cuda:n'`).
@@ -153,8 +170,9 @@ Let's try using the `torch.Tensor.numpy()` method on our `tensor_on_gpu`.
 ```python
 # If tensor is on GPU, can't transform it to NumPy (this will error)
 tensor_on_gpu.numpy()
+```

-output ->
+#### Output
+```

 ---------------------------------------------------------------------------
@@ -167,8 +185,7 @@ output ->
 TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
-
-```
+```

-Instead, to get a tensor back to CPU and usable with NumPy we can use Tensor.cpu().
+Instead, to get a tensor back to the CPU so it's usable with NumPy, we can use `Tensor.cpu()`.
@@ -178,7 +195,10 @@ This copies the tensor to CPU memory so it's usable with CPUs
 ```python
 # Instead, copy the tensor back to cpu
 tensor_back_on_cpu = tensor_on_gpu.cpu().numpy()
 tensor_back_on_cpu
-
-output -> array([1, 2, 3])
+```
+
+#### Output
+```
+array([1, 2, 3])
 ```
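+
+To wrap up, here is a minimal device-agnostic sketch that ties the steps above together. It uses only the calls covered in this guide, and the same script runs whether or not a GPU is available:
+
+```python
+import torch
+
+# Pick the best available device (falls back to the CPU when no GPU is present)
+device = "cuda" if torch.cuda.is_available() else "cpu"
+
+# Tensors are created on the CPU by default; move them with .to(device)
+tensor = torch.tensor([1, 2, 3]).to(device)
+print(f"Tensor is running on: {tensor.device}")
+
+# Copy back to the CPU before converting to NumPy
+array = tensor.cpu().numpy()
+print(array)
+```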