Update Use_GPU_In_PyTorch.md

pull/1158/head
Krishna Kaushik 2024-06-13 09:31:14 +05:30 committed by GitHub
parent 9940ebde67
commit e05cc4f695
1 changed file with 19 additions and 38 deletions


@@ -22,10 +22,9 @@ To check if you've got access to a Nvidia GPU, you can run `!nvidia-smi` where t
```python
!nvidia-smi
```
-/bin/bash: line 1: nvidia-smi: command not found
+output -> /bin/bash: line 1: nvidia-smi: command not found
```
As you can see, it shows a `command not found` error, which means we don't currently have Colab GPU access.
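If you'd rather probe for the driver from Python instead of the shell, here's a minimal sketch using only the standard library. This helper is our own illustration, not part of the original notebook:

```python
# Our own sketch: check whether the nvidia-smi binary is on PATH from Python,
# so a missing driver prints a message instead of erroring mid-notebook.
import shutil

if shutil.which("nvidia-smi") is None:
    print("nvidia-smi not found -- no NVIDIA GPU/driver visible yet")
else:
    print("nvidia-smi is available, a GPU should be attached")
```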
@@ -41,9 +40,8 @@ Now to check again run the command
```python
!nvidia-smi
```
-Fri May 31 04:01:18 2024
+output -> Fri May 31 04:01:18 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05 Driver Version: 535.104.05 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
@@ -63,7 +61,7 @@ Now to check again run the command
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
```
Whoo!!! Now we have GPU access.
@@ -82,15 +80,11 @@ You can test if PyTorch has access to a GPU using `torch.cuda.is_available()`.
```python
# Check for GPU
import torch
torch.cuda.is_available()
+output -> True
```
-True
If the above outputs `True`, PyTorch can see and use the GPU; if it outputs `False`, it can't see the GPU, and in that case you'll have to go back through the installation steps.
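If it outputs `False` on a machine that should have a GPU, one useful first check (our own suggestion, not from the original notebook) is whether your PyTorch build was compiled with CUDA support at all:

```python
# Our own diagnostic sketch: a CPU-only PyTorch build can never see a GPU,
# no matter what hardware is attached.
import torch

print(torch.__version__)    # a "+cpu" suffix indicates a CPU-only build
print(torch.version.cuda)   # None on CPU-only builds, else the CUDA version
```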
Now, let's say you want to set up your code so it runs on the CPU, or on the GPU if one is available.
@@ -103,16 +97,12 @@ Let's create a device variable to store what kind of device is available.
```python
device = "cuda" if torch.cuda.is_available else "cpu"
device
+output -> 'cuda'
```
-'cuda'
If the above outputs `"cuda"`, it means we can set all of our PyTorch code to use the available CUDA device (a GPU); if it outputs `"cpu"`, our PyTorch code will stick with the CPU.
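An equivalent pattern you'll often see in PyTorch codebases wraps the string in `torch.device`; the sketch below is our own illustration of that idiom, not part of the original notebook:

```python
# Our own sketch: a torch.device object can be passed anywhere a device
# is expected, including tensor constructors.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.ones(3, device=device)  # created directly on the chosen device
print(x.device)
```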
You can count the number of GPUs PyTorch has access to using `torch.cuda.device_count()`.
@@ -120,15 +110,11 @@ You can count the number of GPUs PyTorch has access to using `torch.cuda.device_
```python
torch.cuda.device_count()
+output -> 1
```
-1
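If more than one GPU is visible, you can also ask PyTorch for each device's name. A small sketch of our own using `torch.cuda.get_device_name()`:

```python
# Our own sketch: list every GPU PyTorch can see, by index and name.
import torch

for i in range(torch.cuda.device_count()):
    print(f"cuda:{i} -> {torch.cuda.get_device_name(i)}")
```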
## 3. Putting tensors on the GPU
@@ -137,10 +123,9 @@ torch.cuda.device_count()
```python
tensor = torch.tensor([1,2,3])
print(f"Tensor is running on the :{tensor.device}")
```
-Tensor is running on: cpu
+output -> Tensor is running on: cpu
```
Note: by default, tensors are created on the CPU.
@@ -150,10 +135,9 @@ Note: By default tensors run on the 'CPU'
```python
tensor_on_gpu = tensor.to(device)
print(f"Tensor is running on:{tensor_on_gpu.device}")
```
-Tensor is running on: cuda:0
+output -> Tensor is running on: cuda:0
```
Notice the second tensor has `device='cuda:0'`; this means it's stored on the 0th GPU available (GPUs are 0-indexed; if two GPUs were available, they'd be `'cuda:0'` and `'cuda:1'` respectively, up to `'cuda:n'`).
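When several GPUs are available, you can address one explicitly by its index, and you can also create a tensor on the GPU directly instead of moving it there. This sketch is our own addition, not part of the original notebook:

```python
# Our own sketch: target a specific GPU by index. The index is guarded,
# since "cuda:0" only exists when a CUDA device is available.
import torch

if torch.cuda.is_available():
    t0 = torch.tensor([1, 2, 3], device="cuda:0")  # created on the 0th GPU
    t1 = torch.tensor([4, 5, 6]).to("cuda:0")      # created on CPU, then moved
    print(t0.device, t1.device)                    # both report cuda:0
```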
@@ -169,7 +153,8 @@ Let's try using the `torch.Tensor.numpy()` method on our `tensor_on_gpu`.
```python
# If tensor is on GPU, can't transform it to NumPy (this will error)
tensor_on_gpu.numpy()
```
+output ->
---------------------------------------------------------------------------
@@ -183,7 +168,7 @@ tensor_on_gpu.numpy()
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
```
Instead, to get a tensor back to the CPU so it's usable with NumPy, we can use `Tensor.cpu()`.
This copies the tensor to CPU memory, making it usable with CPU-based libraries like NumPy.
@@ -193,11 +178,7 @@ This copies the tensor to CPU memory so it's usable with CPUs
```python
# Instead, copy the tensor back to cpu
tensor_back_on_cpu = tensor_on_gpu.cpu().numpy()
tensor_back_on_cpu
+output -> array([1, 2, 3])
```
-array([1, 2, 3])
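To tie the steps together, here is a short end-to-end sketch of our own: pick a device, compute there, then come back to the CPU for NumPy:

```python
# Our own sketch: the full device-agnostic round trip in one place.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

tensor = torch.tensor([1, 2, 3]).to(device)  # moves only when a GPU exists
doubled = tensor * 2                         # computation happens on `device`
result = doubled.cpu().numpy()               # .cpu() is safe even on CPU tensors
print(result)                                # [2 4 6]
```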