To check if you've got access to an Nvidia GPU, you can run `!nvidia-smi`, where the `!` tells the notebook to run the command in a shell:
```python
!nvidia-smi
```
#### Output

```
/bin/bash: line 1: nvidia-smi: command not found
```
As you can see, it returns a `command not found` error, which means we don't currently have GPU access in Colab.
To enable one in Colab, go to Runtime → Change runtime type and select a GPU as the hardware accelerator. Now, to check again, run the command:
```python
!nvidia-smi
```
#### Output

```
Fri May 31 04:01:18 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05             Driver Version: 535.104.05   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
...
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+
```
Woohoo! Now we have GPU access.
You can test if PyTorch has access to a GPU using `torch.cuda.is_available()`:
```python
# Check for GPU
import torch
torch.cuda.is_available()
```
#### Output

```
True
```
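Beyond a simple availability check, you can also see which CUDA version your PyTorch build was compiled against. A minimal sketch (note that `torch.version.cuda` is `None` on CPU-only builds):

```python
import torch

# CUDA version this PyTorch build was compiled against (None on CPU-only builds)
print(torch.version.cuda)
```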
Let's create a `device` variable to store what kind of device is available:
```python
# Use the GPU if it's available, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"
device
```
#### Output

```
'cuda'
```
You can count the number of GPUs PyTorch has access to using `torch.cuda.device_count()`:
```python
torch.cuda.device_count()
```
#### Output

```
1
```
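If you'd like to know which GPU(s) you actually have, you can query each device by index. A small sketch using `torch.cuda.get_device_name()`:

```python
import torch

# Print the name of every GPU PyTorch can see (indices are 0-based)
for i in range(torch.cuda.device_count()):
    print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
```

On a free Colab instance this typically prints a single entry, such as a Tesla T4.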
Now let's create a tensor and check which device it's running on:
```python
# Create a tensor (it will live on the CPU by default)
tensor = torch.tensor([1, 2, 3])

print(f"Tensor is running on: {tensor.device}")
```
#### Output

```
Tensor is running on: cpu
```
Note: By default, tensors are created on the CPU.
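If you already know the target device, you can also create a tensor directly on it by passing the `device` argument, instead of creating it on the CPU and moving it afterwards. A minimal sketch, reusing the `device` variable defined above (`tensor_direct` is just an illustrative name):

```python
# Create the tensor directly on the target device, skipping the CPU-to-GPU copy
tensor_direct = torch.tensor([1, 2, 3], device=device)
print(tensor_direct.device)  # cuda:0 when a GPU is available, otherwise cpu
```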
Alternatively, we can move an existing tensor to the GPU using the `.to(device)` method:
```python
# Move the tensor to the target device (the GPU, if available)
tensor_on_gpu = tensor.to(device)

print(f"Tensor is running on: {tensor_on_gpu.device}")
```
#### Output

```
Tensor is running on: cuda:0
```
Notice that the second tensor has `device='cuda:0'`. This means it's stored on the first available GPU (GPUs are 0-indexed; if two GPUs were available, they'd be `'cuda:0'` and `'cuda:1'`, up to `'cuda:n'`).
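On a multi-GPU machine you could target a particular card by building a `torch.device` with an explicit index. A small sketch (assuming at least one GPU is present):

```python
# Address a specific GPU by its 0-based index
gpu0 = torch.device("cuda:0")
tensor_on_first_gpu = tensor.to(gpu0)
print(tensor_on_first_gpu.device)  # cuda:0
```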
What if we want to move a tensor back to the CPU? For example, NumPy only works with CPU tensors. Let's try using the `torch.Tensor.numpy()` method on our `tensor_on_gpu`:
```python
# If the tensor is on the GPU, it can't be transformed to NumPy (this will error)
tensor_on_gpu.numpy()
```
#### Output

```
---------------------------------------------------------------------------
...
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
```
Instead, to get a tensor back to the CPU so it's usable with NumPy, we can use `Tensor.cpu()`.

This copies the tensor to CPU memory, making it usable with CPU operations:
```python
# Instead, copy the tensor back to the CPU first
tensor_back_on_cpu = tensor_on_gpu.cpu().numpy()
tensor_back_on_cpu
```
#### Output

```
array([1, 2, 3])
```
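Putting it all together, a common device-agnostic pattern is to choose the device once, move tensors to it for computation, and copy results back to the CPU when you need NumPy. A minimal end-to-end sketch of the steps covered above:

```python
import torch

# Pick the best available device once
device = "cuda" if torch.cuda.is_available() else "cpu"

# Create a tensor, move it to the device, compute, then bring the result back
x = torch.tensor([1, 2, 3]).to(device)
result = (x * 2).cpu().numpy()

print(device, result)  # e.g. cuda [2 4 6]
```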