README.md: Add info about nvidia-docker installation

Add links to NVIDIA documentation explaining how to setup the NVIDIA Container
Toolkit needed to run the ODM docker image with CUDA support.
pull/1227/head
Yauhen Kharuzhy 2022-08-28 00:07:10 +03:00
parent 95977a55ba
commit 2876ed760f
1 changed file with 1 addition and 1 deletion


@@ -216,7 +216,7 @@ To run a standalone installation of WebODM (the user interface), including the p
 * 100 GB free disk space
 * 16 GB RAM
-Don't expect to process more than a few hundred images with these specifications. To process larger datasets, add more RAM linearly to the number of images you want to process. A CPU with more cores will speed up processing, but can increase memory usage. GPU acceleration is also supported. To make use of your CUDA-compatible graphics card, make sure to pass `--gpu` when starting WebODM.
+Don't expect to process more than a few hundred images with these specifications. To process larger datasets, add more RAM linearly to the number of images you want to process. A CPU with more cores will speed up processing, but can increase memory usage. GPU acceleration is also supported. To make use of your CUDA-compatible graphics card, make sure to pass `--gpu` when starting WebODM. In this case you also need nvidia-docker installed; see https://github.com/NVIDIA/nvidia-docker and https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker for information on Docker/NVIDIA setup.
 WebODM runs best on Linux, but works well on Windows and Mac too. If you are technically inclined, you can get WebODM to run natively on all three platforms.
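The GPU setup described in the added line can be sketched as the following shell session. This is a hedged example, not part of the commit: it assumes a Debian/Ubuntu host with the NVIDIA driver already installed and the NVIDIA apt repository configured per the linked install guide; package names and the CUDA image tag are illustrative and may differ on your system.

```shell
# Install the NVIDIA Container Toolkit (historically packaged as nvidia-docker2)
# -- see the NVIDIA install guide linked above for the repository setup steps.
sudo apt-get update
sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker

# Sanity check: confirm containers can see the GPU
# (the CUDA image tag here is an example; pick one matching your driver).
docker run --rm --gpus all nvidia/cuda:11.4.0-base-ubuntu20.04 nvidia-smi

# Start WebODM with GPU acceleration enabled
./webodm.sh start --gpu
```

If `nvidia-smi` prints your GPU inside the container, Docker is correctly wired to the NVIDIA runtime and WebODM's `--gpu` flag should work.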