From 2876ed760fad8fa46b864e2614148f35a8e29341 Mon Sep 17 00:00:00 2001
From: Yauhen Kharuzhy
Date: Sun, 28 Aug 2022 00:07:10 +0300
Subject: [PATCH] README.md: Add info about nvidia-docker installation

Add links to NVIDIA documentation explaining how to set up the NVIDIA
Container Toolkit needed to run the ODM docker image with CUDA support.
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 641dac70..81d699f4 100644
--- a/README.md
+++ b/README.md
@@ -216,7 +216,7 @@ To run a standalone installation of WebODM (the user interface), including the p
 * 100 GB free disk space
 * 16 GB RAM
 
-Don't expect to process more than a few hundred images with these specifications. To process larger datasets, add more RAM linearly to the number of images you want to process. A CPU with more cores will speed up processing, but can increase memory usage. GPU acceleration is also supported. To make use of your CUDA-compatible graphics card, make sure to pass `--gpu` when starting WebODM.
+Don't expect to process more than a few hundred images with these specifications. To process larger datasets, add more RAM linearly to the number of images you want to process. A CPU with more cores will speed up processing, but can increase memory usage. GPU acceleration is also supported. To make use of your CUDA-compatible graphics card, make sure to pass `--gpu` when starting WebODM. In this case you also need nvidia-docker installed; see https://github.com/NVIDIA/nvidia-docker and https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker for information on the Docker/NVIDIA setup.
 
 WebODM runs best on Linux, but works well on Windows and Mac too. If you are technically inclined, you can get WebODM to run natively on all three platforms.
 
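
The setup this patch links to can be sanity-checked before starting WebODM with `--gpu`. A minimal sketch for a Linux host, assuming the driver ships `nvidia-smi` on the PATH; the `docker run --gpus all … nvidia-smi` check follows the linked NVIDIA guide, and the CUDA image tag shown is an example, not a fixed requirement:

```shell
# Sketch: check GPU prerequisites before starting WebODM with --gpu.
# Assumes a Linux host; the CUDA image tag below is an example only.

# 1. The host NVIDIA driver must be installed (nvidia-smi ships with it).
if command -v nvidia-smi >/dev/null 2>&1; then
  echo "NVIDIA driver found"
else
  echo "NVIDIA driver not found - install the driver first"
fi

# 2. With the NVIDIA Container Toolkit installed (see the links in the
#    patch), Docker should be able to see the GPU:
#      docker run --rm --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi
# 3. Then start WebODM with GPU support:
#      ./webodm.sh start --gpu
```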