`Learn to edit <https://github.com/opendronemap/docs#how-to-make-your-first-contribution>`_ and help improve `this page <https://github.com/OpenDroneMap/docs/blob/publish/source/tutorials.rst>`_!

***************************************************
ClusterODM, NodeODM, SLURM, with Singularity on HPC
***************************************************

This tutorial assumes that the ClusterODM and NodeODM Singularity images are placed under the same parent folder.

Downloading and installing the images
=====================================

In this example, ClusterODM and NodeODM will be installed in $HOME/git.
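
If this folder does not exist yet, create it first (a minimal sketch; adjust the path if you keep your sources elsewhere):

::

    mkdir -p $HOME/git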

ClusterODM
----------

::

    cd $HOME/git
    git clone https://github.com/OpenDroneMap/ClusterODM
    cd ClusterODM
    singularity pull --force --disable-cache docker://opendronemap/clusterodm:latest

The ClusterODM image then needs to be "installed":

::

    singularity shell --bind $PWD:/var/www clusterodm_latest.sif

And then, in the Singularity shell:

::

    cd /var/www
    npm install --production
    exit

NodeODM
-------

::

    cd $HOME/git
    git clone https://github.com/OpenDroneMap/NodeODM
    cd NodeODM
    singularity pull --force --disable-cache docker://opendronemap/nodeodm:latest

The NodeODM image also needs to be "installed":

::

    singularity shell --bind $PWD:/var/www nodeodm_latest.sif

And then, in the Singularity shell:

::

    cd /var/www
    npm install --production
    exit

Launching
=========

Using two terminals connected to the HPC (or a terminal multiplexer such as tmux or screen), first start the NodeODM instances with a SLURM script, then start ClusterODM.

NodeODM
-------

Create a nodeodm.slurm script in $HOME/git/NodeODM with:

::

    #!/usr/bin/bash
    #SBATCH -J NodeODM
    #SBATCH --partition=ncpulong,ncpu
    #SBATCH --nodes=2
    #SBATCH --mem=10G
    #SBATCH --output logs_nodeodm-%j.out

    cd $HOME/git/NodeODM

    # Launch NodeODM on the first node, in the background
    srun --nodes=1 singularity run --bind $PWD:/var/www nodeodm_latest.sif &

    # Launch NodeODM on the second node, in the background
    srun --nodes=1 singularity run --bind $PWD:/var/www nodeodm_latest.sif &

    # Keep the job alive while both NodeODM instances run
    wait

Start this script with:

::

    sbatch $HOME/git/NodeODM/nodeodm.slurm

Logs of this script are written to $HOME/git/NodeODM/logs_nodeodm-XXX.out, where XXX is the SLURM job number.
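
To watch the NodeODM instances come up, you can follow the log (a minimal sketch; replace XXX with the job number printed by sbatch):

::

    tail -f $HOME/git/NodeODM/logs_nodeodm-XXX.out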

ClusterODM
----------

Then you can start ClusterODM on the head node with:

::

    cd $HOME/git/ClusterODM
    singularity run --bind $PWD:/var/www clusterodm_latest.sif
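
ClusterODM keeps running in the foreground of that terminal; if you are connected through a plain SSH session, you may want to wrap it in one of the multiplexers mentioned earlier (a minimal sketch using tmux; the session name is arbitrary):

::

    tmux new -s clusterodm
    cd $HOME/git/ClusterODM
    singularity run --bind $PWD:/var/www clusterodm_latest.sif
    # detach with Ctrl-b d; reattach later with: tmux attach -t clusterodm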

Connecting Nodes to ClusterODM
==============================

Use the following command to get the names of the nodes where NodeODM is running:

::

    squeue -u $USER

For example:

::

    $ squeue -u $USER
    JOBID    PARTITION  NAME     USER     ST  TIME   NODES  NODELIST(REASON)
    1829323  ncpu       NodeODM  bonaime  R   24:19  2      ncpu[015-016]

In this case, NodeODM runs on ncpu015 and ncpu016.
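
When a job spans many nodes, the compressed NODELIST can be expanded by SLURM itself (a minimal sketch, reusing the node list from the example above):

::

    scontrol show hostnames "ncpu[015-016]"
    # prints:
    # ncpu015
    # ncpu016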

Web interface
-------------

ClusterODM's administrative web interface can be used to wire the NodeODM instances to ClusterODM.
Open another shell window on your local machine and tunnel the interface from the HPC using the following command:

::

    ssh -L localhost:10000:localhost:10000 yourusername@hpc-address

Replace yourusername and hpc-address with your username and the HPC address.
This command forwards port 10000 of the HPC head node, where ClusterODM's administrative web interface is hosted, to the same port on your local machine.
After this, open a browser on your local machine and connect to http://localhost:10000.
NodeODM instances can then be added to or deleted from ClusterODM.
This is what it looks like:

.. figure:: images/clusterodm-admin-interface.png
   :alt: ClusterODM admin interface
   :align: center

telnet
------

Alternatively, you can connect to the ClusterODM CLI (port 8080 on the head node) and wire the NodeODM instances there. For the previous example:

::

    telnet localhost 8080
    > NODE ADD ncpu015 3000
    > NODE ADD ncpu016 3000
    > NODE LIST
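
When the SLURM job ends, the wired nodes go offline and can be removed from the same CLI (a hedged sketch: NODE DEL takes the node number reported by NODE LIST, and HELP prints the full command list):

::

    > NODE DEL 1
    > HELP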

Using ClusterODM and its NodeODMs
=================================

Open another shell window on your local machine and tunnel ClusterODM's proxy port from the HPC using the following command:

::

    ssh -L localhost:3000:localhost:3000 yourusername@hpc-address

Replace yourusername and hpc-address with your username and the HPC address.

After this, open a browser on your local machine and connect to http://localhost:3000.
Port 3000 is where ClusterODM exposes its NodeODM-compatible interface, so you can assign tasks there and observe their progress.

.. figure:: images/clusterodm-user-interface.png
   :alt: ClusterODM user interface
   :align: center

After adding images in this browser, you can press Start Task and watch ClusterODM assign the work to the nodes you have wired. Go for a walk and check the progress.
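
Large datasets can also be submitted from the head node instead of the browser, using ODM's split-merge options pointed at ClusterODM (a hedged sketch: --split and --sm-cluster are ODM options, but the image invocation, paths, and split size here are illustrative and assume a dataset folder with an images subfolder):

::

    singularity run --bind /path/to/datasets:/datasets docker://opendronemap/odm:latest \
        --project-path /datasets project --split 400 --sm-cluster http://localhost:3000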