Updated docs for 0.6.0

pull/15/head
Piero Toffanin 2019-05-22 14:35:33 -04:00
parent af8cc7e713
commit 9d82f56ac5
9 changed files with 234 additions and 223 deletions

View file

@@ -15,7 +15,7 @@ help:
.PHONY: help Makefile
livehtml:
sphinx-autobuild --open-browser -b html "$(SOURCEDIR)" "$(BUILDDIR)"
sphinx-autobuild --open-browser -H 0.0.0.0 -b html "$(SOURCEDIR)" "$(BUILDDIR)"
deploy:
@$(SPHINXBUILD) -M html "$(SOURCEDIR)" "$(BUILDDIR)" -nW
@@ -25,4 +25,4 @@ deploy:
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

View file

@@ -14,9 +14,9 @@ copyright = '2018, OpenDroneMap'
author = 'OpenDroneMap'
# The short X.Y version
version = '0.4'
version = '0.6'
# The full version, including alpha/beta/rc tags
release = '0.4'
release = '0.6'
# -- General configuration ---------------------------------------------------

View file

@@ -8,7 +8,7 @@ OpenDroneMap relies on community contributions. You can contribute in many ways,
Community Forum
---------------
If you are looking to get involved, are stuck on a problem, or want to reach out, `the forum <http://community.opendronemap.org/>`_ is a great place to start. You may find your questions already answered or else you can find other useful tips and resources. You can also contribute your open access datasets for others to explore. It is a good place go before submitting bug reports or getting in touch with developers before writing a new feature. In addition to the forum, you can reach us on `gitter <https://gitter.im/OpenDroneMap/OpenDroneMap/>`_.
If you are looking to get involved, are stuck on a problem, or want to reach out, `the forum <https://community.opendronemap.org/>`_ is a great place to start. You may find your questions already answered, or else you can find other useful tips and resources. You can also contribute your open access datasets for others to explore. It is a good place to go before submitting bug reports, and to get in touch with developers before writing a new feature.
Reporting Bugs
--------------

View file

@@ -1,18 +0,0 @@
.. Where to download the stable/ unstable versions
.. _download:
Download
========
Stable
------
`<https://github.com/OpenDroneMap/OpenDroneMap/releases/latest>`_
Development
-----------
Use git to clone the OpenDroneMap repository::
git clone https://github.com/OpenDroneMap/OpenDroneMap.git

View file

@@ -5,10 +5,9 @@ Welcome to OpenDroneMap's documentation!
:maxdepth: 2
:caption: Contents:
download
building
installation
using
dataset
outputs
large
api
flying

View file

@@ -1,7 +1,7 @@
.. Notes and doc on building ODM
.. Notes and doc on installing ODM
Building
========
Installation
============
Hardware Recommendations
@@ -16,14 +16,47 @@ Minimum 4GB of RAM, recommended 16GB or more. Many parts of the ODM toolchain ar
Docker Installation (cross-platform)
------------------------------------
First you need to `download and install Docker <https://www.docker.com/>`_. Note for Windows users that Docker CE only works for Windows 10 Professional and Enterprise. Otherwise you should use `Docker Toolbox <https://www.docker.com/products/docker-toolbox>`_
We recommend people use `docker <https://www.docker.com>`_ for running ODM, whether you are on Windows, macOS or Linux.
You can easily pull and run a pre-built image. Start here: :ref:`docker-usage`. If you want to build your own docker image, follow the instructions below.
Install Docker on Windows
`````````````````````````
Before you start, check that your CPU supports virtualization! Sometimes this is disabled from the BIOS. The feature is often called "VT-X" and needs to be enabled. On Windows 8 or higher you can check if virtualization is enabled by opening the Task Manager --> Performance tab. The "Virtualization" field of your CPU should read "enabled". On Windows 7 you can use `this tool <http://www.microsoft.com/en-us/download/details.aspx?id=592>`_ instead.
Now we can install docker:
* If you are on Windows 10 Home, Windows 8 (any version) or Windows 7 (any version), use Docker Toolbox: https://github.com/docker/toolbox/releases/download/v18.09.3/DockerToolbox-18.09.3.exe
* If you are on Windows 10 Professional or a newer version use Docker for Windows instead: https://download.docker.com/win/stable/Docker%20for%20Windows%20Installer.exe
Install Docker on macOS/Linux
`````````````````````````````
Docker installation on macOS and Linux is straightforward.
* For macOS simply download the Docker for Mac installer from https://download.docker.com/mac/stable/Docker.dmg
* For Linux simply use the docker install script::
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
Running ODM/WebODM
``````````````````
With docker installed, see the :ref:`docker-usage` usage page on using ODM.
Before running ODM it's advised to check that Docker is allocating sufficient resources to containers. In Windows this can be done in the 'Docker for Windows' program under the 'Advanced' tab.
WebODM
``````
ODM is a command line utility. We also have a graphical interface called `WebODM <https://github.com/OpenDroneMap/WebODM>`_. To install it, make sure you have installed `git <https://git-scm.com/downloads/>`_ and `python <https://www.python.org/downloads/>`_. Then launch a shell (macOS/Linux) or ``Git Bash`` (Windows) from the start menu (**not** a command prompt/powershell) and simply type::
git clone https://github.com/OpenDroneMap/WebODM --config core.autocrlf=input --depth 1
cd WebODM/
./webodm.sh start
See the `WebODM project README <https://github.com/OpenDroneMap/WebODM>`_ for other useful commands.
Build your docker image
```````````````````````
Download and extract the latest version of ODM from `the releases page <https://github.com/OpenDroneMap/OpenDroneMap/releases/latest>`_.

View file

@@ -3,56 +3,27 @@
Splitting Large Datasets
========================
A recent ODM update (coined split-merge) introduces a new workflow for splitting up very large datasets into manageable chunks (called submodels), running the pipeline on each chunk, and then producing some merged products.
Starting with ODM version ``0.6.0`` you can split very large datasets into manageable chunks (called submodels), run the pipeline on each chunk, and then produce merged DEMs, orthophotos and point clouds. The process is referred to as "split-merge".
Why might you use the split-merge pipeline? If you have a very large number of images in your dataset, split-merge will help make the processing more manageable on a large machine. It will also alleviate the need for Ground Control. GPS information gathered from the UAV is still a long way from being accurate, and those problems only get more noticeable as datasets scale. Obviously the best results will come from survey-grade GCPs, but this is not always possible.
What split-merge doesnt solve is the ability to run large datasets on small machines. We have made strides towards reducing processing costs in general, but the goal of split-merge was specifically to solve a scaling problem, not necessarily an efficiency one.
Why might you use the split-merge pipeline? If you have a very large number of images in your dataset, split-merge will help make the processing more manageable on a large machine (it will require less memory). If you have many machines all connected to the same network you can also process the submodels in parallel, thus allowing for horizontal scaling and processing thousands of images more quickly.
Split-merge works in WebODM out of the box as long as the processing nodes support split-merge, by enabling the ``--split`` option when creating a new task.
Calibrate images
----------------
Image calibration is essential for large datasets because error propagation due to image distortion will cause a bowl effect on the models. Calibration instructions can be found at :ref:`calibration`.
Image calibration is recommended (but not required) for large datasets because error propagation due to image distortion could cause a bowl effect on the models. Calibration instructions can be found at :ref:`calibration`.
Overview
--------
Local Split-Merge
-----------------
The scripts lay inside the ``scripts/metadataset`` folder. They are:::
Splitting a dataset into more manageable submodels and sequentially processing all submodels on the same machine is easy! Just use ``--split`` and ``--split-overlap`` to set the average number of images per submodel and the overlap (in meters) between submodels, respectively::
run_all.sh
setup.py
run_matching.py
split.py
run_reconstructions.py
align.py
run_dense.py
merge.py
docker run -ti --rm -v /my/project:/datasets/code opendronemap/odm --project-path /datasets --split 400 --split-overlap 100
If you look into ``run_all.sh`` you will see that you run each of these scripts in the order above. It's really just a placeholder file we used for testing, so I will largely ignore it. Instead I will go through each step in order to explain what it does and how to run it.
Before you run the following scripts, you should set up the environment variables:::
If you already know how you want to split the dataset, you can provide that information and it will be used instead of the clustering algorithm.
RUNPATH=<Set this to your ODM base directory, eg. /home/dmb2/opendronemap>
export PYTHONPATH=$RUNPATH:$RUNPATH/SuperBuild/install/lib/python2.7/dist-packages:$RUNPATH/SuperBuild/src/opensfm:$PYTHONPATH
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$RUNPATH/SuperBuild/install/lib
And make sure you have all the parameters you want to pass in the settings.yaml file in the software root directory. You won't be able to put most of those settings into the command line.
setup.py
````````
The setup.py command initializes the dataset and writes the config file for OpenSfM. The command accepts command line parameters to configure the process.
A first group of parameters are equivalent to the standard ODM parameters and configure the feature extraction and matching: ``--resize-to``, ``--min-num-features``, ``--num-cores``, ``--matcher-neighbors``. See :ref:`arguments` for more info.
A second group of parameters controls the size and overlap of the submodels. They are equivalent to the OpenSfM parameters with the same name.
``submodel_size``: Average number of images per submodel. When splitting a large dataset into smaller submodels, images are grouped into clusters. This value regulates the number of images that each cluster should have on average. The splitting is done via K-means clustering with k set to the number of images divided by submodel_size.
``submodel_overlap``: Radius of the overlapping region between submodels in meters. To be able to align the different submodels, there needs to be some common images between the neighboring submodels. Any image that is closer to a cluster than submodel_overlap it is added to that cluster.
Finally, if you already know how you want to split the dataset, you can provide that information and it will be used instead of the clustering algorithm.
The grouping can be provided by adding a file named image_groups.txt in the main dataset folder. The file should have one line per image. Each line should have two words: first the name of the image and second the name of the group it belongs to. For example:::
The grouping can be provided by adding a file named image_groups.txt in the main dataset folder. The file should have one line per image. Each line should have two words: first the name of the image and second the name of the group it belongs to. For example::
01.jpg A
02.jpg A
@@ -60,60 +31,48 @@ The grouping can be provided by adding a file named image_groups.txt in the main
04.jpg B
05.jpg C
will create 3 submodels.::
python setup.py ~/ODMProjects/split-merge-test/
run_matching.py
```````````````
Before we split the dataset up, we have to match the images so that the software knows where to make the splits. This is done on the whole dataset, so it can take a while if there are a lot of photos.
This one takes nothing except the project path:::
python run_matching.py ~/ODMProjects/split-merge-test/
split.py
````````
Now we split the model. This will create a directory called “submodels” within which is a set of directories for each submodel. Each of these is set up like a typical ODM project folder, except the images are symlinked to the root images folder. This is an important concept to know because moving the project folder will break the symlinks if you are not careful.::
python split.py ~/ODMProjects/split-merge-test/
run_reconstructions.py
``````````````````````
Now that we have the submodels, we can run the sparse reconstruction on each. There is an optional argument in this script to run matching on each submodel. You already should have run matching above so we don't need to do it again.::
--run-matching
Here's what I ran:::
python run_reconstructions.py ~/ODMProjects/split-merge-test/
align.py
````````
Each submodel is self-referenced, so its important to realign the point clouds before getting to the next steps of the pipeline:::
python align.py ~/ODMProjects/split-merge-test/
run_dense.py
````````````
This is the one that will take a while. It basically runs the rest of the toolchain for each aligned submodel.::
python run_dense.py ~/ODMProjects/split-merge-test/
And then we wait....
merge.py
````````
The previous script generated an orthophoto for each submodel, and now we have to merge them into a single file. By default it will not overwrite the resulting TIFF so if you need to rerun, make sure you append ``--overwrite``.::
python merge.py ~/ODMProjects/split-merge-test/ --overwrite
will create 3 submodels. Make sure to pass ``--split-overlap 0`` if you manually provide an ``image_groups.txt`` file.
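A quick way to verify a grouping file before launching a run is to count its distinct groups. This is only a sketch, reusing image names from the example above:

```shell
# Write an example grouping file in the main dataset folder
# (image names and groups mirror the example above)
cat > image_groups.txt <<'EOF'
01.jpg A
02.jpg A
04.jpg B
05.jpg C
EOF

# Count the distinct groups (second column); each group becomes one submodel
awk '{ print $2 }' image_groups.txt | sort -u | wc -l
```

Here the three distinct groups (A, B, C) yield three submodels.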
Next steps
----------
This process is a pretty great start to scaling our image processing capabilities, although there is always work to be done. Overall, I would like to see the process streamlined into the standard OpenDroneMap flow. I would also like to see more merged outputs than only the GeoTIFF: the point cloud, textured mesh, and DSM/DTM for starters. Huge props to Pau and the folks at Mapillary for their amazing contributions to OpenDroneMap through their OpenSfM code. I look forward to further pushing the limits of OpenDroneMap and seeing how big a dataset we can process.
Distributed Split-Merge
-----------------------
ODM can also automatically distribute the processing of each submodel to multiple machines via `NodeODM <https://github.com/OpenDroneMap/NodeODM>`_ nodes, orchestrated via `ClusterODM <https://github.com/OpenDroneMap/ClusterODM>`_.
The first step is start ClusterODM::
docker run -ti -p 3001:3000 -p 8080:8080 opendronemap/clusterodm
Then on each machine you want to use for processing, launch a NodeODM instance via::
docker run -ti -p 3000:3000 opendronemap/nodeodm
Connect via telnet to ClusterODM and add the IP addresses/port of the machines running NodeODM::
$ telnet <cluster-odm-ip> 8080
Connected to <cluster-odm-ip>.
Escape character is '^]'.
[...]
# node add <node-odm-ip-1> 3000
# node add <node-odm-ip-2> 3000
[...]
# node list
1) <node-odm-ip-1>:3000 [online] [0/2] <version 1.5.1>
2) <node-odm-ip-2>:3000 [online] [0/2] <version 1.5.1>
Make sure you are running version ``1.5.1`` or higher of the NodeODM API.
At this point, simply use the ``--sm-cluster`` option to enable distributed split-merge::
docker run -ti --rm -v /my/project:/datasets/code opendronemap/odm --project-path /datasets --split 400 --split-overlap 100 --sm-cluster http://<cluster-odm-ip>:3001
Limitations
-----------
The 3D textured meshes are currently not being merged as part of the workflow (only point clouds, DEMs and orthophotos are). Point clouds are also currently merged using a naive merge approach, which requires a lot of memory. We have plans to use `Entwine <https://github.com/connormanning/entwine>`_ for point cloud merging in the near future.
GCPs are fully supported; however, there need to be at least 3 GCPs on each submodel for georeferencing to take place. If a submodel has fewer than 3 GCPs, a combination of the remaining GCPs + EXIF data will be used instead (which is going to be less accurate). We recommend using the ``image_groups.txt`` file to accurately control the submodel split when using GCPs.
Acknowledgments
---------------
Huge props to Pau and the folks at Mapillary for their amazing contributions to OpenDroneMap through their OpenSfM code, which is a key component of the split-merge pipeline. We look forward to further pushing the limits of OpenDroneMap and seeing how big a dataset we can process.

View file

@@ -1,7 +1,7 @@
.. Explaining the dataset structure
.. Explaining the outputs structure
Dataset Structure
=================
Outputs
=======
::
project/
@@ -43,52 +43,40 @@ Dataset Structure
│ ├── odm_orthophoto_log.txt # Log file
│ └── gdal_translate_log.txt # Log for georeferencing the png file
└── odm_dem/
├── odm_dsm.tif # Digital Surface Model Geotiff - the tops of everything
└── odm_dtm.tif # Digital Terrain Model Geotoff - the ground.
├── dsm.tif # Digital Surface Model GeoTIFF - the tops of everything
└── dtm.tif # Digital Terrain Model GeoTIFF - the ground.
Outputs
```````
Listed below are some of the useful outputs ODM produces.
3D Models (unreferenced)
^^^^^^^^^^^^^^^^^^^^^^^^
Point Cloud
^^^^^^^^^^^
``pmvs/recon0/models/option-0000.ply`` -- The point cloud file
``odm_georeferenced_model.ply/laz/csv`` -- The georeferenced point cloud in different file formats
``odm_meshing/odm_mesh.ply`` -- The meshed surface
Textured Models
^^^^^^^^^^^^^^^
3D Textured Model
^^^^^^^^^^^^^^^^^
``odm_texturing/odm_textured_model.obj`` -- The textured surface mesh
``odm_texturing/odm_textured_model_geo.obj`` -- The georeferenced and textured surface mesh
You can access the point cloud and textured meshes using MeshLab. Open MeshLab, and choose File:Import Mesh and choose your textured mesh from a location similar to the following: ``odm_texturing\odm_textured_model.obj``
Georeferencing
^^^^^^^^^^^^^^
``odm_texturing/odm_textured_model_geo.obj`` -- The georeferenced and textured surface mesh
``odm_georeferenced_model.ply/laz/csv`` -- The georeferenced point cloud in different file formats
Orthophoto
^^^^^^^^^^
``odm_orthphoto.png`` -- The orthophoto, but this is a simple png, which doesn't have any georeferencing information
``odm_orthophoto/odm_orthophoto.png`` -- The orthophoto, but this is a simple png, which doesn't have any georeferencing information
``odm_orthphoto.tif`` -- GeoTIFF Orthophoto. You can use it in QGIS as a raster layer.
``odm_orthophoto/odm_orthophoto.tif`` -- GeoTIFF Orthophoto. You can use it in QGIS as a raster layer.
DEM/DSM
DTM/DSM
^^^^^^^
DEM/DSM will only be created if the ``--dem`` or ``--dsm`` options are used.
DTM/DSM will only be created if the ``--dtm`` or ``--dsm`` options are used and will be stored in:
``odm_dem.tif``
* ``odm_dem/dtm.tif``
* ``odm_dem/dsm.tif``
``odm_dsm.tif``
2.5D Meshing Reconstruction
^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``--use-25dmesh`` option will generate two extra directories, ``odm_texturing_25d/`` and ``odm_25dgeoreferencing``. The items inside correspond to those inside the 3D meshed output folders, only they were run using the 2.5D meshing algorithm.

View file

@@ -9,44 +9,19 @@ Usage
Docker
------
There are two methods for running with docker. One pulls a pre-built image from the docker hub. This is the most reliable. You can also :ref:`build your own image <docker-installation>`. In either case, the run command is the same, what you will change is the name of the image. For the docker hub image, use ``opendronemap/opendronemap``. For an image you built yourself, use that image name (in our case, ``my_odm_image``).::
There are two methods for running with docker. One pulls a pre-built image from Docker Hub; this is the most reliable. You can also :ref:`build your own image <docker-installation>`. In either case the run command is the same; only the image name changes. For the Docker Hub image, use ``opendronemap/odm``. For an image you built yourself, use that image name (in our case, ``my_odm_image``)::
docker run -it --rm \
-v $(pwd)/images:/code/images \
-v $(pwd)/odm_texturing:/code/odm_texturing \
-v $(pwd)/odm_orthophoto:/code/odm_orthophoto \
<docker-image>
docker run -ti --rm -v /my/project:/datasets/code <my_odm_image> --project-path /datasets
``-v`` is used to connect folders in the docker container to local folders. See :doc:`dataset` for reference on the project layout.
Where ``/my/project`` is the path to your project containing an ``images`` folder (``/my/project/images``). ``-v`` is used to connect folders in the docker container to local folders. See :doc:`dataset` for reference on the project layout.
If you want to get all intermediate outputs, run the following command:::
To pass custom parameters to the ``run.py`` script, simply append them as arguments to the docker run command. For example::
docker run -it --rm \
-v $(pwd)/images:/code/images \
-v $(pwd)/odm_meshing:/code/odm_meshing \
-v $(pwd)/odm_orthophoto:/code/odm_orthophoto \
-v $(pwd)/odm_georeferencing:/code/odm_georeferencing \
-v $(pwd)/odm_texturing:/code/odm_texturing \
-v $(pwd)/opensfm:/code/opensfm \
-v $(pwd)/pmvs:/code/pmvs \
opendronemap/opendronemap
docker run -ti --rm -v /my/project:/datasets/code <my_odm_image> --project-path /datasets --resize-to 1800 --dsm
To pass in custom parameters to the run.py script, simply pass it as arguments to the docker run command. For example:::
If you want to pass in custom parameters using the settings.yaml file, you can pass it as a ``-v`` volume binding::
docker run -it --rm \
-v $(pwd)/images:/code/images \
-v $(pwd)/odm_orthophoto:/code/odm_orthophoto \
-v $(pwd)/odm_texturing:/code/odm_texturing \
opendronemap/opendronemap --resize-to 1800 --force-ccd 6.16
If you want to pass in custom parameters using the settings.yaml file, you can pass it as a -v volume binding:::
docker run -it --rm \
-v $(pwd)/images:/code/images \
-v $(pwd)/odm_orthophoto:/code/odm_orthophoto \
-v $(pwd)/odm_texturing:/code/odm_texturing \
-v $(pwd)/settings.yaml:/code/settings.yaml \
opendronemap/opendronemap
docker run -ti --rm -v $(pwd)/settings.yaml:/code/settings.yaml -v /my/project:/datasets/code <my_odm_image> --project-path /datasets --resize-to 1800 --dsm
For more information about Docker, check out their `docs <https://docs.docker.com/>`_.
@@ -88,17 +63,17 @@ Arguments::
resizes images by the largest side for opensfm. Set to
-1 to disable. Default: 2048
--end-with <string>, -e <string>
Can be one of:dataset | opensfm | slam | mve |
odm_meshing | odm_25dmeshing | mvs_texturing |
Can be one of:dataset | split | merge | opensfm | mve
| odm_filterpoints | odm_meshing | mvs_texturing |
odm_georeferencing | odm_dem | odm_orthophoto
--rerun <string>, -r <string>
Can be one of:dataset | opensfm | slam | mve |
odm_meshing | odm_25dmeshing | mvs_texturing |
Can be one of:dataset | split | merge | opensfm | mve
| odm_filterpoints | odm_meshing | mvs_texturing |
odm_georeferencing | odm_dem | odm_orthophoto
--rerun-all force rerun of all tasks
--rerun-from <string>
Can be one of:dataset | opensfm | slam | mve |
odm_meshing | odm_25dmeshing | mvs_texturing |
Can be one of:dataset | split | merge | opensfm | mve
| odm_filterpoints | odm_meshing | mvs_texturing |
odm_georeferencing | odm_dem | odm_orthophoto
--video <string> Path to the video file to process
--slam-config <string>
@@ -153,6 +128,11 @@ Arguments::
the reconstruction and a global adjustment every 100
images. Speeds up reconstruction for very large
datasets.
--mve-confidence <float: 0 <= x <= 1>
Discard points that have less than a certain
confidence threshold. This only affects dense
reconstructions performed with MVE. Higher values
discard more points. Default: 0.6
--use-3dmesh Use a full 3D mesh to compute the orthophoto instead
of a 2.5D mesh. This option is a bit faster and
provides similar results in planar areas.
@@ -166,10 +146,6 @@ Arguments::
lower memory usage. Since GSD is an estimate,
sometimes ignoring it can result in slightly better
image output quality.
--mve-confidence Discard points that have less than a certain confidence
threshold. This only affects dense reconstructions
performed with MVE. Higher values discard more points.
Default: 0.6
--mesh-size <positive integer>
The maximum vertex count of the output mesh. Default:
100000
@@ -180,7 +156,7 @@ Arguments::
--mesh-samples <float >= 1.0>
Number of points per octree node, recommended and
default value: 1.0
--mesh-point-weight <interpolation weight>
--mesh-point-weight <positive float>
This floating point value specifies the importance
that interpolation of the point samples is given in
the formulation of the screened Poisson equation. The
@@ -196,13 +172,30 @@ Arguments::
Automatically crop image outputs by creating a smooth
buffer around the dataset boundaries, shrinked by N
meters. Use 0 to disable cropping. Default: 3
--pc-classify <string>
Classify the .LAS point cloud output using a Simple
--pc-classify Classify the point cloud outputs using a Simple
Morphological Filter. You can control the behavior of
smrf by tweaking the --dem-* and --smrf-* parameters.
Default: none
this option by tweaking the --dem-* parameters.
Default: False
--pc-csv Export the georeferenced point cloud in CSV format.
Default: False
--pc-las Export the georeferenced point cloud in LAS format.
Default: False
--pc-filter <positive float>
Filters the point cloud by removing points that
deviate more than N standard deviations from the local
mean. Set to 0 to disable filtering. Default: 2.5
--smrf-scalar <positive float>
Simple Morphological Filter elevation scalar
parameter. Default: 1.25
--smrf-slope <positive float>
Simple Morphological Filter slope parameter (rise over
run). Default: 0.15
--smrf-threshold <positive float>
Simple Morphological Filter elevation threshold
parameter (meters). Default: 0.5
--smrf-window <positive float>
Simple Morphological Filter window radius parameter
(meters). Default: 18.0
--texturing-data-term <string>
Data term: [area, gmi]. Default: gmi
--texturing-nadir-weight <integer: 0 <= x <= 32>
@@ -237,11 +230,12 @@ Arguments::
--use-exif Use this tag if you have a gcp_list.txt but want to
use the exif geotags instead
--dtm Use this tag to build a DTM (Digital Terrain Model,
ground only) using a progressive morphological filter.
Check the --dem* parameters for fine tuning.
ground only) using a simple morphological filter.
Check the --dem* and --smrf* parameters for finer
tuning.
--dsm Use this tag to build a DSM (Digital Surface Model,
ground + objects) using a progressive morphological
filter. Check the --dem* parameters for fine tuning.
filter. Check the --dem* parameters for finer tuning.
--dem-gapfill-steps <positive integer>
Number of steps used to fill areas with gaps. Set to 0
to disable gap filling. Starting with a radius equal
@@ -256,21 +250,11 @@ Arguments::
Decimate the points before generating the DEM. 1 is no
decimation (full quality). 100 decimates ~99% of the
points. Useful for speeding up generation. Default=1
--smrf-scalar <positive float>
Simple Morphological Filter elevation scalar parameter.
Default: 1.25
--smrf-slope <positive float>
Simple Morphological Filter slope parameter
(rise over run)
Default: 0.15
--smrf-threshold <positive float>
Simple Morphological Filter elevation threshold
parameter (meters).
Default: 0.5
--smrf-window <positive float>'
Simple Morphological Filter window radius parameter
(meters)
Default: 18.0
--dem-euclidean-map Computes an euclidean raster map for each DEM. The map
reports the distance from each cell to the nearest
NODATA value (before any hole filling takes place).
This can be useful to isolate the areas that have been
filled. Default: False
--orthophoto-resolution <float > 0.0>
Orthophoto resolution in cm / pixel. Default: 5
--orthophoto-no-tiled
@@ -288,12 +272,34 @@ Arguments::
IF_SAFER. See GDAL specs:
https://www.gdal.org/frmt_gtiff.html for more info.
Default: IF_SAFER
--orthophoto-cutline Generates a polygon around the cropping area that cuts
the orthophoto around the edges of features. This
polygon can be useful for stitching seamless mosaics
with multiple overlapping orthophotos. Default: False
--build-overviews Build orthophoto overviews using gdaladdo.
--verbose, -v Print additional messages to the console Default:
False
--time Generates a benchmark file with runtime info Default:
False
--version Displays version number and exits.
--split <positive integer>
Average number of images per submodel. When splitting
a large dataset into smaller submodels, images are
grouped into clusters. This value regulates the number
of images that each cluster should have on average.
--split-overlap <positive integer>
Radius of the overlap between submodels. After
grouping images into clusters, images that are closer
than this radius to a cluster are added to the
cluster. This is done to ensure that neighboring
submodels overlap.
--sm-cluster <string>
URL to a nodeodm-proxy instance for distributing a
split-merge workflow on multiple nodes in parallel.
Default: None
--merge <string> Choose what to merge in the merge step in a split
dataset. By default all available outputs are merged.
Default: all
.. _ground-control-points:
@@ -328,10 +334,54 @@ The ``gcp_list.txt`` file must be created in the base of your project folder.
For good results your file should have a minimum of 15 lines after the header (5 points with 3 images to each point).
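As an illustration only: the projection header, coordinate values and image names below are invented, and a real file needs at least 15 data lines as noted above. Assuming ODM's layout of a projection header followed by six columns per point, a truncated ``gcp_list.txt`` might be sketched like this:

```shell
# Hypothetical, truncated gcp_list.txt; assumes ODM's format of a projection
# header followed by "geo_x geo_y geo_z im_x im_y image_name" on each line.
cat > gcp_list.txt <<'EOF'
WGS84 UTM 16N
544256.7 5320919.9 5.0 3044 2622 IMG_0525.jpg
544157.3 5320899.2 5.1 4193 1552 IMG_0585.jpg
544033.3 5320876.5 5.2 1606 2763 IMG_0690.jpg
EOF

# Structural check: every data line after the header should have 6 columns
awk 'NR > 1 && NF != 6 { bad = 1 } END { exit bad }' gcp_list.txt && echo "format OK"
```

A malformed line (wrong column count) makes the ``awk`` check exit non-zero, which is an easy thing to catch before starting a long run.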
Video Reconstruction (Experimental)
Tutorials
---------
Below you will find step-by-step instructions for some common use cases.
Creating High Quality Orthophotos
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Without any parameter tweaks, ODM chooses a good compromise between quality, speed and memory usage. If you want to get higher quality results, you need to tweak some parameters:
* ``--orthophoto-resolution`` is the resolution of the orthophoto in cm/pixel. Increase this value for a higher resolution result.
* ``--ignore-gsd`` is a flag that instructs ODM to skip certain memory and speed optimizations that directly affect the orthophoto. Using this flag will increase runtime and memory usage, but will produce sharper results.
* ``--texturing-nadir-weight`` should be increased to ``29-32`` in urban areas to reconstruct better edges of roofs. It should be decreased to ``0-6`` in grassy / flat areas.
* ``--texturing-data-term`` should be set to ``area`` in forest areas.
* ``--mesh-size`` should be increased to ``300000-600000`` and ``--mesh-octree-depth`` should be increased to ``10-11`` in urban areas to recreate better buildings / roofs.
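Putting these together, a run tuned for an urban scene might look like the sketch below; the mount path ``/my/project`` and the specific values are illustrative picks from the ranges above, not recommendations:

```shell
# Illustrative high-quality orthophoto run for an urban dataset.
# /my/project and the chosen flag values are examples only.
docker run -ti --rm -v /my/project:/datasets/code opendronemap/odm \
  --project-path /datasets \
  --orthophoto-resolution 2 \
  --ignore-gsd \
  --texturing-nadir-weight 30 \
  --mesh-size 300000 \
  --mesh-octree-depth 10
```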
Creating Digital Terrain Models
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
By default ODM does not create DEMs. To create a digital terrain model, make sure to pass the ``--dtm`` flag.
For DTM generation, a Simple Morphological Filter (SMRF) is used to classify points as ground vs. non-ground, and only the ground points are used. The ``smrf`` filter can be controlled via several parameters:
* ``--smrf-scalar`` scaling value. Increase this parameter for terrains with lots of height variation.
* ``--smrf-slope`` slope parameter, which is a measure of "slope tolerance". Increase this parameter for terrains with lots of height variation. Should be set to something higher than 0.1 and not higher than 1.2.
* ``--smrf-threshold`` elevation threshold. Set this parameter to the minimum height (in meters) that you expect non-ground objects to be.
* ``--smrf-window`` window radius parameter (in meters) that corresponds to the size of the largest feature (building, trees, etc.) to be removed. Should be set to a value higher than 10.
Changing these options can affect the resulting DTMs significantly. The best way to understand how the parameters affect the output is to read the original paper `An improved simple morphological filter for the terrain classification of airborne LIDAR data <https://www.researchgate.net/publication/258333806_An_Improved_Simple_Morphological_Filter_for_the_Terrain_Classification_of_Airborne_LIDAR_Data>`_ (PDF freely available).
Overall the ``--smrf-threshold`` option has the biggest impact on results.
SMRF is good at avoiding Type I errors (a small number of ground points mistakenly classified as non-ground) but only "acceptable" at avoiding Type II errors (a large number of non-ground points mistakenly classified as ground). This needs to be taken into consideration when generating DTMs that are meant to be used visually, since objects mistaken for ground look like artifacts in the final DTM.
Two other important parameters affect DEM generation:
* ``--dem-resolution`` which sets the output resolution of the DEM raster (cm/pixel)
* ``--dem-gapfill-steps`` which determines the number of progressive DEM layers to use. For urban scenes increasing this value to `4-5` can help produce better interpolation results in the areas that are left empty by the SMRF filter.
Example of how to generate a DTM::
docker run -ti --rm -v /my/project:/datasets/code <my_odm_image> --project-path /datasets --dtm --dem-resolution 2 --smrf-threshold 0.4 --smrf-window 24
Video Reconstruction (Developers Only)
---------------------------------------
**Note: This is an experimental feature**
**Note: Video reconstruction currently will not work out of the box! There's code in the project that should allow a developer to add SLAM functionality to ODM, but this feature has not been touched in a while and is currently broken.**
It is possible to build a reconstruction using a video file instead of still images. The technique for reconstructing the camera trajectory from a video is called Simultaneous Localization And Mapping (SLAM). OpenDroneMap uses the opensource `ORB_SLAM2 <https://github.com/raulmur/ORB_SLAM2>`_ library for this task.