Mirror of https://github.com/OpenDroneMap/docs

basic doc structure in sphinx
commit 0b7521e682

@@ -0,0 +1,2 @@
_build
venv
@@ -0,0 +1,23 @@
# Makefile for Sphinx documentation
# =================================

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
SPHINXPROJ    = OpenDroneMap
SOURCEDIR     = source
BUILDDIR      = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

livehtml:
	sphinx-autobuild -b html $(SOURCEDIR) $(BUILDDIR)/html

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
@@ -0,0 +1,32 @@
## OpenDroneMap Docs

Documentation for [OpenDroneMap](https://github.com/OpenDroneMap/OpenDroneMap).

### Development

_(on OSX)_

```bash
# install python
brew install python3
# check installation
python3 --version
# install virtualenv
pip install virtualenv
# create Python environment
virtualenv -p /usr/local/bin/python3 venv
# open the Python environment
source venv/bin/activate
# install requirements
pip install -r requirements.txt
```

More on [`virtualenv`](https://virtualenv.pypa.io/en/stable/).

Use [`sphinx-autobuild`](https://github.com/GaretJax/sphinx-autobuild) to automatically watch for changes and rebuild the html site using:

```
make livehtml
```

Go to the logged URL to access your site (e.g. `http://127.0.0.1:8000`).
To stop the server, simply press `Ctrl+C`.
@@ -0,0 +1,3 @@
Sphinx==1.7.1
sphinx-rtd-theme==0.2.4
sphinx-autobuild==0.7.1
@@ -0,0 +1,4 @@
Code Reference
==============

Coming soon!
@@ -0,0 +1,84 @@
.. Notes and doc on building ODM

Building
========


Hardware Recommendations
------------------------

OpenDroneMap is built on Ubuntu 16.04, but it can be run on other major platforms using Docker.

A minimum of 4GB of RAM is required; 16GB or more is recommended. Many parts of the ODM toolchain are parallelized, and memory requirements increase as the size of the input data increases.

.. _docker-installation:

Docker Installation (cross-platform)
------------------------------------

First you need to `download and install Docker <https://www.docker.com/>`_. Note for Windows users: Docker CE only works on Windows 10 Professional and Enterprise. Otherwise you should use `Docker Toolbox <https://www.docker.com/products/docker-toolbox>`_.

You can easily pull and run a pre-built image. Start here: :ref:`docker-usage`. If you want to build your own Docker image, follow the instructions below.

Before running ODM, it's advised to check that Docker is allocating sufficient resources to containers. In Windows this can be done in the 'Docker for Windows' program under the 'Advanced' tab.
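
As a quick sanity check (this is a generic Docker command, not something ODM-specific), you can ask the Docker daemon how many CPUs and how much memory it is making available to containers::

    docker info | grep -iE 'cpus|memory'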

Build the image
```````````````

Download and extract the latest version of ODM: :ref:`download`

In Docker Toolbox or Docker CE, navigate to your extracted ODM directory, then build the Docker image::

    cd Documents/OpenDroneMap_v0_3_1/
    docker build -t my_odm_image .

When building your own Docker image, if image size is of importance to you, you should use the ``--squash`` flag, like so::

    docker build --squash -t my_odm_image .

This will clean up intermediate steps in the Docker build process, resulting in a significantly smaller image (about half the size).

Experimental features need to be enabled in Docker to use the ``--squash`` flag. To enable them, insert the following into the file ``/etc/docker/daemon.json``::

    {
        "experimental": true
    }

After this, you must restart Docker by typing ``sudo service docker restart`` into your Linux terminal.

Once this is done, go to :ref:`docker-usage`.


.. _native-installation:

Native Installation (Ubuntu 16.04)
----------------------------------

Download and extract the latest version of ODM: :ref:`download`

The installation is simple::

    bash configure.sh install

``configure.sh`` takes up to two arguments::

    configure.sh command [n]

``command`` can be one of ``install``, ``reinstall``, ``uninstall`` or ``usage``. ``[n]`` is an optional argument that sets the number of processes for the compiler.


Setting environment variables
`````````````````````````````

Using your favorite editor, open ``~/.bashrc`` and append the following to the bottom of the file (replace /your/path/OpenDroneMap with your installation path, e.g. /home/user/OpenDroneMap)::

    export PYTHONPATH=$PYTHONPATH:/your/path/OpenDroneMap/SuperBuild/install/lib/python2.7/dist-packages
    export PYTHONPATH=$PYTHONPATH:/your/path/OpenDroneMap/SuperBuild/src/opensfm
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/your/path/OpenDroneMap/SuperBuild/install/lib

You will need to log out and back in again for the variables to take effect.
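
Alternatively, you can apply them to your current shell right away instead of logging out::

    source ~/.bashrc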

@@ -0,0 +1,168 @@
# -*- coding: utf-8 -*-
#
# Configuration file for the Sphinx documentation builder.
#
# This file does only contain a selection of the most common options. For a
# full list see the documentation:
# http://www.sphinx-doc.org/en/stable/config

# -- Project information -----------------------------------------------------

# General information about the project.
project = 'OpenDroneMap'
copyright = '2018, OpenDroneMap'
author = 'OpenDroneMap'

# The short X.Y version
version = '0.4'
# The full version, including alpha/beta/rc tags
release = '0.4'


# -- General configuration ---------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.todo',
]

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix(es) of source filenames.
# You can specify multiple suffixes as a list of strings:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'

# The master toctree document.
master_doc = 'index'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'


# -- Options for HTML output -------------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
# https://sphinx-rtd-theme.readthedocs.io/en/latest/
html_theme = 'sphinx_rtd_theme'
html_theme_path = ["_themes", ]

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
html_theme_options = {
    'canonical_url': '',
    'analytics_id': '',
    'logo_only': False,
    'display_version': True,
    'prev_next_buttons_location': 'bottom',
    # Toc options
    'collapse_navigation': False,
    'sticky_navigation': True,
    'navigation_depth': 4,
}

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
html_sidebars = {
    # '**': [
    #     'globaltoc.html',
    #     'relations.html',
    #     'searchbox.html'
    # ]
}


# -- Options for HTMLHelp output ---------------------------------------------

# Output file base name for HTML help builder.
htmlhelp_basename = 'OpenDroneMapdoc'


# -- Options for LaTeX output ------------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #
    # 'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    #
    # 'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    #
    # 'preamble': '',

    # Latex figure (float) alignment
    #
    # 'figure_align': 'htbp',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    (master_doc, 'OpenDroneMap.tex', 'OpenDroneMap Documentation',
     'OpenDroneMap', 'manual'),
]


# -- Options for manual page output ------------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    (master_doc, 'opendronemap', 'OpenDroneMap Documentation',
     [author], 1)
]


# -- Options for Texinfo output ----------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    (master_doc, 'OpenDroneMap', 'OpenDroneMap Documentation',
     author, 'OpenDroneMap', 'One line description of project.',
     'Miscellaneous'),
]

# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'https://docs.python.org/': None}

# -- Extension configuration -------------------------------------------------

# -- Options for todo extension ----------------------------------------------

# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True
@@ -0,0 +1,80 @@
.. contributing

How to contribute
=================

OpenDroneMap relies on community contributions. You can contribute in many ways, even if you are not a programmer.

Community Forum
---------------

If you are looking to get involved, are stuck on a problem, or want to reach out, `the forum <https://community.opendronemap.org/>`_ is a great place to start. You may find your question already answered, or you can find other useful tips and resources. You can also contribute your open access datasets for others to explore. It is a good place to go before submitting bug reports, and a good place to get in touch with developers before writing a new feature. In addition to the forum, you can reach us on `gitter <https://gitter.im/OpenDroneMap/OpenDroneMap/>`_.

Reporting Bugs
--------------

Bugs are tracked as GitHub issues. Please create an issue in the repository and tag it with the Bug tag.

Explain the problem and include additional details to help maintainers reproduce it:

* **Use a clear and descriptive title** for the issue to identify the problem.
* **Describe the exact steps which reproduce the problem** in as much detail as possible. For example, start by explaining how you run ODM (Docker, Vagrant, etc.), e.g. which command exactly you used in the terminal. When listing steps, **don't just say what you did, but explain how you did it.**
* **Provide specific examples to demonstrate the steps.** Include links to files or GitHub projects, or copy/pasteable snippets, which you use in those examples. If you're providing snippets in the issue, use `Markdown code blocks <https://help.github.com/articles/markdown-basics/#multiple-lines>`_.
* **Describe the behavior you observed after following the steps** and point out what exactly is the problem with that behavior.
* **Explain which behavior you expected to see instead and why.**
* **Include screenshots and animated GIFs** which show you following the described steps and clearly demonstrate the problem. You can use `this tool to record GIFs on macOS and Windows <http://www.cockos.com/licecap/>`_, and `this tool <https://github.com/colinkeenan/silentcast>`_ or `this one <https://github.com/GNOME/byzanz>`_ on Linux.
* **If the problem is related to performance,** please post your machine's specs (host and guest machine).
* **If the problem wasn't triggered by a specific action,** describe what you were doing before the problem happened and share more information using the guidelines below.

Include details about your configuration and environment:

* **Which version of ODM are you using?** A stable release? A clone of master?
* **What's the name and version of the OS you're using?**
* **Are you running ODM in a virtual machine or Docker?** If so, which VM software are you using, and which operating systems and versions are used for the host and the guest?

Template For Submitting Bug Reports
```````````````````````````````````
::

    [Short description of problem here]

    **Reproduction Steps:**

    1. [First Step]
    2. [Second Step]
    3. [Other Steps...]

    **Expected behavior:**

    [Describe expected behavior here]

    **Observed behavior:**

    [Describe observed behavior here]

    **Screenshots and GIFs**

    **ODM version:** [Enter ODM version here]
    **OS and version:** [Enter OS name and version here]

    **Additional information:**

    * Problem started happening recently, didn't happen in an older version of ODM: [Yes/No]
    * Problem can be reliably reproduced, doesn't happen randomly: [Yes/No]
    * Problem happens with all datasets and projects, not only some datasets or projects: [Yes/No]

Pull Requests
-------------

* Include screenshots and animated GIFs in your pull request whenever possible.
* Follow the PEP8 Python Style Guide.
* End files with a newline.
* Avoid platform-dependent code:

  * Use ``os.path.expanduser('~')`` to get the home directory.
  * Use ``os.path.join()`` to concatenate file paths.
  * Use ``tempfile.gettempdir()`` rather than ``/tmp`` when you need to reference the temporary directory.

* Use a plain ``return`` when returning explicitly at the end of a function, rather than ``return None``.
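
As a quick check before opening the pull request, you can run a PEP8 linter over the files you changed (``pycodestyle`` is shown here as one option; it is not a project requirement)::

    pip install pycodestyle
    pycodestyle --max-line-length=120 path/to/changed_file.py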

@@ -0,0 +1,94 @@
.. Explaining the dataset structure

Dataset Structure
=================

::

    project/
    ├── images/
    │   ├── img-1234.jpg
    │   └── ...
    ├── opensfm/                            # Tie points and camera positions, in JSON format
    │   ├── config.yaml
    │   ├── images/
    │   ├── masks/
    │   ├── gcp_list.txt
    │   ├── metadata/
    │   ├── features/
    │   ├── matches/
    │   ├── tracks.tsv
    │   ├── reconstruction.json
    │   ├── reconstruction.meshed.json
    │   ├── undistorted/
    │   ├── undistorted_tracks.json
    │   ├── undistorted_reconstruction.json
    │   └── depthmaps/
    │       └── merged.ply                  # Dense point cloud
    ├── odm_meshing/
    │   ├── odm_mesh.ply                    # A 3D mesh
    │   └── odm_meshing_log.txt             # Output of the meshing task. May point out errors.
    ├── odm_texturing/
    │   ├── odm_textured_model.obj          # Textured mesh
    │   ├── odm_textured_model_geo.obj      # Georeferenced textured mesh
    │   └── texture_N.jpg                   # Texture images used by the model
    ├── odm_georeferencing/
    │   ├── odm_georeferenced_model.ply     # A georeferenced dense point cloud
    │   ├── odm_georeferenced_model.ply.laz # LAZ format point cloud
    │   ├── odm_georeferenced_model.csv     # XYZ format point cloud
    │   ├── odm_georeferencing_log.txt      # Georeferencing log
    │   └── odm_georeferencing_utm_log.txt  # Log for the extract_utm portion
    ├── odm_orthophoto/
    │   ├── odm_orthophoto.png              # Orthophoto image (no coordinates)
    │   ├── odm_orthophoto.tif              # Orthophoto GeoTIFF
    │   ├── odm_orthophoto_log.txt          # Log file
    │   └── gdal_translate_log.txt          # Log for georeferencing the png file
    └── odm_dem/
        ├── odm_dsm.tif                     # Digital Surface Model GeoTIFF - the tops of everything
        └── odm_dtm.tif                     # Digital Terrain Model GeoTIFF - the ground

Outputs
```````
Listed below are some of the useful outputs ODM produces.

3D Models (unreferenced)
^^^^^^^^^^^^^^^^^^^^^^^^

``pmvs/recon0/models/option-0000.ply`` -- The point cloud file

``odm_meshing/odm_mesh.ply`` -- The meshed surface

Textured Models
^^^^^^^^^^^^^^^

``odm_texturing/odm_textured_model.obj`` -- The textured surface mesh

You can access the point cloud and textured meshes using MeshLab. Open MeshLab, choose File > Import Mesh, and select your textured mesh from a location similar to ``odm_texturing\odm_textured_model.obj``.

Georeferencing
^^^^^^^^^^^^^^

``odm_texturing/odm_textured_model_geo.obj`` -- The georeferenced and textured surface mesh

``odm_georeferenced_model.ply/laz/csv`` -- The georeferenced point cloud in different file formats

Orthophoto
^^^^^^^^^^

``odm_orthophoto.png`` -- The orthophoto. This is a simple PNG, which doesn't have any georeferencing information.

``odm_orthophoto.tif`` -- GeoTIFF orthophoto. You can use it in QGIS as a raster layer.
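
If you have GDAL installed, a quick way to confirm that the GeoTIFF is georeferenced is ``gdalinfo``, which prints its coordinate system and corner coordinates (run from the project folder)::

    gdalinfo odm_orthophoto/odm_orthophoto.tif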

DEM/DSM
^^^^^^^

DEMs/DSMs will only be created if the ``--dtm`` or ``--dsm`` options are used.

``odm_dem/odm_dtm.tif``

``odm_dem/odm_dsm.tif``

2.5D Meshing Reconstruction
^^^^^^^^^^^^^^^^^^^^^^^^^^^

The ``--use-25dmesh`` option will generate two extra directories, ``odm_texturing_25d/`` and ``odm_25dgeoreferencing/``. The items inside correspond to those inside the 3D meshed output folders, except that they were produced using the 2.5D meshing algorithm.
@@ -0,0 +1,18 @@
.. Where to download the stable/unstable versions

.. _download:

Download
========

Stable
------
`<https://github.com/OpenDroneMap/OpenDroneMap/releases/latest>`_

Development
-----------

Use git to clone the OpenDroneMap repository::

    git clone https://github.com/OpenDroneMap/OpenDroneMap.git

@@ -0,0 +1,4 @@
Flying tips
===========

Coming soon!
@@ -0,0 +1,16 @@
Welcome to OpenDroneMap's documentation!
========================================

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   download
   building
   using
   dataset
   large
   api
   flying
   contributing
@@ -0,0 +1,110 @@
.. large

Splitting Large Datasets
========================

A recent ODM update (coined split-merge) introduces a new workflow for splitting very large datasets into manageable chunks (called submodels), running the pipeline on each chunk, and then producing merged products.

Why might you use the split-merge pipeline? If you have a very large number of images in your dataset, split-merge will make processing more manageable on a large machine. It will also alleviate the need for ground control. GPS information gathered from the UAV is still a long way from being accurate, and those problems only get more noticeable as datasets scale. Obviously the best results will come from survey-grade GCPs, but this is not always possible.

What split-merge doesn't solve is the ability to run large datasets on small machines. We have made strides towards reducing processing costs in general, but the goal of split-merge was specifically to solve a scaling problem, not necessarily an efficiency one.


Calibrate images
----------------

Image calibration is essential for large datasets because error propagation due to image distortion will cause a bowl effect on the models. Calibration instructions can be found at :ref:`camera-calibration`.

Overview
--------

The scripts live inside the ``scripts/metadataset`` folder. They are::

    run_all.sh
    setup.py
    run_matching.py
    split.py
    run_reconstructions.py
    align.py
    run_dense.py
    merge.py

If you look into ``run_all.sh`` you will see that it simply runs each of these scripts in the order above. It's really just a placeholder file we used for testing, so I will largely ignore it. Instead I will go through each step in order to explain what it does and how to run it.
Before you run the following scripts, you should set up the environment variables::

    RUNPATH=<Set this to your ODM base directory, eg. /home/dmb2/opendronemap>
    export PYTHONPATH=$RUNPATH:$RUNPATH/SuperBuild/install/lib/python2.7/dist-packages:$RUNPATH/SuperBuild/src/opensfm:$PYTHONPATH
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$RUNPATH/SuperBuild/install/lib

Also make sure you have set all the parameters you want to pass in the ``settings.yaml`` file in the software root directory. You won't be able to put most of those settings into the command line.

setup.py
````````

This script sets up the metadataset/submodel structure. It takes some arguments::

    <path-to-project>
    --resize-to n
    --min-num-features n
    --num-cores n
    --matcher-neighbors n
    --submodel-size n
    --submodel-overlap float

``<path-to-project>`` is where the data lies, set up like an ODM project (images should be in an "images" subdirectory), and ``n`` in each of the other parameters is an integer. See https://github.com/OpenDroneMap/OpenDroneMap/wiki/Run-Time-Parameters for more info on the first four. The ``--submodel-size`` parameter sets how many images are put in each submodel cluster on average; the default is 80 images. It is good to keep the size small because it decreases the bowling effect overall. There also needs to be sufficient overlap between clusters for aligning and merging the models: the ``--submodel-overlap`` parameter determines how large a radius (in meters) around each cluster to include neighboring images. ::

    python setup.py ~/ODMProjects/split-merge-test/

run_matching.py
```````````````

Before we split the dataset up, we have to match the images so that the software knows where to make the splits. This is done on the whole dataset, so it can take a while if there are a lot of photos.
This one takes nothing except the project path::

    python run_matching.py ~/ODMProjects/split-merge-test/

split.py
````````

Now we split the model. This will create a directory called "submodels", within which is a set of directories for each submodel. Each of these is set up like a typical ODM project folder, except the images are symlinked to the root images folder. This is an important concept to know, because moving the project folder will break the symlinks if you are not careful. ::

    python split.py ~/ODMProjects/split-merge-test/

run_reconstructions.py
``````````````````````

Now that we have the submodels, we can run the sparse reconstruction on each. There is an optional argument in this script to run matching on each submodel; since you already ran matching above, we don't need to do it again::

    --run-matching

Here's what I ran::

    python run_reconstructions.py ~/ODMProjects/split-merge-test/

align.py
````````

Each submodel is self-referenced, so it's important to realign the point clouds before getting to the next steps of the pipeline::

    python align.py ~/ODMProjects/split-merge-test/

run_dense.py
````````````

This is the one that will take a while. It basically runs the rest of the toolchain for each aligned submodel. ::

    python run_dense.py ~/ODMProjects/split-merge-test/

And then we wait....

merge.py
````````

The previous script generated an orthophoto for each submodel, and now we have to merge them into a single file. By default it will not overwrite the resulting TIFF, so if you need to rerun, make sure you append ``--overwrite``::

    python merge.py ~/ODMProjects/split-merge-test/ --overwrite
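
For reference, a full pass over a project simply chains the scripts in the order listed above. This is only a sketch of the same commands shown in each step, with the environment variables from the Overview section already exported::

    PROJECT=~/ODMProjects/split-merge-test/
    python setup.py $PROJECT
    python run_matching.py $PROJECT
    python split.py $PROJECT
    python run_reconstructions.py $PROJECT
    python align.py $PROJECT
    python run_dense.py $PROJECT
    python merge.py $PROJECT --overwrite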

Next steps
----------

This process is a pretty great start to scaling our image processing capabilities, although there is always work to be done. Overall, I would like to see the process streamlined into the standard OpenDroneMap flow. I would also like to see more merged outputs than only the GeoTIFF: the point cloud, textured mesh, and DSM/DTM for starters. Huge props to Pau and the folks at Mapillary for their amazing contributions to OpenDroneMap through their OpenSfM code. I look forward to further pushing the limits of OpenDroneMap and seeing how big a dataset we can process.
@@ -0,0 +1,487 @@
.. Usage

Usage
=====


.. _docker-usage:

Docker
------

There are two methods for running with Docker. One pulls a pre-built image from Docker Hub; this is the most reliable. You can also :ref:`build your own image <docker-installation>`. In either case the run command is the same; what you will change is the name of the image. For the Docker Hub image, use ``opendronemap/opendronemap``. For an image you built yourself, use that image name (in our case, ``my_odm_image``)::

    docker run -it --rm \
        -v $(pwd)/images:/code/images \
        -v $(pwd)/odm_texturing:/code/odm_texturing \
        -v $(pwd)/odm_orthophoto:/code/odm_orthophoto \
        <docker-image>

``-v`` is used to connect folders in the Docker container to local folders. See :doc:`dataset` for reference on the project layout.

If you want to get all intermediate outputs, run the following command::

    docker run -it --rm \
        -v $(pwd)/images:/code/images \
        -v $(pwd)/odm_meshing:/code/odm_meshing \
        -v $(pwd)/odm_orthophoto:/code/odm_orthophoto \
        -v $(pwd)/odm_georeferencing:/code/odm_georeferencing \
        -v $(pwd)/odm_texturing:/code/odm_texturing \
        -v $(pwd)/opensfm:/code/opensfm \
        -v $(pwd)/pmvs:/code/pmvs \
        opendronemap/opendronemap

To pass custom parameters to the run.py script, simply append them as arguments to the docker run command. For example::

    docker run -it --rm \
        -v $(pwd)/images:/code/images \
        -v $(pwd)/odm_orthophoto:/code/odm_orthophoto \
        -v $(pwd)/odm_texturing:/code/odm_texturing \
        opendronemap/opendronemap --resize-to 1800 --force-ccd 6.16

If you want to pass in custom parameters using the settings.yaml file, you can pass it as a ``-v`` volume binding::

    docker run -it --rm \
        -v $(pwd)/images:/code/images \
        -v $(pwd)/odm_orthophoto:/code/odm_orthophoto \
        -v $(pwd)/odm_texturing:/code/odm_texturing \
        -v $(pwd)/settings.yaml:/code/settings.yaml \
        opendronemap/opendronemap

For more information about Docker, check out their `docs <https://docs.docker.com/>`_.

.. _native-usage:

Native
------


The first thing you need to do is set the project path. Edit the ``settings.yaml`` file to add your projects folder::

    # This line is really important to set up properly
    project_path: '' # Example: '/home/user/ODMProjects'

    # The rest of the settings will default to the values set unless you uncomment and change them
    #resize_to: 2400

You must change ``project_path: ''`` to an absolute path to somewhere on your machine. Whenever you run a new project, it will be saved there.

To use OpenDroneMap, run the following command::

    python run.py --images </path/to/images> [arguments] <project-name>

Then sit back, grab a coffee and wait. You only have to specify ``--images </path/to/images>`` on the first run.
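
On later runs you can jump back into the middle of the pipeline instead of starting over. For example, to regenerate everything from the meshing stage onward (see the argument list below), you could run::

    python run.py --rerun-from odm_meshing <project-name>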

.. _arguments:

Additional Arguments
````````````````````

Args::

    -h, --help            show this help message and exit
    --images <path>, -i <path>
                          Path to input images
    --project-path <path>
                          Path to the project folder
    --resize-to <integer>
                          Resizes images by the largest side for OpenSfM. Set
                          to -1 to disable. Default: 2048
    --start-with <string>, -s <string>
                          Can be one of: dataset | opensfm | slam | cmvs | pmvs
                          | odm_meshing | odm_25dmeshing | mvs_texturing |
                          odm_georeferencing | odm_dem | odm_orthophoto
    --end-with <string>, -e <string>
                          Can be one of: dataset | opensfm | slam | cmvs | pmvs
                          | odm_meshing | odm_25dmeshing | mvs_texturing |
                          odm_georeferencing | odm_dem | odm_orthophoto
    --rerun <string>, -r <string>
                          Can be one of: dataset | opensfm | slam | cmvs | pmvs
                          | odm_meshing | odm_25dmeshing | mvs_texturing |
                          odm_georeferencing | odm_dem | odm_orthophoto
    --rerun-all           Force rerun of all tasks
    --rerun-from <string>
                          Can be one of: dataset | opensfm | slam | cmvs | pmvs
                          | odm_meshing | odm_25dmeshing | mvs_texturing |
                          odm_georeferencing | odm_dem | odm_orthophoto
    --video <string>      Path to the video file to process
    --slam-config <string>
                          Path to config file for orb-slam
    --force-focal <positive float>
                          Override the focal length information for the images
    --proj <PROJ4 string>
                          Projection used to transform the model into
                          geographic coordinates
    --force-ccd <positive float>
                          Override the ccd width information for the images
    --min-num-features <integer>
                          Minimum number of features to extract per image. More
                          features lead to better results but slower execution.
                          Default: 8000
    --matcher-neighbors <integer>
                          Number of nearest images to pre-match based on GPS
                          exif data. Set to 0 to skip pre-matching. Neighbors
                          works together with the Distance parameter; set both
                          to 0 to not use pre-matching. OpenSfM uses both
                          parameters at the same time, Bundler uses only the one
                          which has a value, preferring the Neighbors parameter.
                          Default: 8
    --matcher-distance <integer>
                          Distance threshold in meters to find pre-matching
                          images based on GPS exif data. Set both matcher-
                          neighbors and this to 0 to skip pre-matching. Default:
                          0
    --use-fixed-camera-params
                          Turn off camera parameter optimization during bundler
    --opensfm-processes <positive integer>
                          The maximum number of processes to use in dense
                          reconstruction. Default: 16
    --use-hybrid-bundle-adjustment
                          Run local bundle adjustment for every image added to
                          the reconstruction and a global adjustment every 100
                          images. Speeds up reconstruction for very large
                          datasets.
    --use-25dmesh         Use a 2.5D mesh to compute the orthophoto. This option
                          tends to provide better results for planar surfaces.
                          Experimental.
    --use-pmvs            Use pmvs as an alternative way to compute the point
                          cloud
    --cmvs-maxImages <integer>
                          The maximum number of images per cluster. Default: 500
    --pmvs-level <positive integer>
                          The level in the image pyramid that is used for the
                          computation. See
                          http://www.di.ens.fr/pmvs/documentation.html for more
                          pmvs documentation. Default: 1
    --pmvs-csize <positive integer>
                          Cell size controls the density of reconstructions.
                          Default: 2
    --pmvs-threshold <float: -1.0 <= x <= 1.0>
                          A patch reconstruction is accepted as a success and
                          kept if its associated photometric consistency measure
                          is above this threshold. Default: 0.7
    --pmvs-wsize <positive integer>
                          pmvs samples wsize x wsize pixel colors from each
                          image to compute the photometric consistency score.
                          For example, when wsize=7, 7x7=49 pixel colors are
                          sampled in each image. Increasing the value leads to
                          more stable reconstructions, but the program becomes
                          slower. Default: 7
    --pmvs-min-images <positive integer>
                          Each 3D point must be visible in at least this many
                          images to be reconstructed. 3 is suggested in
                          general. Default: 3
    --pmvs-num-cores <positive integer>
                          The maximum number of cores to use in dense
                          reconstruction. Default: 16
    --mesh-size <positive integer>
                          The maximum vertex count of the output mesh. Default:
                          100000
    --mesh-octree-depth <positive integer>
                          Octree depth used in the mesh reconstruction;
                          increase to get more vertices. Recommended values are
                          8-12. Default: 9
    --mesh-samples <float >= 1.0>
                          Number of points per octree node. Recommended and
                          default value: 1.0
    --mesh-solver-divide <positive integer>
                          Octree depth at which the Laplacian equation is
                          solved in the surface reconstruction step. Increasing
                          this value increases computation times slightly but
                          helps reduce memory usage. Default: 9
    --mesh-neighbors <positive integer>
                          Number of neighbors to select when estimating the
                          surface model used to compute the mesh and for
                          statistical outlier removal. Higher values lead to
                          smoother meshes but take longer to process. Applies to
                          2.5D mesh only. Default: 24
    --mesh-resolution <positive float>
                          Size of the interpolated surface model used for
                          deriving the 2.5D mesh, expressed in pixels per meter.
                          Higher values work better for complex or urban
                          terrains. Lower values work better on flat areas.
                          Resolution has no effect on the number of vertices,
                          but high values can severely impact runtime speed and
                          memory usage. When set to zero, the program
                          automatically attempts to find a good value based on
                          the point cloud extent and target vertex count.
                          Applies to 2.5D mesh only. Default: 0
    --fast-orthophoto     Skips dense reconstruction and 3D model generation. It
                          generates an orthophoto directly from the sparse
                          reconstruction. If you just need an orthophoto and do
                          not need a full 3D model, turn on this option.
                          Experimental.
    --crop <positive float>
                          Automatically crop image outputs by creating a smooth
                          buffer around the dataset boundaries, shrunk by N
                          meters. Use 0 to disable cropping. Default: 3
    --pc-classify <string>
                          Classify the .LAS point cloud output using either a
                          Simple Morphological Filter or a Progressive
                          Morphological Filter. If --dtm is set this parameter
                          defaults to smrf. You can control the behavior of both
                          smrf and pmf by tweaking the --dem-* parameters.
                          Default: none
    --texturing-data-term <string>
                          Data term: [area, gmi]. Default: gmi
    --texturing-outlier-removal-type <string>
                          Type of photometric outlier removal method: [none,
                          gauss_damping, gauss_clamping]. Default:
                          gauss_clamping
    --texturing-skip-visibility-test
                          Skip geometric visibility test. Default: False
    --texturing-skip-global-seam-leveling
                          Skip global seam leveling. Useful for IR data.
                          Default: False
    --texturing-skip-local-seam-leveling
                          Skip local seam blending. Default: False
    --texturing-skip-hole-filling
                          Skip filling of holes in the mesh. Default: False
    --texturing-keep-unseen-faces
                          Keep faces in the mesh that are not seen in any
                          camera. Default: False
    --texturing-tone-mapping <string>
                          Turn on gamma tone mapping or none for no tone
                          mapping. Choices are 'gamma' or 'none'. Default: none
    --gcp <path string>   Path to the file containing the ground control points
                          used for georeferencing. Default: None. The file needs
                          to use the following line format: easting northing
                          height pixelrow pixelcol imagename
    --use-exif            Use this tag if you have a gcp_list.txt but want to
                          use the exif geotags instead
    --dtm                 Use this tag to build a DTM (Digital Terrain Model,
                          ground only) using a progressive morphological filter.
                          Check the --dem* parameters for fine tuning.
    --dsm                 Use this tag to build a DSM (Digital Surface Model,
                          ground + objects) using a progressive morphological
                          filter. Check the --dem* parameters for fine tuning.
    --dem-gapfill-steps <positive integer>
                          Number of steps used to fill areas with gaps. Set to 0
                          to disable gap filling. Starting with a radius equal
                          to the output resolution, N different DEMs are
                          generated with progressively bigger radius using the
                          inverse distance weighted (IDW) algorithm and merged
                          together. Remaining gaps are then merged using nearest
                          neighbor interpolation. Default: 4
    --dem-resolution <float>
                          Length of raster cell edges in meters. Default: 0.1
    --dem-maxangle <positive float>
                          Points that are more than maxangle degrees off-nadir
                          are discarded. Default: 20
    --dem-maxsd <positive float>
                          Points that deviate more than maxsd standard
                          deviations from the local mean are discarded. Default:
                          2.5
    --dem-initial-distance <positive float>
                          Used to classify ground vs non-ground points. Set this
                          value to account for Z noise in meters. If you have an
                          uncertainty of around 15 cm, set this value large
                          enough to not exclude these points. Too small a value
                          will exclude valid ground points, while too large a
                          value will misclassify non-ground points as ground
                          ones. Default: 0.15
    --dem-approximate     Use this tag to use the approximate progressive
                          morphological filter, which computes DEMs faster but
                          is not as accurate.
    --dem-decimation <positive integer>
                          Decimate the points before generating the DEM. 1 is no
                          decimation (full quality). 100 decimates ~99% of the
                          points. Useful for speeding up generation. Default: 1
    --dem-terrain-type <string>
                          One of: FlatNonForest, FlatForest, ComplexNonForest,
                          ComplexForest. Specifies the type of terrain. This
                          mainly helps reduce processing time. FlatNonForest:
                          relatively flat region with little to no vegetation.
                          FlatForest: relatively flat region that is forested.
                          ComplexNonForest: varied terrain with little to no
                          vegetation. ComplexForest: varied terrain that is
                          forested. Default: ComplexForest
    --orthophoto-resolution <float > 0.0>
                          Orthophoto ground resolution in pixels/meter. Default:
                          20.0
    --orthophoto-target-srs <EPSG:XXXX>
                          Target spatial reference for orthophoto creation. Not
                          implemented yet. Default: None
    --orthophoto-no-tiled
                          Set this parameter if you want a striped GeoTIFF.
                          Default: False
    --orthophoto-compression <string>
                          Set the compression to use. Note that this could break
                          gdal_translate if you don't know what you are doing.
                          Options: JPEG, LZW, PACKBITS, DEFLATE, LZMA, NONE.
                          Default: DEFLATE
    --orthophoto-bigtiff {YES,NO,IF_NEEDED,IF_SAFER}
                          Control whether the created orthophoto is a BigTIFF or
                          classic TIFF. BigTIFF is a variant for files larger
                          than 4GiB of data. Options are YES, NO, IF_NEEDED,
                          IF_SAFER. See GDAL specs:
                          https://www.gdal.org/frmt_gtiff.html for more info.
                          Default: IF_SAFER
    --build-overviews     Build orthophoto overviews using gdaladdo.
    --zip-results         Compress the results using gzip
    --verbose, -v         Print additional messages to the console. Default:
                          False
    --time                Generates a benchmark file with runtime info. Default:
                          False
    --version             Displays version number and exits.
.. _ground-control:

Ground Control Points
`````````````````````

If you supply a GCP file called gcp_list.txt, ODM will automatically detect it. If it has another name, you can specify it using ``--gcp <path>``. If you have a GCP file but want to do georeferencing with EXIF instead, you can specify ``--use-exif``.

`This post has some information about placing Ground Control Targets before a flight <http://diydrones.com/profiles/blogs/ground-control-points-gcps-for-aerial-photography>`_, but if you already have images, you can find your own points in the images post facto. It's important that you find high-contrast objects that appear in **at least** 3 photos, and that you find a minimum of 5 objects.

For example, in this image, I would use the sharp corners of the diamond-shaped bioswales in the parking lot:

.. image:: _static/tol_sm.jpg

You should also place/find the GCPs evenly around your survey area.

The ``gcp_list.txt`` file must then be created in the base of your project folder.

The format of the GCP file is simple. The header line is a description of the coordinate system, which must be written as a proj4 string (http://spatialreference.org/ is a good resource for finding that information). Please note that currently angular coordinates (like lat/lon) do not work. Each subsequent line gives the X, Y and Z coordinate in your coordinate system, the associated pixel column and row in the image, and the image name itself::

    coordinate system description
    x1 y1 z1 pixelx1 pixely1 imagename1
    x2 y2 z2 pixelx2 pixely2 imagename2
    x3 y3 z3 pixelx3 pixely3 imagename3

e.g. for the Langley dataset::

    WGS84 UTM 10N
    544256.7 5320919.9 5 3044 2622 IMG_0525.jpg
    544157.7 5320899.2 5 4193 1552 IMG_0585.jpg
    544033.4 5320876.0 5 1606 2763 IMG_0690.jpg


Given the recommendations above, your file should have a minimum of 15 lines after the header (5 points with 3 images to each point).

Video Reconstruction (Experimental)
```````````````````````````````````

**Note: This is an experimental feature**

It is possible to build a reconstruction using a video file instead of still images. The technique for reconstructing the camera trajectory from a video is called Simultaneous Localization And Mapping (SLAM). OpenDroneMap uses the open source `ORB_SLAM2 <https://github.com/raulmur/ORB_SLAM2>`_ library for this task.

We will explain here how to use it. We will need to build the SLAM module, calibrate the camera, and finally run the reconstruction from a video.


Building with SLAM support
^^^^^^^^^^^^^^^^^^^^^^^^^^

By default, OpenDroneMap does not build the SLAM module. To build it, we need to do the following two steps.

**Build SLAM dependencies**::

    sudo apt-get install libglew-dev
    cd SuperBuild/build
    cmake -DODM_BUILD_SLAM=ON .
    make
    cd ../..

**Build the SLAM module**::

    cd build
    cmake -DODM_BUILD_SLAM=ON .
    make
    cd ..


.. _calibration:

Calibrating the camera
^^^^^^^^^^^^^^^^^^^^^^

The SLAM algorithm requires the camera to be calibrated. It is difficult to extract calibration parameters from the video's metadata as we do when using still images. Thus, it is required to run a calibration procedure that will compute the calibration from a video of a checkerboard.

We will start by **recording the calibration video**. Display this `chessboard pattern <https://dl.dropboxusercontent.com/u/2801164/odm/chessboard.pdf>`_ on a large screen, or `print it on a large paper and stick it on a flat surface <http://www.instructables.com/id/How-to-make-a-camera-calibration-pattern/>`_. Now record a video pointing the camera at the chessboard.

While recording, move the camera from side to side and up and down, always keeping the entire pattern framed. The goal is to capture the pattern from different points of view.

Now you can **run the calibration script** as follows::

    python modules/odm_slam/src/calibrate_video.py --visual PATH_TO_CHESSBOARD_VIDEO.mp4

You will see a window displaying the video and the detected corners. When it finishes, it will print the computed calibration parameters. They should look like this (with different values)::

    # Camera calibration and distortion parameters (OpenCV)
    Camera.fx: 1512.91332401
    Camera.fy: 1512.04223185
    Camera.cx: 956.585155225
    Camera.cy: 527.321715394

    Camera.k1: 0.140581949184
    Camera.k2: -0.292250537695
    Camera.p1: 0.000188785464717
    Camera.p2: 0.000611510377372
    Camera.k3: 0.181424769625

Keep this text. We will use it in the next section.


Running OpenDroneMap from a video
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

We are now ready to run the OpenDroneMap pipeline from a video. For this we need the video and a config file for ORB_SLAM2. Here's an `example config.yaml <https://dl.dropboxusercontent.com/u/2801164/odm/config.yaml>`_. Before using it, copy-paste in the calibration parameters for your camera that you just computed in the previous section.

Put the video and the ``config.yaml`` file in an empty folder. Then run OpenDroneMap using the following command::

    python run.py --project-path PROJECT_PATH --video VIDEO.mp4 --slam-config config.yaml --resize-to VIDEO_WIDTH

where ``PROJECT_PATH`` is the path to the folder containing the video and config file, ``VIDEO.mp4`` is the name of your video, and ``VIDEO_WIDTH`` is the width of the video (for example, 1920 for an HD video).

That command will run the pipeline starting with SLAM and continuing with stereo matching, mesh reconstruction and texturing.

When done, the textured model will be in ``PROJECT_PATH/odm_texturing/odm_textured_model.obj``. The point cloud created by the stereo matching algorithm will be in ``PROJECT_PATH/pmvs/recon0/models/option-0000.ply``.


.. _camera-calibration:

Camera Calibration
------------------

It is highly recommended that you calibrate your images to reduce lens distortion. Doing so will increase the likelihood of finding quality matches between photos and reduce processing time. You can do this in Photoshop or `ImageMagick <http://www.imagemagick.org/Usage/lens/>`_. We also have some simple scripts to perform this task: https://github.com/OpenDroneMap/CameraCalibration. This suite of scripts will find the camera matrix and distortion parameters from a set of checkerboard images, then use those parameters to remove distortion from photos.

Installation
````````````

You need to install numpy and opencv::

    pip install numpy
    sudo apt-get install python-opencv exiftool

Usage: Calibrate chessboard
```````````````````````````

First you will need to take some photos of a black and white chessboard with a white border, `like this one <https://raw.githubusercontent.com/LongerVision/OpenCV_Examples/master/markers/pattern_chessboard.png>`_.

Then you will run the opencv_calibrate.py script to generate the matrix and distortion files::

    python opencv_calibrate.py ./sample/chessboard/ 10 7

The first argument is the path to the chessboard. You will also have to input the chessboard dimensions (the number of squares in x and y). Optional arguments::

    --out <path>           output the parameters and the undistorted images to a specific path; otherwise they get written to ./out
    --square_size <float>  if your chessboard squares are not square, you can change this. Default is 1.0

Usage: undistort photos
```````````````````````

With the photos and the produced matrix.txt and distortion.txt, run the following::

    python undistort.py --matrix matrix.txt --distortion distortion.txt "/path/to/images/"

Note: Do not forget the quotes in "/path/to/images".

Docker Usage for undistorting images
````````````````````````````````````

The ``undistort.py`` script depends on exiftool to copy EXIF metadata to the new images, so on Windows you may have to use Docker for the undistort step. Put matrix.txt and distortion.txt in their own directory (e.g. sample/config) and do the following::

    docker build -t cc_undistort .
    docker run -v ~/CameraCalibration/sample/images:/app/images \
               -v ~/CameraCalibration/sample/config:/app/config \
               cc_undistort