There are two methods for running OpenDroneMap with Docker. One pulls a pre-built image from Docker Hub; this is the most reliable. Alternatively, you can :ref:`build your own image <docker-installation>`. In either case the run command is the same; only the image name changes. For the Docker Hub image, use ``opendronemap/opendronemap``. For an image you built yourself, use that image's name (in our case, ``my_odm_image``)::
    docker run -it --rm \
        -v $(pwd)/images:/code/images \
        -v $(pwd)/odm_texturing:/code/odm_texturing \
        -v $(pwd)/odm_orthophoto:/code/odm_orthophoto \
        <docker-image>
``-v`` is used to map folders inside the Docker container to local folders. See :doc:`dataset` for a reference on the project layout.
If you want to get all intermediate outputs, mount each output folder into the container. A sketch of such a command follows (the folder names follow the project layout described in :doc:`dataset` and may vary by ODM version)::
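    docker run -it --rm \
        -v $(pwd)/images:/code/images \
        -v $(pwd)/odm_georeferencing:/code/odm_georeferencing \
        -v $(pwd)/odm_meshing:/code/odm_meshing \
        -v $(pwd)/odm_orthophoto:/code/odm_orthophoto \
        -v $(pwd)/odm_texturing:/code/odm_texturing \
        -v $(pwd)/opensfm:/code/opensfm \
        -v $(pwd)/pmvs:/code/pmvs \
        <docker-image>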
Ground Control Points
---------------------

ODM can georeference the reconstruction using ground control points supplied in a plain-text GCP file. The format is simple:

* The header line is a description of a UTM coordinate system, which must be written as a proj4 string. http://spatialreference.org/ is a good resource for finding that information. Please note that currently angular coordinates (like lat/lon) DO NOT work.
* Subsequent lines contain the X, Y & Z coordinates, the associated pixel position in the image, and the image filename:
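A minimal ``gcp_list.txt`` might look like this (the proj4 string, coordinates, pixel positions, and filenames below are illustrative)::

    +proj=utm +zone=16 +ellps=WGS84 +datum=WGS84 +units=m +no_defs
    544256.7 5320919.9 5 3044 2622 IMG_0525.jpg
    544157.7 5320899.2 5 4193 1552 IMG_0585.jpg
    544033.4 5320876.0 5 1606 2763 IMG_0690.jpg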
If you supply a GCP file called ``gcp_list.txt``, ODM will detect it automatically. If it has another name, you can specify it using ``--gcp <path>``. If you have a GCP file but want to georeference with EXIF data instead, you can specify ``--use-exif``. A hypothetical invocation is sketched below.
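For example, when running through Docker, the GCP file can sit in the mounted ``images`` folder and be referenced by its in-container path (the file name ``my_gcp_list.txt`` is an assumption, and we assume arguments after the image name are passed on to ODM)::

    docker run -it --rm \
        -v $(pwd)/images:/code/images \
        -v $(pwd)/odm_texturing:/code/odm_texturing \
        <docker-image> \
        --gcp /code/images/my_gcp_list.txt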
`This post has some information about placing Ground Control Targets before a flight <http://diydrones.com/profiles/blogs/ground-control-points-gcps-for-aerial-photography>`_, but if you already have images, you can find your own points in them after the fact. It's important that you pick high-contrast objects that appear in **at least** 3 photos, and that you find a minimum of 5 such objects.
Sharp corners are good picks for GCPs. You should also place/find the GCPs evenly around your survey area.
The ``gcp_list.txt`` file must be created in the base of your project folder.
Reconstruction from Video
-------------------------

It is possible to build a reconstruction using a video file instead of still images. The technique for recovering the camera trajectory from a video is called Simultaneous Localization And Mapping (SLAM). OpenDroneMap uses the open-source `ORB_SLAM2 <https://github.com/raulmur/ORB_SLAM2>`_ library for this task.
Here is how to use it: we will build the SLAM module, calibrate the camera, and finally run the reconstruction from a video.
Building with SLAM support
^^^^^^^^^^^^^^^^^^^^^^^^^^
By default, OpenDroneMap does not build the SLAM module. To build it, we need to complete the following two steps.
**Build SLAM dependencies**::
    sudo apt-get install libglew-dev
    cd SuperBuild/build
    cmake -DODM_BUILD_SLAM=ON .
    make
    cd ../..
**Build the SLAM module**::
    cd build
    cmake -DODM_BUILD_SLAM=ON .
    make
    cd ..
.. _calibration:
Calibrating the camera
^^^^^^^^^^^^^^^^^^^^^^
The SLAM algorithm requires the camera to be calibrated. It is difficult to extract calibration parameters from a video's metadata as we do when using still images, so you will need to run a calibration procedure that computes them from a video of a checkerboard.
We will start by **recording the calibration video**. Display this `chessboard pattern <https://dl.dropboxusercontent.com/u/2801164/odm/chessboard.pdf>`_ on a large screen, or `print it on a large sheet of paper and stick it on a flat surface <http://www.instructables.com/id/How-to-make-a-camera-calibration-pattern/>`_. Now record a video pointing the camera at the chessboard.
While recording, move the camera from side to side and up and down, always keeping the entire pattern in frame. The goal is to capture the pattern from different points of view.
Now you can **run the calibration script**. A sketch of the invocation follows, assuming the module layout of the ODM source tree (the script path may differ in your version)::
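    python modules/odm_slam/src/calibrate_video.py --visual VIDEO.mp4

Here ``VIDEO.mp4`` is the calibration video you just recorded.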
You will see a window displaying the video and the detected corners. When it finishes, it will print the computed calibration parameters. They should look like this (with different values)::
    # Camera calibration and distortion parameters (OpenCV)
    Camera.fx: 1512.91332401
    Camera.fy: 1512.04223185
    Camera.cx: 956.585155225
    Camera.cy: 527.321715394
    Camera.k1: 0.140581949184
    Camera.k2: -0.292250537695
    Camera.p1: 0.000188785464717
    Camera.p2: 0.000611510377372
    Camera.k3: 0.181424769625
Keep this text; we will use it in the next section.
Running OpenDroneMap from a video
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We are now ready to run the OpenDroneMap pipeline from a video. For this we need the video and a config file for ORB_SLAM2. Here's an `example config.yaml <https://dl.dropboxusercontent.com/u/2801164/odm/config.yaml>`_. Before using it, paste in the calibration parameters you computed for your camera in the previous section.
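For reference, here is a minimal sketch of such a file, assuming the standard ORB_SLAM2 settings format (the camera values repeat the output above; the remaining settings are illustrative defaults)::

    %YAML:1.0

    # Camera calibration and distortion parameters (OpenCV)
    Camera.fx: 1512.91332401
    Camera.fy: 1512.04223185
    Camera.cx: 956.585155225
    Camera.cy: 527.321715394
    Camera.k1: 0.140581949184
    Camera.k2: -0.292250537695
    Camera.p1: 0.000188785464717
    Camera.p2: 0.000611510377372
    Camera.k3: 0.181424769625

    # Camera frames per second
    Camera.fps: 30.0

    # Color order of the images (0: BGR, 1: RGB)
    Camera.RGB: 1

    # ORB extractor parameters (illustrative defaults)
    ORBextractor.nFeatures: 2000
    ORBextractor.scaleFactor: 1.2
    ORBextractor.nLevels: 8
    ORBextractor.iniThFAST: 20
    ORBextractor.minThFAST: 7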
Put the video and the ``config.yaml`` file in an empty folder. Then run OpenDroneMap. With the ``run.py`` entry point the command looks like the following sketch (flag names may differ across versions)::
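    python run.py --project-path PROJECT_PATH --video VIDEO.mp4 --slam-config config.yaml --resize-to VIDEO_WIDTH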
where ``PROJECT_PATH`` is the path to the folder containing the video and config file, ``VIDEO.mp4`` is the name of your video, and ``VIDEO_WIDTH`` is the width of the video (for example, 1920 for an HD video).
That command will run the pipeline, starting with SLAM and continuing with stereo matching, mesh reconstruction, and texturing.
When done, the textured model will be in ``PROJECT_PATH/odm_texturing/odm_textured_model.obj``. The point cloud created by the stereo matching algorithm will be in ``PROJECT_PATH/pmvs/recon0/models/option-0000.ply``.
.. _camera-calibration:
Camera Calibration
------------------
It is highly recommended that you calibrate your images to reduce lens distortion. Doing so increases the likelihood of finding quality matches between photos and reduces processing time. You can do this in Photoshop or `ImageMagick <http://www.imagemagick.org/Usage/lens/>`_. We also have some simple scripts to perform this task: https://github.com/OpenDroneMap/CameraCalibration . This suite of scripts finds the camera matrix and distortion parameters from a set of checkerboard images, then uses those parameters to remove distortion from photos.
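As a quick illustration of the ImageMagick route, barrel distortion can be removed once you know your lens coefficients (the coefficients and file names below are purely illustrative; see the link above for deriving them)::

    convert input.jpg -distort barrel "0.06 -0.18 0.0" corrected.jpg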
First you will need to take some photos of a black and white chessboard with a white border, `like this one <https://raw.githubusercontent.com/LongerVision/OpenCV_Examples/master/markers/pattern_chessboard.png>`_.
Then run the ``opencv_calibrate.py`` script to generate the matrix and distortion files. A hypothetical invocation, assuming your chessboard photos live in ``./chessboard/`` and the board is 10 by 7 squares, looks like this::
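    python opencv_calibrate.py ./chessboard/ 10 7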
The first argument is the path to the chessboard images. You will also have to input the chessboard dimensions (the number of squares in x and y). Optional arguments::
    --out <path>           Write the parameters and the image outputs to a specific path; otherwise they are written to ./out
    --square_size <float>  If your chessboard squares are not square, you can change this; the default is 1.0
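Putting it together, an invocation using both optional arguments might look like this (paths are again illustrative)::

    python opencv_calibrate.py ./chessboard/ 10 7 --out ./params --square_size 1.0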
The ``undistort.py`` script depends on ``exiftool`` to copy EXIF metadata to the new images, so on Windows you may have to use Docker for the undistort step. Put ``matrix.txt`` and ``distortion.txt`` in their own directory (e.g. ``sample/config``) and do the following::
    docker build -t cc_undistort .
    docker run -v ~/CameraCalibration/sample/images:/app/images \
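               -v ~/CameraCalibration/sample/config:/app/config \
               cc_undistort

The ``config`` mount and container paths in the continuation above are assumptions: mount whichever directory holds your ``matrix.txt`` and ``distortion.txt`` into the container, adjusting the paths to your layout.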