<p>Accuracy can be defined as the degree of closeness to which the information on a map matches the values in the real world. Therefore, when we refer to accuracy, we are talking about the quality of data and the number of errors contained in a given dataset (Pascual 2011).</p>
<p><strong>Relative or Local accuracy</strong></p>
<p>Local or relative accuracy can be defined as the degree to which the distances between two points on a map correspond to the actual distances between those points in the real world.</p>
<p>Relative accuracy is independent of the location of the map in the world, so a map can have a high relative accuracy (in size and shape) but its position in the world can be shifted (Figure 1).</p>
<p><em>Figure 1. Model showing high relative accuracy but misplaced according to its real world position</em></p>
<p><strong>Absolute or global Accuracy</strong></p>
<p>Absolute accuracy is the accuracy of the reconstruction in relation to its true position on the planet (Pix4D 2019). Figure 2 shows a model with both high relative and absolute accuracy, as the points are correctly placed according to their real-world positions.</p>
<p><em>Figure 2. Model showing high relative and absolute accuracy. Placed correctly according to its real world position</em></p>
<p><strong>An Accuracy level for each project</strong></p>
<p>Each project has specific accuracy needs to be met. For instance, assessing the progress of a construction site or measuring an area affected by a fire does not require the use of GCPs, since absolute accuracy will not impact the decision-making process. On the other hand, there are tasks for which accuracy is critical, for example project compliance evaluations and land title surveying, which require higher relative and absolute accuracy.</p>
<p>In general terms, one can expect the relative accuracy to be on the order of 1 to 3 times the average GSD for the dataset. As for the absolute accuracy, one must consider that it is dependent on the GPS unit mounted on the UAV: the horizontal accuracy of a standard GPS is usually in the range of 2 to 6 meters, and the vertical accuracy is 3 to 4 times the horizontal accuracy.</p>
<p>When using GCPs, absolute accuracy can be improved to 2.5 times the GSD for the horizontal accuracy and 4 times the GSD for the vertical accuracy (Madawalagama 2016).</p>
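<p>As a quick sanity check, the rules of thumb above reduce to a few lines of arithmetic. The sketch below (an illustration in Python, not part of ODM; the function name and returned ranges simply encode the figures quoted above) estimates the accuracies one might expect from an average GSD:</p>

```python
def expected_accuracy_cm(gsd_cm, with_gcps=False):
    """Rule-of-thumb accuracy estimates from the average GSD (in cm).

    Relative accuracy is roughly 1 to 3 times the GSD. With GCPs,
    absolute horizontal accuracy improves to about 2.5x the GSD and
    vertical to about 4x the GSD (Madawalagama 2016). Without GCPs,
    absolute accuracy is bounded by the onboard GPS instead.
    """
    estimate = {"relative": (1.0 * gsd_cm, 3.0 * gsd_cm)}
    if with_gcps:
        estimate["absolute_horizontal"] = 2.5 * gsd_cm
        estimate["absolute_vertical"] = 4.0 * gsd_cm
    else:
        # Standard GPS: 2-6 m horizontally; vertical is 3-4x horizontal.
        estimate["absolute_horizontal"] = (200.0, 600.0)   # cm
        estimate["absolute_vertical"] = (600.0, 2400.0)    # cm
    return estimate
```

<p>For example, at a 2 cm GSD with GCPs this yields a relative accuracy of roughly 2&ndash;6 cm and an absolute horizontal accuracy of about 5 cm.</p>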
<p>At a GSD of 1 cm, the accuracy is comparable to that of RTK GNSS, and is within 1:200 scale according to NSDI & FGDC mapping accuracy standards even during sub-optimal conditions (Barry 2013).</p>
<p>Weather conditions have a direct impact on photogrammetry results, so it is important to consider cloud coverage, wind speed, humidity, the sun’s altitude and other factors influencing UAV stability and terrain illumination.</p>
<p><strong>Cameras</strong></p>
<p>Bigger and better sensors produce less noise and more clearly focused images. Also consider that rolling shutter cameras produce distorted images when the UAV is moving, so global or mechanical shutter cameras are advised for mapping jobs.</p>
<p><strong>Flight altitude</strong></p>
<p>The higher the flight altitude, the larger the image footprint and the GSD. With a larger GSD, accuracy decreases, as there is less detail in the recognizable features. When a smaller GSD is required, an altitude of 3 to 4 times the height of the highest point is recommended.</p>
<p><strong>Flight speed</strong></p>
<p>Flight speed has a particular effect on cameras equipped with a rolling shutter, while global or mechanical shutters tend to reduce this effect. UAVs equipped with RTK positioning systems are also affected by speed, but by hovering at each photo capture you can get very good accuracy. If instead you are moving during each capture, the accuracy will be limited by the speed at which you are moving multiplied by the 1-second increments of the RTK fix (Mather 2020).</p>
<p>Barry, P., & Coakley, R. "Accuracy of UAV photogrammetry compared with Network RTK GPS." Baseline Surveys, 2013. <a class="reference external" href="http://uav.ie/PDF/Accuracy_UAV_compare_RTK_GPS.pdf">http://uav.ie/PDF/Accuracy_UAV_compare_RTK_GPS.pdf</a> (accessed October 13, 2020).</p>
<p>Drone Deploy. "How Do I Use Ground Control Points?: A guide to using ground control points with drone mapping software." May 8, 2017. <a class="reference external" href="https://www.dronedeploy.com/blog/what-are-ground-control-points-gcps/">https://www.dronedeploy.com/blog/what-are-ground-control-points-gcps/</a> (accessed July 9, 2020).</p>
<p>Madawalagama, S.L., Munasinghe, N., Dampegama, S.D.P.J. and Samarakoon, L. "Low-cost aerial mapping with consumer-grade drones." 37th Asian Conference on Remote Sensing. Colombo, Sri Lanka, 2016.</p>
<p>Mather, Stephen. OpenDroneMap. March 30, 2020. <a class="reference external" href="https://community.opendronemap.org/t/the-accuracy-of-webodm-using-rtk-uavs/3937">https://community.opendronemap.org/t/the-accuracy-of-webodm-using-rtk-uavs/3937</a> (accessed October 12, 2020).</p>
<p>Pascual, Manuel S. "GIS Data: A Look at Accuracy, Precision, and Types of Errors." GIS Lounge, November 6, 2011. <a class="reference external" href="https://www.gislounge.com/gis-data-a-look-at-accuracy-precision-and-types-of-errors/">https://www.gislounge.com/gis-data-a-look-at-accuracy-precision-and-types-of-errors/</a> (accessed July 9, 2020).</p>
<p>Pix4D. "What is accuracy in an aerial mapping project?" Pix4D, May 25, 2019. <a class="reference external" href="https://www.pix4d.com/blog/accuracy-aerial-mapping">https://www.pix4d.com/blog/accuracy-aerial-mapping</a> (accessed October 13, 2020).</p>
<p>Ground control points are useful for correcting distortions in the data and referencing the data to known coordinate systems.</p>
<p>A Ground Control Point (GCP) is a position measurement made on the ground, typically using a high precision GPS. (Toffanin 2019)</p>
<p>Ground control points can be set on existing structures such as pavement corners, lines in a parking lot or contrasting floor tiles; otherwise, they can be set using targets placed on the ground.</p>
<p>Targets can be purchased or built from a wide variety of materials, ranging from bucket lids to floor tiles.</p>
<p>Keep ground control points visible from all camera locations. Consider the expected ground sampling distance, illumination, vegetation, buildings and any other existing obstacles.</p>
<p>Ensure an even horizontal distribution of the GCPs within the project, covering both high and low elevations. A minimum of 5 GCPs works for most jobs, and for larger projects 8–10 are sufficient. Locate some points near the corners and others in the center, considering that GCP spacing should be larger than the image footprint, so that no more than one GCP is visible in a single image.</p>
<p>To ensure that each GCP is found in at least 5 images, separate the points 10 to 30 meters from the perimeter of the project. This distance depends on the overlap, so increasing the overlap should reduce the required distance from the perimeter.</p>
<li><p>The first line should contain the name of the projection used for the geo coordinates. This can be specified either as a PROJ string (e.g. <code class="docutils literal notranslate"><span class="pre">+proj=utm</span> <span class="pre">+zone=10</span> <span class="pre">+ellps=WGS84</span> <span class="pre">+datum=WGS84</span> <span class="pre">+units=m</span> <span class="pre">+no_defs</span></code>), an EPSG code (e.g. <code class="docutils literal notranslate"><span class="pre">EPSG:4326</span></code>) or as a <code class="docutils literal notranslate"><span class="pre">WGS84</span> <span class="pre">UTM</span> <span class="pre">&lt;zone&gt;[N|S]</span></code> value (e.g. <code class="docutils literal notranslate"><span class="pre">WGS84</span> <span class="pre">UTM</span> <span class="pre">16N</span></code>)</p></li>
<li><p>Subsequent lines are the X, Y & Z coordinates, the associated pixel coordinates, the image filename and optional extra fields, separated by tabs or spaces:</p></li>
<li><p>Avoid setting elevation values to "NaN" to indicate no value; this can cause processing failures. Instead, use 0.0.</p></li>
<li><p>Similarly, decreasing the number of digits after the decimal point for <cite>geo_x</cite> and <cite>geo_y</cite> can also reduce processing failures.</p></li>
<li><p>The 7th column (optional) typically contains the label of the GCP.</p></li>
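<p>Putting these rules together, a minimal <code class="docutils literal notranslate"><span class="pre">gcp_list.txt</span></code> might look like the following (the coordinates, pixel positions and image names here are invented for illustration; note that the same GCP appears in several images, one line per image):</p>

```text
+proj=utm +zone=10 +ellps=WGS84 +datum=WGS84 +units=m +no_defs
544256.7 5320919.9 5.0 3044 2622 IMG_0525.jpg gcp1
544256.7 5320919.9 5.0 4193 1552 IMG_0585.jpg gcp1
544157.7 5320899.2 5.1 1642 2104 IMG_0690.jpg gcp2
```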
<p>If you supply a GCP file called <code class="docutils literal notranslate"><span class="pre">gcp_list.txt</span></code> then ODM will automatically detect it. If it has another name, you can specify it using <code class="docutils literal notranslate"><span class="pre">--gcp</span> <span class="pre">&lt;path&gt;</span></code>. If you have a GCP file and want to do georeferencing with EXIF instead, you can specify <code class="docutils literal notranslate"><span class="pre">--use-exif</span></code>. If you have high precision GPS measurements in your images (RTK) and want to use that information along with a GCP file, you can specify <code class="docutils literal notranslate"><span class="pre">--force-gps</span></code>.</p>
<p><a class="reference external" href="http://diydrones.com/profiles/blogs/ground-control-points-gcps-for-aerial-photography">This post has some information about placing ground control targets before a flight</a>, but if you already have images, you can find your own points in the images post facto. It’s important that you find high-contrast objects that appear in <strong>at least</strong> 3 photos, and that you find a minimum of 5 objects.</p>
<p>Sharp corners are good picks for GCPs. You should also place/find the GCPs evenly around your survey area.</p>
<p>The <code class="docutils literal notranslate"><span class="pre">gcp_list.txt</span></code> file must be created in the base of your project folder.</p>
<p>For good results your file should have a minimum of 15 lines after the header (5 points with 3 images to each point).</p>
<p>The POSM GCPi plugin is loaded by default in WebODM. An example is available at <a class="reference external" href="http://demo.webodm.org/plugins/posm-gcpi/">the WebODM demo</a>. To use this with known ground control XYZ values, one would do the following:</p>
<p>Create a GCP list that only includes the GCP name (this is the label that will be seen in the GCP interface), x, y, and z, with a header containing a proj4 string of your GCPs (make sure they are in a planar coordinate system, such as UTM). It should look something like this:</p>
<p>This app needs to be installed separately, or it can be loaded as a WebODM plugin from <a class="reference external" href="https://github.com/uav4geo/GCPEditorPro">https://github.com/uav4geo/GCPEditorPro</a>.</p>
<p>Create a CSV file that includes the GCP name, northing, easting and elevation.</p>
<p>Then import the CSV from the main screen and type <code class="docutils literal notranslate"><span class="pre">+proj=utm</span> <span class="pre">+zone=37</span> <span class="pre">+south</span> <span class="pre">+ellps=WGS84</span> <span class="pre">+datum=WGS84</span> <span class="pre">+units=m</span> <span class="pre">+no_defs</span></code> in the <code class="docutils literal notranslate"><span class="pre">EPSG/PROJ</span></code> box.</p>
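<p>A minimal CSV for this workflow might look like the following (the header names and values are purely illustrative; match the column order that GCP Editor Pro expects on import, and note that the coordinates here would be UTM zone 37 south values per the proj string above):</p>

```text
GCP,Northing,Easting,Elevation
gcp1,9539819.1,530145.3,1402.5
gcp2,9539744.8,530211.9,1398.2
gcp3,9539701.6,530080.4,1400.8
```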
<p>The following screen will display a map from which to select the GCPs to tag, and to import the respective images.</p>
<p>By default ODM will use the GPS information embedded in the images, if it is available. Sometimes images do not contain GPS information, or a user wishes to override the information with more accurate data (such as RTK).</p>
<p>Starting from ODM <code class="docutils literal notranslate"><span class="pre">2.0</span></code>, you can supply an image geolocation file (geo) for this purpose.</p>
<p>The format of the image geolocation file is simple.</p>
<blockquote>
<div><ul class="simple">
<li><p>The first line should contain the name of the projection used for the geo coordinates. This can be specified either as a PROJ string (e.g. <code class="docutils literal notranslate"><span class="pre">+proj=utm</span> <span class="pre">+zone=10</span> <span class="pre">+ellps=WGS84</span> <span class="pre">+datum=WGS84</span> <span class="pre">+units=m</span> <span class="pre">+no_defs</span></code>), an EPSG code (e.g. <code class="docutils literal notranslate"><span class="pre">EPSG:4326</span></code>) or as a <code class="docutils literal notranslate"><span class="pre">WGS84</span> <span class="pre">UTM</span> <span class="pre">&lt;zone&gt;[N|S]</span></code> value (e.g. <code class="docutils literal notranslate"><span class="pre">WGS84</span> <span class="pre">UTM</span> <span class="pre">16N</span></code>)</p></li>
<li><p>Subsequent lines are the image filename, X, Y & Z (optional) coordinates, the camera angles (optional, currently used only for radiometric calibration) and the horizontal/vertical accuracy (optional):</p></li>
<li><p>Camera angles can be set to <code class="docutils literal notranslate"><span class="pre">0</span></code> if they are not available.</p></li>
<li><p>The 10th column (optional) can contain extra fields, such as a label.</p></li>
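<p>Following the format described above, a small <code class="docutils literal notranslate"><span class="pre">geo.txt</span></code> might look like this (the filenames and coordinates are invented for illustration; with EPSG:4326, X is longitude and Y is latitude, and the second entry also shows the optional camera angles and horizontal/vertical accuracies):</p>

```text
EPSG:4326
DJI_0028.JPG -91.9942096 46.8425252 198.609
DJI_0032.JPG -91.9938293 46.8424584 198.609 0 0 0 2.0 3.0
```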
<p>If you supply a file called <code class="docutils literal notranslate"><span class="pre">geo.txt</span></code> then ODM will automatically detect it. If it has another name, you can specify it using <code class="docutils literal notranslate"><span class="pre">--geo</span> <span class="pre">&lt;path&gt;</span></code>.</p>
<p>The <code class="docutils literal notranslate"><span class="pre">geo.txt</span></code> file must be created in the base of your project folder or, when using WebODM, uploaded with the raw JPG or TIF input files.</p>
<p>Georeferencing by default is done using GPS (GNSS) or GCPs (if provided).</p>
<p>Starting from ODM <code class="docutils literal notranslate"><span class="pre">3.0.2</span></code>, you can supply a reference alignment file to georeference the program outputs. The reconstruction will initially be performed using GPS/GCPs and will subsequently be aligned to the reference model via a linear scaling/rotation/shift operation.</p>
<p>If you supply a file called <code class="docutils literal notranslate"><span class="pre">align.laz</span></code>, <code class="docutils literal notranslate"><span class="pre">align.las</span></code> or <code class="docutils literal notranslate"><span class="pre">align.tif</span></code> (single band GeoTIFF DEM), then ODM will automatically detect it and attempt to align outputs to this reference model. If it has another name, you can specify it using <code class="docutils literal notranslate"><span class="pre">--align</span> <span class="pre">&lt;path&gt;</span></code>.</p>
<p>The alignment file must be created in the base of your project folder. The base folder is usually where you have stored your images. If you are using WebODM or NodeODM, upload the align file with your images. If resizing your images in WebODM, use an <code class="docutils literal notranslate"><span class="pre">align.laz</span></code> or <code class="docutils literal notranslate"><span class="pre">align.las</span></code> file instead of a TIF.</p>
<p>When previously mapped sites need to be revisited, OpenDroneMap can align multiple versions of a dataset through time by using a prior point cloud or digital elevation model. As the prior point cloud <a class="reference external" href="https://community.opendronemap.org/t/tips-to-increase-same-site-temporal-consistency/22161/7">seems to provide better results</a>, that is the approach we will review here.</p>
<li><p>Process your original data. This step doesn’t require a ground control point file, but use one if absolute accuracy is a project requirement.</p></li>
<li><p>Download the point cloud from your first processed dataset as an LAZ file (the default). Rename the point cloud to align.laz.</p></li>
<li><p>Include that LAZ file with each of your subsequent processing runs. If you are using command line ODM, include it in the images directory. If uploading, simply upload it with your raw images for processing.</p></li>
<li><p>Check your log. It should include a line near the top that indicates it has set align to a path value, something like this:</p>
<h4>Plugin Time-SIFT<a class="headerlink" href="#plugin-time-sift" title="Link to this heading"></a></h4>
<p>The script at contrib/time-sift in the ODM repository performs Time-SIFT processing with ODM. Time-SIFT is a method for multi-temporal analysis without the need to co-register the data:</p>
<blockquote>
<div><p>D. Feurer, F. Vinatier, Joining multi-epoch archival aerial images in a single SfM block allows 3-D change detection with almost exclusively image information, ISPRS Journal of Photogrammetry and Remote Sensing, 2018.</p></div>
</blockquote>
<p>This runs the Time-SIFT processing on the downloaded example data, stopping after the filtered dense clouds step.</p>
<p>In the destination directory, you should obtain new directories, <code class="docutils literal notranslate"><span class="pre">0_before</span></code> and <code class="docutils literal notranslate"><span class="pre">1_after</span></code>, at the same level as the <code class="docutils literal notranslate"><span class="pre">time-sift-block</span></code> directory. These new directories contain all the results, natively co-registered.</p>
<p>You can then use <a class="reference external" href="https://cloudcompare.org/">CloudCompare</a> to compute the distance between <code class="docutils literal notranslate"><span class="pre">datasets/0_before/odm_filterpoints/point_cloud.ply</span></code> and <code class="docutils literal notranslate"><span class="pre">datasets/1_after/odm_filterpoints/point_cloud.ply</span></code> and obtain an image showing the difference between the two 3D surfaces. Here, two soil samples were excavated, as can be seen in the image below.</p>
<figure class="align-center">
<img alt="Distance between two point clouds showing soil surface before and after soil excavation (soil sampling for bulk density measurement)." src="../_images/timeSIFTexampleSoilExcavation.webp"/>
</figure>
<h6>Your own data<a class="headerlink" href="#your-own-data" title="Link to this heading"></a></h6>
<p>In your dataset directory (usually <code class="docutils literal notranslate"><span class="pre">datasets</span></code>, but you may have chosen another name) you have to prepare a Time-SIFT project directory (default name: <code class="docutils literal notranslate"><span class="pre">time-sift-block</span></code>, <em>can be tuned via a parameter</em>) that contains:</p>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">images/</span></code>: a subdirectory with all images of all epochs. This directory name is fixed, as it is the one expected by ODM.</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">images_epochs.txt</span></code>: a file that has the same format as the file used for the split and merge ODM function. This file name <em>can be tuned via a parameter</em>.</p></li>
</ul>
<p>The <code class="docutils literal notranslate"><span class="pre">images_epochs.txt</span></code> file has two columns: the first contains the image names and the second the epoch name, as follows:</p>
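<p>For example, with two epochs, an <code class="docutils literal notranslate"><span class="pre">images_epochs.txt</span></code> might read (the image and epoch names here are illustrative):</p>

```text
IMG_0001.JPG before
IMG_0002.JPG before
IMG_0101.JPG after
IMG_0102.JPG after
```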
<p>At the end of the script you obtain one directory per epoch (at the same level as the Time-SIFT project directory). Each directory is processed with the images of its epoch, and all results are natively co-registered thanks to the initial SfM step done with all images.</p>
<p>When attempting to process very large datasets, it may well be the case that one needs to divide a large set of images into smaller, more manageable chunks for ease of processing. This process, however, may introduce some uncertainty with respect to the alignment of the processed outputs. To make sure that all point clouds and terrain/surface models are seamlessly aligned in preparation for merging, we follow the simple technique outlined below.</p>
<p><strong>Workflow for aligning large datasets:</strong></p>
<ol class="arabic simple">
<li><p>Split the full complement of images into manageable chunks. E.g., if you have flown and collected a total of 1000 images but you know your processor cannot handle all these images at once, you may want to divide them into four submodels of 250 images each.</p></li>
<li><p>Process the first dataset with the Digital Surface Model (DSM) option enabled.</p></li>
<li><p>Download the DSM from the first dataset in its raw TIFF format and rename it to <code class="docutils literal notranslate"><span class="pre">align.tif</span></code>.</p></li>
<li><p>Load the second dataset together with the <code class="docutils literal notranslate"><span class="pre">align.tif</span></code>.</p></li>
<li><p>Process the second dataset (including the <code class="docutils literal notranslate"><span class="pre">align.tif</span></code> file).</p></li>
<li><p>Repeat until all submodels have been processed.</p></li>
</ol>