Compare commits

...

147 Commits

Author SHA1 Message Date
Piero Toffanin ae6726e536
Merge pull request #1760 from pierotofy/fastcut
Skip feathered raster generation when possible
2024-05-17 15:51:32 -04:00
Piero Toffanin 6da366f806 Windows fix 2024-05-17 15:23:10 -04:00
Piero Toffanin e4e27c21f2 Skip feathered raster generation when possible 2024-05-17 14:55:26 -04:00
Piero Toffanin f9136f7a0d
Merge pull request #1758 from idimitrovski/master
Support for DJI Mavic 2 Zoom srt files
2024-05-10 11:24:52 -04:00
idimitrovski a2d9eccad5 Support for DJI Mavic 2 Zoom srt files 2024-05-10 09:29:37 +02:00
Piero Toffanin 424d9e28a0
Merge pull request #1756 from andrewharvey/patch-1
Fix PoissonRecon failed with n threads log message
2024-04-18 11:53:46 -04:00
Andrew Harvey a0fbd71d41
Fix PoissonRecon failed with n threads log message
The message reported failure with n threads and retrying with n // 2; however, a few lines up, threads had already been set to n // 2, which represents the next thread count to try.
2024-04-18 15:35:53 +10:00
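To make the off-by-one concrete, here is a minimal sketch of the retry pattern (hypothetical names, not ODM's actual code): because `threads` is halved before the warning is emitted, the log line has to report the saved previous count as the failing one.

```python
import logging

def run_with_thread_fallback(run_poisson, threads):
    # Sketch: halve the thread count after each failure until one attempt works.
    while threads >= 1:
        if run_poisson(threads):
            return True
        failed_with = threads
        threads //= 2  # already the *next* attempt's count at this point
        if threads >= 1:
            logging.warning("PoissonRecon failed with %d threads, retrying with %d",
                            failed_with, threads)
    return False
```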
Piero Toffanin 6084d1dca0
Merge pull request #1754 from pierotofy/minviews
Use min views filter = 1
2024-04-11 23:18:26 -04:00
Piero Toffanin aef4182cf9 More solid OpenMVS clustering fallback 2024-04-11 14:18:20 -04:00
Piero Toffanin 6c0fe6e79d Bump version 2024-04-10 22:07:36 -04:00
Piero Toffanin 17dfc7599a Update pc-filter value to 5 2024-04-10 13:48:11 -04:00
Piero Toffanin a70e7445ad Update default feature-type, pc-filter values 2024-04-10 12:26:34 -04:00
Piero Toffanin 981bf88b48 Use min views filter = 1 2024-04-10 11:13:58 -04:00
Piero Toffanin ad63392e1a
Merge pull request #1752 from pierotofy/geobom
Fix BOM encoding bug with geo files
2024-04-02 12:53:35 -04:00
Piero Toffanin 77f8ffc8cd Fix BOM encoding bug with geo files 2024-04-02 12:46:20 -04:00
Piero Toffanin 4d7cf32a8c
Merge pull request #1751 from smathermather/fish-aye
replace fisheye with fisheye_opencv but keep API the same until 4.0
2024-03-11 23:32:19 -04:00
Stephen Mather 5a439c0ab6 replace fisheye with fisheye_opencv but keep API the same until 4.0 2024-03-11 22:56:48 -04:00
Piero Toffanin ffcda0dc57
Merge pull request #1749 from smathermather/increase-default-GPS-Accuracy
increase default GPS-Accuracy to 3m
2024-03-08 22:16:44 -05:00
Stephen Mather 2c6fd1dd9f
increase default GPS-Accuracy to 3m 2024-03-08 22:13:54 -05:00
Sylvain POULAIN cb3229a3d4
Add Mavic 3 rolling shutter, not enterprise version (#1747)
* Add Mavic 3 rolling shutter

* M3
2024-02-12 09:50:22 -05:00
Piero Toffanin fc9c94880f
Merge pull request #1746 from kielnino/set-extensionsused
GLTF - obj2glb - Set extensionsUsed in all cases to be consistent with the GLTF standard
2024-02-09 10:23:22 -05:00
kielnino b204a2eb98
set extensionsUsed in all cases 2024-02-09 15:06:02 +01:00
Piero Toffanin d9f77bea54
Merge pull request #1744 from kielnino/remove-unuses-mvs_tmp_dir
Update comment on mvs_tmp_dir
2024-02-01 09:14:23 -05:00
kielnino 10947ecddf
clarify usage of tmp directory 2024-02-01 12:02:06 +01:00
kielnino f7c7044823
remove unused mvs_tmp_dir 2024-02-01 09:25:10 +01:00
Piero Toffanin ae50133886
Merge pull request #1742 from pierotofy/eptclass
Classify point cloud before generating derivative outputs
2024-01-25 12:56:36 -05:00
Piero Toffanin 9fd3bf3edd Improve SRT parser to handle abs_alt altitude reference 2024-01-23 22:24:38 +00:00
Piero Toffanin fb85b754fb Classify point cloud before generating derivative outputs 2024-01-23 17:03:36 -05:00
Piero Toffanin 30f89c068c
Merge pull request #1739 from pierotofy/smartband
Fix build
2024-01-15 19:41:20 -05:00
Piero Toffanin 260b4ef864 Manually install numpy 2024-01-15 16:21:08 -05:00
Piero Toffanin fb5d88366e
Merge pull request #1738 from pierotofy/smartband
Ignore multispectral band groups that are missing images
2024-01-15 11:38:53 -05:00
Piero Toffanin f793627402 Ignore multispectral band groups that are missing images 2024-01-15 09:51:17 -05:00
Piero Toffanin 9183218f1b Bump version 2024-01-12 00:20:35 -05:00
Piero Toffanin 1283df206e
Merge pull request #1732 from OpenDroneMap/nolocalseam
Deprecate texturing-skip-local-seam-leveling
2023-12-11 15:25:51 -05:00
Piero Toffanin 76a061b86a Deprecate texturing-skip-local-seam-leveling 2023-12-11 14:57:21 -05:00
Piero Toffanin 32d933027e
Merge pull request #1731 from pierotofy/median
C++ median smoothing filter
2023-12-08 11:34:05 -05:00
Piero Toffanin a29280157e Add radius parameter 2023-12-07 16:12:15 -05:00
Piero Toffanin 704c285b8f Remove eigen dep 2023-12-07 15:58:12 -05:00
Piero Toffanin 5674e68e9f Median filtering using fastrasterfilter 2023-12-07 18:49:43 +00:00
Piero Toffanin d419d9f038
Merge pull request #1729 from pierotofy/corridor
dem2mesh improvements
2023-12-06 19:34:47 -05:00
Piero Toffanin b3ae35f5e5 Update dem2mesh 2023-12-06 13:52:24 -05:00
Piero Toffanin 18d4d31be7 Fix pc2dem.py 2023-12-06 12:34:07 -05:00
Piero Toffanin 16ccd277ec
Merge pull request #1728 from pierotofy/corridor
Improved DEM generation efficiency
2023-12-05 14:06:54 -05:00
Piero Toffanin 7048868f28 Improved DEM generation efficiency 2023-12-05 14:01:15 -05:00
Piero Toffanin b14ffd919a Remove need for one intermediate raster 2023-12-05 12:26:14 -05:00
Piero Toffanin 4d1d0350a5
Update issue-triage.yml 2023-12-05 10:12:06 -05:00
Piero Toffanin 7261c29efc Respect max_tile_size parameter 2023-12-04 22:49:06 -05:00
Piero Toffanin 2ccad6ee9d Fix renderdem bounds calculation 2023-12-04 22:26:04 -05:00
Piero Toffanin 6acf9835e5 Update issue-triage.yml 2023-11-29 23:34:52 -05:00
Piero Toffanin 5b5df3aaf7 Add issue-trage.yml 2023-11-29 23:33:36 -05:00
Piero Toffanin 26cc9fbf93
Merge pull request #1725 from pierotofy/renderdem
Render DEM tiles using RenderDEM
2023-11-28 13:10:35 -05:00
Piero Toffanin b08f955963 Use URL 2023-11-28 11:33:09 -05:00
Piero Toffanin d028873f63 Use PDAL fork 2023-11-28 11:24:50 -05:00
Piero Toffanin 2d2b809530 Set maxTiles check only in absence of georeferenced photos 2023-11-28 00:43:11 -05:00
Piero Toffanin 7e05a5b04e Minor fix 2023-11-27 16:34:08 -05:00
Piero Toffanin e0ab6ae7ed Bump version 2023-11-27 16:25:11 -05:00
Piero Toffanin eceae8d2e4 Render DEM tiles using RenderDEM 2023-11-27 16:20:21 -05:00
Piero Toffanin 55570385c1
Merge pull request #1720 from pierotofy/autorerun
Feat: Auto rerun-from
2023-11-13 13:42:04 -05:00
Piero Toffanin eed840c9bb Always auto-rerun from beginning with split 2023-11-13 13:40:55 -05:00
Piero Toffanin 8376f24f08 Remove duplicate stmt 2023-11-08 11:40:22 -05:00
Piero Toffanin 6d70a4f0be Fix processopts slice 2023-11-08 11:14:15 -05:00
Piero Toffanin 6df5e0b711 Feat: Auto rerun-from 2023-11-08 11:07:20 -05:00
Piero Toffanin 5d9564fda3
Merge pull request #1717 from pierotofy/fcp
Pin Eigen 3.4
2023-11-05 15:52:28 -05:00
Piero Toffanin eccb203d7a Pin eigen34 2023-11-05 15:45:12 -05:00
Piero Toffanin 2df4afaecf
Merge pull request #1716 from pierotofy/fcp
Fix fast_floor in FPC Filter, Invalid PLY file (expected 'property uint8 views')
2023-11-04 20:22:44 -04:00
Piero Toffanin e5ed68846e Fix OpenMVS subscene logic 2023-11-04 20:00:57 -04:00
Piero Toffanin 7cf71628f3 Fix fast_floor in FPC Filter 2023-11-04 13:32:40 -04:00
Piero Toffanin 237bf8fb87 Remove snap build 2023-10-30 00:39:09 -04:00
Piero Toffanin a542e7b78d
Merge pull request #1714 from pierotofy/dsp
Adaptive feature quality
2023-10-30 00:36:54 -04:00
Piero Toffanin 52fa5d12e6 Adaptive feature quality 2023-10-29 19:19:20 -04:00
Piero Toffanin e3296f0379
Merge pull request #1712 from pierotofy/dsp
Adds support for DSP SIFT
2023-10-29 18:29:11 -04:00
Piero Toffanin a06f6f19b2 Update OpenSfM 2023-10-29 17:54:12 -04:00
Piero Toffanin 2d94934595 Adds support for DSP SIFT 2023-10-27 22:33:43 -04:00
Piero Toffanin 08d03905e6
Merge pull request #1705 from MertenF/master
Make tiler zoom level configurable
2023-10-16 12:30:29 -04:00
Merten Fermont f70e55c9eb
Limit maximum tiler zoom level to 23 2023-10-16 09:59:48 +02:00
Merten Fermont a89803c2eb Use dem instead of orthophoto resolution for generating DEM tiles 2023-10-15 23:47:43 +02:00
Piero Toffanin de7595aeef
Merge pull request #1708 from pierotofy/reportmv
Add extra report file op, disable snap builds
2023-10-14 14:20:06 -04:00
Piero Toffanin aa0e9f68df Rename build file 2023-10-14 01:57:12 -04:00
Piero Toffanin 7ca122dbf6 Remove WSL install 2023-10-14 01:55:40 -04:00
Piero Toffanin 0d303aab16 Disable snap 2023-10-14 01:32:55 -04:00
Piero Toffanin 6dc0c98fa0 Remove previous report before move 2023-10-14 01:22:58 -04:00
Merten Fermont c679d400c8 Tiler zoom level is calculated from GSD
Instead of hardcoding a value, calculate the maximum zoom level at which there is still an increase in detail, using the configured orthophoto resolution or GSD.

The higher the latitude, the higher the resolution of the tile will be, so there is a chance of generating useless tiles, as there is no compensation for this. At the moment it uses the worst-case resolution, from the equator.

Zoom level calculation from: https://wiki.openstreetmap.org/wiki/Zoom_levels
2023-10-12 22:22:10 +02:00
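The referenced calculation can be sketched as follows (an illustration of the OSM Web Mercator formula under the stated worst-case-at-the-equator assumption, not ODM's exact implementation):

```python
import math

# Constants from the OSM "Zoom levels" wiki page: Web Mercator,
# 256 px tiles, worst-case resolution at the equator.
EQUATOR_M = 40075016.686
TILE_PX = 256

def max_useful_zoom(gsd_cm_per_px, cap=23):
    """Deepest zoom level at which tiles still gain detail over the source GSD."""
    gsd_m = gsd_cm_per_px / 100.0
    zoom = math.ceil(math.log2(EQUATOR_M / (TILE_PX * gsd_m)))
    return max(0, min(zoom, cap))

print(max_useful_zoom(5))  # a 5 cm/px orthophoto maps to zoom level 22
```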
Piero Toffanin 38af615657
Merge pull request #1704 from pierotofy/gpurunner
Run GPU build on self-hosted runner
2023-10-05 15:30:31 -04:00
Piero Toffanin fc8dd7c5c5 Run GPU build on self-hosted runner 2023-10-05 15:29:14 -04:00
Piero Toffanin 6eca279c4b
Merge pull request #1702 from pierotofy/altumpt
Altum-PT support
2023-10-04 13:51:48 -04:00
Piero Toffanin 681ee18925 Adds support for Altum-PT 2023-10-03 13:06:36 -04:00
Piero Toffanin f9a3c5eb0e
Merge pull request #1701 from pierotofy/windate
Fix start/end date on Windows and enforce band order normalization
2023-10-02 10:14:25 -04:00
Piero Toffanin a56b52d0df Pick green band by default, improve mavic 3M support 2023-09-29 13:54:01 -04:00
Piero Toffanin f6be28db2a Give RGB, Blue priority 2023-09-29 13:11:11 -04:00
Piero Toffanin 5988be1f57 Bump version 2023-09-29 13:05:06 -04:00
Piero Toffanin d9600741d1 Enforce band order normalization 2023-09-29 13:00:32 -04:00
Piero Toffanin 57c61d918d Fix start/end date on Windows 2023-09-29 12:11:19 -04:00
Piero Toffanin 7277eabd0b
Merge pull request #1697 from pierotofy/321
Compress GCP data before VLR inclusion
2023-09-08 13:56:24 -04:00
Piero Toffanin d78b8ff399 GCP file size check 2023-09-08 13:54:45 -04:00
Piero Toffanin d10bef2631 Compress GCP data before inclusion in VLR 2023-09-08 13:44:50 -04:00
Piero Toffanin 2930927207
Merge pull request #1696 from pierotofy/321
More memory efficient find_features_homography
2023-09-08 13:30:43 -04:00
Piero Toffanin 83fef16cb1 Increase max_size 2023-09-08 13:22:03 -04:00
Piero Toffanin 2fea4d9f3d More memory efficient find_features_homography 2023-09-08 13:12:52 -04:00
Piero Toffanin 50162147ce Bump version 2023-09-07 21:59:39 -04:00
Piero Toffanin 07b641dc09
Merge pull request #1695 from pierotofy/matcherfix
Fix minimum number of pictures for matcher neighbors
2023-09-06 10:15:27 -04:00
Piero Toffanin d2cd5d9336 2 --> 3 2023-09-06 10:13:36 -04:00
Piero Toffanin 340e32af8f
Merge pull request #1694 from pierotofy/matcherfix
Always use matcher-neighbors if less than 2 pictures
2023-09-06 10:11:09 -04:00
Piero Toffanin 8276751d07 Always use matcher-neighbors if less than 2 pictures 2023-09-06 10:09:14 -04:00
Piero Toffanin ebba01aad5
Merge pull request #1690 from pierotofy/mvsup
Fix ReconstructMesh segfault
2023-08-23 09:28:59 -04:00
Piero Toffanin f4549846de Fix ReconstructMesh segfault 2023-08-23 09:23:32 -04:00
Piero Toffanin f5604a05a8
Merge pull request #1689 from pierotofy/mvsup
Tower mode (OpenMVS update), fixes
2023-08-22 11:23:21 -04:00
Piero Toffanin 3fc46a1e04 Fix pc-filter 0 2023-08-21 19:59:38 +00:00
Piero Toffanin 4b8cf9af3d Upgrade OpenMVS 2023-08-21 19:42:21 +00:00
Piero Toffanin e9e18050a2
Merge pull request #1674 from Adrien-LUDWIG/median_smoothing_memory_optimization
Use windowed read/write in median_smoothing
2023-08-12 22:38:32 +02:00
Piero Toffanin 9d15982850
Merge pull request #1684 from mdchia/master
Adding README and reformatting of DJI image binner script
2023-08-07 09:42:46 +02:00
mdchia 820ea4a4e3 minor refactor for readability, add credits + README 2023-08-07 17:27:58 +10:00
Saijin-Naib e84c77dd56
Update config.py
Syntax fix for unterminated single quote
2023-07-29 01:12:54 -04:00
Stephen Mather d929d7b8fa
Update docs to reflect dem resolution defaults (#1683)
* Update docs to reflect dem resolution defaults
* Also ignore ignore-gsd, but also don't advertise it in orthophoto resolution. Replaces https://github.com/OpenDroneMap/docs/pull/176#issuecomment-1656550757
* Helpful note on GSD limit for elevation models too!
* Change ignore-gsd language to have greater clarity
2023-07-29 01:05:18 -04:00
Piero Toffanin b948109e8f
Merge pull request #1681 from sbonaime/gflags_2.2.2
Update CMakeLists.txt
2023-07-21 19:02:38 +02:00
Sebastien c3593c0f69
Update CMakeLists.txt
Fix https://github.com/OpenDroneMap/ODM/issues/1679

Update from gflags 2.1.2 (Mar 24, 2015) to gflags 2.2.2 (Nov 11, 2018)
2023-07-21 15:43:20 +02:00
Sebastien 5a20a22a1a
Update Dev instructions (#1678)
* Update utils.py

* Update README.md

* Update README.md

Update Dev instructions

* Update README.md

* Update README.md

Update Dev instructions

* Update utils.py
2023-07-19 12:50:45 +02:00
Adrien-ANTON-LUDWIG b4aa3a9be0 Avoid using rasterio "r+" open mode (ugly patch)
When using rasterio "r+" open mode, the file is updated correctly while open, but is completely wrong once saved.
2023-07-17 16:36:56 +00:00
Adrien-ANTON-LUDWIG 65c20796be Use temporary files to avoid reading altered data 2023-07-17 16:15:49 +00:00
Piero Toffanin 8bc251aea2 Semantic int_values fix 2023-07-15 18:39:41 +02:00
Piero Toffanin c32a8a5c59
Merge pull request #1677 from pierotofy/rflyfix
Fix RFLY EXIF parsing
2023-07-15 12:40:57 +02:00
Piero Toffanin f75a87977e Handle malformed GPS GPSAltitudeRef tags 2023-07-15 12:37:52 +02:00
Piero Toffanin e329c9a77b
Merge pull request #1676 from rexliuser/master
update cuda ver
2023-07-15 11:12:26 +02:00
rexliuser be1fec2bd7 update cuda ver 2023-07-15 14:03:57 +08:00
Adrien-ANTON-LUDWIG 87f82a1582 Add locks to fix racing conditions 2023-07-13 11:51:13 +00:00
Adrien-ANTON-LUDWIG 9b9ba724c6 Remove forgotten exit call
Uh oh. Sorry for this.
2023-07-13 10:25:14 +00:00
Adrien-ANTON-LUDWIG ee5ff3258f Use windowed read/write in median_smoothing
See the issue description in this forum comment:
https://community.opendronemap.org/t/post-processing-after-odm/16314/16?u=adrien-anton-ludwig

TL;DR:
Median smoothing used windowing to go through the array but read it
entirely in RAM. Now the full potential of windowing is exploited to
read/write by chunks.
2023-07-12 16:55:14 +00:00
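A minimal sketch of the windowed read/write idea (simplified: the actual change also expands each window by half the kernel size so that block borders don't produce seams):

```python
import rasterio
from rasterio.windows import Window
from scipy import ndimage

def median_smooth_windowed(src_path, dst_path, window_size=512, kernel=9):
    # Read, filter and write the raster block by block instead of
    # loading the whole array into RAM.
    with rasterio.open(src_path) as src:
        with rasterio.open(dst_path, "w", **src.profile) as dst:
            for row in range(0, src.height, window_size):
                for col in range(0, src.width, window_size):
                    w = Window(col, row,
                               min(window_size, src.width - col),
                               min(window_size, src.height - row))
                    block = src.read(1, window=w)
                    dst.write(ndimage.median_filter(block, size=kernel),
                              1, window=w)
```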
Piero Toffanin 80fd9dffdc
Merge pull request #1673 from fr-damo/master
Update rollingshutter.py
2023-07-12 16:40:33 +02:00
fr-damo df0ea97321
Update rollingshutter.py
added line 45, Autel EVO II pro
2023-07-12 18:55:59 +10:00
Piero Toffanin 967fec0974
Merge pull request #1672 from fr-damo/patch-1
Update rollingshutter.py
2023-07-07 12:01:55 +02:00
fr-damo e1b5a5ef65
Update rollingshutter.py
added line 43 'parrot anafi': 39, # Parrot Anafi
2023-07-07 08:39:10 +10:00
Piero Toffanin 8121fca607 Increase auto-boundary distance factor 2023-07-05 16:19:15 +02:00
Piero Toffanin 80c4ce517c
Merge pull request #1671 from udaf-mcq/patch-1
Update rollingshutter.py
2023-07-01 10:24:31 +02:00
udaf-mcq afd38f631d
Update rollingshutter.py 2023-06-30 13:41:31 -06:00
Piero Toffanin eb95137a4c
Merge pull request #1669 from sbonaime/master
no_ansiesc env
2023-06-23 12:58:11 +02:00
Sebastien eb4f30651e
no_ansiesc env
The environment variable no_ansiesc disables ANSI escape codes in logs
2023-06-23 10:59:36 +02:00
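As a hedged sketch of the idea (not ODM's actual logging code):

```python
import os

# Suppress ANSI color escape codes whenever no_ansiesc is set.
USE_ANSI = os.environ.get("no_ansiesc") is None

def colorize(text, code="\033[93m"):
    return "%s%s\033[0m" % (code, text) if USE_ANSI else text
```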
Piero Toffanin cefcfde07d
Merge pull request #1667 from vinsonliux/rtk-srt-parser
Added DJI Phantom4 rtk srt parsing
2023-06-18 16:59:45 +02:00
Piero Toffanin b620e4e6cc Parse RTK prefix 2023-06-18 11:27:28 +02:00
Liuxuyang 8a4a309ceb Added DJI Phantom4 rtk srt parsing 2023-06-18 08:50:34 +08:00
Piero Toffanin cfa689b5da
Merge pull request #1664 from pierotofy/flags
Keep only best SIFT features and other fixes
2023-06-12 23:51:58 +02:00
Piero Toffanin 0b8c75ca10 Fix message 2023-06-12 21:24:17 +02:00
Piero Toffanin 3a4b98a7eb Keep only best SIFT features and other fixes 2023-06-12 21:12:13 +02:00
Piero Toffanin c2ab760dd9
Merge pull request #1662 from pierotofy/exiftoolfix
Fix Exiftool Installation
2023-06-01 22:40:54 -04:00
Piero Toffanin dee9feed17 Bump version 2023-06-01 22:39:29 -04:00
Piero Toffanin 542dd6d053 Fix Exiftool 2023-06-01 22:38:23 -04:00
Piero Toffanin 5deab15e5f no need for swap space setup on self hosted runner 2023-05-31 14:47:15 -04:00
Piero Toffanin 6d37355d6b Yet tighter max tiles check 2023-05-30 19:50:31 -04:00
Piero Toffanin ba1cc39adb Tighter max_tiles check 2023-05-27 01:48:46 -04:00
49 changed files with 779 additions and 603 deletions

View file

@ -0,0 +1,33 @@
name: Issue Triage
on:
issues:
types:
- opened
jobs:
issue_triage:
runs-on: ubuntu-latest
permissions:
issues: write
steps:
- uses: pierotofy/issuewhiz@v1
with:
ghToken: ${{ secrets.GITHUB_TOKEN }}
openAI: ${{ secrets.OPENAI_TOKEN }}
filter: |
- "#"
variables: |
- Q: "A question about using a software or seeking guidance on doing something?"
- B: "Reporting an issue or a software bug?"
- P: "Describes an issue with processing a set of images or a particular dataset?"
- D: "Contains a link to a dataset or images?"
- E: "Contains a suggestion for an improvement or a feature request?"
- SC: "Describes an issue related to compiling or building source code?"
logic: |
- 'Q and (not B) and (not P) and (not E) and (not SC) and not (title_lowercase ~= ".*bug: .+")': [comment: "Could we move this conversation over to the forum at https://community.opendronemap.org? The forum is the right place to ask questions (we try to keep the GitHub issue tracker for feature requests and bugs only). Thank you!", close: true, stop: true]
- "B and (not P) and (not E) and (not SC)": [label: "software fault", stop: true]
- "P and D": [label: "possible software fault", stop: true]
- "P and (not D) and (not SC) and (not E)": [comment: "Thanks for the report, but it looks like you didn't include a copy of your dataset for us to reproduce this issue? Please make sure to follow our [issue guidelines](https://github.com/OpenDroneMap/ODM/blob/master/docs/issue_template.md) :pray: ", close: true, stop: true]
- "E": [label: enhancement, stop: true]
- "SC": [label: "possible software fault"]
signature: "p.s. I'm just an automated script, not a human being."

View file

@ -1,98 +0,0 @@
name: Publish Docker and WSL Images
on:
push:
branches:
- master
tags:
- v*
jobs:
build:
runs-on: self-hosted
timeout-minutes: 2880
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set Swap Space
uses: pierotofy/set-swap-space@master
with:
swap-size-gb: 12
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
with:
config-inline: |
[worker.oci]
max-parallelism = 1
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
# Use the repository information of the checked-out code to format docker tags
- name: Docker meta
id: docker_meta
uses: crazy-max/ghaction-docker-meta@v1
with:
images: opendronemap/odm
tag-semver: |
{{version}}
- name: Build and push Docker image
id: docker_build
uses: docker/build-push-action@v2
with:
file: ./portable.Dockerfile
platforms: linux/amd64,linux/arm64
push: true
no-cache: true
tags: |
${{ steps.docker_meta.outputs.tags }}
opendronemap/odm:latest
- name: Export WSL image
id: wsl_export
run: |
docker pull opendronemap/odm
docker export $(docker create opendronemap/odm) --output odm-wsl-rootfs-amd64.tar.gz
gzip odm-wsl-rootfs-amd64.tar.gz
echo ::set-output name=amd64-rootfs::"odm-wsl-rootfs-amd64.tar.gz"
# Convert tag into a GitHub Release if we're building a tag
- name: Create Release
if: github.event_name == 'tag'
id: create_release
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ github.ref }}
release_name: Release ${{ github.ref }}
draft: false
prerelease: false
# Upload the WSL image to the new Release if we're building a tag
- name: Upload amd64 Release Asset
if: github.event_name == 'tag'
id: upload-amd64-wsl-rootfs
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing it's ID to get its outputs object, which include a `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
asset_path: ./${{ steps.wsl_export.outputs.amd64-rootfs }}
asset_name: ${{ steps.wsl_export.outputs.amd64-rootfs }}
asset_content_type: application/gzip
# Always archive the WSL rootfs
- name: Upload amd64 Artifact
uses: actions/upload-artifact@v2
with:
name: wsl-rootfs
path: ${{ steps.wsl_export.outputs.amd64-rootfs }}
- name: Docker image digest and WSL rootfs download URL
run: |
echo "Docker image digest: ${{ steps.docker_build.outputs.digest }}"
echo "WSL AMD64 rootfs URL: ${{ steps.upload-amd64-wsl-rootfs.browser_download_url }}"
# Trigger NodeODM build
- name: Dispatch NodeODM Build Event
id: nodeodm_dispatch
run: |
curl -X POST -u "${{secrets.PAT_USERNAME}}:${{secrets.PAT_TOKEN}}" -H "Accept: application/vnd.github.everest-preview+json" -H "Content-Type: application/json" https://api.github.com/repos/OpenDroneMap/NodeODM/actions/workflows/publish-docker.yaml/dispatches --data '{"ref": "master"}'

View file

@ -9,14 +9,11 @@ on:
jobs:
build:
runs-on: ubuntu-latest
runs-on: self-hosted
timeout-minutes: 2880
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set Swap Space
uses: pierotofy/set-swap-space@master
with:
swap-size-gb: 12
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx

View file

@ -0,0 +1,53 @@
name: Publish Docker and WSL Images
on:
push:
branches:
- master
tags:
- v*
jobs:
build:
runs-on: self-hosted
timeout-minutes: 2880
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
with:
config-inline: |
[worker.oci]
max-parallelism = 1
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
# Use the repository information of the checked-out code to format docker tags
- name: Docker meta
id: docker_meta
uses: crazy-max/ghaction-docker-meta@v1
with:
images: opendronemap/odm
tag-semver: |
{{version}}
- name: Build and push Docker image
id: docker_build
uses: docker/build-push-action@v2
with:
file: ./portable.Dockerfile
platforms: linux/amd64,linux/arm64
push: true
no-cache: true
tags: |
${{ steps.docker_meta.outputs.tags }}
opendronemap/odm:latest
# Trigger NodeODM build
- name: Dispatch NodeODM Build Event
id: nodeodm_dispatch
run: |
curl -X POST -u "${{secrets.PAT_USERNAME}}:${{secrets.PAT_TOKEN}}" -H "Accept: application/vnd.github.everest-preview+json" -H "Content-Type: application/json" https://api.github.com/repos/OpenDroneMap/NodeODM/actions/workflows/publish-docker.yaml/dispatches --data '{"ref": "master"}'

View file

@ -1,51 +0,0 @@
name: Publish Snap
on:
push:
branches:
- master
tags:
- v**
jobs:
build-and-release:
runs-on: ubuntu-latest
strategy:
matrix:
architecture:
- amd64
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set Swap Space
uses: pierotofy/set-swap-space@master
with:
swap-size-gb: 12
- name: Build
id: build
uses: diddlesnaps/snapcraft-multiarch-action@v1
with:
architecture: ${{ matrix.architecture }}
- name: Publish unstable builds to Edge
if: github.ref == 'refs/heads/master'
uses: snapcore/action-publish@v1
with:
store_login: ${{ secrets.STORE_LOGIN }}
snap: ${{ steps.build.outputs.snap }}
release: edge
- name: Publish tagged prerelease builds to Beta
# These are identified by having a hyphen in the tag name, e.g.: v1.0.0-beta1
if: startsWith(github.ref, 'refs/tags/v') && contains(github.ref, '-')
uses: snapcore/action-publish@v1
with:
store_login: ${{ secrets.STORE_LOGIN }}
snap: ${{ steps.build.outputs.snap }}
release: beta
- name: Publish tagged stable or release-candidate builds to Candidate
# These are identified by NOT having a hyphen in the tag name, OR having "-RC" or "-rc" in the tag name.
if: startsWith(github.ref, 'refs/tags/v1') && ( ( ! contains(github.ref, '-') ) || contains(github.ref, '-RC') || contains(github.ref, '-rc') )
uses: snapcore/action-publish@v1
with:
store_login: ${{ secrets.STORE_LOGIN }}
snap: ${{ steps.build.outputs.snap }}
release: candidate

View file

@ -83,30 +83,6 @@ ODM can be installed natively on Windows. Just download the latest setup from th
run C:\Users\youruser\datasets\project [--additional --parameters --here]
```
## Snap Package
ODM is now available as a Snap Package from the Snap Store. To install you may use the Snap Store (available itself as a Snap Package) or the command line:
```bash
sudo snap install --edge opendronemap
```
To run, you will need a terminal window into which you can type:
```bash
opendronemap
# or
snap run opendronemap
# or
/snap/bin/opendronemap
```
Snap packages will be kept up-to-date automatically, so you don't need to update ODM manually.
## GPU Acceleration
ODM has support for doing SIFT feature extraction on a GPU, which is about 2x faster than the CPU on a typical consumer laptop. To use this feature, you need to use the `opendronemap/odm:gpu` docker image instead of `opendronemap/odm` and you need to pass the `--gpus all` flag:
@ -147,52 +123,6 @@ You're in good shape!
See https://github.com/NVIDIA/nvidia-docker and https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker for information on docker/NVIDIA setup.
## WSL or WSL2 Install
Note: This requires that you have installed WSL already by following [the instructions on Microsoft's Website](https://docs.microsoft.com/en-us/windows/wsl/install-win10).
You can run ODM via WSL or WSL2 by downloading the `rootfs.tar.gz` file from [the releases page on GitHub](https://github.com/OpenDroneMap/ODM/releases). Once you have the file saved to your `Downloads` folder in Windows, open a PowerShell or CMD window by right-clicking the Flag Menu (bottom left by default) and selecting "Windows PowerShell", or alternatively by using the [Windows Terminal from the Windows Store](https://www.microsoft.com/store/productId/9N0DX20HK701).
Inside a PowerShell window, or Windows Terminal running PowerShell, type the following:
```powershell
# PowerShell
wsl.exe --import ODM $env:APPDATA\ODM C:\path\to\your\Downloads\rootfs.tar.gz
```
Alternatively if you're using `CMD.exe` or the `CMD` support in Windows Terminal type:
```cmd
# CMD
wsl.exe --import ODM %APPDATA%\ODM C:\path\to\your\Downloads\rootfs.tar.gz
```
In either case, make sure you replace `C:\path\to\your\Downloads\rootfs.tar.gz` with the actual path to your `rootfs.tar.gz` file.
This will save a new Hard Disk image to your Windows `AppData` folder at `C:\Users\username\AppData\roaming\ODM` (where `username` is your Username in Windows), and will set-up a new WSL "distro" called `ODM`.
You may start the ODM distro by using the relevant option in the Windows Terminal (from the Windows Store) or by executing `wsl.exe -d ODM` in a PowerShell or CMD window.
ODM is installed to the distro's `/code` directory. You may execute it with:
```bash
/code/run.sh
```
### Updating ODM in WSL
The easiest way to update the installation of ODM is to download the new `rootfs.tar.gz` file and import it as another distro. You may then unregister the original instance the same way you delete ODM from WSL (see next heading).
### Deleting an ODM in WSL instance
```cmd
wsl.exe --unregister ODM
```
Finally you'll want to delete the files by using your Windows File Manager (Explorer) to navigate to `%APPDATA%`, find the `ODM` directory, and delete it by dragging it to the recycle bin. To permanently delete it empty the recycle bin.
If you have installed to a different directory by changing the `--import` command you ran to install you must use that directory name to delete the correct files. This is likely the case if you have multiple ODM installations or are updating an already-installed installation.
## Native Install (Ubuntu 21.04)
You can run ODM natively on Ubuntu 21.04 (although we don't recommend it):
@ -267,6 +197,8 @@ Starting from version 3.0.4, ODM can automatically extract images from video fil
Help improve our software! We welcome contributions from everyone, whether to add new features, improve speed, fix existing bugs or add support for more cameras. Check our [code of conduct](https://github.com/OpenDroneMap/documents/blob/master/CONDUCT.md), the [contributing guidelines](https://github.com/OpenDroneMap/documents/blob/master/CONTRIBUTING.md) and [how decisions are made](https://github.com/OpenDroneMap/documents/blob/master/GOVERNANCE.md#how-decisions-are-made).
### Installation and first run
For Linux users, the easiest way to modify the software is to make sure docker is installed, clone the repository and then run from a shell:
```bash
@ -285,6 +217,18 @@ You can now make changes to the ODM source. When you are ready to test the chang
```bash
(odmdev) [user:/code] master+* ± ./run.sh --project-path /datasets mydataset
```
### Stop dev container
```bash
docker stop odmdev
```
### To come back to the dev environment
Change your_username to your username
```bash
docker start odmdev
docker exec -ti odmdev bash
su your_username
```
If you have questions, join the developer's chat at https://community.opendronemap.org/c/developers-chat/21

View file

@ -142,7 +142,7 @@ SETUP_EXTERNAL_PROJECT(OpenCV ${ODM_OpenCV_Version} ${ODM_BUILD_OpenCV})
# ---------------------------------------------------------------------------------------------
# Google Flags library (GFlags)
#
set(ODM_GFlags_Version 2.1.2)
set(ODM_GFlags_Version 2.2.2)
option(ODM_BUILD_GFlags "Force to build GFlags library" OFF)
SETUP_EXTERNAL_PROJECT(GFlags ${ODM_GFlags_Version} ${ODM_BUILD_GFlags})
@ -179,6 +179,7 @@ set(custom_libs OpenSfM
Obj2Tiles
OpenPointClass
ExifTool
RenderDEM
)
externalproject_add(mve
@ -222,7 +223,7 @@ externalproject_add(poissonrecon
externalproject_add(dem2mesh
GIT_REPOSITORY https://github.com/OpenDroneMap/dem2mesh.git
GIT_TAG 313
GIT_TAG 334
PREFIX ${SB_BINARY_DIR}/dem2mesh
SOURCE_DIR ${SB_SOURCE_DIR}/dem2mesh
CMAKE_ARGS -DCMAKE_INSTALL_PREFIX:PATH=${SB_INSTALL_DIR}
@ -250,6 +251,15 @@ externalproject_add(odm_orthophoto
${WIN32_CMAKE_ARGS} ${WIN32_GDAL_ARGS}
)
externalproject_add(fastrasterfilter
GIT_REPOSITORY https://github.com/OpenDroneMap/FastRasterFilter.git
GIT_TAG main
PREFIX ${SB_BINARY_DIR}/fastrasterfilter
SOURCE_DIR ${SB_SOURCE_DIR}/fastrasterfilter
CMAKE_ARGS -DCMAKE_INSTALL_PREFIX:PATH=${SB_INSTALL_DIR}
${WIN32_CMAKE_ARGS} ${WIN32_GDAL_ARGS}
)
externalproject_add(lastools
GIT_REPOSITORY https://github.com/OpenDroneMap/LAStools.git
GIT_TAG 250

View file

@ -32,7 +32,7 @@ externalproject_add(${_proj_name}
UPDATE_COMMAND ""
CONFIGURE_COMMAND ""
BUILD_IN_SOURCE 1
BUILD_COMMAND perl Makefile.PL PREFIX=${SB_INSTALL_DIR}
BUILD_COMMAND perl Makefile.PL PREFIX=${SB_INSTALL_DIR} LIB=${SB_INSTALL_DIR}/bin/lib
INSTALL_COMMAND make install && rm -fr ${SB_INSTALL_DIR}/man
)
endif()

View file

@ -8,7 +8,7 @@ ExternalProject_Add(${_proj_name}
#--Download step--------------
DOWNLOAD_DIR ${SB_DOWNLOAD_DIR}
GIT_REPOSITORY https://github.com/OpenDroneMap/FPCFilter
GIT_TAG 305
GIT_TAG 331
#--Update/Patch step----------
UPDATE_COMMAND ""
#--Configure step-------------

View file

@ -14,7 +14,7 @@ externalproject_add(vcg
externalproject_add(eigen34
GIT_REPOSITORY https://gitlab.com/libeigen/eigen.git
GIT_TAG 3.4
GIT_TAG 7176ae16238ded7fb5ed30a7f5215825b3abd134
UPDATE_COMMAND ""
SOURCE_DIR ${SB_SOURCE_DIR}/eigen34
CONFIGURE_COMMAND ""
@ -53,7 +53,7 @@ ExternalProject_Add(${_proj_name}
#--Download step--------------
DOWNLOAD_DIR ${SB_DOWNLOAD_DIR}
GIT_REPOSITORY https://github.com/OpenDroneMap/openMVS
GIT_TAG 317
GIT_TAG 320
#--Update/Patch step----------
UPDATE_COMMAND ""
#--Configure step-------------

View file

@ -25,7 +25,7 @@ ExternalProject_Add(${_proj_name}
#--Download step--------------
DOWNLOAD_DIR ${SB_DOWNLOAD_DIR}
GIT_REPOSITORY https://github.com/OpenDroneMap/OpenSfM/
GIT_TAG 316
GIT_TAG 330
#--Update/Patch step----------
UPDATE_COMMAND git submodule update --init --recursive
#--Configure step-------------

View file

@ -16,7 +16,7 @@ ExternalProject_Add(${_proj_name}
STAMP_DIR ${_SB_BINARY_DIR}/stamp
#--Download step--------------
DOWNLOAD_DIR ${SB_DOWNLOAD_DIR}
URL https://github.com/PDAL/PDAL/archive/refs/tags/2.4.3.zip
URL https://github.com/OpenDroneMap/PDAL/archive/refs/heads/333.zip
#--Update/Patch step----------
UPDATE_COMMAND ""
#--Configure step-------------

View file

@ -0,0 +1,30 @@
set(_proj_name renderdem)
set(_SB_BINARY_DIR "${SB_BINARY_DIR}/${_proj_name}")
ExternalProject_Add(${_proj_name}
DEPENDS pdal
PREFIX ${_SB_BINARY_DIR}
TMP_DIR ${_SB_BINARY_DIR}/tmp
STAMP_DIR ${_SB_BINARY_DIR}/stamp
#--Download step--------------
DOWNLOAD_DIR ${SB_DOWNLOAD_DIR}
GIT_REPOSITORY https://github.com/OpenDroneMap/RenderDEM
GIT_TAG main
#--Update/Patch step----------
UPDATE_COMMAND ""
#--Configure step-------------
SOURCE_DIR ${SB_SOURCE_DIR}/${_proj_name}
CMAKE_ARGS
-DPDAL_DIR=${SB_INSTALL_DIR}/lib/cmake/PDAL
-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}
-DCMAKE_INSTALL_PREFIX:PATH=${SB_INSTALL_DIR}
${WIN32_CMAKE_ARGS}
#--Build step-----------------
BINARY_DIR ${_SB_BINARY_DIR}
#--Install step---------------
INSTALL_DIR ${SB_INSTALL_DIR}
#--Output logging-------------
LOG_DOWNLOAD OFF
LOG_CONFIGURE OFF
LOG_BUILD OFF
)

View file

@ -1 +1 @@
3.1.7
3.5.1

View file

@ -127,6 +127,9 @@ installreqs() {
installdepsfromsnapcraft build openmvs
set -e
# edt requires numpy to build
pip install --ignore-installed numpy==1.23.1
pip install --ignore-installed -r requirements.txt
#if [ ! -z "$GPU_INSTALL" ]; then
#fi

View file

@ -0,0 +1,26 @@
# exif_binner.py
Bins multispectral drone images by spectral band, using EXIF data. Also verifies that each bin is complete (i.e. contains all expected bands) and can log errors to a CSV file. Excludes RGB images by default.
## Requirements
- [Pillow](https://pillow.readthedocs.io/en/stable/installation.html) library for reading images and EXIF data.
- [tqdm](https://github.com/tqdm/tqdm#installation) for progress bars; optional, can be removed
## Usage
```
exif_binner.py <args> <path to folder of images to rename> <output folder>
```
Optional arguments:
- `-b`/`--bands <integer>`: Number of expected bands per capture. Default: `5`
- `-s`/`--sequential <True/False>`: Use sequential capture group in filenames rather than original capture ID. Default: `True`
- `-z`/`--zero_pad <integer>`: If using sequential capture groups, zero-pad the group number to this many digits. 0 for no padding, -1 for auto padding. Default: `5`
- `-w`/`--whitespace_replace <string>`: Replace whitespace characters with this character. Default: `-`
- `-l`/`--logfile <filename>`: Write processed image metadata to this CSV file
- `-r`/`--replace_filename <string>`: Use this string in place of the original filename when building new filenames.
- `-f`/`--force`: Do not ask for processing confirmation.
- `-g`/`--no_grouping`: Do not apply grouping, only validate and add band name.
- Show these on the command line with `-h`/`--help`.
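An illustrative invocation (hypothetical paths), binning 5-band captures and logging metadata to a CSV:
```
python3 exif_binner.py -b 5 -l processed.csv ./multispec_raw ./multispec_binned
```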

25
contrib/exif-binner/exif_binner.py 100644 → 100755
View file

@ -1,23 +1,25 @@
#!/usr/bin/env python3
# Originally developed by Ming Chia at the Australian Plant Phenomics Facility (Australian National University node)
# Usage:
# exif_binner.py <args> <path to folder of images to rename> <output folder>
# standard libraries
import sys
import os
import PIL
from PIL import Image, ExifTags
import shutil
from tqdm import tqdm
import re
import csv
import math
import argparse
parser = argparse.ArgumentParser()
# Usage:
# python exif_binner.py <args> <path to folder of images to rename> <output folder>
# other imports
import PIL
from PIL import Image, ExifTags
from tqdm import tqdm # optional: see "swap with this for no tqdm" below
parser = argparse.ArgumentParser()
# required args
parser.add_argument("file_dir", help="input folder of images")
@ -77,9 +79,8 @@ images = []
print("Indexing images ...")
# Uses tqdm() for the progress bar, if not needed swap with
# for filename in os.listdir(file_dir):
# for filename in os.listdir(file_dir): # swap with this for no tqdm
for filename in tqdm(os.listdir(file_dir)):
old_path = os.path.join(file_dir, filename)
file_name, file_ext = os.path.splitext(filename)
@ -143,6 +144,7 @@ images = sorted(images, key=lambda img: (img["DateTime"], img["name"]))
# now sort and identify valid entries
if not args.no_grouping:
# for this_img in images: # swap with this for no tqdm
for this_img in tqdm(images):
if not this_img["valid"]: # prefiltered in last loop
continue
@ -166,6 +168,7 @@ os.makedirs(output_invalid, exist_ok=True)
identifier = ""
# then do the actual copy
# for this_img in images: # swap with this for no tqdm
for this_img in tqdm(images):
old_path = os.path.join(file_dir, this_img["name"])
file_name, file_ext = os.path.splitext(this_img["name"])

View file

@ -51,6 +51,5 @@ commands.create_dem(args.point_cloud,
outdir=outdir,
resolution=args.resolution,
decimation=1,
max_workers=multiprocessing.cpu_count(),
keep_unfilled_copy=False
max_workers=multiprocessing.cpu_count()
)

View file

@ -1,4 +1,4 @@
FROM nvidia/cuda:11.2.0-devel-ubuntu20.04 AS builder
FROM nvidia/cuda:11.2.2-devel-ubuntu20.04 AS builder
# Env variables
ENV DEBIAN_FRONTEND=noninteractive \
@ -21,7 +21,7 @@ RUN bash configure.sh clean
### Use a second image for the final asset to reduce the number and
# size of the layers.
FROM nvidia/cuda:11.2.0-runtime-ubuntu20.04
FROM nvidia/cuda:11.2.2-runtime-ubuntu20.04
#FROM nvidia/cuda:11.2.0-devel-ubuntu20.04
# Env variables

View file

@ -0,0 +1,76 @@
from opendm import log
from shlex import _find_unsafe
import json
import os
def double_quote(s):
"""Return a shell-escaped version of the string *s*."""
if not s:
return '""'
if _find_unsafe(s) is None:
return s
# use double quotes, and prefix double quotes with a \
# the string $"b is then quoted as "$\"b"
return '"' + s.replace('"', '\\\"') + '"'
def args_to_dict(args):
args_dict = vars(args)
result = {}
for k in sorted(args_dict.keys()):
# Skip _is_set keys
if k.endswith("_is_set"):
continue
# Don't leak token
if k == 'sm_cluster' and args_dict[k] is not None:
result[k] = True
else:
result[k] = args_dict[k]
return result
def save_opts(opts_json, args):
try:
with open(opts_json, "w", encoding='utf-8') as f:
f.write(json.dumps(args_to_dict(args)))
except Exception as e:
log.ODM_WARNING("Cannot save options to %s: %s" % (opts_json, str(e)))
def compare_args(opts_json, args, rerun_stages):
if not os.path.isfile(opts_json):
return {}
try:
diff = {}
with open(opts_json, "r", encoding="utf-8") as f:
prev_args = json.loads(f.read())
cur_args = args_to_dict(args)
for opt in cur_args:
cur_value = cur_args[opt]
prev_value = prev_args.get(opt, None)
stage = rerun_stages.get(opt, None)
if stage is not None and cur_value != prev_value:
diff[opt] = prev_value
return diff
except:
return {}
def find_rerun_stage(opts_json, args, rerun_stages, processopts):
# Find the proper rerun stage if one is not explicitly set
if not ('rerun_is_set' in args or 'rerun_from_is_set' in args or 'rerun_all_is_set' in args):
args_diff = compare_args(opts_json, args, rerun_stages)
if args_diff:
if 'split_is_set' in args:
return processopts[processopts.index('dataset'):], args_diff
try:
stage_idxs = [processopts.index(rerun_stages[opt]) for opt in args_diff.keys() if rerun_stages[opt] is not None]
return processopts[min(stage_idxs):], args_diff
except ValueError as e:
print(str(e))
return None, {}

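Taken together (an illustrative, standalone walk-through with made-up values, not part of the diff): when a previous run's opts.json differs from the current arguments, each changed option maps to a stage and the rerun starts from the earliest one.

```python
processopts = ['dataset', 'split', 'merge', 'opensfm', 'openmvs',
               'odm_filterpoints', 'odm_meshing', 'mvs_texturing',
               'odm_georeferencing', 'odm_dem', 'odm_orthophoto',
               'odm_report', 'odm_postprocess']
rerun_stages = {'mesh_size': 'odm_meshing', 'dsm': 'odm_dem'}

args_diff = {'mesh_size': 200000, 'dsm': True}  # options changed since last run
stage_idxs = [processopts.index(rerun_stages[opt]) for opt in args_diff]
print(processopts[min(stage_idxs):][0])  # -> 'odm_meshing', the earliest affected stage
```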
View file

@ -13,6 +13,100 @@ processopts = ['dataset', 'split', 'merge', 'opensfm', 'openmvs', 'odm_filterpoi
'odm_meshing', 'mvs_texturing', 'odm_georeferencing',
'odm_dem', 'odm_orthophoto', 'odm_report', 'odm_postprocess']
rerun_stages = {
'3d_tiles': 'odm_postprocess',
'align': 'odm_georeferencing',
'auto_boundary': 'odm_filterpoints',
'auto_boundary_distance': 'odm_filterpoints',
'bg_removal': 'dataset',
'boundary': 'odm_filterpoints',
'build_overviews': 'odm_orthophoto',
'camera_lens': 'dataset',
'cameras': 'dataset',
'cog': 'odm_dem',
'copy_to': 'odm_postprocess',
'crop': 'odm_georeferencing',
'dem_decimation': 'odm_dem',
'dem_euclidean_map': 'odm_dem',
'dem_gapfill_steps': 'odm_dem',
'dem_resolution': 'odm_dem',
'dsm': 'odm_dem',
'dtm': 'odm_dem',
'end_with': None,
'fast_orthophoto': 'odm_filterpoints',
'feature_quality': 'opensfm',
'feature_type': 'opensfm',
'force_gps': 'opensfm',
'gcp': 'dataset',
'geo': 'dataset',
'gltf': 'mvs_texturing',
'gps_accuracy': 'dataset',
'help': None,
'ignore_gsd': 'opensfm',
'matcher_neighbors': 'opensfm',
'matcher_order': 'opensfm',
'matcher_type': 'opensfm',
'max_concurrency': None,
'merge': 'Merge',
'mesh_octree_depth': 'odm_meshing',
'mesh_size': 'odm_meshing',
'min_num_features': 'opensfm',
'name': None,
'no_gpu': None,
'optimize_disk_space': None,
'orthophoto_compression': 'odm_orthophoto',
'orthophoto_cutline': 'odm_orthophoto',
'orthophoto_kmz': 'odm_orthophoto',
'orthophoto_no_tiled': 'odm_orthophoto',
'orthophoto_png': 'odm_orthophoto',
'orthophoto_resolution': 'odm_orthophoto',
'pc_classify': 'odm_georeferencing',
'pc_copc': 'odm_georeferencing',
'pc_csv': 'odm_georeferencing',
'pc_ept': 'odm_georeferencing',
'pc_filter': 'openmvs',
'pc_las': 'odm_georeferencing',
'pc_quality': 'opensfm',
'pc_rectify': 'odm_georeferencing',
'pc_sample': 'odm_filterpoints',
'pc_skip_geometric': 'openmvs',
'primary_band': 'dataset',
'project_path': None,
'radiometric_calibration': 'opensfm',
'rerun': None,
'rerun_all': None,
'rerun_from': None,
'rolling_shutter': 'opensfm',
'rolling_shutter_readout': 'opensfm',
'sfm_algorithm': 'opensfm',
'sfm_no_partial': 'opensfm',
'skip_3dmodel': 'odm_meshing',
'skip_band_alignment': 'opensfm',
'skip_orthophoto': 'odm_orthophoto',
'skip_report': 'odm_report',
'sky_removal': 'dataset',
'sm_cluster': 'split',
'sm_no_align': 'split',
'smrf_scalar': 'odm_dem',
'smrf_slope': 'odm_dem',
'smrf_threshold': 'odm_dem',
'smrf_window': 'odm_dem',
'split': 'split',
'split_image_groups': 'split',
'split_overlap': 'split',
'texturing_keep_unseen_faces': 'mvs_texturing',
'texturing_single_material': 'mvs_texturing',
'texturing_skip_global_seam_leveling': 'mvs_texturing',
'tiles': 'odm_dem',
'use_3dmesh': 'mvs_texturing',
'use_exif': 'dataset',
'use_fixed_camera_params': 'opensfm',
'use_hybrid_bundle_adjustment': 'opensfm',
'version': None,
'video_limit': 'dataset',
'video_resolution': 'dataset',
}
with open(os.path.join(context.root_path, 'VERSION')) as version_file:
__version__ = version_file.read().strip()
@ -123,8 +217,8 @@ def config(argv=None, parser=None):
parser.add_argument('--feature-type',
metavar='<string>',
action=StoreValue,
default='sift',
choices=['akaze', 'hahog', 'orb', 'sift'],
default='dspsift',
choices=['akaze', 'dspsift', 'hahog', 'orb', 'sift'],
help=('Choose the algorithm for extracting keypoints and computing descriptors. '
'Can be one of: %(choices)s. Default: '
'%(default)s'))
@ -159,7 +253,7 @@ def config(argv=None, parser=None):
action=StoreValue,
default=0,
type=int,
help='Perform image matching with the nearest N images based on image filename order. Can speed up processing of sequential images, such as those extracted from video. Set to 0 to disable. Default: %(default)s')
help='Perform image matching with the nearest N images based on image filename order. Can speed up processing of sequential images, such as those extracted from video. It is applied only on non-georeferenced datasets. Set to 0 to disable. Default: %(default)s')
parser.add_argument('--use-fixed-camera-params',
action=StoreTrue,
@ -272,10 +366,11 @@ def config(argv=None, parser=None):
action=StoreTrue,
nargs=0,
default=False,
help='Ignore Ground Sampling Distance (GSD). GSD '
'caps the maximum resolution of image outputs and '
'resizes images when necessary, resulting in faster processing and '
'lower memory usage. Since GSD is an estimate, sometimes ignoring it can result in slightly better image output quality. Default: %(default)s')
help='Ignore Ground Sampling Distance (GSD).'
'A memory and processor hungry change relative to the default behavior if set to true. '
'Ordinarily, GSD estimates are used to cap the maximum resolution of image outputs and resizes images when necessary, resulting in faster processing and lower memory usage. '
'Since GSD is an estimate, sometimes ignoring it can result in slightly better image output quality. '
'Never set --ignore-gsd to true unless you are positive you need it, and even then: do not use it. Default: %(default)s')
parser.add_argument('--no-gpu',
action=StoreTrue,
@ -390,7 +485,7 @@ def config(argv=None, parser=None):
metavar='<positive float>',
action=StoreValue,
type=float,
default=2.5,
default=5,
help='Filters the point cloud by removing points that deviate more than N standard deviations from the local mean. Set to 0 to disable filtering. '
'Default: %(default)s')
@ -447,12 +542,6 @@ def config(argv=None, parser=None):
default=False,
help=('Skip normalization of colors across all images. Useful when processing radiometric data. Default: %(default)s'))
parser.add_argument('--texturing-skip-local-seam-leveling',
action=StoreTrue,
nargs=0,
default=False,
help='Skip the blending of colors near seams. Default: %(default)s')
parser.add_argument('--texturing-keep-unseen-faces',
action=StoreTrue,
nargs=0,
@ -543,7 +632,7 @@ def config(argv=None, parser=None):
action=StoreValue,
type=float,
default=5,
help='DSM/DTM resolution in cm / pixel. Note that this value is capped to 2x the ground sampling distance (GSD) estimate. To remove the cap, check --ignore-gsd also.'
help='DSM/DTM resolution in cm / pixel. Note that this value is capped by a ground sampling distance (GSD) estimate.'
' Default: %(default)s')
parser.add_argument('--dem-decimation',
@ -570,7 +659,7 @@ def config(argv=None, parser=None):
action=StoreValue,
default=5,
type=float,
help=('Orthophoto resolution in cm / pixel. Note that this value is capped by a ground sampling distance (GSD) estimate. To remove the cap, check --ignore-gsd also. '
help=('Orthophoto resolution in cm / pixel. Note that this value is capped by a ground sampling distance (GSD) estimate.'
'Default: %(default)s'))
parser.add_argument('--orthophoto-no-tiled',
@ -749,7 +838,7 @@ def config(argv=None, parser=None):
type=float,
action=StoreValue,
metavar='<positive float>',
default=10,
default=3,
help='Set a value in meters for the GPS Dilution of Precision (DOP) '
'information for all images. If your images are tagged '
'with high precision GPS information (RTK), this value will be automatically '
@ -791,7 +880,7 @@ def config(argv=None, parser=None):
'Default: %(default)s'))
args, unknown = parser.parse_known_args(argv)
DEPRECATED = ["--verbose", "--debug", "--time", "--resize-to", "--depthmap-resolution", "--pc-geometric", "--texturing-data-term", "--texturing-outlier-removal-type", "--texturing-tone-mapping"]
DEPRECATED = ["--verbose", "--debug", "--time", "--resize-to", "--depthmap-resolution", "--pc-geometric", "--texturing-data-term", "--texturing-outlier-removal-type", "--texturing-tone-mapping", "--texturing-skip-local-seam-leveling"]
unknown_e = [p for p in unknown if p not in DEPRECATED]
if len(unknown_e) > 0:
raise parser.error("unrecognized arguments: %s" % " ".join(unknown_e))

View file

@ -5,22 +5,17 @@ import numpy
import math
import time
import shutil
import functools
import glob
import re
from joblib import delayed, Parallel
from opendm.system import run
from opendm import point_cloud
from opendm import io
from opendm import system
from opendm.concurrency import get_max_memory, parallel_map, get_total_memory
from scipy import ndimage
from datetime import datetime
from opendm.vendor.gdal_fillnodata import main as gdal_fillnodata
from opendm import log
try:
import Queue as queue
except:
import queue
import threading
from .ground_rectification.rectify import run_rectification
from . import pdal
@ -68,119 +63,51 @@ error = None
def create_dem(input_point_cloud, dem_type, output_type='max', radiuses=['0.56'], gapfill=True,
outdir='', resolution=0.1, max_workers=1, max_tile_size=4096,
decimation=None, keep_unfilled_copy=False,
decimation=None, with_euclidean_map=False,
apply_smoothing=True, max_tiles=None):
""" Create DEM from multiple radii, and optionally gapfill """
global error
error = None
start = datetime.now()
if not os.path.exists(outdir):
log.ODM_INFO("Creating %s" % outdir)
os.mkdir(outdir)
extent = point_cloud.get_extent(input_point_cloud)
log.ODM_INFO("Point cloud bounds are [minx: %s, maxx: %s] [miny: %s, maxy: %s]" % (extent['minx'], extent['maxx'], extent['miny'], extent['maxy']))
ext_width = extent['maxx'] - extent['minx']
ext_height = extent['maxy'] - extent['miny']
w, h = (int(math.ceil(ext_width / float(resolution))),
int(math.ceil(ext_height / float(resolution))))
# Set a floor, no matter the resolution parameter
# (sometimes a wrongly estimated scale of the model can cause the resolution
# to be set unrealistically low, causing errors)
RES_FLOOR = 64
if w < RES_FLOOR and h < RES_FLOOR:
prev_w, prev_h = w, h
if w >= h:
w, h = (RES_FLOOR, int(math.ceil(ext_height / ext_width * RES_FLOOR)))
else:
w, h = (int(math.ceil(ext_width / ext_height * RES_FLOOR)), RES_FLOOR)
floor_ratio = prev_w / float(w)
resolution *= floor_ratio
radiuses = [str(float(r) * floor_ratio) for r in radiuses]
log.ODM_WARNING("Really low resolution DEM requested %s will set floor at %s pixels. Resolution changed to %s. The scale of this reconstruction might be off." % ((prev_w, prev_h), RES_FLOOR, resolution))
final_dem_pixels = w * h
num_splits = int(max(1, math.ceil(math.log(math.ceil(final_dem_pixels / float(max_tile_size * max_tile_size)))/math.log(2))))
num_tiles = num_splits * num_splits
log.ODM_INFO("DEM resolution is %s, max tile size is %s, will split DEM generation into %s tiles" % ((h, w), max_tile_size, num_tiles))
tile_bounds_width = ext_width / float(num_splits)
tile_bounds_height = ext_height / float(num_splits)
tiles = []
for r in radiuses:
minx = extent['minx']
for x in range(num_splits):
miny = extent['miny']
if x == num_splits - 1:
maxx = extent['maxx']
else:
maxx = minx + tile_bounds_width
for y in range(num_splits):
if y == num_splits - 1:
maxy = extent['maxy']
else:
maxy = miny + tile_bounds_height
filename = os.path.join(os.path.abspath(outdir), '%s_r%s_x%s_y%s.tif' % (dem_type, r, x, y))
tiles.append({
'radius': r,
'bounds': {
'minx': minx,
'maxx': maxx,
'miny': miny,
'maxy': maxy
},
'filename': filename
})
miny = maxy
minx = maxx
# Safety check
if max_tiles is not None:
if len(tiles) > max_tiles and final_dem_pixels > get_total_memory() * 10:
raise system.ExitException("Max tiles limit exceeded (%s). This is a strong indicator that the reconstruction failed and we would probably run out of memory trying to process this" % max_tiles)
# Sort tiles by increasing radius
tiles.sort(key=lambda t: float(t['radius']), reverse=True)
def process_tile(q):
log.ODM_INFO("Generating %s (%s, radius: %s, resolution: %s)" % (q['filename'], output_type, q['radius'], resolution))
d = pdal.json_gdal_base(q['filename'], output_type, q['radius'], resolution, q['bounds'])
if dem_type == 'dtm':
d = pdal.json_add_classification_filter(d, 2)
if decimation is not None:
d = pdal.json_add_decimation_filter(d, decimation)
pdal.json_add_readers(d, [input_point_cloud])
pdal.run_pipeline(d)
parallel_map(process_tile, tiles, max_workers)
kwargs = {
'input': input_point_cloud,
'outdir': outdir,
'outputType': output_type,
'radiuses': ",".join(map(str, radiuses)),
'resolution': resolution,
'maxTiles': 0 if max_tiles is None else max_tiles,
'decimation': 1 if decimation is None else decimation,
'classification': 2 if dem_type == 'dtm' else -1,
'tileSize': max_tile_size
}
system.run('renderdem "{input}" '
'--outdir "{outdir}" '
'--output-type {outputType} '
'--radiuses {radiuses} '
'--resolution {resolution} '
'--max-tiles {maxTiles} '
'--decimation {decimation} '
'--classification {classification} '
'--tile-size {tileSize} '
'--force '.format(**kwargs), env_vars={'OMP_NUM_THREADS': max_workers})
output_file = "%s.tif" % dem_type
output_path = os.path.abspath(os.path.join(outdir, output_file))
# Verify tile results
for t in tiles:
if not os.path.exists(t['filename']):
raise Exception("Error creating %s, %s failed to be created" % (output_file, t['filename']))
# Fetch tiles
tiles = []
for p in glob.glob(os.path.join(os.path.abspath(outdir), "*.tif")):
filename = os.path.basename(p)
m = re.match("^r([\d\.]+)_x\d+_y\d+\.tif", filename)
if m is not None:
tiles.append({'filename': p, 'radius': float(m.group(1))})
if len(tiles) == 0:
raise system.ExitException("No DEM tiles were generated, something went wrong")
log.ODM_INFO("Generated %s tiles" % len(tiles))
# Sort tiles by decreasing radius
tiles.sort(key=lambda t: float(t['radius']), reverse=True)
# Create virtual raster
tiles_vrt_path = os.path.abspath(os.path.join(outdir, "tiles.vrt"))
@ -192,7 +119,6 @@ def create_dem(input_point_cloud, dem_type, output_type='max', radiuses=['0.56']
run('gdalbuildvrt -input_file_list "%s" "%s" ' % (tiles_file_list, tiles_vrt_path))
merged_vrt_path = os.path.abspath(os.path.join(outdir, "merged.vrt"))
geotiff_tmp_path = os.path.abspath(os.path.join(outdir, 'tiles.tmp.tif'))
geotiff_small_path = os.path.abspath(os.path.join(outdir, 'tiles.small.tif'))
geotiff_small_filled_path = os.path.abspath(os.path.join(outdir, 'tiles.small_filled.tif'))
geotiff_path = os.path.abspath(os.path.join(outdir, 'tiles.tif'))
@ -204,7 +130,6 @@ def create_dem(input_point_cloud, dem_type, output_type='max', radiuses=['0.56']
'tiles_vrt': tiles_vrt_path,
'merged_vrt': merged_vrt_path,
'geotiff': geotiff_path,
'geotiff_tmp': geotiff_tmp_path,
'geotiff_small': geotiff_small_path,
'geotiff_small_filled': geotiff_small_filled_path
}
@ -213,31 +138,27 @@ def create_dem(input_point_cloud, dem_type, output_type='max', radiuses=['0.56']
# Sometimes, for some reason gdal_fillnodata.py
# behaves strangely when reading data directly from a .VRT
# so we need to convert to GeoTIFF first.
# Scale to 10% size
run('gdal_translate '
'-co NUM_THREADS={threads} '
'-co BIGTIFF=IF_SAFER '
'-co COMPRESS=DEFLATE '
'--config GDAL_CACHEMAX {max_memory}% '
'"{tiles_vrt}" "{geotiff_tmp}"'.format(**kwargs))
# Scale to 10% size
run('gdal_translate '
'-co NUM_THREADS={threads} '
'-co BIGTIFF=IF_SAFER '
'--config GDAL_CACHEMAX {max_memory}% '
'-outsize 10% 0 '
'"{geotiff_tmp}" "{geotiff_small}"'.format(**kwargs))
'-outsize 10% 0 '
'"{tiles_vrt}" "{geotiff_small}"'.format(**kwargs))
# Fill scaled
gdal_fillnodata(['.',
'-co', 'NUM_THREADS=%s' % kwargs['threads'],
'-co', 'BIGTIFF=IF_SAFER',
'-co', 'COMPRESS=DEFLATE',
'--config', 'GDAL_CACHE_MAX', str(kwargs['max_memory']) + '%',
'-b', '1',
'-of', 'GTiff',
kwargs['geotiff_small'], kwargs['geotiff_small_filled']])
# Merge filled scaled DEM with unfilled DEM using bilinear interpolation
run('gdalbuildvrt -resolution highest -r bilinear "%s" "%s" "%s"' % (merged_vrt_path, geotiff_small_filled_path, geotiff_tmp_path))
run('gdalbuildvrt -resolution highest -r bilinear "%s" "%s" "%s"' % (merged_vrt_path, geotiff_small_filled_path, tiles_vrt_path))
run('gdal_translate '
'-co NUM_THREADS={threads} '
'-co TILED=YES '
@ -260,14 +181,14 @@ def create_dem(input_point_cloud, dem_type, output_type='max', radiuses=['0.56']
else:
os.replace(geotiff_path, output_path)
if os.path.exists(geotiff_tmp_path):
if not keep_unfilled_copy:
os.remove(geotiff_tmp_path)
else:
os.replace(geotiff_tmp_path, io.related_file_path(output_path, postfix=".unfilled"))
if os.path.exists(tiles_vrt_path):
if with_euclidean_map:
emap_path = io.related_file_path(output_path, postfix=".euclideand")
compute_euclidean_map(tiles_vrt_path, emap_path, overwrite=True)
for cleanup_file in [tiles_vrt_path, tiles_file_list, merged_vrt_path, geotiff_small_path, geotiff_small_filled_path]:
if os.path.exists(cleanup_file): os.remove(cleanup_file)
for t in tiles:
if os.path.exists(t['filename']): os.remove(t['filename'])
@ -283,12 +204,20 @@ def compute_euclidean_map(geotiff_path, output_path, overwrite=False):
with rasterio.open(geotiff_path) as f:
nodata = f.nodatavals[0]
if not os.path.exists(output_path) or overwrite:
if not os.path.isfile(output_path) or overwrite:
if os.path.isfile(output_path):
os.remove(output_path)
log.ODM_INFO("Computing euclidean distance: %s" % output_path)
if gdal_proximity is not None:
try:
gdal_proximity(['gdal_proximity.py', geotiff_path, output_path, '-values', str(nodata)])
gdal_proximity(['gdal_proximity.py',
geotiff_path, output_path, '-values', str(nodata),
'-co', 'TILED=YES',
'-co', 'BIGTIFF=IF_SAFER',
'-co', 'COMPRESS=DEFLATE',
])
except Exception as e:
log.ODM_WARNING("Cannot compute euclidean distance: %s" % str(e))
@ -304,68 +233,31 @@ def compute_euclidean_map(geotiff_path, output_path, overwrite=False):
return output_path
def median_smoothing(geotiff_path, output_path, smoothing_iterations=1, window_size=512, num_workers=1):
def median_smoothing(geotiff_path, output_path, window_size=512, num_workers=1, radius=4):
""" Apply median smoothing """
start = datetime.now()
if not os.path.exists(geotiff_path):
raise Exception('File %s does not exist!' % geotiff_path)
log.ODM_INFO('Starting smoothing...')
with rasterio.open(geotiff_path) as img:
nodata = img.nodatavals[0]
dtype = img.dtypes[0]
shape = img.shape
arr = img.read()[0]
for i in range(smoothing_iterations):
log.ODM_INFO("Smoothing iteration %s" % str(i + 1))
rows, cols = numpy.meshgrid(numpy.arange(0, shape[0], window_size), numpy.arange(0, shape[1], window_size))
rows = rows.flatten()
cols = cols.flatten()
rows_end = numpy.minimum(rows + window_size, shape[0])
cols_end = numpy.minimum(cols + window_size, shape[1])
windows = numpy.dstack((rows, cols, rows_end, cols_end)).reshape(-1, 4)
filter = functools.partial(ndimage.median_filter, size=9, output=dtype, mode='nearest')
# threading backend and GIL released filter are important for memory efficiency and multi-core performance
window_arrays = Parallel(n_jobs=num_workers, backend='threading')(delayed(window_filter_2d)(arr, nodata, window, 9, filter) for window in windows)
for window, win_arr in zip(windows, window_arrays):
arr[window[0]:window[2], window[1]:window[3]] = win_arr
log.ODM_INFO("Smoothing completed in %s" % str(datetime.now() - start))
# write output
with rasterio.open(output_path, 'w', BIGTIFF="IF_SAFER", **img.profile) as imgout:
imgout.write(arr, 1)
kwargs = {
'input': geotiff_path,
'output': output_path,
'window': window_size,
'radius': radius,
}
system.run('fastrasterfilter "{input}" '
'--output "{output}" '
'--window-size {window} '
'--radius {radius} '
'--co TILED=YES '
'--co BIGTIFF=IF_SAFER '
'--co COMPRESS=DEFLATE '.format(**kwargs), env_vars={'OMP_NUM_THREADS': num_workers})
log.ODM_INFO('Completed smoothing to create %s in %s' % (output_path, datetime.now() - start))
return output_path
def window_filter_2d(arr, nodata, window, kernel_size, filter):
"""
Apply a filter to dem within a window, expects to work with kernal based filters
:param geotiff_path: path to the geotiff to filter
:param window: the window to apply the filter, should be a list contains row start, col_start, row_end, col_end
:param kernel_size: the size of the kernel for the filter, works with odd numbers, need to test if it works with even numbers
:param filter: the filter function which takes a 2d array as input and filter results as output.
"""
shape = arr.shape[:2]
if window[0] < 0 or window[1] < 0 or window[2] > shape[0] or window[3] > shape[1]:
raise Exception('Window is out of bounds')
expanded_window = [ max(0, window[0] - kernel_size // 2), max(0, window[1] - kernel_size // 2), min(shape[0], window[2] + kernel_size // 2), min(shape[1], window[3] + kernel_size // 2) ]
win_arr = arr[expanded_window[0]:expanded_window[2], expanded_window[1]:expanded_window[3]]
# We should have a better way to handle nodata, similar to how the filter algorithms handle borders (reflection, nearest, interpolation, etc.).
# For now, follow the old approach to guarantee identical outputs.
nodata_locs = win_arr == nodata
win_arr = filter(win_arr)
win_arr[nodata_locs] = nodata
win_arr = win_arr[window[0] - expanded_window[0] : window[2] - expanded_window[0], window[1] - expanded_window[1] : window[3] - expanded_window[1]]
return win_arr
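A toy invocation of the (now removed) helper above, with arbitrary values, showing that the window content is filtered while nodata cells are preserved:

import functools
import numpy as np
from scipy import ndimage

arr = np.random.rand(8, 8).astype("float32")
arr[0, 0] = -9999  # mark one nodata cell
f = functools.partial(ndimage.median_filter, size=3, output="float32", mode="nearest")
patch = window_filter_2d(arr, -9999, [0, 0, 4, 4], 3, f)
print(patch.shape)  # (4, 4); the cell at [0, 0] stays -9999 in the result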
def get_dem_radius_steps(stats_file, steps, resolution, multiplier = 1.0):
radius_steps = [point_cloud.get_spacing(stats_file, resolution) * multiplier]
for _ in range(steps - 1):


@ -12,6 +12,9 @@ class GeoFile:
with open(self.geo_path, 'r') as f:
contents = f.read().strip()
# Strip any BOM characters
contents = contents.replace('\ufeff', '')
lines = list(map(str.strip, contents.split('\n')))
if lines:


@ -279,9 +279,10 @@ def obj2glb(input_obj, output_glb, rtc=(None, None), draco_compression=True, _in
)
gltf.extensionsRequired = ['KHR_materials_unlit']
gltf.extensionsUsed = ['KHR_materials_unlit']
if rtc != (None, None) and len(rtc) >= 2:
gltf.extensionsUsed = ['CESIUM_RTC', 'KHR_materials_unlit']
gltf.extensionsUsed.append('CESIUM_RTC')
gltf.extensions = {
'CESIUM_RTC': {
'center': [float(rtc[0]), float(rtc[1]), 0.0]
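The fix above keeps extensionsUsed consistent with the glTF 2.0 spec: every extension referenced anywhere in the asset must be listed there, while extensionsRequired only names the ones a loader cannot ignore. A minimal sketch with pygltflib and made-up RTC coordinates:

from pygltflib import GLTF2

gltf = GLTF2()
gltf.extensionsRequired = ['KHR_materials_unlit']
gltf.extensionsUsed = ['KHR_materials_unlit']

rtc = (120.0838, 30.2136)  # hypothetical center
if rtc != (None, None) and len(rtc) >= 2:
    # append instead of overwriting, so KHR_materials_unlit stays declared
    gltf.extensionsUsed.append('CESIUM_RTC')
    gltf.extensions = {'CESIUM_RTC': {'center': [float(rtc[0]), float(rtc[1]), 0.0]}}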


@ -7,11 +7,11 @@ import dateutil.parser
import shutil
import multiprocessing
from opendm.loghelpers import double_quote, args_to_dict
from opendm.arghelpers import double_quote, args_to_dict
from vmem import virtual_memory
if sys.platform == 'win32':
# No colors on Windows, sorry!
if sys.platform == 'win32' or os.getenv('no_ansiesc'):
# No colors on Windows (sorry!) or when the no_ansiesc env variable is set
HEADER = ''
OKBLUE = ''
OKGREEN = ''


@ -1,28 +0,0 @@
from shlex import _find_unsafe
def double_quote(s):
"""Return a shell-escaped version of the string *s*."""
if not s:
return '""'
if _find_unsafe(s) is None:
return s
# use double quotes, and prefix double quotes with a \
# the string $"b is then quoted as "$\"b"
return '"' + s.replace('"', '\\\"') + '"'
def args_to_dict(args):
args_dict = vars(args)
result = {}
for k in sorted(args_dict.keys()):
# Skip _is_set keys
if k.endswith("_is_set"):
continue
# Don't leak token
if k == 'sm_cluster' and args_dict[k] is not None:
result[k] = True
else:
result[k] = args_dict[k]
return result


@ -123,6 +123,7 @@ def dem_to_mesh_gridded(inGeotiff, outMesh, maxVertexCount, maxConcurrency=1):
system.run('"{reconstructmesh}" -i "{infile}" '
'-o "{outfile}" '
'--archive-type 3 '
'--remove-spikes 0 --remove-spurious 0 --smooth 0 '
'--target-face-num {max_faces} -v 0'.format(**cleanupArgs))
@ -186,7 +187,7 @@ def screened_poisson_reconstruction(inPointCloud, outMesh, depth = 8, samples =
if threads < 1:
break
else:
log.ODM_WARNING("PoissonRecon failed with %s threads, let's retry with %s..." % (threads, threads // 2))
log.ODM_WARNING("PoissonRecon failed with %s threads, let's retry with %s..." % (threads * 2, threads))
# Cleanup and reduce vertex count if necessary
@ -199,6 +200,7 @@ def screened_poisson_reconstruction(inPointCloud, outMesh, depth = 8, samples =
system.run('"{reconstructmesh}" -i "{infile}" '
'-o "{outfile}" '
'--archive-type 3 '
'--remove-spikes 0 --remove-spurious 20 --smooth 0 '
'--target-face-num {max_faces} -v 0'.format(**cleanupArgs))
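The corrected log message reports the thread count that just failed and the count about to be tried (the variable had already been halved a few lines earlier). A simplified sketch of the halving-retry loop, with a placeholder command; the real stage passes many more flags:

import subprocess

def run_poisson_with_retries(base_cmd, threads):
    while threads >= 1:
        try:
            subprocess.run(base_cmd + ["--threads", str(threads)], check=True)
            return True
        except subprocess.CalledProcessError:
            if threads // 2 < 1:
                break
            print("PoissonRecon failed with %s threads, let's retry with %s..."
                  % (threads, threads // 2))
            threads //= 2
    return False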


@ -181,8 +181,13 @@ def get_primary_band_name(multi_camera, user_band_name):
if len(multi_camera) < 1:
raise Exception("Invalid multi_camera list")
# multi_camera is already sorted by band_index
# Pick RGB, Green or Blue, in that order, if available; otherwise the first band
if user_band_name == "auto":
for aliases in [['rgb', 'redgreenblue'], ['green', 'g'], ['blue', 'b']]:
for band in multi_camera:
if band['name'].lower() in aliases:
return band['name']
return multi_camera[0]['name']
for band in multi_camera:
@ -504,6 +509,28 @@ def find_features_homography(image_gray, align_image_gray, feature_retention=0.7
# Detect SIFT features and compute descriptors.
detector = cv2.SIFT_create(edgeThreshold=10, contrastThreshold=0.1)
h,w = image_gray.shape
max_dim = max(h, w)
max_size = 2048
if max_dim > max_size:
if max_dim == w:
f = max_size / w
else:
f = max_size / h
image_gray = cv2.resize(image_gray, None, fx=f, fy=f, interpolation=cv2.INTER_AREA)
h,w = image_gray.shape
if align_image_gray.shape[0] != image_gray.shape[0]:
fx = image_gray.shape[1]/align_image_gray.shape[1]
fy = image_gray.shape[0]/align_image_gray.shape[0]
align_image_gray = cv2.resize(align_image_gray, None,
fx=fx,
fy=fy,
interpolation=(cv2.INTER_AREA if (fx < 1.0 and fy < 1.0) else cv2.INTER_LANCZOS4))
kp_image, desc_image = detector.detectAndCompute(image_gray, None)
kp_align_image, desc_align_image = detector.detectAndCompute(align_image_gray, None)


@ -85,7 +85,7 @@ def generate_kmz(orthophoto_file, output_file=None, outsize=None):
system.run('gdal_translate -of KMLSUPEROVERLAY -co FORMAT=PNG "%s" "%s" %s '
'--config GDAL_CACHEMAX %s%% ' % (orthophoto_file, output_file, bandparam, get_max_memory()))
def post_orthophoto_steps(args, bounds_file_path, orthophoto_file, orthophoto_tiles_dir):
def post_orthophoto_steps(args, bounds_file_path, orthophoto_file, orthophoto_tiles_dir, resolution):
if args.crop > 0 or args.boundary:
Cropper.crop(bounds_file_path, orthophoto_file, get_orthophoto_vars(args), keep_original=not args.optimize_disk_space, warp_options=['-dstalpha'])
@ -99,7 +99,7 @@ def post_orthophoto_steps(args, bounds_file_path, orthophoto_file, orthophoto_ti
generate_kmz(orthophoto_file)
if args.tiles:
generate_orthophoto_tiles(orthophoto_file, orthophoto_tiles_dir, args.max_concurrency)
generate_orthophoto_tiles(orthophoto_file, orthophoto_tiles_dir, args.max_concurrency, resolution)
if args.cog:
convert_to_cogeo(orthophoto_file, max_workers=args.max_concurrency, compression=args.orthophoto_compression)


@ -13,7 +13,7 @@ from opendm import system
from opendm import context
from opendm import camera
from opendm import location
from opendm.photo import find_largest_photo_dim, find_largest_photo
from opendm.photo import find_largest_photo_dims, find_largest_photo
from opensfm.large import metadataset
from opensfm.large import tools
from opensfm.actions import undistort
@ -64,7 +64,6 @@ class OSFMContext:
"Check that the images have enough overlap, "
"that there are enough recognizable features "
"and that the images are in focus. "
"You could also try to increase the --min-num-features parameter."
"The program will now exit.")
if rolling_shutter_correct:
@ -211,11 +210,25 @@ class OSFMContext:
'lowest': 0.0675,
}
max_dim = find_largest_photo_dim(photos)
max_dims = find_largest_photo_dims(photos)
if max_dim > 0:
if max_dims is not None:
w, h = max_dims
max_dim = max(w, h)
log.ODM_INFO("Maximum photo dimensions: %spx" % str(max_dim))
feature_process_size = int(max_dim * feature_quality_scale[args.feature_quality])
lower_limit = 320
upper_limit = 4480
megapixels = (w * h) / 1e6
multiplier = 1
if megapixels < 2:
multiplier = 2
elif megapixels > 42:
multiplier = 0.5
factor = min(1, feature_quality_scale[args.feature_quality] * multiplier)
feature_process_size = min(upper_limit, max(lower_limit, int(max_dim * factor)))
log.ODM_INFO("Photo dimensions for feature extraction: %ipx" % feature_process_size)
else:
log.ODM_WARNING("Cannot compute max image dimensions, going with defaults")
@ -227,6 +240,11 @@ class OSFMContext:
else:
matcher_graph_rounds = 50
matcher_neighbors = 0
# Always use matcher-neighbors if there are fewer than 4 pictures
if len(photos) <= 3:
matcher_graph_rounds = 0
matcher_neighbors = 3
config = [
"use_exif_size: no",
@ -248,7 +266,10 @@ class OSFMContext:
]
if args.matcher_order > 0:
config.append("matching_order_neighbors: %s" % args.matcher_order)
if not reconstruction.is_georeferenced():
config.append("matching_order_neighbors: %s" % args.matcher_order)
else:
log.ODM_WARNING("Georeferenced reconstruction, ignoring --matcher-order")
if args.camera_lens != 'auto':
config.append("camera_projection_type: %s" % args.camera_lens.upper())
@ -275,9 +296,8 @@ class OSFMContext:
config.append("matcher_type: %s" % osfm_matchers[matcher_type])
# GPU acceleration?
if has_gpu(args):
max_photo = find_largest_photo(photos)
w, h = max_photo.width, max_photo.height
if has_gpu(args) and max_dims is not None:
w, h = max_dims
if w > h:
h = int((h / w) * feature_process_size)
w = int(feature_process_size)
@ -551,6 +571,8 @@ class OSFMContext:
pdf_report.save_report("report.pdf")
if os.path.exists(osfm_report_path):
if os.path.exists(report_path):
os.unlink(report_path)
shutil.move(osfm_report_path, report_path)
else:
log.ODM_WARNING("Report could not be generated")
@ -771,3 +793,12 @@ def get_all_submodel_paths(submodels_path, *all_paths):
result.append([os.path.join(submodels_path, f, ap) for ap in all_paths])
return result
def is_submodel(opensfm_root):
# A bit hackish, but works without introducing additional markers / flags
# Look at the path of the opensfm directory and see if "submodel_" is part of it
parts = os.path.abspath(opensfm_root).split(os.path.sep)
return (len(parts) >= 2 and parts[-2][:9] == "submodel_") or \
os.path.isfile(os.path.join(opensfm_root, "split_merge_stop_at_reconstruction.txt")) or \
os.path.isfile(os.path.join(opensfm_root, "features", "empty"))


@ -430,7 +430,8 @@ class ODM_Photo:
camera_projection = camera_projection.lower()
# Parrot Sequoia's "fisheye" model maps to "fisheye_opencv"
if camera_projection == "fisheye" and self.camera_make.lower() == "parrot" and self.camera_model.lower() == "sequoia":
# or better yet, replace all fisheye with fisheye_opencv, but wait to change API signature
if camera_projection == "fisheye":
camera_projection = "fisheye_opencv"
if camera_projection in projections:
@ -630,6 +631,8 @@ class ODM_Photo:
def int_values(self, tag):
if isinstance(tag.values, list):
return [int(v) for v in tag.values]
elif isinstance(tag.values, str) and tag.values == '':
return []
else:
return [int(tag.values)]
@ -923,3 +926,6 @@ class ODM_Photo:
return self.width * self.height / 1e6
else:
return 0.0
def is_make_model(self, make, model):
return self.camera_make.lower() == make.lower() and self.camera_model.lower() == model.lower()


@ -9,6 +9,8 @@ from opendm.concurrency import parallel_map
from opendm.utils import double_quote
from opendm.boundary import as_polygon, as_geojson
from opendm.dem.pdal import run_pipeline
from opendm.opc import classify
from opendm.dem import commands
def ply_info(input_ply):
if not os.path.exists(input_ply):
@ -274,6 +276,32 @@ def merge_ply(input_point_cloud_files, output_file, dims=None):
system.run(' '.join(cmd))
def post_point_cloud_steps(args, tree, rerun=False):
# Classify and rectify before generating derivative files
if args.pc_classify:
pc_classify_marker = os.path.join(tree.odm_georeferencing, 'pc_classify_done.txt')
if not io.file_exists(pc_classify_marker) or rerun:
log.ODM_INFO("Classifying {} using Simple Morphological Filter (1/2)".format(tree.odm_georeferencing_model_laz))
commands.classify(tree.odm_georeferencing_model_laz,
args.smrf_scalar,
args.smrf_slope,
args.smrf_threshold,
args.smrf_window
)
log.ODM_INFO("Classifying {} using OpenPointClass (2/2)".format(tree.odm_georeferencing_model_laz))
classify(tree.odm_georeferencing_model_laz, args.max_concurrency)
with open(pc_classify_marker, 'w') as f:
f.write('Classify: smrf\n')
f.write('Scalar: {}\n'.format(args.smrf_scalar))
f.write('Slope: {}\n'.format(args.smrf_slope))
f.write('Threshold: {}\n'.format(args.smrf_threshold))
f.write('Window: {}\n'.format(args.smrf_window))
if args.pc_rectify:
commands.rectify(tree.odm_georeferencing_model_laz)
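The SMRF pass above goes through commands.classify. A rough sketch of the equivalent ground classification with the PDAL CLI directly, using hypothetical paths and the smrf_* values shown only as example parameters:

import json
import subprocess

pipeline = {"pipeline": [
    "odm_georeferenced_model.laz",
    {"type": "filters.smrf",            # Simple Morphological Filter
     "scalar": 1.25, "slope": 0.15,
     "threshold": 0.5, "window": 18.0},
    "odm_georeferenced_model_classified.laz",
]}
subprocess.run(["pdal", "pipeline", "--stdin"],
               input=json.dumps(pipeline).encode(), check=True)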
# XYZ point cloud output
if args.pc_csv:
log.ODM_INFO("Creating CSV file (XYZ format)")


@ -2,6 +2,7 @@ from opendm import log
# Make Model (lowercase) --> readout time (ms)
RS_DATABASE = {
'autel robotics xt701': 25, # Autel Evo II 8k
'dji phantom vision fc200': 74, # Phantom 2
'dji fc300s': 33, # Phantom 3 Advanced
@ -18,6 +19,7 @@ RS_DATABASE = {
'dji fc220': 64, # DJI Mavic Pro (Platinum)
'hasselblad l1d-20c': lambda p: 47 if p.get_capture_megapixels() < 17 else 56, # DJI Mavic 2 Pro (at 16:10 => 16.8MP 47ms, at 3:2 => 19.9MP 56ms. 4:3 has 17.7MP with same image height as 3:2 which can be concluded as same sensor readout)
'hasselblad l2d-20c': 16.6, # DJI Mavic 3 (not enterprise version)
'dji fc3582': lambda p: 26 if p.get_capture_megapixels() < 48 else 60, # DJI Mini 3 pro (at 48MP readout is 60ms, at 12MP it's 26ms)
@ -39,6 +41,10 @@ RS_DATABASE = {
'autel robotics xl724': 29, # Autel Nano+
'parrot anafi': 39, # Parrot Anafi
'autel robotics xt705': 30, # Autel EVO II pro
# Help us add more!
# See: https://github.com/OpenDroneMap/RSCalibration for instructions
}
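Entries in this table are either a fixed readout time in milliseconds or a callable taking the photo, as the Mavic 2 Pro and Mini 3 Pro rows show. A small sketch of how a lookup can resolve both forms (make, model, photo and the default are placeholders):

def rolling_shutter_readout_ms(make, model, photo, default=30):
    entry = RS_DATABASE.get(("%s %s" % (make, model)).lower(), default)
    # callables compute the readout from photo properties (e.g. megapixels)
    return entry(photo) if callable(entry) else entry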


@ -35,12 +35,12 @@ def dn_to_temperature(photo, image, images_path):
# Every camera stores thermal information differently
# The following will work for MicaSense Altum cameras
# but not necessarily for others
if photo.camera_make == "MicaSense" and photo.camera_model == "Altum":
if photo.camera_make == "MicaSense" and photo.camera_model[:5] == "Altum":
image = image.astype("float32")
image -= (273.15 * 100.0) # Convert Kelvin to Celsius
image *= 0.01
return image
elif photo.camera_make == "DJI" and photo.camera_model == "ZH20T":
filename, file_extension = os.path.splitext(photo.filename)
# DJI H20T high gain mode supports measurement of -40~150 celsius degrees
if file_extension.lower() in [".tif", ".tiff"] and image.min() >= 23315: # Calibrated grayscale tif


@ -1,16 +1,25 @@
import os
import sys
import math
from opendm import log
from opendm import system
from opendm import io
def generate_tiles(geotiff, output_dir, max_concurrency):
gdal2tiles = os.path.join(os.path.dirname(__file__), "gdal2tiles.py")
system.run('%s "%s" --processes %s -z 5-21 -n -w none "%s" "%s"' % (sys.executable, gdal2tiles, max_concurrency, geotiff, output_dir))
def generate_tiles(geotiff, output_dir, max_concurrency, resolution):
circumference_earth_cm = 2*math.pi*637_813_700
px_per_tile = 256
resolution_equator_cm = circumference_earth_cm/px_per_tile
zoom = math.ceil(math.log(resolution_equator_cm/resolution, 2))
def generate_orthophoto_tiles(geotiff, output_dir, max_concurrency):
min_zoom = 5 # 4.89 km/px
max_zoom = min(zoom, 23) # No deeper zoom than 23 (1.86 cm/px at equator)
gdal2tiles = os.path.join(os.path.dirname(__file__), "gdal2tiles.py")
system.run('%s "%s" --processes %s -z %s-%s -n -w none "%s" "%s"' % (sys.executable, gdal2tiles, max_concurrency, min_zoom, max_zoom, geotiff, output_dir))
def generate_orthophoto_tiles(geotiff, output_dir, max_concurrency, resolution):
try:
generate_tiles(geotiff, output_dir, max_concurrency)
generate_tiles(geotiff, output_dir, max_concurrency, resolution)
except Exception as e:
log.ODM_WARNING("Cannot generate orthophoto tiles: %s" % str(e))
@ -37,10 +46,10 @@ def generate_colored_hillshade(geotiff):
log.ODM_WARNING("Cannot generate colored hillshade: %s" % str(e))
return (None, None, None)
def generate_dem_tiles(geotiff, output_dir, max_concurrency):
def generate_dem_tiles(geotiff, output_dir, max_concurrency, resolution):
try:
colored_dem, hillshade_dem, colored_hillshade_dem = generate_colored_hillshade(geotiff)
generate_tiles(colored_hillshade_dem, output_dir, max_concurrency)
generate_tiles(colored_hillshade_dem, output_dir, max_concurrency, resolution)
# Cleanup
for f in [colored_dem, hillshade_dem, colored_hillshade_dem]:


@ -13,6 +13,7 @@ from opendm import log
from opendm import io
from opendm import system
from opendm import context
from opendm import multispectral
from opendm.progress import progressbc
from opendm.photo import ODM_Photo
@ -27,7 +28,7 @@ class ODM_Reconstruction(object):
self.gcp = None
self.multi_camera = self.detect_multi_camera()
self.filter_photos()
def detect_multi_camera(self):
"""
Looks at the reconstruction photos and determines if this
@ -45,22 +46,88 @@ class ODM_Reconstruction(object):
band_photos[p.band_name].append(p)
bands_count = len(band_photos)
if bands_count >= 2 and bands_count <= 8:
# Band name with the maximum number of photos
max_band_name = None
max_photos = -1
for band_name in band_photos:
if len(band_photos[band_name]) > max_photos:
max_band_name = band_name
max_photos = len(band_photos[band_name])
if bands_count >= 2 and bands_count <= 10:
# Validate that all bands have the same number of images,
# otherwise this is not a multi-camera setup
img_per_band = len(band_photos[p.band_name])
for band in band_photos:
if len(band_photos[band]) != img_per_band:
log.ODM_ERROR("Multi-camera setup detected, but band \"%s\" (identified from \"%s\") has only %s images (instead of %s), perhaps images are missing or are corrupted. Please include all necessary files to process all bands and try again." % (band, band_photos[band][0].filename, len(band_photos[band]), img_per_band))
raise RuntimeError("Invalid multi-camera images")
img_per_band = len(band_photos[max_band_name])
mc = []
for band_name in band_indexes:
mc.append({'name': band_name, 'photos': band_photos[band_name]})
# Sort by band index
mc.sort(key=lambda x: band_indexes[x['name']])
filter_missing = False
for band in band_photos:
if len(band_photos[band]) < img_per_band:
log.ODM_WARNING("Multi-camera setup detected, but band \"%s\" (identified from \"%s\") has only %s images (instead of %s), perhaps images are missing or are corrupted." % (band, band_photos[band][0].filename, len(band_photos[band]), len(band_photos[max_band_name])))
filter_missing = True
if filter_missing:
# Calculate files to ignore
_, p2s = multispectral.compute_band_maps(mc, max_band_name)
max_files_per_band = 0
for filename in p2s:
max_files_per_band = max(max_files_per_band, len(p2s[filename]))
for filename in p2s:
if len(p2s[filename]) < max_files_per_band:
photos_to_remove = p2s[filename] + [p for p in self.photos if p.filename == filename]
for photo in photos_to_remove:
log.ODM_WARNING("Excluding %s" % photo.filename)
self.photos = [p for p in self.photos if p != photo]
for i in range(len(mc)):
mc[i]['photos'] = [p for p in mc[i]['photos'] if p != photo]
log.ODM_INFO("New image count: %s" % len(self.photos))
# We enforce a normalized band order for all bands that we can identify
# and rely on the manufacturer's band_indexes as a fallback for all others
normalized_band_order = {
'RGB': '0',
'REDGREENBLUE': '0',
'RED': '1',
'R': '1',
'GREEN': '2',
'G': '2',
'BLUE': '3',
'B': '3',
'NIR': '4',
'N': '4',
'REDEDGE': '5',
'RE': '5',
'PANCHRO': '6',
'LWIR': '7',
'L': '7',
}
for band_name in band_indexes:
if band_name.upper() not in normalized_band_order:
log.ODM_WARNING(f"Cannot identify order for {band_name} band, using manufacturer suggested index instead")
# Sort
mc.sort(key=lambda x: normalized_band_order.get(x['name'].upper(), '9' + band_indexes[x['name']]))
for c, d in enumerate(mc):
log.ODM_INFO(f"Band {c + 1}: {d['name']}")
return mc
return None
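The sort key is a plain string comparison: identified bands get '0' through '7', and unknown bands get '9' prefixed to the manufacturer's index, so they always sort after the identified ones. A toy run with made-up indexes:

normalized = {'RED': '1', 'GREEN': '2', 'BLUE': '3'}
band_indexes = {'Red': '2', 'Green': '1', 'Custom': '5'}
mc = [{'name': 'Custom'}, {'name': 'Green'}, {'name': 'Red'}]
mc.sort(key=lambda x: normalized.get(x['name'].upper(), '9' + band_indexes[x['name']]))
print([b['name'] for b in mc])  # ['Red', 'Green', 'Custom']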
@ -82,6 +149,12 @@ class ODM_Reconstruction(object):
if 'rgb' in bands or 'redgreenblue' in bands:
if 'red' in bands and 'green' in bands and 'blue' in bands:
bands_to_remove.append(bands['rgb'] if 'rgb' in bands else bands['redgreenblue'])
# Mavic 3M's RGB camera lens are too different than the multispectral ones
# so we drop the RGB channel instead
elif self.photos[0].is_make_model("DJI", "M3M") and 'red' in bands and 'green' in bands:
bands_to_remove.append(bands['rgb'] if 'rgb' in bands else bands['redgreenblue'])
else:
for b in ['red', 'green', 'blue']:
if b in bands:


@ -4,7 +4,7 @@ import json
from opendm import log
from opendm.photo import find_largest_photo_dims
from osgeo import gdal
from opendm.loghelpers import double_quote
from opendm.arghelpers import double_quote
class NumpyEncoder(json.JSONEncoder):
def default(self, obj):


@ -54,8 +54,10 @@ class SrtFileParser:
if not self.gps_data:
for d in self.data:
lat, lon, alt = d.get('latitude'), d.get('longitude'), d.get('altitude')
if alt is None:
alt = 0
tm = d.get('start')
if lat is not None and lon is not None:
if self.ll_to_utm is None:
self.ll_to_utm, self.utm_to_ll = location.utm_transformers_from_ll(lon, lat)
@ -122,6 +124,25 @@ class SrtFileParser:
# 00:00:00,000 --> 00:00:01,000
# F/2.8, SS 206.14, ISO 150, EV 0, GPS (-82.6669, 27.7716, 10), D 2.80m, H 0.00m, H.S 0.00m/s, V.S 0.00m/s
# DJI Phantom4 RTK
# 36
# 00:00:35,000 --> 00:00:36,000
# F/6.3, SS 60, ISO 100, EV 0, RTK (120.083799, 30.213635, 28), HOME (120.084146, 30.214243, 103.55m), D 75.36m, H 76.19m, H.S 0.30m/s, V.S 0.00m/s, F.PRY (-5.3°, 2.1°, 28.3°), G.PRY (-40.0°, 0.0°, 28.2°)
# DJI Unknown Model #1
# 1
# 00:00:00,000 --> 00:00:00,033
# <font size="28">SrtCnt : 1, DiffTime : 33ms
# 2024-01-18 10:23:26.397
# [iso : 150] [shutter : 1/5000.0] [fnum : 170] [ev : 0] [ct : 5023] [color_md : default] [focal_len : 240] [dzoom_ratio: 10000, delta:0],[latitude: -22.724555] [longitude: -47.602414] [rel_alt: 0.300 abs_alt: 549.679] </font>
# DJI Mavic 2 Zoom
# 1
# 00:00:00,000 --> 00:00:00,041
# <font size="36">FrameCnt : 1, DiffTime : 41ms
# 2023-07-15 11:55:16,320,933
# [iso : 100] [shutter : 1/400.0] [fnum : 280] [ev : 0] [ct : 5818] [color_md : default] [focal_len : 240] [latitude : 0.000000] [longtitude : 0.000000] [altitude: 0.000000] </font>
with open(self.filename, 'r') as f:
iso = None
@ -192,15 +213,21 @@ class SrtFileParser:
latitude = match_single([
("latitude: ([\d\.\-]+)", lambda v: float(v) if v != 0 else None),
("latitude : ([\d\.\-]+)", lambda v: float(v) if v != 0 else None),
("GPS \([\d\.\-]+,? ([\d\.\-]+),? [\d\.\-]+\)", lambda v: float(v) if v != 0 else None),
("RTK \([-+]?\d+\.\d+, (-?\d+\.\d+), -?\d+\)", lambda v: float(v) if v != 0 else None),
], line)
longitude = match_single([
("longitude: ([\d\.\-]+)", lambda v: float(v) if v != 0 else None),
("longtitude : ([\d\.\-]+)", lambda v: float(v) if v != 0 else None),
("GPS \(([\d\.\-]+),? [\d\.\-]+,? [\d\.\-]+\)", lambda v: float(v) if v != 0 else None),
("RTK \((-?\d+\.\d+), [-+]?\d+\.\d+, -?\d+\)", lambda v: float(v) if v != 0 else None),
], line)
altitude = match_single([
("altitude: ([\d\.\-]+)", lambda v: float(v) if v != 0 else None),
("GPS \([\d\.\-]+,? [\d\.\-]+,? ([\d\.\-]+)\)", lambda v: float(v) if v != 0 else None),
("RTK \([-+]?\d+\.\d+, [-+]?\d+\.\d+, (-?\d+)\)", lambda v: float(v) if v != 0 else None),
("abs_alt: ([\d\.\-]+)", lambda v: float(v) if v != 0 else None),
], line)
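match_single is defined elsewhere in the parser; a simplified stand-in that captures its behavior, applied to the Phantom4 RTK sample above:

import re

def match_single(patterns, line):
    # return the first pattern's transformed capture group, if any
    for pattern, transform in patterns:
        m = re.search(pattern, line)
        if m:
            return transform(m.group(1))

line = "F/6.3, SS 60, ISO 100, EV 0, RTK (120.083799, 30.213635, 28), D 75.36m"
lat = match_single([(r"RTK \([-+]?\d+\.\d+, (-?\d+\.\d+), -?\d+\)", float)], line)
print(lat)  # 30.213635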

run.py

@ -13,7 +13,7 @@ from opendm import system
from opendm import io
from opendm.progress import progressbc
from opendm.utils import get_processing_results_paths, rm_r
from opendm.loghelpers import args_to_dict
from opendm.arghelpers import args_to_dict, save_opts, compare_args, find_rerun_stage
from stages.odm_app import ODMApp
@ -29,20 +29,26 @@ if __name__ == '__main__':
log.ODM_INFO('Initializing ODM %s - %s' % (odm_version(), system.now()))
progressbc.set_project_name(args.name)
args.project_path = os.path.join(args.project_path, args.name)
if not io.dir_exists(args.project_path):
log.ODM_ERROR('Directory %s does not exist.' % args.name)
exit(1)
opts_json = os.path.join(args.project_path, "options.json")
auto_rerun_stage, opts_diff = find_rerun_stage(opts_json, args, config.rerun_stages, config.processopts)
if auto_rerun_stage is not None and len(auto_rerun_stage) > 0:
log.ODM_INFO("Rerunning from: %s" % auto_rerun_stage[0])
args.rerun_from = auto_rerun_stage
# Print args
args_dict = args_to_dict(args)
log.ODM_INFO('==============')
for k in args_dict.keys():
log.ODM_INFO('%s: %s' % (k, args_dict[k]))
log.ODM_INFO('%s: %s%s' % (k, args_dict[k], ' [changed]' if k in opts_diff else ''))
log.ODM_INFO('==============')
progressbc.set_project_name(args.name)
# Add project dir if doesn't exist
args.project_path = os.path.join(args.project_path, args.name)
if not io.dir_exists(args.project_path):
log.ODM_WARNING('Directory %s does not exist. Creating it now.' % args.name)
system.mkdir_p(os.path.abspath(args.project_path))
# If user asks to rerun everything, delete all of the existing progress directories.
if args.rerun_all:
@ -57,6 +63,9 @@ if __name__ == '__main__':
app = ODMApp(args)
retcode = app.execute()
if retcode == 0:
save_opts(opts_json, args)
# Do not show ASCII art for local submodels runs
if retcode == 0 and not "submodels" in args.project_path:

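The auto-rerun above hinges on diffing the saved options.json against the current arguments; the real helpers live in opendm.arghelpers, and the following is only a simplified sketch of the idea:

import json

def changed_options(opts_json, args_dict):
    # return the option names whose values differ from the saved run
    try:
        with open(opts_json) as f:
            saved = json.load(f)
    except FileNotFoundError:
        return set(args_dict)  # first run: everything counts as changed
    return {k for k, v in args_dict.items() if saved.get(k) != v}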

@ -81,14 +81,11 @@ class ODMMvsTexStage(types.ODM_Stage):
# Format arguments to fit Mvs-Texturing app
skipGlobalSeamLeveling = ""
skipLocalSeamLeveling = ""
keepUnseenFaces = ""
nadir = ""
if args.texturing_skip_global_seam_leveling:
skipGlobalSeamLeveling = "--skip_global_seam_leveling"
if args.texturing_skip_local_seam_leveling:
skipLocalSeamLeveling = "--skip_local_seam_leveling"
if args.texturing_keep_unseen_faces:
keepUnseenFaces = "--keep_unseen_faces"
if (r['nadir']):
@ -102,7 +99,6 @@ class ODMMvsTexStage(types.ODM_Stage):
'dataTerm': 'gmi',
'outlierRemovalType': 'gauss_clamping',
'skipGlobalSeamLeveling': skipGlobalSeamLeveling,
'skipLocalSeamLeveling': skipLocalSeamLeveling,
'keepUnseenFaces': keepUnseenFaces,
'toneMapping': 'none',
'nadirMode': nadir,
@ -114,7 +110,7 @@ class ODMMvsTexStage(types.ODM_Stage):
mvs_tmp_dir = os.path.join(r['out_dir'], 'tmp')
# Make sure tmp directory is empty
# mvstex creates a tmp directory, so make sure it is empty
if io.dir_exists(mvs_tmp_dir):
log.ODM_INFO("Removing old tmp directory {}".format(mvs_tmp_dir))
shutil.rmtree(mvs_tmp_dir)
@ -125,7 +121,6 @@ class ODMMvsTexStage(types.ODM_Stage):
'-t {toneMapping} '
'{intermediate} '
'{skipGlobalSeamLeveling} '
'{skipLocalSeamLeveling} '
'{keepUnseenFaces} '
'{nadirMode} '
'{labelingFile} '


@ -27,6 +27,7 @@ class ODMApp:
Initializes the application and defines the ODM application pipeline stages
"""
json_log_paths = [os.path.join(args.project_path, "log.json")]
if args.copy_to:
json_log_paths.append(args.copy_to)


@ -12,7 +12,6 @@ from opendm.cropper import Cropper
from opendm import pseudogeo
from opendm.tiles.tiler import generate_dem_tiles
from opendm.cogeo import convert_to_cogeo
from opendm.opc import classify
class ODMDEMStage(types.ODM_Stage):
def process(self, args, outputs):
@ -35,7 +34,6 @@ class ODMDEMStage(types.ODM_Stage):
ignore_resolution=ignore_resolution and args.ignore_gsd,
has_gcp=reconstruction.has_gcp())
log.ODM_INFO('Classify: ' + str(args.pc_classify))
log.ODM_INFO('Create DSM: ' + str(args.dsm))
log.ODM_INFO('Create DTM: ' + str(args.dtm))
log.ODM_INFO('DEM input file {0} found: {1}'.format(dem_input, str(pc_model_found)))
@ -45,34 +43,9 @@ class ODMDEMStage(types.ODM_Stage):
if not io.dir_exists(odm_dem_root):
system.mkdir_p(odm_dem_root)
if args.pc_classify and pc_model_found:
pc_classify_marker = os.path.join(odm_dem_root, 'pc_classify_done.txt')
if not io.file_exists(pc_classify_marker) or self.rerun():
log.ODM_INFO("Classifying {} using Simple Morphological Filter (1/2)".format(dem_input))
commands.classify(dem_input,
args.smrf_scalar,
args.smrf_slope,
args.smrf_threshold,
args.smrf_window
)
log.ODM_INFO("Classifying {} using OpenPointClass (2/2)".format(dem_input))
classify(dem_input, args.max_concurrency)
with open(pc_classify_marker, 'w') as f:
f.write('Classify: smrf\n')
f.write('Scalar: {}\n'.format(args.smrf_scalar))
f.write('Slope: {}\n'.format(args.smrf_slope))
f.write('Threshold: {}\n'.format(args.smrf_threshold))
f.write('Window: {}\n'.format(args.smrf_window))
progress = 20
self.update_progress(progress)
if args.pc_rectify:
commands.rectify(dem_input)
# Do we need to process anything here?
if (args.dsm or args.dtm) and pc_model_found:
dsm_output_filename = os.path.join(odm_dem_root, 'dsm.tif')
@ -100,8 +73,8 @@ class ODMDEMStage(types.ODM_Stage):
resolution=resolution / 100.0,
decimation=args.dem_decimation,
max_workers=args.max_concurrency,
keep_unfilled_copy=args.dem_euclidean_map,
max_tiles=math.ceil(len(reconstruction.photos) / 2)
with_euclidean_map=args.dem_euclidean_map,
max_tiles=None if reconstruction.has_geotagged_photos() else math.ceil(len(reconstruction.photos) / 2)
)
dem_geotiff_path = os.path.join(odm_dem_root, "{}.tif".format(product))
@ -111,27 +84,16 @@ class ODMDEMStage(types.ODM_Stage):
# Crop DEM
Cropper.crop(bounds_file_path, dem_geotiff_path, utils.get_dem_vars(args), keep_original=not args.optimize_disk_space)
if args.dem_euclidean_map:
unfilled_dem_path = io.related_file_path(dem_geotiff_path, postfix=".unfilled")
if args.crop > 0 or args.boundary:
# Crop unfilled DEM
Cropper.crop(bounds_file_path, unfilled_dem_path, utils.get_dem_vars(args), keep_original=not args.optimize_disk_space)
commands.compute_euclidean_map(unfilled_dem_path,
io.related_file_path(dem_geotiff_path, postfix=".euclideand"),
overwrite=True)
if pseudo_georeference:
pseudogeo.add_pseudo_georeferencing(dem_geotiff_path)
if args.tiles:
generate_dem_tiles(dem_geotiff_path, tree.path("%s_tiles" % product), args.max_concurrency)
generate_dem_tiles(dem_geotiff_path, tree.path("%s_tiles" % product), args.max_concurrency, resolution)
if args.cog:
convert_to_cogeo(dem_geotiff_path, max_workers=args.max_concurrency)
progress += 30
progress += 40
self.update_progress(progress)
else:
log.ODM_WARNING('Found existing outputs in: %s' % odm_dem_root)


@ -36,7 +36,7 @@ class ODMFilterPoints(types.ODM_Stage):
else:
avg_gsd = gsd.opensfm_reconstruction_average_gsd(tree.opensfm_reconstruction)
if avg_gsd is not None:
boundary_distance = avg_gsd * 20 # 20 is arbitrary
boundary_distance = avg_gsd * 100 # 100 is arbitrary
if boundary_distance is not None:
outputs['boundary'] = compute_boundary_from_shots(tree.opensfm_reconstruction, boundary_distance, reconstruction.get_proj_offset())


@ -5,6 +5,7 @@ import pipes
import fiona
import fiona.crs
import json
import zipfile
from collections import OrderedDict
from pyproj import CRS
@ -32,6 +33,7 @@ class ODMGeoreferencingStage(types.ODM_Stage):
gcp_export_file = tree.path("odm_georeferencing", "ground_control_points.gpkg")
gcp_gml_export_file = tree.path("odm_georeferencing", "ground_control_points.gml")
gcp_geojson_export_file = tree.path("odm_georeferencing", "ground_control_points.geojson")
gcp_geojson_zip_export_file = tree.path("odm_georeferencing", "ground_control_points.zip")
unaligned_model = io.related_file_path(tree.odm_georeferencing_model_laz, postfix="_unaligned")
if os.path.isfile(unaligned_model) and self.rerun():
os.unlink(unaligned_model)
@ -104,6 +106,9 @@ class ODMGeoreferencingStage(types.ODM_Stage):
with open(gcp_geojson_export_file, 'w') as f:
f.write(json.dumps(geojson, indent=4))
with zipfile.ZipFile(gcp_geojson_zip_export_file, 'w', compression=zipfile.ZIP_LZMA) as f:
f.write(gcp_geojson_export_file, arcname=os.path.basename(gcp_geojson_export_file))
else:
log.ODM_WARNING("GCPs could not be loaded for writing to %s" % gcp_export_file)
@ -131,11 +136,14 @@ class ODMGeoreferencingStage(types.ODM_Stage):
f'--writers.las.a_srs="{reconstruction.georef.proj4()}"' # HOBU this should maybe be WKT
]
if reconstruction.has_gcp() and io.file_exists(gcp_geojson_export_file):
log.ODM_INFO("Embedding GCP info in point cloud")
params += [
'--writers.las.vlrs="{\\\"filename\\\": \\\"%s\\\", \\\"user_id\\\": \\\"ODM\\\", \\\"record_id\\\": 1, \\\"description\\\": \\\"Ground Control Points (GeoJSON)\\\"}"' % gcp_geojson_export_file.replace(os.sep, "/")
]
if reconstruction.has_gcp() and io.file_exists(gcp_geojson_zip_export_file):
if os.path.getsize(gcp_geojson_zip_export_file) <= 65535:
log.ODM_INFO("Embedding GCP info in point cloud")
params += [
'--writers.las.vlrs="{\\\"filename\\\": \\\"%s\\\", \\\"user_id\\\": \\\"ODM\\\", \\\"record_id\\\": 2, \\\"description\\\": \\\"Ground Control Points (zip)\\\"}"' % gcp_geojson_zip_export_file.replace(os.sep, "/")
]
else:
log.ODM_WARNING("Cannot embed GCP info in point cloud, %s is too large" % gcp_geojson_zip_export_file)
system.run(cmd + ' ' + ' '.join(stages) + ' ' + ' '.join(params))
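A LAS VLR payload length is a 16-bit field, hence the 65535-byte guard above. A sketch of producing the zip and checking whether it can be embedded, with hypothetical paths:

import os
import zipfile

MAX_VLR_PAYLOAD = 65535  # LAS VLR record length is an unsigned 16-bit value

def gcp_zip_fits_vlr(geojson_path, zip_path):
    with zipfile.ZipFile(zip_path, 'w', compression=zipfile.ZIP_LZMA) as z:
        z.write(geojson_path, arcname=os.path.basename(geojson_path))
    return os.path.getsize(zip_path) <= MAX_VLR_PAYLOAD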


@ -60,7 +60,7 @@ class ODMeshingStage(types.ODM_Stage):
available_cores=args.max_concurrency,
method='poisson' if args.fast_orthophoto else 'gridded',
smooth_dsm=True,
max_tiles=math.ceil(len(reconstruction.photos) / 2))
max_tiles=None if reconstruction.has_geotagged_photos() else math.ceil(len(reconstruction.photos) / 2))
else:
log.ODM_WARNING('Found a valid ODM 2.5D Mesh file in: %s' %
tree.odm_25dmesh)


@ -7,6 +7,7 @@ from opendm import context
from opendm import types
from opendm import gsd
from opendm import orthophoto
from opendm.osfm import is_submodel
from opendm.concurrency import get_max_memory_mb
from opendm.cutline import compute_cutline
from opendm.utils import double_quote
@ -28,10 +29,10 @@ class ODMOrthoPhotoStage(types.ODM_Stage):
if not io.file_exists(tree.odm_orthophoto_tif) or self.rerun():
resolution = 1.0 / (gsd.cap_resolution(args.orthophoto_resolution, tree.opensfm_reconstruction,
ignore_gsd=args.ignore_gsd,
ignore_resolution=(not reconstruction.is_georeferenced()) and args.ignore_gsd,
has_gcp=reconstruction.has_gcp()) / 100.0)
resolution = gsd.cap_resolution(args.orthophoto_resolution, tree.opensfm_reconstruction,
ignore_gsd=args.ignore_gsd,
ignore_resolution=(not reconstruction.is_georeferenced()) and args.ignore_gsd,
has_gcp=reconstruction.has_gcp())
# odm_orthophoto definitions
kwargs = {
@ -39,7 +40,7 @@ class ODMOrthoPhotoStage(types.ODM_Stage):
'log': tree.odm_orthophoto_log,
'ortho': tree.odm_orthophoto_render,
'corners': tree.odm_orthophoto_corners,
'res': resolution,
'res': 1.0 / (resolution/100.0),
'bands': '',
'depth_idx': '',
'inpaint': '',
@ -114,6 +115,7 @@ class ODMOrthoPhotoStage(types.ODM_Stage):
# Cutline computation, before cropping
# We want to use the full orthophoto, not the cropped one.
submodel_run = is_submodel(tree.opensfm)
if args.orthophoto_cutline:
cutline_file = os.path.join(tree.odm_orthophoto, "cutline.gpkg")
@ -122,15 +124,18 @@ class ODMOrthoPhotoStage(types.ODM_Stage):
cutline_file,
args.max_concurrency,
scale=0.25)
if submodel_run:
orthophoto.compute_mask_raster(tree.odm_orthophoto_tif, cutline_file,
os.path.join(tree.odm_orthophoto, "odm_orthophoto_cut.tif"),
blend_distance=20, only_max_coords_feature=True)
else:
log.ODM_INFO("Not a submodel run, skipping mask raster generation")
orthophoto.compute_mask_raster(tree.odm_orthophoto_tif, cutline_file,
os.path.join(tree.odm_orthophoto, "odm_orthophoto_cut.tif"),
blend_distance=20, only_max_coords_feature=True)
orthophoto.post_orthophoto_steps(args, bounds_file_path, tree.odm_orthophoto_tif, tree.orthophoto_tiles)
orthophoto.post_orthophoto_steps(args, bounds_file_path, tree.odm_orthophoto_tif, tree.orthophoto_tiles, resolution)
# Generate feathered orthophoto also
if args.orthophoto_cutline:
if args.orthophoto_cutline and submodel_run:
orthophoto.feather_raster(tree.odm_orthophoto_tif,
os.path.join(tree.odm_orthophoto, "odm_orthophoto_feathered.tif"),
blend_distance=20


@ -65,12 +65,13 @@ class ODMOpenMVSStage(types.ODM_Stage):
filter_point_th = -20
config = [
" --resolution-level %s" % int(resolution_level),
"--resolution-level %s" % int(resolution_level),
'--dense-config-file "%s"' % densify_ini_file,
"--max-resolution %s" % int(outputs['undist_image_max_size']),
"--max-threads %s" % args.max_concurrency,
"--number-views-fuse %s" % number_views_fuse,
"--sub-resolution-levels %s" % subres_levels,
"--archive-type 3",
'-w "%s"' % depthmaps_dir,
"-v 0"
]
@ -78,7 +79,6 @@ class ODMOpenMVSStage(types.ODM_Stage):
gpu_config = []
use_gpu = has_gpu(args)
if use_gpu:
#gpu_config.append("--cuda-device -3")
gpu_config.append("--cuda-device -1")
else:
gpu_config.append("--cuda-device -2")
@ -94,12 +94,13 @@ class ODMOpenMVSStage(types.ODM_Stage):
extra_config.append("--ignore-mask-label 0")
with open(densify_ini_file, 'w+') as f:
f.write("Optimize = 7\n")
f.write("Optimize = 7\nMin Views Filter = 1\n")
def run_densify():
system.run('"%s" "%s" %s' % (context.omvs_densify_path,
openmvs_scene_file,
' '.join(config + gpu_config + extra_config)))
try:
run_densify()
except system.SubprocessException as e:
@ -109,7 +110,7 @@ class ODMOpenMVSStage(types.ODM_Stage):
log.ODM_WARNING("OpenMVS failed with GPU, is your graphics card driver up to date? Falling back to CPU.")
gpu_config = ["--cuda-device -2"]
run_densify()
elif (e.errorCode == 137 or e.errorCode == 3221226505) and not pc_tile:
elif (e.errorCode == 137 or e.errorCode == 143 or e.errorCode == 3221226505) and not pc_tile:
log.ODM_WARNING("OpenMVS ran out of memory, we're going to turn on tiling to see if we can process this.")
pc_tile = True
config.append("--fusion-mode 1")
@ -126,10 +127,10 @@ class ODMOpenMVSStage(types.ODM_Stage):
subscene_densify_ini_file = os.path.join(tree.openmvs, 'subscene-config.ini')
with open(subscene_densify_ini_file, 'w+') as f:
f.write("Optimize = 0\n")
f.write("Optimize = 0\nEstimation Geometric Iters = 0\nMin Views Filter = 1\n")
config = [
"--sub-scene-area 660000",
"--sub-scene-area 660000", # 8000
"--max-threads %s" % args.max_concurrency,
'-w "%s"' % depthmaps_dir,
"-v 0",
@ -160,9 +161,13 @@ class ODMOpenMVSStage(types.ODM_Stage):
config = [
'--resolution-level %s' % int(resolution_level),
'--max-resolution %s' % int(outputs['undist_image_max_size']),
"--sub-resolution-levels %s" % subres_levels,
'--dense-config-file "%s"' % subscene_densify_ini_file,
'--number-views-fuse %s' % number_views_fuse,
'--max-threads %s' % args.max_concurrency,
'--archive-type 3',
'--postprocess-dmaps 0',
'--geometric-iters 0',
'-w "%s"' % depthmaps_dir,
'-v 0',
]
@ -178,7 +183,7 @@ class ODMOpenMVSStage(types.ODM_Stage):
else:
# Filter
if args.pc_filter > 0:
system.run('"%s" "%s" --filter-point-cloud %s -v 0 %s' % (context.omvs_densify_path, scene_dense_mvs, filter_point_th, ' '.join(gpu_config)))
system.run('"%s" "%s" --filter-point-cloud %s -v 0 --archive-type 3 %s' % (context.omvs_densify_path, scene_dense_mvs, filter_point_th, ' '.join(gpu_config)))
else:
# Just rename
log.ODM_INFO("Skipped filtering, %s --> %s" % (scene_ply_unfiltered, scene_ply))
@ -218,7 +223,7 @@ class ODMOpenMVSStage(types.ODM_Stage):
try:
system.run('"%s" %s' % (context.omvs_densify_path, ' '.join(config + gpu_config + extra_config)))
except system.SubprocessException as e:
if e.errorCode == 137 or e.errorCode == 3221226505:
if e.errorCode == 137 or e.errorCode == 143 or e.errorCode == 3221226505:
log.ODM_WARNING("OpenMVS filtering ran out of memory, visibility checks will be skipped.")
skip_filtering()
else:

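Exit code 137 is 128+SIGKILL (typically the kernel OOM killer), 143 is 128+SIGTERM, and 3221226505 is the Windows status 0xC0000409; the stage treats all three as memory pressure and retries with tiling. A simplified sketch of that fallback with a placeholder command:

import subprocess

OOM_EXIT_CODES = {137, 143, 3221226505}

def densify_with_fallback(base_cmd):
    try:
        subprocess.run(base_cmd, check=True)
    except subprocess.CalledProcessError as e:
        if e.returncode not in OOM_EXIT_CODES:
            raise
        # lower peak memory by fusing per-tile instead of all at once
        subprocess.run(base_cmd + ["--fusion-mode", "1"], check=True)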

@ -266,7 +266,7 @@ class ODMMergeStage(types.ODM_Stage):
orthophoto_vars = orthophoto.get_orthophoto_vars(args)
orthophoto.merge(all_orthos_and_ortho_cuts, tree.odm_orthophoto_tif, orthophoto_vars)
orthophoto.post_orthophoto_steps(args, merged_bounds_file, tree.odm_orthophoto_tif, tree.orthophoto_tiles)
orthophoto.post_orthophoto_steps(args, merged_bounds_file, tree.odm_orthophoto_tif, tree.orthophoto_tiles, args.orthophoto_resolution)
elif len(all_orthos_and_ortho_cuts) == 1:
# Simply copy
log.ODM_WARNING("A single orthophoto/cutline pair was found between all submodels.")
@ -306,7 +306,7 @@ class ODMMergeStage(types.ODM_Stage):
log.ODM_INFO("Created %s" % dem_file)
if args.tiles:
generate_dem_tiles(dem_file, tree.path("%s_tiles" % human_name.lower()), args.max_concurrency)
generate_dem_tiles(dem_file, tree.path("%s_tiles" % human_name.lower()), args.max_concurrency, args.dem_resolution)
if args.cog:
convert_to_cogeo(dem_file, max_workers=args.max_concurrency)


@ -67,15 +67,15 @@ platform="Linux" # Assumed
uname=$(uname)
case $uname in
"Darwin")
platform="MacOS / OSX"
platform="MacOS"
;;
MINGW*)
platform="Windows"
;;
esac
if [[ $platform != "Linux" ]]; then
echo "This script only works on Linux."
if [[ $platform != "Linux" && $platform != "MacOS" ]]; then
echo "This script only works on Linux and MacOS."
exit 1
fi