Compare commits

...

303 Commits

Author SHA1 Message Date
Stephen Mather fa058e7567
Merge pull request #1502 from smathermather/drop-gitter
Update README.md to reflect no gitter
2024-05-14 23:23:03 -04:00
Stephen Mather a60dec4e69
Update README.md to reflect no gitter 2024-05-14 23:22:25 -04:00
Piero Toffanin c098b976c6 Bump measure plugin version 2024-05-13 15:20:17 -04:00
Piero Toffanin 32ba4ed707
Merge pull request #1496 from pierotofy/mapapi
Imperial units support, improvements, fixes, plugins API expansion
2024-05-13 15:02:46 -04:00
Piero Toffanin 7272ca55bc Update manifests 2024-05-13 14:35:59 -04:00
Piero Toffanin e1cb0c83bb Update locale strings 2024-05-13 13:56:56 -04:00
Piero Toffanin 7ab95bc8b1 Measure plugin support for imperial units 2024-05-13 13:54:49 -04:00
Piero Toffanin 80a7f2048d Potree units sync 2024-05-13 13:04:56 -04:00
Piero Toffanin e9c2409ea9 Bump version 2024-05-11 17:19:32 -04:00
Piero Toffanin 9ee58f7216 Reformat 2024-05-11 17:18:54 -04:00
Piero Toffanin 289ef48b12 Speed up DSM/DTM tiler 2024-05-11 17:08:53 -04:00
Piero Toffanin 8468fdff5c Refactor get_asset_file_or_stream 2024-05-11 15:01:26 -04:00
Piero Toffanin 35fc60aa2c Elevation layer units update works 2024-05-11 12:51:37 -04:00
Piero Toffanin 57ccd23234 Fix login next redirect 2024-05-11 12:43:40 -04:00
Piero Toffanin 4e2ffbb768 PoC elevation histogram imperial units display 2024-05-10 22:01:37 -04:00
Piero Toffanin 3b55ebd3e5 Add unit options 2024-05-10 21:24:34 -04:00
Piero Toffanin 1fc7e11c86 Contours plugin imperial units export/preview working 2024-05-10 13:44:12 -04:00
Piero Toffanin d76eacabd3 Contours imperial preview working 2024-05-09 15:46:30 -04:00
Piero Toffanin 75678b7a84 Fix unit selector 2024-05-09 14:03:23 -04:00
Piero Toffanin 681482983c Add volume units 2024-05-09 12:35:29 -04:00
Piero Toffanin adf9c7dc5f Add US imperial 2024-05-09 11:08:51 -04:00
Piero Toffanin ece6bba200 Moar unit tests 2024-05-09 09:27:10 -04:00
Piero Toffanin d46b582dcd Merge branch 'master' of https://github.com/OpenDroneMap/WebODM into mapapi 2024-05-08 12:13:21 -04:00
Piero Toffanin dd6b46a2c9
Merge pull request #1499 from pierotofy/upcheck
Add file size check on upload
2024-05-08 11:56:56 -04:00
Piero Toffanin 5375eb8a19 Add file size check on upload 2024-05-08 11:29:35 -04:00
Piero Toffanin e0eb7cad7e Add units tests 2024-05-08 10:48:29 -04:00
Piero Toffanin 9a8013d6ce Revert debug commit 2024-05-07 11:56:33 -04:00
Piero Toffanin f0cd13a464 Merge branch 'master' of https://github.com/OpenDroneMap/WebODM 2024-05-07 11:54:42 -04:00
Piero Toffanin 5c663b8435 More upload retries, don't update totalCount on failure 2024-05-07 11:54:33 -04:00
Piero Toffanin 30eff78d3b Units work 2024-05-07 11:53:21 -04:00
Piero Toffanin 9af1ee018b Merge branch 'master' of https://github.com/OpenDroneMap/WebODM into mapapi 2024-05-07 09:46:57 -04:00
Piero Toffanin 568e80a941
Merge pull request #1498 from pierotofy/kmzfix
Use export format AUTO for KMZ
2024-05-06 00:33:50 -04:00
Piero Toffanin cb173bd48c Use export format AUTO for KMZ 2024-05-06 00:24:32 -04:00
Piero Toffanin b86411c298
Merge pull request #1497 from pierotofy/upfix
Fix file size check on upload error retry
2024-05-05 16:10:07 -04:00
Piero Toffanin 06ccd29d09 Fix file size check on upload error retry 2024-05-05 15:12:39 -04:00
Piero Toffanin bf24be7b72 Started adding app-wide unit selector logic 2024-05-02 16:47:47 -04:00
Piero Toffanin d73558256a Cleaner map controls, fix opacity label alignment 2024-05-02 16:17:06 -04:00
Piero Toffanin 59e104946c Better error message on worker failure 2024-05-02 16:08:41 -04:00
Piero Toffanin 972b06d03b Fix triangle icon, bump version 2024-05-01 15:43:32 -04:00
Piero Toffanin 2352d838cf Shorten Lightning Network --> Lightning 2024-05-01 14:01:40 -04:00
Piero Toffanin e7337f3b5d Silence annoying React deprecation notice of useful functionality 2024-05-01 13:39:10 -04:00
Piero Toffanin a44c2ce86f Add isMobile function 2024-05-01 12:34:47 -04:00
Piero Toffanin 6f5d68d6ed Expand Map JS API 2024-04-30 19:17:28 -04:00
Piero Toffanin 04af329f78 Fix invalid PropType 2024-04-27 11:28:19 -04:00
Piero Toffanin cc2b7d5265 Fix build_plugins on Windows 2024-04-20 13:55:25 -04:00
Piero Toffanin 10a44851ac
Merge pull request #1485 from pierotofy/autoperc
Ability to share particular map type
2024-04-04 11:24:10 -04:00
Piero Toffanin c2c65a2cc2 Ability to share a particular map type 2024-04-04 10:46:53 -04:00
Piero Toffanin b1fdcbb242
Merge pull request #1484 from pierotofy/autoperc
Automatically select 2/98 percentile values in plant health
2024-04-04 10:39:29 -04:00
Piero Toffanin 02b4132daa Better histogram direct input 2024-04-04 10:09:45 -04:00
Piero Toffanin ed6a43699f Automatically select 2/98 percentile values in plant health 2024-04-04 09:46:36 -04:00
Piero Toffanin e8f8f2f130
Merge pull request #1475 from Zuline/patch-2
Update README.md
2024-02-29 21:53:16 +01:00
Zuline 57223fc539
Update README.md
Fixed a missing closing bracket in a new section of Readme.md
2024-02-29 07:40:21 +11:00
Piero Toffanin 9a3123c878
Merge pull request #1474 from Zuline/patch-1
Update README.md
2024-02-28 18:37:33 +01:00
Zuline 860942b80f
Update README.md
Added instructions to the Common Troubleshooting section for adding images to All Assets downloaded from Lightning.
2024-02-28 08:46:29 +11:00
Piero Toffanin 4cd3618442
Merge pull request #1472 from gonzalo-bulnes/fix-user-facing-misspellings
Fix user-facing misspelling, improve a few user-facing strings
2024-02-16 09:58:46 -05:00
Gonzalo Bulnes Guilpain 9209b52a19
Fix user-facing misspellings
- Missing letter.

- Both '.json' and 'JSON' are used in other strings,
  but '.JSON' doesn't seem as clear.

- Fixed capitalization and added verb to ensure the sentence is readable.
  I don't think that reproducing the error message exactly
helps make it more readable in this case, because it is unlikely
  that people reading it will recognize the capitals in the middle
  of the sentence as an exact quote of some program's output.
2024-02-16 11:51:44 +00:00
Piero Toffanin 8150b822d9
Merge pull request #1468 from gonzalo-bulnes/add-task-id-to-expanded-list-item
Add task ID to expanded list item
2024-02-13 10:47:30 -05:00
Gonzalo Bulnes Guilpain 02b4061379
Add task ID to expanded list item
This ID is useful to locate the task directory for inspection.

Because it is fairly long, it seemed better to keep it relatively
low in the list of properties to avoid making the essential information
more difficult to read or skim through.

Yet placing it above the task output minimizes movement when the console
is toggled open.

See https://community.opendronemap.org/t/19285
2024-02-13 12:52:55 +00:00
Piero Toffanin 206edf1087
Merge pull request #1467 from gonzalo-bulnes/fix-plugin-error-shadowing
Fix error shadowing in log when plugin fails to load
2024-02-08 23:59:22 -05:00
Gonzalo Bulnes Guilpain 7c9b1da92a
Fix error shadowing in log when plugin fails to load
I didn't find a standard way of unwrapping the error,
short of printing an entire stack trace. I don't think printing
a stack trace in a log is useful, so I decided to print
the error.__cause__ explicitly.

Since there is always one single level of nesting in this code,
I think that's OK.

See https://docs.python.org/3/library/exceptions.html
2024-02-08 22:07:15 -05:00
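The technique described in the commit above can be sketched in a few lines (illustrative names only, not WebODM's actual plugin loader):

```python
def load_plugin(name):
    # Hypothetical loader: a lower-level failure is chained into a
    # higher-level error via "raise ... from ...".
    try:
        raise ImportError("No module named 'psutil'")
    except ImportError as e:
        raise RuntimeError(f"Cannot load plugin {name}") from e

def describe_failure(name):
    try:
        load_plugin(name)
    except RuntimeError as e:
        # Surface the chained cause explicitly instead of letting the
        # outer message shadow it (one level of nesting, no stack trace).
        return f"{e} (cause: {e.__cause__})"

print(describe_failure("diagnostic"))
```

Since `raise ... from ...` always records the original error in `__cause__`, logging that attribute recovers the root cause without printing a full traceback.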
Piero Toffanin a1331d0b0b
Update README.md 2024-02-02 12:16:31 -05:00
Saijin-Naib 864055b0db
Merge pull request #1460 from flyinmryan/patch-1 2024-01-16 13:47:06 -05:00
Mike Ryan faae70991a
Fixed typo in README.md 2024-01-16 10:39:16 -08:00
Piero Toffanin 712705d35a More CSS fixes 2024-01-09 15:32:40 -05:00
Piero Toffanin 863bf2477c Merge branch 'theme' 2024-01-09 15:02:43 -05:00
Piero Toffanin 94851d2ed3 Fix navbar top 2024-01-09 15:02:36 -05:00
Piero Toffanin d415b12806
Merge pull request #1459 from pierotofy/theme
Remove django-compressor
2024-01-09 13:30:21 -05:00
Piero Toffanin ccabacb3dc Remove theme.scss 2024-01-09 13:07:41 -05:00
Piero Toffanin d2755db412 remove libsass 2024-01-09 13:07:17 -05:00
Piero Toffanin 59195124a0 Fix test 2024-01-09 12:54:04 -05:00
Piero Toffanin 362ba7fc9b CSS fixes 2024-01-09 12:36:57 -05:00
Piero Toffanin b3f5b2de5d Vanilla css fix 2024-01-09 12:22:35 -05:00
Piero Toffanin f227bdb08b Remove django-compressor 2024-01-09 12:19:24 -05:00
Piero Toffanin 15ce002d78
Merge pull request #1458 from douw/patch-1
Update README.md
2024-01-04 11:07:43 -05:00
douw a41d4ceff7
Update README.md
The instructions for running the Docker image as a Linux service are incorrect (they possibly refer to running WebODM natively). These rewritten instructions reflect how I got WebODM Docker running on my RHEL 9 system
2024-01-04 14:40:15 +02:00
Piero Toffanin be8bd6e7ee
Merge pull request #1439 from chris-bateman/master
Upgrade Node 14 to Node 20
2024-01-02 14:56:32 -05:00
Piero Toffanin 913db4c52b Linear backoff in file upload retries 2023-12-21 09:00:12 -05:00
Piero Toffanin 7bba3a38df
Merge pull request #1447 from pierotofy/pyodmb
Update PyODM
2023-12-18 14:26:03 -05:00
Piero Toffanin 136d270666 Update PyODM 2023-12-18 11:57:59 -05:00
Piero Toffanin 448a2cb2d6
Merge pull request #1443 from pierotofy/workercpu
Add --worker-cpus option
2023-12-04 23:47:07 -05:00
Piero Toffanin ae08c10ec7 Typo 2023-12-04 23:41:51 -05:00
Piero Toffanin bc0c4ac3e0 Add --worker-cpus option 2023-12-04 23:40:23 -05:00
chris-bateman 8ff81f74c4 Buffer Polyfill for WP5 2023-11-27 10:22:38 +11:00
chris-bateman 5ca0cf82db bump to node 20
2023-11-23 12:54:42 +11:00
chris-bateman dae49a7b2c added buffer package and version change
Added buffer, as webpack 5 no longer polyfills it.
Version change in package.json
2023-11-23 12:32:35 +11:00
chris-bateman 6ea180dd43 Bump to node 18
Upgraded to node 18 and Webpack 5.
Package cleanup for webpack 5
2023-11-21 10:22:00 +11:00
chris-bateman 3f42eaa824 Upgrade Node 14 to Node 16
Upgrade Node from 14 to 16 and associated packages and WebPack config
2023-11-20 10:37:01 +11:00
Piero Toffanin 24f3b38dce
Merge pull request #1437 from pierotofy/potrec
Record Movie with Potree
2023-11-15 19:12:07 -05:00
Piero Toffanin 0b2e697c15 Potree: record movie (codec update) 2023-11-15 17:04:40 -05:00
Piero Toffanin 9af93014cc Potree: record movie 2023-11-15 17:00:20 -05:00
Piero Toffanin d4dcf3fed1 Potree: remove last camera animation 2023-11-15 16:24:13 -05:00
Piero Toffanin 425710862f
Merge pull request #1436 from pierotofy/volimp
Fix Potree polygon clipping
2023-11-14 19:24:33 -05:00
Piero Toffanin 4a529baace Potree: fix polygon clipping 2023-11-14 19:23:31 -05:00
Piero Toffanin e8ae09b96f
Merge pull request #1435 from pierotofy/volimp
10x volume calculations, base surface definitions. fix 3D vertical area measurements
2023-11-14 18:38:49 -05:00
Piero Toffanin c4b59721a3 Update locale 2023-11-14 17:58:39 -05:00
Piero Toffanin 9880d461d4 Update locales 2023-11-14 17:57:01 -05:00
Piero Toffanin a78a2e4206 Potree: fix vertical area measurements 2023-11-14 17:38:01 -05:00
Piero Toffanin fd7721ee6b Fix GeoJSON export for points/linestring measurements 2023-11-14 17:17:05 -05:00
Piero Toffanin c28d00f0b0 10x volume calculations, remove grass dependency 2023-11-14 16:10:16 -05:00
Piero Toffanin 5a94579a8e
Merge pull request #1432 from pierotofy/dsmonly
Allow display of 2D maps which have only a DSM
2023-11-11 17:56:36 -05:00
Piero Toffanin 512314ba67 Allow display of 2D maps which have only a DSM 2023-11-11 17:28:12 -05:00
Piero Toffanin 782d6ed7a9
Update issue-triage.yml 2023-11-10 10:02:26 -05:00
Piero Toffanin bf76edf4c0
Update issue-triage.yml 2023-11-10 09:31:55 -05:00
Piero Toffanin 824c88cd54 Rename issue triage 2023-11-07 17:36:13 -05:00
Piero Toffanin d0dc128ca9 Fix labels 2023-11-07 17:23:33 -05:00
Piero Toffanin c4c9ed9598 Add issue triage automation 2023-11-07 17:21:12 -05:00
Piero Toffanin 5fc886028e Automatically expand task in project if there's a single task 2023-11-06 16:29:51 -05:00
Piero Toffanin 8976aa51e3 Merge branch 'master' of https://github.com/OpenDroneMap/WebODM 2023-11-06 16:06:11 -05:00
Piero Toffanin 5e7ef34290 Bye dd 2023-11-06 16:05:42 -05:00
Piero Toffanin 9ea217c218
Merge pull request #1426 from pierotofy/upimp
Cancel upload improvements, automatic task cleanup, configurable doc links
2023-11-06 12:45:26 -05:00
Piero Toffanin ce9cd0a8b9 Add configurable docs/task options links 2023-11-06 12:19:41 -05:00
Piero Toffanin 58dcc46e40 Cleanup partial tasks 2023-11-06 11:22:39 -05:00
Piero Toffanin 9fb0c6db67 Cleanup partial tasks 2023-11-06 11:21:10 -05:00
Piero Toffanin 6b4230f233 Improve cancel task logic 2023-11-06 10:35:02 -05:00
Piero Toffanin bd183c6455
Update forest preset 2023-10-25 22:36:18 -04:00
Piero Toffanin 6bc90025aa
Merge pull request #1420 from pierotofy/structfix
Catch piexif.dump exceptions
2023-10-22 13:24:35 -04:00
Piero Toffanin bcc0f24fd5 Catch piexif.dump exceptions 2023-10-22 13:23:21 -04:00
Piero Toffanin 918ec48e6d
Merge pull request #1417 from rion-saeon/patch-noTGI
Removed TGI plant health equation
2023-10-18 10:45:37 -04:00
Rion Lerm 18cab57e9f
I realised the TGI equation may be erroneous, thus I removed it. I am surprised that no one questioned it before, so hopefully it was not used much. 2023-10-17 16:34:32 +02:00
Saijin-Naib 38ba80b6ec
Merge pull request #1415 from allandaly/allandaly-readme-patch 2023-10-08 11:04:40 -04:00
allandaly a5cbac0ff8
Update README.md to use correct links to plugins directory.
Looks like the plugins directory changed at some point from /plugins to /coreplugins, so this is a simple update to a few links in the readme.
2023-10-08 07:53:34 -07:00
Piero Toffanin 295bf3f99a
Merge pull request #1413 from pierotofy/pagination
Fix paginator overflow
2023-10-05 15:08:23 -04:00
Piero Toffanin bc86c7977b Add max number of pages in paginator 2023-10-05 12:47:28 -04:00
Piero Toffanin 2d5a403109
Merge pull request #1412 from pierotofy/autobands
Adds support for automatically selecting the proper band filter
2023-10-04 16:33:25 -04:00
Piero Toffanin 49c9f2d7b8 Fix test 2023-10-04 15:51:53 -04:00
Piero Toffanin 62d5185a79 Fix test 2023-10-04 13:39:16 -04:00
Piero Toffanin f67f435a1c Ignore celery results when appropriate 2023-10-04 13:34:31 -04:00
Piero Toffanin 13121566ad Update README 2023-10-04 13:13:32 -04:00
Piero Toffanin 80dcff41ca Add unit tests 2023-10-04 13:04:39 -04:00
Piero Toffanin 4897d4e52a Fix var override 2023-10-03 22:37:10 -04:00
Piero Toffanin 44b2495291 Fix raster export 2023-10-03 15:41:05 -04:00
Piero Toffanin b68e622234 Update locales 2023-10-03 15:31:06 -04:00
Piero Toffanin 43b24eb8b6 Bump version 2023-10-03 15:19:28 -04:00
Piero Toffanin 530720b699 Adds support for automatically selecting the proper band filter 2023-10-03 15:13:56 -04:00
Piero Toffanin 474e2d844b Add RGNRe 2023-10-02 10:19:16 -04:00
Piero Toffanin 1f75945fbb
Merge pull request #1411 from pierotofy/borg
Update formulas.py
2023-10-02 10:12:17 -04:00
Piero Toffanin d5e597fee8 Update formulas 2023-10-02 10:10:48 -04:00
Piero Toffanin 36e98818d3
Merge pull request #1407 from pierotofy/borg
Add borg backup media pattern generator
2023-09-28 17:00:57 -04:00
Piero Toffanin d9736cf11f Add borg backup media pattern generator 2023-09-28 16:48:47 -04:00
Piero Toffanin 74e41077cc
Merge pull request #1406 from pierotofy/logs
Disable logging
2023-09-27 19:52:27 -04:00
Piero Toffanin 0501938d61 Disable logging 2023-09-27 16:43:25 -04:00
Piero Toffanin 9b49ad777d
Merge pull request #1405 from pierotofy/contours
GDAL based contours
2023-09-26 13:04:21 -04:00
Piero Toffanin a852dfb04e Bump version 2023-09-26 12:37:10 -04:00
Piero Toffanin 0e7d9ee6f2 Shapefile support 2023-09-26 12:20:27 -04:00
Piero Toffanin c8c0f51805 Assign layer name 2023-09-26 12:08:11 -04:00
Piero Toffanin 1b327fb56e GDAL based contours 2023-09-26 12:04:04 -04:00
Piero Toffanin bf3eec282f
Merge pull request #1404 from pierotofy/encfix
Fix console write encoding
2023-09-22 15:43:50 -04:00
Piero Toffanin 0093ca71cd Fix encoding on Windows 2023-09-22 15:34:27 -04:00
Piero Toffanin 7bb818dda9
Merge pull request #1398 from pierotofy/chunkedimport
Chunked import uploads
2023-09-18 17:44:05 -04:00
Piero Toffanin 92b98389ad Faster map initialization 2023-09-18 17:38:03 -04:00
Piero Toffanin ef4db8f491 Disable rasterio warnings 2023-09-18 17:20:14 -04:00
Piero Toffanin 9ece192f28 Conditional webpack mode 2023-09-18 16:50:28 -04:00
Piero Toffanin cdeae25426 Build webpack with production 2023-09-18 16:05:08 -04:00
Piero Toffanin 9cf533f87c capitalize 2023-09-18 15:54:19 -04:00
Piero Toffanin a364de2176 Better validation error msg 2023-09-18 15:54:02 -04:00
Piero Toffanin 9d336a5c61 Add unit test 2023-09-18 15:45:57 -04:00
Piero Toffanin e2b7de81d3 Chunked import uploads 2023-09-18 14:08:45 -04:00
Piero Toffanin ac78176f2d
Merge pull request #1395 from pierotofy/pnodeimp
Processing node handling improvements
2023-09-16 14:11:23 -04:00
Piero Toffanin 950d54d51b Update locales 2023-09-16 13:49:34 -04:00
Piero Toffanin ef5336927d Persists secret_key between updates 2023-09-16 13:46:15 -04:00
Piero Toffanin c0fe407157 Add NODE_OPTIMISTIC_MODE 2023-09-16 12:23:49 -04:00
Piero Toffanin 82f3408b94 Add test, comments 2023-09-16 11:26:24 -04:00
Piero Toffanin c6d4c763f0 Add UI_MAX_PROCESSING_NODES setting 2023-09-16 10:55:04 -04:00
Piero Toffanin c54857d6e9 Do not boot on flush 2023-09-15 16:47:24 -04:00
Piero Toffanin 9f5c58fe9a Fix migration on Windows 2023-09-15 16:33:14 -04:00
Piero Toffanin 0de8a7e0fe
Merge pull request #1392 from pierotofy/maximg
Check for maxImages on frontend
2023-09-15 14:32:52 -04:00
Piero Toffanin 93704420c6 Add --worker-memory parameter 2023-09-15 14:11:23 -04:00
Piero Toffanin 9bfdf9c320 Fix tests 2023-09-15 13:49:12 -04:00
Piero Toffanin 74dc45a8ca Add liveupdate command 2023-09-15 13:22:02 -04:00
Piero Toffanin 3254968b63 Cleanup 2023-09-15 13:13:54 -04:00
Piero Toffanin be082a7d71 Check for maxImages on frontend 2023-09-15 13:11:48 -04:00
Piero Toffanin 95085301c2 Update presets 2023-09-14 23:32:09 -04:00
Piero Toffanin 3541882423 Return 100 when den is zero 2023-09-12 18:34:56 -04:00
Piero Toffanin 14aad55245 Add test 2023-09-12 18:06:52 -04:00
Piero Toffanin eda8e1abe0
Merge pull request #1389 from pierotofy/console
Add warning for zero quota
2023-09-12 16:51:02 -04:00
Piero Toffanin 8059900a58 task_count check in quota removal 2023-09-12 12:40:45 -04:00
Piero Toffanin b1fd36da26 Update locale 2023-09-12 11:38:33 -04:00
Piero Toffanin b7178c830a Warn when quota is zero 2023-09-12 11:36:52 -04:00
Piero Toffanin ec908cdc12
Merge pull request #1387 from pierotofy/console
Safer console writes
2023-09-11 17:56:43 -04:00
Piero Toffanin df245905c5 Safer console writes 2023-09-11 17:28:47 -04:00
Piero Toffanin 2c2b75a759
Merge pull request #1386 from pierotofy/console
Move task.console_output
2023-09-11 17:26:05 -04:00
Piero Toffanin ba2d42b3e5 Fix test 2023-09-11 17:05:03 -04:00
Piero Toffanin e510e2fc9b Move task.console_output 2023-09-11 16:35:54 -04:00
Piero Toffanin 53079dbd30
Merge pull request #1371 from pierotofy/quotas
External auth support, task sizes, quotas
2023-09-11 14:34:47 -04:00
Piero Toffanin 49655a4115 Update locales 2023-09-11 13:48:29 -04:00
Piero Toffanin c7ff74a526 Moar unit tests 2023-09-11 13:02:46 -04:00
Piero Toffanin a709c8fdf6 Moar tests 2023-09-11 11:53:10 -04:00
Piero Toffanin 039df51cc6 Add some unit tests 2023-09-08 17:38:05 -04:00
Piero Toffanin 54296bd7a4 Read pwd reset link from settings 2023-09-08 16:02:45 -04:00
Piero Toffanin e7d57b4cd5 Add docker-compose file 2023-09-08 15:56:05 -04:00
Piero Toffanin 83419a7dab Add --settings, drop --external-auth-endpoint 2023-09-08 15:55:42 -04:00
Piero Toffanin 73052fb2ec External auto auth working 2023-09-08 12:28:13 -04:00
Piero Toffanin 89a6aca5f0 Merge branch 'master' of https://github.com/OpenDroneMap/WebODM into quotas 2023-09-07 10:53:09 -04:00
Piero Toffanin bd70b4b7ec Increase point budget, auto-login logic PoC 2023-09-07 10:50:44 -04:00
Piero Toffanin 4cd5a01023 Add --external-auth-endpoint 2023-09-06 11:09:49 -04:00
Piero Toffanin fd05b3a71f
Merge pull request #1378 from chris-bateman/patch-1
Update test-docker.yml
2023-09-04 20:21:46 -04:00
Piero Toffanin fab02f0cee
Merge pull request #1381 from chris-bateman/patch-3
Update build-and-publish.yml
2023-09-04 20:21:37 -04:00
Piero Toffanin 7c97d9365b
Merge pull request #1380 from chris-bateman/patch-2
Update build-docs.yml
2023-09-04 20:21:26 -04:00
Chris Bateman 132e8f9d69
Update build-and-publish.yml
Update actions/checkout from v2 to v3
2023-09-05 09:17:01 +10:00
Chris Bateman c4c5085e2a
Update build-docs.yml
Update actions/checkout from v2 to v3
2023-09-05 09:15:58 +10:00
Chris Bateman 5047413e12
Update test-docker.yml
Update actions/checkout from v2 to v3
2023-09-05 09:01:51 +10:00
Piero Toffanin 1b92ee1f19 Quota deletion working 2023-09-04 13:34:54 -04:00
Piero Toffanin f1b358db44
Pin redis version 2023-09-04 11:06:27 -04:00
Piero Toffanin 510cd961cf
Merge pull request #1375 from Firefishy/patch-1
Update Map.jsx to use correct tile.osm.org URL
2023-09-02 16:55:01 -04:00
Grant f5ff31b3ff
Update Map.jsx to use correct tile.osm.org URL
See: https://github.com/openstreetmap/operations/issues/737
2023-09-02 17:29:44 +01:00
Piero Toffanin aa737da1a1 Expose profiles API, quota update endpoint 2023-09-01 16:16:13 -04:00
Piero Toffanin b4e54e6406 Tweaks 2023-08-26 09:53:42 -04:00
Piero Toffanin 872d5abbc7 Invalidate cache, warn of quota excess 2023-08-26 09:23:42 -04:00
Piero Toffanin a0dbd68122 Pretty quota status bar 2023-08-26 08:12:16 -04:00
Piero Toffanin cd7f779019 Update locale 2023-08-26 06:07:05 -04:00
Piero Toffanin ba1965add0 Add profile model 2023-08-24 15:02:30 -04:00
Piero Toffanin 5ba0d472af Add tests, update size 2023-08-24 12:17:50 -04:00
Piero Toffanin 08608a6727 Fix non-georeferenced textured models loading 2023-08-21 12:55:17 -04:00
Piero Toffanin 84356f1ce7 External auth PoC, add task sizes 2023-08-21 11:43:50 -04:00
Piero Toffanin 51f03be14e
Merge pull request #1367 from pierotofy/locup
Update locales
2023-08-15 15:56:36 -04:00
Piero Toffanin 544b06a81a Lock pydantic 2023-08-15 15:36:52 -04:00
Piero Toffanin 2035b3a3fe Update locale 2023-08-14 20:28:41 -04:00
Piero Toffanin 849ae6576f
Merge pull request #1355 from vinsonliux/dropzonem-maxfilesize
Dropzonem maxfilesize
2023-06-19 12:39:13 +02:00
vinsonliux 397117fad1 code format 2023-06-19 14:26:18 +08:00
vinsonliux 4b87007682 format add code 2023-06-19 14:24:14 +08:00
vinsonliux 7e9791d5c1 Fixed failure when the uploaded file was too large; set maxFilesize = 128G 2023-06-19 14:08:56 +08:00
Piero Toffanin a290b7af75
Merge pull request #1354 from pierotofy/sharedelfix
Add delete button for read only projects
2023-06-18 18:06:43 +02:00
Piero Toffanin ce108ec119 Add delete button for read only projects 2023-06-18 16:57:13 +02:00
Piero Toffanin 1e5356f74d
Merge pull request #1350 from diegoaces/master
addon: Projects and tasks chart
2023-06-03 18:50:29 -04:00
Diego Acuña af7188890f
Merge branch 'OpenDroneMap:master' into master 2023-06-03 17:27:58 -04:00
Diego Acuña 798434ecad
Merge pull request #3 from diegoaces/charts
New chart plugin
2023-06-03 17:27:45 -04:00
Diego Acuña 3316d1c3a8 New chart plugin 2023-06-03 17:26:22 -04:00
Piero Toffanin cc816a66e9
Mention ODMSemantic3D 2023-05-30 13:28:55 -04:00
Piero Toffanin a6023a9f8d
Merge pull request #1348 from pierotofy/localeup
Update locale strings
2023-05-30 00:18:56 -04:00
Piero Toffanin 5c668292e8 Update locale 2023-05-29 21:31:22 -04:00
Piero Toffanin 8c88111cc4 Update locale strings 2023-05-29 19:44:06 -04:00
Piero Toffanin e0747ab9ae
Merge pull request #1347 from pierotofy/revertdiagnostic
Revert diagnostic plugin changes
2023-05-24 15:00:53 -04:00
Piero Toffanin 277a659771 Revert diagnostic plugin changes 2023-05-24 14:35:23 -04:00
Piero Toffanin 2cedf42751
Merge pull request #1346 from Kathenae/bugfix/task-notification-plugin-db-save
Bugfix: Task notification plugin fails to send email notifications in production mode
2023-05-21 21:22:10 -04:00
Ronald 394e7add2c refactor: use data store for smtp config 2023-05-21 19:55:08 +02:00
Piero Toffanin bdf5b334d6
Merge pull request #1343 from Kathenae/kathenae-task_notification_plugin
Task notification plugin
2023-05-19 13:51:34 -04:00
Piero Toffanin 441782987c Check for active plugin 2023-05-19 13:34:30 -04:00
Piero Toffanin 26dee3b023 Merge branch 'master' of https://github.com/OpenDroneMap/WebODM into kathenae-task_notification_plugin 2023-05-19 13:13:49 -04:00
Piero Toffanin d8825e2160
Merge pull request #1345 from pierotofy/flir
Adds support for single band thermal datasets
2023-05-19 13:13:24 -04:00
Piero Toffanin 92a016b095 Bump version 2023-05-19 13:13:06 -04:00
Piero Toffanin 7413ebda7b Fix tests 2023-05-19 12:46:39 -04:00
Piero Toffanin 82c027226a Cache orthophoto bands, better support for single band orthophotos in plant health tab 2023-05-19 11:57:32 -04:00
Piero Toffanin c029446f88 Merge branch 'master' of https://github.com/OpenDroneMap/WebODM into flir 2023-05-19 11:25:01 -04:00
Piero Toffanin a18c1d3506
Merge pull request #1344 from Kathenae/kathenae-add_cpu_usage_chart
Add cpu usage chart
2023-05-18 16:47:43 -04:00
Piero Toffanin 3712d3a757 Require auth 2023-05-18 16:26:05 -04:00
Piero Toffanin f6114c0544 Use plugin's psutil 2023-05-18 16:21:56 -04:00
Piero Toffanin 3309664043 Remove requirements.txt, add disabled 2023-05-18 16:05:42 -04:00
Piero Toffanin c2bb526df7 Temperature algorithm rename 2023-05-18 15:24:39 -04:00
Ronald 726ae46886 Modified `plugin.py` and `diagnostics.html` 2023-05-18 19:26:11 +02:00
Ronald 111971c261 added `psutil` to requirements.txt 2023-05-18 19:23:38 +02:00
Ronald 18d3c7827c Created Task Notification plugin 2023-05-18 18:34:11 +02:00
Ronald 8005fcdc21 Created a `task_failed` signal 2023-05-18 18:27:23 +02:00
Piero Toffanin a8f852cdf7
Merge pull request #1339 from pierotofy/singlefix
Allow single file upload
2023-05-04 14:55:20 -04:00
Piero Toffanin 0fc5387cf5 Allow single file upload 2023-05-04 14:14:32 -04:00
Piero Toffanin 473b435acf Bump version 2023-05-03 12:27:31 -04:00
Piero Toffanin 932bfec0b0
Merge pull request #1337 from t4y/expose-ports
Docker-compose: expose ports instead of publishing
2023-05-03 12:26:40 -04:00
t3Y 69674401c2 Docker-compose: expose ports instead of publishing
With the
ports:
  - "12345"
syntax docker publishes the ports under an ephemeral port on the host.
Since these ports should presumably only be available to other services,
using expose: instead avoids publishing unnecessary services. See #1336
2023-05-03 18:10:39 +02:00
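The difference the commit message describes can be sketched as follows (service name and port are illustrative, not WebODM's actual docker-compose.yml):

```yaml
services:
  broker:
    image: redis
    # ports: publishes the container port on the host (under an
    # ephemeral host port when none is specified):
    # ports:
    #   - "12345"
    # expose: makes the port reachable only from other services on the
    # same compose network, without publishing it on the host:
    expose:
      - "12345"
```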
Piero Toffanin 48d76079bd Fix camera toggle: #1642 2023-05-02 12:11:04 -04:00
Piero Toffanin f7ec1c3208 Allow upload of a single file 2023-05-01 22:58:13 -04:00
Piero Toffanin c621c44e56
Merge pull request #1333 from pierotofy/compr
Use inline CSS compression
2023-04-27 17:51:47 -04:00
Piero Toffanin 34e8c46a4d Disable test 2023-04-27 17:16:35 -04:00
Piero Toffanin 1c24acf001 Sleep 2023-04-27 15:40:49 -04:00
Piero Toffanin e55ef9726a Fix tests 2023-04-27 15:22:08 -04:00
Piero Toffanin f8410c720d Use inline CSS compression 2023-04-27 15:02:42 -04:00
Piero Toffanin ff0d4b5c4f
Merge pull request #1331 from diegoaces/master
Columns are added to the projects view in administration.
2023-04-26 23:59:34 -04:00
Diego Acuña 2ee003338d
Merge pull request #2 from diegoaces/project_admin_add_columns
Columns are added to the projects in admin.
2023-04-26 18:21:03 -04:00
Diego Acuña 3429255f42 Columns are added to the projects view in administration.
Column tasks_count is added.
2023-04-26 18:19:29 -04:00
Piero Toffanin ecd89a3f3f Merge branch 'master' of https://github.com/OpenDroneMap/WebODM 2023-04-20 12:24:39 -04:00
Piero Toffanin 245ba2d522 No need to show gltf option 2023-04-20 12:24:34 -04:00
Piero Toffanin 80742baae6
Update badges 2023-04-18 14:31:10 -04:00
Piero Toffanin f2855c1ae8
Merge pull request #1327 from Saijin-Naib/master
Re-Silence Django Warning (Session Data Corrupted)
2023-04-16 00:11:02 -04:00
Saijin-Naib 4aa9986943 Re-Silence Django Warning (Session Data Corrupted)
It appears the logging object we silenced was only for versions prior to v2.2.x, which we use now.
2023-04-15 23:37:05 -04:00
Piero Toffanin 4bd0e0f198 Fix cloud import plugin #1325 2023-04-03 13:08:12 -04:00
Piero Toffanin 9598ebfaf1
Merge pull request #1322 from sltaeronautics/update-db-dir-docs
Updating README to reflect --db-data choice.
2023-04-03 01:26:35 -04:00
Tariq Islam 3d69c2c4e0 Fixed 2023-04-02 22:54:59 -05:00
Tariq Islam 8c28849da0 Updating README to reflect --db-data choice. 2023-04-02 22:49:19 -05:00
Piero Toffanin a6e92a4ff2 Fix camera toggle in 3D view 2023-04-02 18:05:39 -04:00
Piero Toffanin 9d4e0e0086
Merge pull request #1321 from sltaeronautics/db-folder-option
Adding a --db-dir option to specify an external Postgres data dir
2023-04-02 15:22:56 -04:00
Tariq Islam d3a743a38b Reverting dashboard.html for WebODM commit 2023-04-02 13:48:36 -05:00
Tariq Islam 04aa66c478 Changes to add --db-dir option for Postgres. 2023-04-02 01:41:02 -05:00
Tariq Islam 0ecd53ccb5
Merge pull request #2 from sltaeronautics/reindex_shotsWEBODM
Reindex shots webodm
2023-04-02 01:30:05 -05:00
Piero Toffanin 1028185b2a
Merge pull request #1320 from vagner-silveira/patch-1
Update Formulas.py - Include the MPRI index
2023-03-31 12:29:14 -04:00
Vagner Silveira 74d82d09cb
Update Formulas.py - Include the MPRI index
I would like to suggest the inclusion of the MPRI vegetative index (Modified Photochemical Reflectance Index), as it has a 90% correlation with the NDVI in studies carried out for use in large crops here in Brazil with RGB cameras. The code would go just below the VARI index, on lines 47 to 52.
2023-03-31 10:18:00 -04:00
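MPRI is commonly defined as (G − R) / (G + R) over the green and red bands; a minimal sketch of the proposed index (a hypothetical helper, not the actual Formulas.py entry):

```python
def mpri(green, red):
    """Modified Photochemical Reflectance Index: (G - R) / (G + R)."""
    den = green + red
    if den == 0:
        # Guard against division by zero on empty/nodata pixels.
        return 0.0
    return (green - red) / den

# A pixel with more green than red reflectance yields a positive index.
print(mpri(0.75, 0.25))  # 0.5
```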
Tariq Islam 001d63939b
Merge branch 'master' into reindex_shotsWEBODM 2023-03-31 01:11:38 -05:00
Tariq Islam 90acb3dc41
Merge pull request #1 from OpenDroneMap/master
Updating to WebODM
2023-03-31 01:04:24 -05:00
Tariq Islam 7d20d54119 Removing welcome messaging. 2023-03-31 01:00:42 -05:00
Piero Toffanin 3ef4c044a3
Merge pull request #1317 from pierotofy/dc
Fix start.sh
2023-03-30 16:34:35 -04:00
Piero Toffanin fd80f494f2 Fix start.sh 2023-03-30 16:31:38 -04:00
Piero Toffanin 57c4f06fe2
Merge pull request #1315 from pierotofy/dc
Docker compose support
2023-03-30 11:32:31 -04:00
Piero Toffanin 31e1770ce7 Add swap 2023-03-30 10:48:08 -04:00
Piero Toffanin fca64b7d09 Add env check 2023-03-30 10:22:27 -04:00
Piero Toffanin 0205966093 Docker compose support 2023-03-30 10:11:13 -04:00
Piero Toffanin b7501de5e6
Merge pull request #1313 from HeDo88TH/master
Fixed imports
2023-03-27 12:47:10 -04:00
HeDo 00ebfeb550 Fixed imports 2023-03-27 16:07:59 +02:00
Piero Toffanin cc573c9364
Merge pull request #1312 from ezeakeal/patch-1
Update task.py
2023-03-24 09:56:30 -04:00
DanV 8724acf794
Update task.py
copyfileobj does not exist; I presume it should be shutil.copyfileobj.
Rebuilding WebODM on a machine in use now to test whether this fixes it.
Currently this breaks uploads
2023-03-24 13:02:21 +01:00
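The one-line fix amounts to using the fully qualified standard-library name; a minimal sketch (in-memory buffers stand in for the real upload streams):

```python
import io
import shutil

def save_upload(src, dst):
    # A bare "copyfileobj" is undefined; the function lives in shutil.
    shutil.copyfileobj(src, dst)

src = io.BytesIO(b"image bytes")
dst = io.BytesIO()
save_upload(src, dst)
print(dst.getvalue())  # b'image bytes'
```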
Piero Toffanin 14c0a356fa
Merge pull request #1311 from pierotofy/dropimageupload
Efficient camera markers
2023-03-23 17:37:51 -04:00
Piero Toffanin 21c0097a05 Efficient camera markers 2023-03-23 17:13:06 -04:00
Piero Toffanin b5e9dfad6f
Merge pull request #1310 from pierotofy/dropimageupload
Drop ImageUpload model
2023-03-23 17:09:49 -04:00
Piero Toffanin 0dc54a64d7 Bump version 2023-03-23 13:32:07 -04:00
Piero Toffanin ed5ac98d06 Drop ImageUpload model 2023-03-23 13:31:07 -04:00
Piero Toffanin 6258626102 Updated translations 2023-03-23 10:22:02 -04:00
Piero Toffanin e41b095a9a Update error message 2023-03-23 10:21:03 -04:00
210 changed files with 5468 additions and 4402 deletions

View file

@@ -1 +1,2 @@
**/.git
.secret_key

.env
View file

@@ -1,6 +1,7 @@
WO_HOST=localhost
WO_PORT=8000
WO_MEDIA_DIR=appmedia
WO_DB_DIR=dbdata
WO_SSL=NO
WO_SSL_KEY=
WO_SSL_CERT=
@@ -9,3 +10,4 @@ WO_DEBUG=NO
WO_DEV=NO
WO_BROKER=redis://broker
WO_DEFAULT_NODES=1
WO_SETTINGS=

View file

@@ -12,7 +12,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
submodules: 'recursive'
- name: Set up QEMU

View file

@@ -12,7 +12,7 @@ jobs:
ruby-version: 2.7
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Set up Ruby
uses: ruby/setup-ruby@v1
with:
@@ -28,4 +28,4 @@ jobs:
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ./slate/build
keep_files: true
keep_files: true

View file

@@ -0,0 +1,33 @@
name: Issue Triage
on:
issues:
types:
- opened
jobs:
issue_triage:
runs-on: ubuntu-latest
permissions:
issues: write
steps:
- uses: pierotofy/issuewhiz@v1
with:
ghToken: ${{ secrets.GITHUB_TOKEN }}
openAI: ${{ secrets.OPENAI_TOKEN }}
filter: |
- "#"
variables: |
- Q: "A question about using a software or seeking guidance on doing something?"
- B: "Reporting an issue or a software bug?"
- P: "Describes an issue with processing a set of images or a particular dataset?"
- D: "Contains a link to a dataset or images?"
- E: "Contains a suggestion for an improvement or a feature request?"
- SC: "Describes an issue related to compiling or building source code?"
logic: |
- 'Q and (not B) and (not P) and (not E) and (not SC) and not (title_lowercase ~= ".*bug: .+")': [comment: "Could we move this conversation over to the forum at https://community.opendronemap.org? :pray: The forum is the right place to ask questions (we try to keep the GitHub issue tracker for feature requests and bugs only). Thank you! :+1:", close: true, stop: true]
- "B and (not P) and (not E) and (not SC)": [label: "software fault", stop: true]
- "P and D": [label: "possible software fault", stop: true]
- "P and (not D) and (not SC) and (not E)": [comment: "Thanks for the report, but it looks like you didn't include a copy of your dataset for us to reproduce this issue? Please make sure to follow our [issue guidelines](https://github.com/OpenDroneMap/WebODM/blob/master/ISSUE_TEMPLATE.md) :pray: ", close: true, stop: true]
- "E": [label: enhancement, stop: true]
- "SC": [label: "possible software fault"]
signature: "p.s. I'm just an automated script, not a human being."
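The triage workflow above classifies each new issue into boolean variables (Q, B, P, D, E, SC) and then applies boolean rules to decide on labels, comments, or closing. The actual evaluation semantics of pierotofy/issuewhiz are an assumption here; the sketch below only illustrates how a rule string like `"B and (not P) and (not E) and (not SC)"` can be evaluated against classifier answers.

```python
import re

def evaluate_rule(rule: str, flags: dict) -> bool:
    """Evaluate a boolean rule such as 'B and (not P) and (not E)'
    against classifier answers (True/False per variable).

    Hypothetical sketch: issuewhiz's real rule engine also supports
    operators like `title_lowercase ~= "regex"`, not covered here."""
    expr = rule
    # Substitute variable names with their boolean values, longest
    # names first so 'SC' is replaced before 'S' or 'C' could match.
    for name in sorted(flags, key=len, reverse=True):
        expr = re.sub(r"\b%s\b" % name, str(flags[name]), expr)
    # Only 'and', 'or', 'not', parentheses and booleans remain.
    return eval(expr, {"__builtins__": {}}, {})

# A bug report (B) with no dataset/processing/build angle:
flags = {"Q": False, "B": True, "P": False, "D": False, "E": False, "SC": False}
print(evaluate_rule("B and (not P) and (not E) and (not SC)", flags))  # True
print(evaluate_rule("P and D", flags))                                 # False
```

With this reading, the first matching rule with `stop: true` ends processing, which is why the rules above are ordered from most to least specific.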

View file

@@ -7,11 +7,14 @@ jobs:
docker:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
with:
submodules: 'recursive'
name: Checkout
- name: Set Swap Space
uses: pierotofy/set-swap-space@master
with:
swap-size-gb: 12
- name: Build and Test
run: |
docker-compose -f docker-compose.yml -f docker-compose.build.yml build --build-arg TEST_BUILD=ON

3 .gitignore vendored
View file

@@ -102,4 +102,5 @@ package-lock.json
# Debian builds
dpkg/build
dpkg/deb
dpkg/deb
.secret_key

View file

@@ -2,6 +2,7 @@ FROM ubuntu:21.04
MAINTAINER Piero Toffanin <pt@masseranolabs.com>
ARG TEST_BUILD
ARG DEBIAN_FRONTEND=noninteractive
ENV PYTHONUNBUFFERED 1
ENV PYTHONPATH $PYTHONPATH:/webodm
ENV PROJ_LIB=/usr/share/proj
@@ -13,19 +14,23 @@ WORKDIR /webodm
# Use old-releases for 21.04
RUN printf "deb http://old-releases.ubuntu.com/ubuntu/ hirsute main restricted\ndeb http://old-releases.ubuntu.com/ubuntu/ hirsute-updates main restricted\ndeb http://old-releases.ubuntu.com/ubuntu/ hirsute universe\ndeb http://old-releases.ubuntu.com/ubuntu/ hirsute-updates universe\ndeb http://old-releases.ubuntu.com/ubuntu/ hirsute multiverse\ndeb http://old-releases.ubuntu.com/ubuntu/ hirsute-updates multiverse\ndeb http://old-releases.ubuntu.com/ubuntu/ hirsute-backports main restricted universe multiverse" > /etc/apt/sources.list
# Install Node.js
# Install Node.js using new Node install method
RUN apt-get -qq update && apt-get -qq install -y --no-install-recommends wget curl && \
wget --no-check-certificate https://deb.nodesource.com/setup_14.x -O /tmp/node.sh && bash /tmp/node.sh && \
apt-get install -y ca-certificates gnupg && \
mkdir -p /etc/apt/keyrings && \
curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg && \
NODE_MAJOR=20 && \
echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" | tee /etc/apt/sources.list.d/nodesource.list && \
apt-get -qq update && apt-get -qq install -y nodejs && \
# Install Python3, GDAL, PDAL, nginx, letsencrypt, psql
apt-get -qq update && apt-get -qq install -y --no-install-recommends python3 python3-pip python3-setuptools python3-wheel git g++ python3-dev python2.7-dev libpq-dev binutils libproj-dev gdal-bin pdal libgdal-dev python3-gdal nginx certbot grass-core gettext-base cron postgresql-client-13 gettext tzdata && \
apt-get -qq update && apt-get -qq install -y --no-install-recommends python3 python3-pip python3-setuptools python3-wheel git g++ python3-dev python2.7-dev libpq-dev binutils libproj-dev gdal-bin pdal libgdal-dev python3-gdal nginx certbot gettext-base cron postgresql-client-13 gettext tzdata && \
update-alternatives --install /usr/bin/python python /usr/bin/python2.7 1 && update-alternatives --install /usr/bin/python python /usr/bin/python3.9 2 && \
# Install pip reqs
pip install -U pip && pip install -r requirements.txt "boto3==1.14.14" && \
# Setup cron
ln -s /webodm/nginx/crontab /var/spool/cron/crontabs/root && chmod 0644 /webodm/nginx/crontab && service cron start && chmod +x /webodm/nginx/letsencrypt-autogen.sh && \
/webodm/nodeodm/setup.sh && /webodm/nodeodm/cleanup.sh && cd /webodm && \
npm install --quiet -g webpack@4.16.5 && npm install --quiet -g webpack-cli@4.2.0 && npm install --quiet && webpack --mode production && \
npm install --quiet -g webpack@5.89.0 && npm install --quiet -g webpack-cli@5.1.4 && npm install --quiet && webpack --mode production && \
echo "UTC" > /etc/timezone && \
python manage.py collectstatic --noinput && \
python manage.py rebuildplugins && \

View file

@@ -1,6 +1,6 @@
<img alt="WebODM" src="https://user-images.githubusercontent.com/1951843/34074943-8f057c3c-e287-11e7-924d-3ccafa60c43a.png" width="180">
[![Build Status](https://travis-ci.org/OpenDroneMap/WebODM.svg?branch=master)](https://travis-ci.org/OpenDroneMap/WebODM) [![Translated](https://hosted.weblate.org/widgets/webodm/-/svg-badge.svg)](https://hosted.weblate.org/engage/webodm/)
![Build Status](https://img.shields.io/github/actions/workflow/status/OpenDroneMap/WebODM/build-and-publish.yml?branch=master) ![Version](https://img.shields.io/github/v/release/OpenDroneMap/WebODM) [![Translated](https://hosted.weblate.org/widgets/webodm/-/svg-badge.svg)](https://hosted.weblate.org/engage/webodm/)
A user-friendly, commercial grade software for drone image processing. Generate georeferenced maps, point clouds, elevation models and textured 3D models from aerial images. It supports multiple engines for processing, currently [ODM](https://github.com/OpenDroneMap/ODM) and [MicMac](https://github.com/OpenDroneMap/NodeMICMAC/).
@@ -38,9 +38,11 @@ A user-friendly, commercial grade software for drone image processing. Generate
Windows and macOS users can purchase an automated [installer](https://www.opendronemap.org/webodm/download#installer), which makes the installation process easier.
To install WebODM manually, these steps should get you up and running:
There's also a cloud-hosted version of WebODM available from [webodm.net](https://webodm.net).
* Install the following applications (if they are not installed already):
To install WebODM manually on your machine:
* Install the following applications:
- [Git](https://git-scm.com/downloads)
- [Docker](https://www.docker.com/)
- [Docker-compose](https://docs.docker.com/compose/install/)
@@ -127,10 +129,16 @@ Note! You cannot pass an IP address to the hostname parameter! You need a DNS re
### Where Are My Files Stored?
When using Docker, all processing results are stored in a docker volume and are not available on the host filesystem. If you want to store your files on the host filesystem instead of a docker volume, you need to pass a path via the `--media-dir` option:
When using Docker, all processing results are stored in a docker volume and are not available on the host filesystem. There are two specific docker volumes of interest:
1. Media (called webodm_appmedia): This is where all files related to a project and task are stored.
2. Postgres DB (called webodm_dbdata): This is what the Postgres database uses to store its data.
For more information on how these two volumes are used and in which containers, please refer to the [docker-compose.yml](docker-compose.yml) file.
If you want to store your files on the host filesystem instead of a docker volume (for ease of backup/restore, among other reasons), you need to pass a path via the `--media-dir` and/or the `--db-dir` options:
```bash
./webodm.sh restart --media-dir /home/user/webodm_data
./webodm.sh restart --media-dir /home/user/webodm_data --db-dir /home/user/webodm_db
```
Note that existing task results will not be available after the change. Refer to the [Migrate Data Volumes](https://docs.docker.com/engine/tutorials/dockervolumes/#backup-restore-or-migrate-data-volumes) section of the Docker documentation for information on migrating existing task results.
@@ -148,6 +156,23 @@ Cannot start WebODM via `./webodm.sh start`, error messages are different at eac
While running WebODM with Docker Toolbox (VirtualBox) you cannot access WebODM from another computer in the same network. | As Administrator, run `cmd.exe` and then type `"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" controlvm "default" natpf1 "rule-name,tcp,,8000,,8000"`
On Windows, the storage space shown on the WebODM diagnostic page is not the same as what is actually set in Docker's settings. | From Hyper-V Manager, right-click “DockerDesktopVM”, go to Edit Disk, then choose to expand the disk and match the maximum size to the settings specified in the docker settings. Upon making the changes, restart docker.
#### Images Missing from Lightning Assets
When you use Lightning to process your task, you will need to download all assets to your local instance of WebODM. The "All Assets" zip does *not* contain the images which were used to create the orthomosaic. This means that, although you can visualise the cameras layer in your local WebODM, clicking a particular camera icon will not show the image.
If you are using WebODM with Docker, the fix is as follows (instructions are for a macOS host):
1. Ensure that you have a directory which contains all of the images for the task, and only the images.
2. Open Docker Desktop and navigate to Containers. Identify your WebODM instance and navigate to the container named `worker`. You will need its Container ID, a hash listed under the container name. Click the copy icon next to it to copy the ID.
3. Open Terminal and enter `docker cp <sourcedirectory>/. <dockercontainerID>:/webodm/app/media/project/<projectID>/task/<taskID>`. Paste the Container ID in place of `<dockercontainerID>` and enter the full directory path of your images in place of `<sourcedirectory>`.
4. Back in Docker Desktop, navigate to Volumes in the sidebar. Click on the volume called `webodm_appmedia`, open `project`, identify the correct project and click on it, then open `task` and identify the correct task.
5. Substitute the correct `<projectID>` and `<taskID>` from Docker Desktop into the command in Terminal.
6. Execute the edited command in Terminal. You will see a series of progress messages as your images are copied into Docker.
7. Navigate to your project in your local instance of WebODM.
8. Open the Map and turn on the Cameras layer (top left).
9. Click on a camera icon and the relevant image will be shown.
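The steps above boil down to assembling one `docker cp` command from the container ID, project ID and task ID. A small sketch of that assembly (all IDs below are hypothetical placeholders you would read from Docker Desktop; only the `/webodm/app/media/project/<projectID>/task/<taskID>` path layout comes from this section):

```python
import shlex

def build_docker_cp(source_dir: str, container_id: str,
                    project_id: int, task_id: str) -> str:
    """Build the docker cp command from step 3, with shell-safe quoting."""
    # Trailing '/.' copies the directory's contents, not the directory itself.
    dest = f"{container_id}:/webodm/app/media/project/{project_id}/task/{task_id}"
    return shlex.join(["docker", "cp", f"{source_dir}/.", dest])

cmd = build_docker_cp("/Users/me/flight1", "3f2a9c", 5, "0b1c2d3e")
print(cmd)
# docker cp /Users/me/flight1/. 3f2a9c:/webodm/app/media/project/5/task/0b1c2d3e
```

You would then paste the resulting command into Terminal, as described in step 6.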
Have you had other issues? Please [report them](https://github.com/OpenDroneMap/WebODM/issues/new) so that we can include them in this document.
### Backup and Restore
@@ -220,17 +245,15 @@ Don't expect to process more than a few hundred images with these specifications
WebODM runs best on Linux, but works well on Windows and Mac too. If you are technically inclined, you can get WebODM to run natively on all three platforms.
[NodeODM](https://github.com/OpenDroneMap/NodeODM) and [ODM](https://github.com/OpenDroneMap/ODM) cannot run natively on Mac, which is the main reason we recommend using docker.
WebODM by itself is just a user interface (see [below](#odm-nodeodm-webodm-what)) and does not require many resources. WebODM can be loaded on a machine with just 1 or 2 GB of RAM and work fine without NodeODM. You can then use a processing service such as the [lightning network](https://webodm.net) or run NodeODM on a separate, more powerful machine.
## Customizing and Extending
Small customizations such as changing the application colors, name, logo, or addying custom CSS/HTML/Javascript can be performed directly from the Customize -- Brand/Theme panels within WebODM. No need to fork or change the code.
Small customizations such as changing the application colors, name, logo, or adding custom CSS/HTML/Javascript can be performed directly from the Customize -- Brand/Theme panels within WebODM. No need to fork or change the code.
More advanced customizations can be achieved by writing [plugins](https://github.com/OpenDroneMap/WebODM/tree/master/plugins). This is the preferred way to add new functionality to WebODM since it requires less effort than maintaining a separate fork. The plugin system features server-side [signals](https://github.com/OpenDroneMap/WebODM/blob/master/app/plugins/signals.py) that can be used to be notified of various events, a ES6/React build system, a dynamic [client-side API](https://github.com/OpenDroneMap/WebODM/tree/master/app/static/app/js/classes/plugins) for adding elements to the UI, a built-in data store, an async task runner, a GRASS engine, hooks to add menu items and functions to rapidly inject CSS, Javascript and Django views.
More advanced customizations can be achieved by writing [plugins](https://github.com/OpenDroneMap/WebODM/tree/master/coreplugins). This is the preferred way to add new functionality to WebODM since it requires less effort than maintaining a separate fork. The plugin system features server-side [signals](https://github.com/OpenDroneMap/WebODM/blob/master/app/plugins/signals.py) that can be used to be notified of various events, a ES6/React build system, a dynamic [client-side API](https://github.com/OpenDroneMap/WebODM/tree/master/app/static/app/js/classes/plugins) for adding elements to the UI, a built-in data store, an async task runner, a GRASS engine, hooks to add menu items and functions to rapidly inject CSS, Javascript and Django views.
For plugins, the best source of documentation currently is to look at existing [code](https://github.com/OpenDroneMap/WebODM/tree/master/plugins). If a particular hook / entrypoint for your plugin does not yet exist, [request it](https://github.com/OpenDroneMap/WebODM/issues). We are adding hooks and entrypoints as we go.
For plugins, the best source of documentation currently is to look at existing [code](https://github.com/OpenDroneMap/WebODM/tree/master/coreplugins). If a particular hook / entrypoint for your plugin does not yet exist, [request it](https://github.com/OpenDroneMap/WebODM/issues). We are adding hooks and entrypoints as we go.
To create a plugin simply copy the `plugins/test` plugin into a new directory (for example, `plugins/myplugin`), then modify `manifest.json`, `plugin.py` and issue a `./webodm.sh restart`.
@@ -253,7 +276,7 @@ We have several channels of communication for people to ask questions and to get
- [OpenDroneMap Community Forum](http://community.opendronemap.org/c/webodm)
- [Report Issues](https://github.com/OpenDroneMap/WebODM/issues)
We also have a [Gitter Chat](https://gitter.im/OpenDroneMap/web-development), but the preferred way to communicate is via the [OpenDroneMap Community Forum](http://community.opendronemap.org/c/webodm).
The preferred way to communicate is via the [OpenDroneMap Community Forum](http://community.opendronemap.org/c/webodm).
## Support the Project
@@ -264,6 +287,7 @@ There are many ways to contribute back to the project:
- Help answer questions on the community [forum](http://community.opendronemap.org/c/webodm) and [chat](https://gitter.im/OpenDroneMap/web-development).
- ⭐️ us on GitHub.
- Help us [translate](#translations) WebODM in your language.
- Help us classify [point cloud datasets](https://github.com/OpenDroneMap/ODMSemantic3D).
- Spread the word about WebODM and OpenDroneMap on social media.
- While we don't accept donations, you can purchase an [installer](https://webodm.org/download#installer), a [book](https://odmbook.com/) or a [sponsor package](https://github.com/users/pierotofy/sponsorship).
- You can [pledge funds](https://fund.webodm.org) for getting new features built and bug fixed.
@@ -326,36 +350,37 @@ If you wish to run the docker version with auto start/monitoring/stop, etc, as a
This should work on any Linux OS capable of running WebODM and using a SystemD based service daemon (such as Ubuntu 16.04 server).
This has only been tested on Ubuntu 16.04 server.
This has only been tested on Ubuntu 16.04 server and Red Hat Enterprise Linux 9.
The following pre-requisites are required:
* Requires odm user
* Requires docker installed via system (ubuntu: `sudo apt-get install docker.io`)
* Requires screen to be installed
* Requires 'screen' package to be installed
* Requires odm user member of docker group
* Required WebODM directory checked out to /webodm
* Requires that /webodm is recursively owned by odm:odm
* Requires that a Python 3 environment is used at /webodm/python3-venv
* Required WebODM directory checked out/cloned to /opt/WebODM
* Requires that /opt/WebODM is recursively owned by odm:odm
* Requires that a Python 3 environment is used at /opt/WebODM/python3-venv
If all pre-requisites have been met, and repository is checked out to /opt/WebODM folder, then you can use the following steps to enable and manage the service:
If all pre-requisites have been met, and repository is checked out/cloned to /opt/WebODM folder, then you can use the following steps to enable and manage the service:
First, to install the service, and enable the services to run at startup from now on:
```bash
sudo systemctl enable /webodm/service/webodm-gunicorn.service
sudo systemctl enable /webodm/service/webodm-nginx.service
sudo systemctl enable /opt/WebODM/service/webodm-docker.service
```
To manually start/stop the service:
```bash
sudo systemctl stop webodm-gunicorn
sudo systemctl start webodm-gunicorn
sudo systemctl stop webodm-docker
sudo systemctl start webodm-docker
```
To manually check service status:
```bash
sudo systemctl status webodm-gunicorn
sudo systemctl status webodm-docker
```
For the adventurous, the repository can be put anyplace you like by editing the ./WebODM/service/webodm-docker.service file before enabling the service to reflect your repository location, and modifying the systemctl enable command to point to that directory.
## Run it natively
WebODM can run natively on Windows, macOS and Linux. We don't recommend running WebODM natively (using docker is easier), but it's possible.

View file

@@ -10,39 +10,43 @@ from django.http import HttpResponseRedirect
from django.urls import reverse
from django.utils.html import format_html
from guardian.admin import GuardedModelAdmin
from django.contrib.auth.admin import UserAdmin as BaseUserAdmin
from django.contrib.auth.models import User
from app.models import PluginDatum
from app.models import Preset
from app.models import Plugin
from app.models import Profile
from app.plugins import get_plugin_by_name, enable_plugin, disable_plugin, delete_plugin, valid_plugin, \
get_plugins_persistent_path, clear_plugins_cache, init_plugins
from .models import Project, Task, ImageUpload, Setting, Theme
from .models import Project, Task, Setting, Theme
from django import forms
from codemirror2.widgets import CodeMirrorEditor
from webodm import settings
from django.core.files.uploadedfile import InMemoryUploadedFile
from django.utils.translation import gettext_lazy as _, gettext
admin.site.register(Project, GuardedModelAdmin)
class ProjectAdmin(GuardedModelAdmin):
list_display = ('id', 'name', 'owner', 'created_at', 'tasks_count', 'tags')
list_filter = ('owner',)
search_fields = ('id', 'name', 'owner__username')
admin.site.register(Project, ProjectAdmin)
class TaskAdmin(admin.ModelAdmin):
def has_add_permission(self, request):
return False
list_display = ('id', 'project', 'processing_node', 'created_at', 'status', 'last_error')
list_display = ('id', 'name', 'project', 'processing_node', 'created_at', 'status', 'last_error')
list_filter = ('status', 'project',)
search_fields = ('id', 'project__name')
search_fields = ('id', 'name', 'project__name')
admin.site.register(Task, TaskAdmin)
class ImageUploadAdmin(admin.ModelAdmin):
readonly_fields = ('image',)
admin.site.register(ImageUpload, ImageUploadAdmin)
admin.site.register(Preset, admin.ModelAdmin)
@@ -259,3 +263,14 @@ class PluginAdmin(admin.ModelAdmin):
admin.site.register(Plugin, PluginAdmin)
class ProfileInline(admin.StackedInline):
model = Profile
can_delete = False
class UserAdmin(BaseUserAdmin):
inlines = [ProfileInline]
# Re-register UserAdmin
admin.site.unregister(User)
admin.site.register(User, UserAdmin)

View file

@@ -1,7 +1,10 @@
from django.contrib.auth.models import User, Group
from rest_framework import serializers, viewsets, generics, status
from app.models import Profile
from rest_framework import serializers, viewsets, generics, status, exceptions
from rest_framework.decorators import action
from rest_framework.permissions import IsAdminUser
from rest_framework.response import Response
from django.core.exceptions import ObjectDoesNotExist
from django.contrib.auth.hashers import make_password
from app import models
@@ -20,6 +23,7 @@ class AdminUserViewSet(viewsets.ModelViewSet):
if email is not None:
queryset = queryset.filter(email=email)
return queryset
def create(self, request):
data = request.data.copy()
password = data.get('password')
@@ -44,3 +48,37 @@ class AdminGroupViewSet(viewsets.ModelViewSet):
if name is not None:
queryset = queryset.filter(name=name)
return queryset
class ProfileSerializer(serializers.ModelSerializer):
class Meta:
model = Profile
exclude = ('id', )
read_only_fields = ('user', )
class AdminProfileViewSet(viewsets.ModelViewSet):
pagination_class = None
serializer_class = ProfileSerializer
permission_classes = [IsAdminUser]
lookup_field = 'user'
def get_queryset(self):
return Profile.objects.all()
@action(detail=True, methods=['post'])
def update_quota_deadline(self, request, user=None):
try:
hours = float(request.data.get('hours', ''))
if hours < 0:
raise ValueError("hours must be >= 0")
except ValueError as e:
raise exceptions.ValidationError(str(e))
try:
p = Profile.objects.get(user=user)
except ObjectDoesNotExist:
raise exceptions.NotFound()
return Response({'deadline': p.set_quota_deadline(hours)}, status=status.HTTP_200_OK)

View file

@@ -0,0 +1,39 @@
from django.contrib.auth.models import User
from django.contrib.auth import login
from rest_framework.views import APIView
from rest_framework import exceptions, permissions, parsers
from rest_framework.response import Response
from app.auth.backends import get_user_from_external_auth_response
import requests
from webodm import settings
class ExternalTokenAuth(APIView):
permission_classes = (permissions.AllowAny,)
parser_classes = (parsers.JSONParser, parsers.FormParser,)
def post(self, request):
# This should never happen
if settings.EXTERNAL_AUTH_ENDPOINT == '':
return Response({'error': 'EXTERNAL_AUTH_ENDPOINT not set'})
token = request.COOKIES.get('external_access_token', '')
if token == '':
return Response({'error': 'external_access_token cookie not set'})
try:
r = requests.post(settings.EXTERNAL_AUTH_ENDPOINT, headers={
'Authorization': "Bearer %s" % token
})
res = r.json()
if res.get('user_id') is not None:
user = get_user_from_external_auth_response(res)
if user is not None:
login(request, user, backend='django.contrib.auth.backends.ModelBackend')
return Response({'redirect': '/'})
else:
return Response({'error': 'Invalid credentials'})
else:
return Response({'error': res.get('message', 'Invalid external server response')})
except Exception as e:
return Response({'error': str(e)})

View file

@@ -44,15 +44,16 @@ algos = {
'expr': '(G - R) / (G + R - B)',
'help': _('Visual Atmospheric Resistance Index shows the areas of vegetation.'),
'range': (-1, 1)
},
'MPRI': {
'expr': '(G - R) / (G + R)',
'help': _('Modified Photochemical Reflectance Index'),
'range': (-1, 1)
},
'EXG': {
'expr': '(2 * G) - (R + B)',
'help': _('Excess Green Index (derived from only the RGB bands) emphasizes the greenness of leafy crops such as potatoes.')
},
'TGI': {
'expr': '(G - 0.39) * (R - 0.61) * B',
'help': _('Triangular Greenness Index (derived from only the RGB bands) performs similarly to EXG but with improvements over certain environments.')
},
'BAI': {
'expr': '1.0 / (((0.1 - R) ** 2) + ((0.06 - N) ** 2))',
'help': _('Burn Area Index hightlights burned land in the red to near-infrared spectrum.')
@@ -110,13 +111,13 @@ algos = {
'help': _('Atmospherically Resistant Vegetation Index. Useful when working with imagery for regions with high atmospheric aerosol content.'),
'range': (-1, 1)
},
'Thermal C': {
'Celsius': {
'expr': 'L',
'help': _('Thermal temperature in Celsius degrees.')
'help': _('Temperature in Celsius degrees.')
},
'Thermal K': {
'Kelvin': {
'expr': 'L * 100 + 27315',
'help': _('Thermal temperature in Centikelvin degrees.')
'help': _('Temperature in Centikelvin degrees.')
},
# more?
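The renamed `Kelvin` formula stores temperatures in centikelvin (hundredths of a kelvin): assuming the LWIR band value `L` is in degrees Celsius, `L * 100 + 27315` shifts by 273.15 K and scales by 100 in one step. A quick round-trip check of the two conversions:

```python
# Conversions implied by the Celsius/Kelvin thermal formulas above.
# Assumption: the raw LWIR band value L is already in degrees Celsius.

def celsius_to_centikelvin(c: float) -> float:
    # Same expression as the 'Kelvin' formula: L * 100 + 27315
    return c * 100 + 27315

def centikelvin_to_celsius(ck: float) -> float:
    # Inverse, useful when reading centikelvin values back out
    return (ck - 27315) / 100

print(celsius_to_centikelvin(25.0))     # 29815.0
print(centikelvin_to_celsius(29815.0))  # 25.0
```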
@@ -139,16 +140,22 @@ camera_filters = [
'NRB',
'RGBN',
'RGNRe',
'GRReN',
'RGBNRe',
'BGRNRe',
'BGRReN',
'RGBNRe',
'RGBReN',
'RGBNReL',
'BGRNReL',
'BGRReNL',
'RGBNRePL',
'L', # FLIR camera has a single LWIR band
# more?
# TODO: certain cameras have only two bands? eg. MAPIR NDVI BLUE+NIR
]
@@ -162,7 +169,7 @@ def lookup_formula(algo, band_order = 'RGB'):
if algo not in algos:
raise ValueError("Cannot find algorithm " + algo)
input_bands = tuple(b for b in re.split(r"([A-Z][a-z]*)", band_order) if b != "")
def repl(matches):
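The `re.split` call in `lookup_formula` tokenizes a `band_order` string into band names: each token is an uppercase letter followed by optional lowercase letters, so two-letter bands like `Re` (red edge) survive intact. Reproducing just that line:

```python
import re

def split_bands(band_order: str) -> tuple:
    # Same tokenization as lookup_formula above: capturing group in
    # re.split returns the matched band names; empty strings are dropped.
    return tuple(b for b in re.split(r"([A-Z][a-z]*)", band_order) if b != "")

print(split_bands("RGB"))     # ('R', 'G', 'B')
print(split_bands("BGRNRe"))  # ('B', 'G', 'R', 'N', 'Re')
```

This is why the `camera_filters` entries can mix one- and two-letter band codes without ambiguity.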
@@ -184,7 +191,7 @@ def get_algorithm_list(max_bands=3):
if k.startswith("_"):
continue
cam_filters = get_camera_filters_for(algos[k], max_bands)
cam_filters = get_camera_filters_for(algos[k]['expr'], max_bands)
if len(cam_filters) == 0:
continue
@@ -197,9 +204,9 @@ def get_algorithm_list(max_bands=3):
return res
def get_camera_filters_for(algo, max_bands=3):
@lru_cache(maxsize=100)
def get_camera_filters_for(expr, max_bands=3):
result = []
expr = algo['expr']
pattern = re.compile("([A-Z]+?[a-z]*)")
bands = list(set(re.findall(pattern, expr)))
for f in camera_filters:
@@ -217,3 +224,45 @@ def get_camera_filters_for(algo, max_bands=3):
return result
@lru_cache(maxsize=1)
def get_bands_lookup():
bands_aliases = {
'R': ['red', 'r'],
'G': ['green', 'g'],
'B': ['blue', 'b'],
'N': ['nir', 'n'],
'Re': ['rededge', 're'],
'P': ['panchro', 'p'],
'L': ['lwir', 'l']
}
bands_lookup = {}
for band in bands_aliases:
for a in bands_aliases[band]:
bands_lookup[a] = band
return bands_lookup
def get_auto_bands(orthophoto_bands, formula):
algo = algos.get(formula)
if not algo:
raise ValueError("Cannot find formula: " + formula)
max_bands = len(orthophoto_bands) - 1 # minus alpha
filters = get_camera_filters_for(algo['expr'], max_bands)
if not filters:
raise ValueError(f"Cannot find filters for {algo} with max bands {max_bands}")
bands_lookup = get_bands_lookup()
band_order = ""
for band in orthophoto_bands:
if band['name'] == 'alpha' or (not band['description']):
continue
f_band = bands_lookup.get(band['description'].lower())
if f_band is not None:
band_order += f_band
if band_order in filters:
return band_order, True
else:
return filters[0], False # Fallback
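The heart of `get_auto_bands` is the alias lookup: orthophoto band descriptions (e.g. from raster metadata) are normalized to the single-letter codes that camera filter strings are made of, skipping the alpha band. A self-contained sketch of that mapping (the sample band list below is hypothetical):

```python
# Same alias table as get_bands_lookup above, inverted into a flat lookup.
bands_aliases = {
    'R': ['red', 'r'], 'G': ['green', 'g'], 'B': ['blue', 'b'],
    'N': ['nir', 'n'], 'Re': ['rededge', 're'],
    'P': ['panchro', 'p'], 'L': ['lwir', 'l'],
}
bands_lookup = {a: band for band, aliases in bands_aliases.items()
                for a in aliases}

# Hypothetical band list as it might come from a 4-band RGBA orthophoto
orthophoto_bands = [
    {'name': '1', 'description': 'Red'},
    {'name': '2', 'description': 'Green'},
    {'name': '3', 'description': 'Blue'},
    {'name': 'alpha', 'description': None},
]

# Build the band_order string the way get_auto_bands does: skip alpha and
# undescribed bands, map each description to its code.
band_order = "".join(
    bands_lookup.get(b['description'].lower(), '')
    for b in orthophoto_bands
    if b['name'] != 'alpha' and b['description']
)
print(band_order)  # RGB
```

If the resulting `band_order` appears in the algorithm's camera filters, `get_auto_bands` returns it with `True`; otherwise it falls back to the first supported filter with `False`.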

View file

@@ -4,7 +4,6 @@ import math
from .tasks import TaskNestedView
from rest_framework import exceptions
from app.models import ImageUpload
from app.models.task import assets_directory_path
from PIL import Image, ImageDraw, ImageOps
from django.http import HttpResponse
@@ -33,12 +32,7 @@ class Thumbnail(TaskNestedView):
Generate a thumbnail on the fly for a particular task's image
"""
task = self.get_and_check_task(request, pk)
image = ImageUpload.objects.filter(task=task, image=assets_directory_path(task.id, task.project.id, image_filename)).first()
if image is None:
raise exceptions.NotFound()
image_path = image.path()
image_path = task.get_image_path(image_filename)
if not os.path.isfile(image_path):
raise exceptions.NotFound()
@@ -146,12 +140,7 @@ class ImageDownload(TaskNestedView):
Download a task's image
"""
task = self.get_and_check_task(request, pk)
image = ImageUpload.objects.filter(task=task, image=assets_directory_path(task.id, task.project.id, image_filename)).first()
if image is None:
raise exceptions.NotFound()
image_path = image.path()
image_path = task.get_image_path(image_filename)
if not os.path.isfile(image_path):
raise exceptions.NotFound()

View file

@@ -6,7 +6,7 @@ from rest_framework.response import Response
from rest_framework.views import APIView
from nodeodm.models import ProcessingNode
from webodm import settings
class ProcessingNodeSerializer(serializers.ModelSerializer):
online = serializers.SerializerMethodField()
@@ -49,6 +49,18 @@ class ProcessingNodeViewSet(viewsets.ModelViewSet):
serializer_class = ProcessingNodeSerializer
queryset = ProcessingNode.objects.all()
def list(self, request, *args, **kwargs):
queryset = self.filter_queryset(self.get_queryset())
if settings.UI_MAX_PROCESSING_NODES is not None:
queryset = queryset[:settings.UI_MAX_PROCESSING_NODES]
if settings.NODE_OPTIMISTIC_MODE:
for pn in queryset:
pn.update_node_info()
serializer = self.get_serializer(queryset, many=True)
return Response(serializer.data)
class ProcessingNodeOptionsView(APIView):
"""

View file

@@ -197,10 +197,12 @@ class ProjectViewSet(viewsets.ModelViewSet):
return Response({'success': True}, status=status.HTTP_200_OK)
def destroy(self, request, pk=None):
project = get_and_check_project(request, pk, ('delete_project', ))
project = get_and_check_project(request, pk, ('view_project', ))
# Owner? Delete the project
if project.owner == request.user or request.user.is_superuser:
get_and_check_project(request, pk, ('delete_project', ))
return super().destroy(self, request, pk=pk)
else:
# Do not remove the project, simply remove all user's permissions to the project

View file

@@ -1,9 +1,11 @@
import os
import re
import shutil
from wsgiref.util import FileWrapper
import mimetypes
from shutil import copyfileobj
from shutil import copyfileobj, move
from django.core.exceptions import ObjectDoesNotExist, SuspiciousFileOperation, ValidationError
from django.core.files.uploadedfile import InMemoryUploadedFile
from django.db import transaction
@@ -23,7 +25,7 @@ from .common import get_and_check_project, get_asset_download_filename
from .tags import TagsField
from app.security import path_traversal_check
from django.utils.translation import gettext_lazy as _
from webodm import settings
def flatten_files(request_files):
# MultiValueDict in, flat array of files out
@@ -74,8 +76,8 @@ class TaskSerializer(serializers.ModelSerializer):
class Meta:
model = models.Task
exclude = ('console_output', 'orthophoto_extent', 'dsm_extent', 'dtm_extent', )
read_only_fields = ('processing_time', 'status', 'last_error', 'created_at', 'pending_action', 'available_assets', )
exclude = ('orthophoto_extent', 'dsm_extent', 'dtm_extent', )
read_only_fields = ('processing_time', 'status', 'last_error', 'created_at', 'pending_action', 'available_assets', 'size', )
class TaskViewSet(viewsets.ViewSet):
"""
@@ -83,7 +85,7 @@ class TaskViewSet(viewsets.ViewSet):
A task represents a set of images and other input to be sent to a processing node.
Once a processing node completes processing, results are stored in the task.
"""
queryset = models.Task.objects.all().defer('orthophoto_extent', 'dsm_extent', 'dtm_extent', 'console_output', )
queryset = models.Task.objects.all().defer('orthophoto_extent', 'dsm_extent', 'dtm_extent', )
parser_classes = (parsers.MultiPartParser, parsers.JSONParser, parsers.FormParser, )
ordering_fields = '__all__'
@ -145,8 +147,7 @@ class TaskViewSet(viewsets.ViewSet):
raise exceptions.NotFound()
line_num = max(0, int(request.query_params.get('line', 0)))
output = task.console_output or ""
return Response('\n'.join(output.rstrip().split('\n')[line_num:]))
return Response('\n'.join(task.console.output().rstrip().split('\n')[line_num:]))
def list(self, request, project_pk=None):
get_and_check_project(request, project_pk)
@ -179,11 +180,12 @@ class TaskViewSet(viewsets.ViewSet):
raise exceptions.NotFound()
task.partial = False
task.images_count = models.ImageUpload.objects.filter(task=task).count()
task.images_count = len(task.scan_images())
if task.images_count < 2:
raise exceptions.ValidationError(detail=_("You need to upload at least 2 images before commit"))
if task.images_count < 1:
raise exceptions.ValidationError(detail=_("You need to upload at least 1 file before commit"))
task.update_size()
task.save()
worker_tasks.process_task.delay(task.id)
@ -202,21 +204,17 @@ class TaskViewSet(viewsets.ViewSet):
raise exceptions.NotFound()
files = flatten_files(request.FILES)
if len(files) == 0:
raise exceptions.ValidationError(detail=_("No files uploaded"))
with transaction.atomic():
for image in files:
models.ImageUpload.objects.create(task=task, image=image)
task.images_count = models.ImageUpload.objects.filter(task=task).count()
uploaded = task.handle_images_upload(files)
task.images_count = len(task.scan_images())
# Update other parameters such as processing node, task name, etc.
serializer = TaskSerializer(task, data=request.data, partial=True)
serializer.is_valid(raise_exception=True)
serializer.save()
return Response({'success': True}, status=status.HTTP_200_OK)
return Response({'success': True, 'uploaded': uploaded}, status=status.HTTP_200_OK)
@action(detail=True, methods=['post'])
def duplicate(self, request, pk=None, project_pk=None):
@ -256,9 +254,8 @@ class TaskViewSet(viewsets.ViewSet):
task = models.Task.objects.create(project=project,
pending_action=pending_actions.RESIZE if 'resize_to' in request.data else None)
for image in files:
models.ImageUpload.objects.create(task=task, image=image)
task.images_count = len(files)
task.handle_images_upload(files)
task.images_count = len(task.scan_images())
# Update other parameters such as processing node, task name, etc.
serializer = TaskSerializer(task, data=request.data, partial=True)
@ -299,7 +296,7 @@ class TaskViewSet(viewsets.ViewSet):
class TaskNestedView(APIView):
queryset = models.Task.objects.all().defer('orthophoto_extent', 'dtm_extent', 'dsm_extent', 'console_output', )
queryset = models.Task.objects.all().defer('orthophoto_extent', 'dtm_extent', 'dsm_extent', )
permission_classes = (AllowAny, )
def get_and_check_task(self, request, pk, annotate={}):
@ -368,19 +365,20 @@ class TaskDownloads(TaskNestedView):
# Check and download
try:
asset_fs, is_zipstream = task.get_asset_file_or_zipstream(asset)
asset_fs = task.get_asset_file_or_stream(asset)
except FileNotFoundError:
raise exceptions.NotFound(_("Asset does not exist"))
if not is_zipstream and not os.path.isfile(asset_fs):
is_stream = not isinstance(asset_fs, str)
if not is_stream and not os.path.isfile(asset_fs):
raise exceptions.NotFound(_("Asset does not exist"))
download_filename = request.GET.get('filename', get_asset_download_filename(task, asset))
if not is_zipstream:
return download_file_response(request, asset_fs, 'attachment', download_filename=download_filename)
else:
if is_stream:
return download_file_stream(request, asset_fs, 'attachment', download_filename=download_filename)
else:
return download_file_response(request, asset_fs, 'attachment', download_filename=download_filename)
"""
Raw access to the task's asset folder resources
@ -424,18 +422,52 @@ class TaskAssetsImport(APIView):
if import_url and len(files) > 0:
raise exceptions.ValidationError(detail=_("Cannot create task, either specify a URL or upload 1 file."))
chunk_index = request.data.get('dzchunkindex')
uuid = request.data.get('dzuuid')
total_chunk_count = request.data.get('dztotalchunkcount', None)
# Chunked upload?
tmp_upload_file = None
if len(files) > 0 and chunk_index is not None and uuid is not None and total_chunk_count is not None:
byte_offset = request.data.get('dzchunkbyteoffset', 0)
try:
chunk_index = int(chunk_index)
byte_offset = int(byte_offset)
total_chunk_count = int(total_chunk_count)
except ValueError:
raise exceptions.ValidationError(detail="Some parameters are not integers")
uuid = re.sub('[^0-9a-zA-Z-]+', "", uuid)
tmp_upload_file = os.path.join(settings.FILE_UPLOAD_TEMP_DIR, f"{uuid}.upload")
if os.path.isfile(tmp_upload_file) and chunk_index == 0:
os.unlink(tmp_upload_file)
with open(tmp_upload_file, 'ab') as fd:
fd.seek(byte_offset)
if isinstance(files[0], InMemoryUploadedFile):
for chunk in files[0].chunks():
fd.write(chunk)
else:
with open(files[0].temporary_file_path(), 'rb') as file:
fd.write(file.read())
if chunk_index + 1 < total_chunk_count:
return Response({'uploaded': True}, status=status.HTTP_200_OK)
# Ready to import
with transaction.atomic():
task = models.Task.objects.create(project=project,
auto_processing_node=False,
name=task_name,
import_url=import_url if import_url else "file://all.zip",
status=status_codes.RUNNING,
pending_action=pending_actions.IMPORT)
auto_processing_node=False,
name=task_name,
import_url=import_url if import_url else "file://all.zip",
status=status_codes.RUNNING,
pending_action=pending_actions.IMPORT)
task.create_task_directories()
destination_file = task.assets_path("all.zip")
if len(files) > 0:
destination_file = task.assets_path("all.zip")
# Non-chunked file import
if tmp_upload_file is None and len(files) > 0:
with open(destination_file, 'wb+') as fd:
if isinstance(files[0], InMemoryUploadedFile):
for chunk in files[0].chunks():
@ -443,6 +475,9 @@ class TaskAssetsImport(APIView):
else:
with open(files[0].temporary_file_path(), 'rb') as file:
copyfileobj(file, fd)
elif tmp_upload_file is not None:
# Move
shutil.move(tmp_upload_file, destination_file)
worker_tasks.process_task.delay(task.id)
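The chunked-upload branch above writes each Dropzone chunk at its byte offset into a shared temp file and only kicks off the import once the final chunk arrives. A standalone sketch of that reassembly (using seekable update mode rather than append mode; file names are illustrative):

```python
import os
import tempfile

def write_chunk(path, data, byte_offset, chunk_index, total_chunk_count):
    # A fresh first chunk restarts any stale partial upload, as above.
    if chunk_index == 0 and os.path.isfile(path):
        os.unlink(path)
    # Open for update so writes can be positioned at the chunk's offset.
    mode = "r+b" if os.path.isfile(path) else "wb"
    with open(path, mode) as fd:
        fd.seek(byte_offset)
        fd.write(data)
    # True once every chunk has been received.
    return chunk_index + 1 == total_chunk_count

tmp = os.path.join(tempfile.gettempdir(), "example.upload")
done_first = write_chunk(tmp, b"hello ", 0, 0, 2)
done_last = write_chunk(tmp, b"world", 6, 1, 2)
```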

View file

@ -3,6 +3,7 @@ import rio_tiler.utils
from rasterio.enums import ColorInterp
from rasterio.crs import CRS
from rasterio.features import bounds as featureBounds
from rasterio.errors import NotGeoreferencedWarning
import urllib
import os
from .common import get_asset_download_filename
@ -16,19 +17,25 @@ from rio_tiler.models import Metadata as RioMetadata
from rio_tiler.profiles import img_profiles
from rio_tiler.colormap import cmap as colormap, apply_cmap
from rio_tiler.io import COGReader
from rio_tiler.errors import InvalidColorMapName
from rio_tiler.errors import InvalidColorMapName, AlphaBandWarning
import numpy as np
from .custom_colormaps_helper import custom_colormaps
from app.raster_utils import extension_for_export_format, ZOOM_EXTRA_LEVELS
from .hsvblend import hsv_blend
from .hillshade import LightSource
from .formulas import lookup_formula, get_algorithm_list
from .formulas import lookup_formula, get_algorithm_list, get_auto_bands
from .tasks import TaskNestedView
from rest_framework import exceptions
from rest_framework.response import Response
from worker.tasks import export_raster, export_pointcloud
from django.utils.translation import gettext as _
import warnings
# Disable: NotGeoreferencedWarning: Dataset has no geotransform, gcps, or rpcs. The identity matrix will be returned.
warnings.filterwarnings("ignore", category=NotGeoreferencedWarning)
# Disable: Alpha band was removed from the output data array
warnings.filterwarnings("ignore", category=AlphaBandWarning)
for custom_colormap in custom_colormaps:
colormap = colormap.register(custom_colormap)
@ -134,6 +141,12 @@ class Metadata(TaskNestedView):
if boundaries_feature == '': boundaries_feature = None
if boundaries_feature is not None:
boundaries_feature = json.loads(boundaries_feature)
is_auto_bands_match = False
is_auto_bands = False
if bands == 'auto' and formula:
is_auto_bands = True
bands, is_auto_bands_match = get_auto_bands(task.orthophoto_bands, formula)
try:
expr, hrange = lookup_formula(formula, bands)
if defined_range is not None:
@ -194,6 +207,8 @@ class Metadata(TaskNestedView):
for b in info['statistics']:
info['statistics'][b]['min'] = hrange[0]
info['statistics'][b]['max'] = hrange[1]
info['statistics'][b]['percentiles'][0] = max(hrange[0], info['statistics'][b]['percentiles'][0])
info['statistics'][b]['percentiles'][1] = min(hrange[1], info['statistics'][b]['percentiles'][1])
cmap_labels = {
"viridis": "Viridis",
@ -217,6 +232,8 @@ class Metadata(TaskNestedView):
colormaps = []
algorithms = []
auto_bands = {'filter': '', 'match': None}
if tile_type in ['dsm', 'dtm']:
colormaps = ['viridis', 'jet', 'terrain', 'gist_earth', 'pastel1']
elif formula and bands:
@ -224,9 +241,14 @@ class Metadata(TaskNestedView):
'better_discrete_ndvi',
'viridis', 'plasma', 'inferno', 'magma', 'cividis', 'jet', 'jet_r']
algorithms = *get_algorithm_list(band_count),
if is_auto_bands:
auto_bands['filter'] = bands
auto_bands['match'] = is_auto_bands_match
info['color_maps'] = []
info['algorithms'] = algorithms
info['auto_bands'] = auto_bands
if colormaps:
for cmap in colormaps:
try:
@ -247,6 +269,7 @@ class Metadata(TaskNestedView):
info['maxzoom'] += ZOOM_EXTRA_LEVELS
info['minzoom'] -= ZOOM_EXTRA_LEVELS
info['bounds'] = {'value': src.bounds, 'crs': src.dataset.crs}
return Response(info)
@ -289,6 +312,8 @@ class Tiles(TaskNestedView):
if color_map == '': color_map = None
if hillshade == '' or hillshade == '0': hillshade = None
if tilesize == '' or tilesize is None: tilesize = 256
if bands == 'auto' and formula:
bands, _discard_ = get_auto_bands(task.orthophoto_bands, formula)
try:
tilesize = int(tilesize)
@ -374,7 +399,7 @@ class Tiles(TaskNestedView):
# Hillshading is not a local tile operation and
# requires neighbor tiles to be rendered seamlessly
if hillshade is not None:
tile_buffer = tilesize
tile_buffer = 16
try:
if expr is not None:
@ -446,17 +471,17 @@ class Tiles(TaskNestedView):
# Remove elevation data from edge buffer tiles
# (to keep intensity uniform across tiles)
elevation = tile.data[0]
elevation[0:tilesize, 0:tilesize] = nodata
elevation[tilesize*2:tilesize*3, 0:tilesize] = nodata
elevation[0:tilesize, tilesize*2:tilesize*3] = nodata
elevation[tilesize*2:tilesize*3, tilesize*2:tilesize*3] = nodata
elevation[0:tile_buffer, 0:tile_buffer] = nodata
elevation[tile_buffer+tilesize:tile_buffer*2+tilesize, 0:tile_buffer] = nodata
elevation[0:tile_buffer, tile_buffer+tilesize:tile_buffer*2+tilesize] = nodata
elevation[tile_buffer+tilesize:tile_buffer*2+tilesize, tile_buffer+tilesize:tile_buffer*2+tilesize] = nodata
intensity = ls.hillshade(elevation, dx=dx, dy=dy, vert_exag=hillshade)
intensity = intensity[tilesize:tilesize * 2, tilesize:tilesize * 2]
intensity = intensity[tile_buffer:tile_buffer+tilesize, tile_buffer:tile_buffer+tilesize]
if intensity is not None:
rgb = tile.post_process(in_range=(rescale_arr,))
rgb_data = rgb.data[:,tilesize:tilesize * 2, tilesize:tilesize * 2]
rgb_data = rgb.data[:,tile_buffer:tilesize+tile_buffer, tile_buffer:tilesize+tile_buffer]
if colormap:
rgb, _discard_ = apply_cmap(rgb_data, colormap.get(color_map))
if rgb.data.shape[0] != 3:
@ -465,7 +490,7 @@ class Tiles(TaskNestedView):
intensity = intensity * 255.0
rgb = hsv_blend(rgb, intensity)
if rgb is not None:
mask = tile.mask[tilesize:tilesize * 2, tilesize:tilesize * 2]
mask = tile.mask[tile_buffer:tilesize+tile_buffer, tile_buffer:tilesize+tile_buffer]
return HttpResponse(
render(rgb, mask, img_format=driver, **options),
content_type="image/{}".format(ext)
@ -537,6 +562,9 @@ class Export(TaskNestedView):
raise exceptions.ValidationError(_("Both formula and bands parameters are required"))
if formula and bands:
if bands == 'auto':
bands, _discard_ = get_auto_bands(task.orthophoto_bands, formula)
try:
expr, _discard_ = lookup_formula(formula, bands)
except ValueError as e:
@ -604,4 +632,4 @@ class Export(TaskNestedView):
else:
celery_task_id = export_pointcloud.delay(url, epsg=epsg,
format=export_format).task_id
return Response({'celery_task_id': celery_task_id, 'filename': filename})
return Response({'celery_task_id': celery_task_id, 'filename': filename})
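The hillshade changes above shrink the render buffer from a full extra tile to 16 pixels and adjust every slice to match: the working image is `tile_buffer*2 + tilesize` pixels per side, and the final tile is cut from its center. The index arithmetic can be checked on a plain 1-D row, no raster data needed:

```python
def center_slice(tile_buffer, tilesize):
    # The buffered row has tile_buffer extra pixels on each side of the tile.
    row = list(range(tile_buffer * 2 + tilesize))
    # The same slice the tiler uses to crop intensity/rgb/mask back to the tile.
    return row[tile_buffer:tile_buffer + tilesize]

cropped = center_slice(16, 256)
```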

View file

@ -6,13 +6,14 @@ from .projects import ProjectViewSet
from .tasks import TaskViewSet, TaskDownloads, TaskAssets, TaskAssetsImport
from .imageuploads import Thumbnail, ImageDownload
from .processingnodes import ProcessingNodeViewSet, ProcessingNodeOptionsView
from .admin import AdminUserViewSet, AdminGroupViewSet
from .admin import AdminUserViewSet, AdminGroupViewSet, AdminProfileViewSet
from rest_framework_nested import routers
from rest_framework_jwt.views import obtain_jwt_token
from .tiler import TileJson, Bounds, Metadata, Tiles, Export
from .potree import Scene, CameraView
from .workers import CheckTask, GetTaskResult
from .users import UsersList
from .externalauth import ExternalTokenAuth
from webodm import settings
router = routers.DefaultRouter()
@ -26,6 +27,7 @@ tasks_router.register(r'tasks', TaskViewSet, basename='projects-tasks')
admin_router = routers.DefaultRouter()
admin_router.register(r'admin/users', AdminUserViewSet, basename='admin-users')
admin_router.register(r'admin/groups', AdminGroupViewSet, basename='admin-groups')
admin_router.register(r'admin/profiles', AdminProfileViewSet, basename='admin-profiles')
urlpatterns = [
url(r'processingnodes/options/$', ProcessingNodeOptionsView.as_view()),
@ -56,9 +58,12 @@ urlpatterns = [
url(r'^auth/', include('rest_framework.urls')),
url(r'^token-auth/', obtain_jwt_token),
url(r'^plugins/(?P<plugin_name>[^/.]+)/(.*)$', api_view_handler)
url(r'^plugins/(?P<plugin_name>[^/.]+)/(.*)$', api_view_handler),
]
if settings.ENABLE_USERS_API:
urlpatterns.append(url(r'users', UsersList.as_view()))
if settings.EXTERNAL_AUTH_ENDPOINT != '':
urlpatterns.append(url(r'^external-token-auth/', ExternalTokenAuth.as_view()))

View file

@ -0,0 +1,88 @@
import requests
from django.contrib.auth.backends import ModelBackend
from django.contrib.auth.models import User
from nodeodm.models import ProcessingNode
from webodm import settings
from guardian.shortcuts import assign_perm
import logging
logger = logging.getLogger('app.logger')
def get_user_from_external_auth_response(res):
if 'message' in res or 'error' in res:
return None
if 'user_id' in res and 'username' in res:
try:
user = User.objects.get(pk=res['user_id'])
except User.DoesNotExist:
user = User(pk=res['user_id'], username=res['username'])
user.save()
# Update user info
if user.username != res['username']:
user.username = res['username']
user.save()
maxQuota = -1
if 'maxQuota' in res:
maxQuota = res['maxQuota']
if 'node' in res and 'limits' in res['node'] and 'maxQuota' in res['node']['limits']:
maxQuota = res['node']['limits']['maxQuota']
# Update quotas
if user.profile.quota != maxQuota:
user.profile.quota = maxQuota
user.save()
# Setup/update processing node
if 'node' in res and 'hostname' in res['node'] and 'port' in res['node']:
hostname = res['node']['hostname']
port = res['node']['port']
token = res['node'].get('token', '')
# Only add/update if a token is provided, since we use
# tokens as unique identifiers for hostname/port updates
if token != "":
try:
node = ProcessingNode.objects.get(token=token)
if node.hostname != hostname or node.port != port:
node.hostname = hostname
node.port = port
node.save()
except ProcessingNode.DoesNotExist:
node = ProcessingNode(hostname=hostname, port=port, token=token)
node.save()
if not user.has_perm('view_processingnode', node):
assign_perm('view_processingnode', user, node)
return user
else:
return None
class ExternalBackend(ModelBackend):
def authenticate(self, request, username=None, password=None):
if settings.EXTERNAL_AUTH_ENDPOINT == "":
return None
try:
r = requests.post(settings.EXTERNAL_AUTH_ENDPOINT, {
'username': username,
'password': password
}, headers={'Accept': 'application/json'})
res = r.json()
return get_user_from_external_auth_response(res)
except Exception:
return None
def get_user(self, user_id):
if settings.EXTERNAL_AUTH_ENDPOINT == "":
return None
try:
return User.objects.get(pk=user_id)
except User.DoesNotExist:
return None
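In `get_user_from_external_auth_response()` above, a node-level quota limit takes precedence over the top-level `maxQuota`. That lookup order, extracted as a pure function (the response shape follows the code above):

```python
def resolve_max_quota(res, default=-1):
    # The top-level maxQuota applies first...
    quota = res.get('maxQuota', default)
    # ...but a node-level limit, when present, overrides it.
    node_limits = res.get('node', {}).get('limits', {})
    if 'maxQuota' in node_limits:
        quota = node_limits['maxQuota']
    return quota
```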

View file

@ -26,7 +26,7 @@ from webodm.wsgi import booted
def boot():
# booted is a shared memory variable to keep track of boot status
# as multiple gunicorn workers could trigger the boot sequence twice
if (not settings.DEBUG and booted.value) or settings.MIGRATING: return
if (not settings.DEBUG and booted.value) or settings.MIGRATING or settings.FLUSHING: return
booted.value = True
logger = logging.getLogger('app.logger')
@ -37,7 +37,7 @@ def boot():
logger.warning("Debug mode is ON (for development this is OK)")
# Silence django's "Warning: Session data corrupted" messages
session_logger = logging.getLogger("django.security.SuspiciousSession")
session_logger = logging.getLogger("django.SuspiciousOperation.SuspiciousSession")
session_logger.disabled = True
# Make sure our app/media/tmp folder exists
@ -110,8 +110,7 @@ def add_default_presets():
defaults={'options': [{'name': 'auto-boundary', 'value': True},
{'name': 'dsm', 'value': True},
{'name': 'dem-resolution', 'value': '2'},
{'name': 'pc-quality', 'value': 'high'},
{'name': 'use-3dmesh', 'value': True}]})
{'name': 'pc-quality', 'value': 'high'}]})
Preset.objects.update_or_create(name='3D Model', system=True,
defaults={'options': [{'name': 'auto-boundary', 'value': True},
{'name': 'mesh-octree-depth', 'value': "12"},
@ -121,24 +120,13 @@ def add_default_presets():
Preset.objects.update_or_create(name='Buildings', system=True,
defaults={'options': [{'name': 'auto-boundary', 'value': True},
{'name': 'mesh-size', 'value': '300000'},
{'name': 'pc-geometric', 'value': True},
{'name': 'feature-quality', 'value': 'high'},
{'name': 'pc-quality', 'value': 'high'}]})
Preset.objects.update_or_create(name='Buildings Ultra Quality', system=True,
defaults={'options': [{'name': 'auto-boundary', 'value': True},
{'name': 'mesh-size', 'value': '300000'},
{'name': 'pc-geometric', 'value': True},
{'name': 'feature-quality', 'value': 'ultra'},
{'name': 'pc-quality', 'value': 'ultra'}]})
Preset.objects.update_or_create(name='Point of Interest', system=True,
defaults={'options': [{'name': 'auto-boundary', 'value': True},
{'name': 'mesh-size', 'value': '300000'},
{'name': 'use-3dmesh', 'value': True}]})
Preset.objects.update_or_create(name='Forest', system=True,
defaults={'options': [{'name': 'auto-boundary', 'value': True},
{'name': 'min-num-features', 'value': '18000'},
{'name': 'use-3dmesh', 'value': True},
{'name': 'feature-quality', 'value': 'ultra'}]})
{'name': 'feature-quality', 'value': 'medium'}]})
Preset.objects.update_or_create(name='DSM + DTM', system=True,
defaults={'options': [{'name': 'auto-boundary', 'value': True},
{'name': 'dsm', 'value': True},

View file

@ -0,0 +1,53 @@
import os
import logging
logger = logging.getLogger('app.logger')
class Console:
def __init__(self, file):
self.file = file
self.base_dir = os.path.dirname(self.file)
self.parent_dir = os.path.dirname(self.base_dir)
def __repr__(self):
return "<Console output: %s>" % self.file
def __str__(self):
if not os.path.isfile(self.file):
return ""
try:
with open(self.file, 'r', encoding="utf-8") as f:
return f.read()
except IOError:
logger.warning("Cannot read console file: %s" % self.file)
return ""
def __add__(self, other):
self.append(other)
return self
def output(self):
return str(self)
def append(self, text):
if os.path.isdir(self.parent_dir):
try:
# Write
if not os.path.isdir(self.base_dir):
os.makedirs(self.base_dir, exist_ok=True)
with open(self.file, "a", encoding="utf-8") as f:
f.write(text)
except IOError:
logger.warning("Cannot append to console file: %s" % self.file)
def reset(self, text = ""):
if os.path.isdir(self.parent_dir):
try:
if not os.path.isdir(self.base_dir):
os.makedirs(self.base_dir, exist_ok=True)
with open(self.file, "w", encoding="utf-8") as f:
f.write(text)
except IOError:
logger.warning("Cannot reset console file: %s" % self.file)
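The new Console class replaces the `console_output` database column with a plain text file, and its `__add__` override lets callers keep writing `task.console += "line"`. A condensed, self-contained sketch of that append/read behavior (re-implementing just enough to run standalone; the class name is illustrative):

```python
import os
import tempfile

class MiniConsole:
    # Minimal stand-in for Console: a file-backed, append-only log.
    def __init__(self, file):
        self.file = file

    def __add__(self, text):
        # Supports the `console += "..."` idiom used by task code.
        self.append(text)
        return self

    def append(self, text):
        os.makedirs(os.path.dirname(self.file), exist_ok=True)
        with open(self.file, "a", encoding="utf-8") as f:
            f.write(text)

    def output(self):
        if not os.path.isfile(self.file):
            return ""
        with open(self.file, "r", encoding="utf-8") as f:
            return f.read()

console = MiniConsole(os.path.join(tempfile.mkdtemp(), "data", "console_output.txt"))
console += "Processing started\n"
console += "Processing done\n"
```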

View file

@ -7,77 +7,3 @@ logger = logging.getLogger('app.logger')
# Make the SETTINGS object available to all templates
def load(request=None):
return {'SETTINGS': Setting.objects.first()}
# Helper functions for libsass
def theme(color):
"""Return a theme color from the currently selected theme"""
try:
return getattr(load()['SETTINGS'].theme, color)
except Exception as e:
logger.warning("Cannot load configuration from theme(): " + e.message)
return "blue" # dah buh dih ah buh daa..
def complementary(hexcolor):
"""Returns complementary RGB color
Example: complementaryColor('#FFFFFF') --> '#000000'
"""
if hexcolor[0] == '#':
hexcolor = hexcolor[1:]
rgb = (hexcolor[0:2], hexcolor[2:4], hexcolor[4:6])
comp = ['%02X' % (255 - int(a, 16)) for a in rgb]
return '#' + ''.join(comp)
def scaleby(hexcolor, scalefactor, ignore_value = False):
"""
Scales a hex string by ``scalefactor``, but is color dependent, unless ignore_value is True
scalefactor is now always between 0 and 1. A value of 0.8
will cause bright colors to become darker and
dark colors to become brighter by 20%
"""
def calculate(hexcolor, scalefactor):
"""
Scales a hex string by ``scalefactor``. Returns scaled hex string.
To darken the color, use a float value between 0 and 1.
To brighten the color, use a float value greater than 1.
>>> colorscale("#DF3C3C", .5)
#6F1E1E
>>> colorscale("#52D24F", 1.6)
#83FF7E
>>> colorscale("#4F75D2", 1)
#4F75D2
"""
def clamp(val, minimum=0, maximum=255):
if val < minimum:
return minimum
if val > maximum:
return maximum
return int(val)
hexcolor = hexcolor.strip('#')
if scalefactor < 0 or len(hexcolor) != 6:
return hexcolor
r, g, b = int(hexcolor[:2], 16), int(hexcolor[2:4], 16), int(hexcolor[4:], 16)
r = clamp(r * scalefactor)
g = clamp(g * scalefactor)
b = clamp(b * scalefactor)
return "#%02x%02x%02x" % (r, g, b)
hexcolor = hexcolor.strip('#')
scalefactor = abs(float(scalefactor.value))
scalefactor = min(1.0, max(0, scalefactor))
r, g, b = int(hexcolor[:2], 16), int(hexcolor[2:4], 16), int(hexcolor[4:], 16)
value = max(r, g, b)
return calculate(hexcolor, scalefactor if ignore_value or value >= 127 else 2 - scalefactor)
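Among the libsass helpers removed above, `complementary()` inverts each RGB channel. Its behavior, reproduced here so the docstring's example can be verified:

```python
def complementary(hexcolor):
    """Returns the complementary RGB color.
    Example: complementary('#FFFFFF') --> '#000000'
    """
    if hexcolor[0] == '#':
        hexcolor = hexcolor[1:]
    rgb = (hexcolor[0:2], hexcolor[2:4], hexcolor[4:6])
    # Invert each channel: 255 minus its value.
    comp = ['%02X' % (255 - int(a, 16)) for a in rgb]
    return '#' + ''.join(comp)
```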

View file

@ -0,0 +1,63 @@
import os
from django.core.management.base import BaseCommand
from django.core.management import call_command
from app.models import Project
from webodm import settings
class Command(BaseCommand):
requires_system_checks = []
def add_arguments(self, parser):
parser.add_argument("action", type=str, choices=['mediapattern'])
parser.add_argument("--skip-images", action='store_true', required=False, help="Skip images")
parser.add_argument("--skip-no-quotas", action='store_true', required=False, help="Skip directories owned by users with no quota (0)")
parser.add_argument("--skip-tiles", action='store_true', required=False, help="Skip tiled assets which can be regenerated from other data")
parser.add_argument("--skip-legacy-textured-models", action='store_true', required=False, help="Skip textured models in OBJ format")
super(Command, self).add_arguments(parser)
def handle(self, **options):
if options.get('action') == 'mediapattern':
print("# BorgBackup pattern file for media directory")
print("# Generated with WebODM")
print("")
print("# Skip anything but project folder")
for d in os.listdir(settings.MEDIA_ROOT):
if d != "project":
print(f"! {d}")
if options.get('skip_no_quotas'):
skip_projects = Project.objects.filter(owner__profile__quota=0).order_by('id')
else:
skip_projects = []
print("")
print("# Skip projects")
for sp in skip_projects:
print("- " + os.path.join("project", str(sp.id)))
if options.get('skip_images'):
print("")
print("# Skip images/other files")
print("- project/*/task/*/*.*")
if options.get('skip_tiles'):
print("")
print("# Skip entwine/potree folders")
print("! project/*/task/*/assets/entwine_pointcloud")
print("! project/*/task/*/assets/potree_pointcloud")
print("")
print("# Skip tiles folders")
print("! project/*/task/*/assets/*_tiles")
print("# Skip data")
print("! project/*/task/*/data")
if options.get('skip_legacy_textured_models'):
print("")
print("# Skip OBJ texture model files")
print("+ project/*/task/*/assets/odm_texturing/*.glb")
print("- project/*/task/*/assets/odm_texturing")
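The `mediapattern` action above emits a BorgBackup pattern file: everything outside the `project` folder is excluded, then optional filters drop zero-quota projects, images, and regenerable tiles. A reduced sketch of that emission logic as a pure function (the function name and arguments are illustrative, not part of the command):

```python
def media_patterns(top_level_dirs, skip_project_ids=(), skip_images=False):
    # Exclude everything except the project folder, as the command does.
    lines = [f"! {d}" for d in top_level_dirs if d != "project"]
    # Drop whole projects (e.g. those owned by zero-quota users).
    lines += [f"- project/{pid}" for pid in skip_project_ids]
    if skip_images:
        lines.append("- project/*/task/*/*.*")
    return lines

patterns = media_patterns(["project", "tmp", "cache"],
                          skip_project_ids=[7], skip_images=True)
```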

View file

@ -8,17 +8,14 @@ import uuid, os, pickle, tempfile
from webodm import settings
tasks = []
imageuploads = []
task_ids = {} # map old task IDs --> new task IDs
def dump(apps, schema_editor):
global tasks, imageuploads, task_ids
global tasks, task_ids
Task = apps.get_model('app', 'Task')
ImageUpload = apps.get_model('app', 'ImageUpload')
tasks = list(Task.objects.all().values('id', 'project'))
imageuploads = list(ImageUpload.objects.all().values('id', 'task'))
# Generate UUIDs
for task in tasks:
@ -31,9 +28,9 @@ def dump(apps, schema_editor):
task_ids[task['id']] = new_id
tmp_path = os.path.join(tempfile.gettempdir(), "public_task_uuids_migration.pickle")
pickle.dump((tasks, imageuploads, task_ids), open(tmp_path, 'wb'))
pickle.dump((tasks, task_ids), open(tmp_path, 'wb'))
if len(tasks) > 0: print("Dumped tasks and imageuploads")
if len(tasks) > 0: print("Dumped tasks")
class Migration(migrations.Migration):

View file

@ -8,7 +8,6 @@ import uuid, os, pickle, tempfile
from webodm import settings
tasks = []
imageuploads = []
task_ids = {} # map old task IDs --> new task IDs
def task_path(project_id, task_id):
@ -44,10 +43,10 @@ def create_uuids(apps, schema_editor):
def restore(apps, schema_editor):
global tasks, imageuploads, task_ids
global tasks, task_ids
tmp_path = os.path.join(tempfile.gettempdir(), "public_task_uuids_migration.pickle")
tasks, imageuploads, task_ids = pickle.load(open(tmp_path, 'rb'))
tasks, task_ids = pickle.load(open(tmp_path, 'rb'))
class Migration(migrations.Migration):

View file

@ -1,54 +0,0 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.11.1 on 2017-11-30 15:41
from __future__ import unicode_literals
from django.db import migrations, models
import os, pickle, tempfile
from webodm import settings
tasks = []
imageuploads = []
task_ids = {} # map old task IDs --> new task IDs
def restoreImageUploadFks(apps, schema_editor):
global imageuploads, task_ids
ImageUpload = apps.get_model('app', 'ImageUpload')
Task = apps.get_model('app', 'Task')
for img in imageuploads:
i = ImageUpload.objects.get(pk=img['id'])
old_image_path = i.image.name
task_id = task_ids[img['task']]
# project/2/task/5/DJI_0032.JPG --> project/2/task/<NEW_TASK_ID>/DJI_0032.JPG
dirs, filename = os.path.split(old_image_path)
head, tail = os.path.split(dirs)
new_image_path = os.path.join(head, str(task_id), filename)
i.task = Task.objects.get(id=task_id)
i.image.name = new_image_path
i.save()
print("{} --> {} (Task {})".format(old_image_path, new_image_path, str(task_id)))
def restore(apps, schema_editor):
global tasks, imageuploads, task_ids
tmp_path = os.path.join(tempfile.gettempdir(), "public_task_uuids_migration.pickle")
tasks, imageuploads, task_ids = pickle.load(open(tmp_path, 'rb'))
class Migration(migrations.Migration):
dependencies = [
('app', '0014_public_task_uuids'),
]
operations = [
migrations.RunPython(restore),
migrations.RunPython(restoreImageUploadFks),
]

View file

@ -9,7 +9,7 @@ from webodm import settings
class Migration(migrations.Migration):
dependencies = [
('app', '0015_public_task_uuids'),
('app', '0014_public_task_uuids'),
]
operations = [

View file

@ -10,7 +10,7 @@ def update_images_count(apps, schema_editor):
for t in Task.objects.all():
print("Updating {}".format(t))
t.images_count = t.imageupload_set.count()
t.images_count = len(t.scan_images())
t.save()

View file

@ -1,6 +1,6 @@
# Generated by Django 2.1.11 on 2019-09-07 13:48
import app.models.image_upload
import app.models
from django.db import migrations, models
@ -14,6 +14,6 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='imageupload',
name='image',
field=models.ImageField(help_text='File uploaded by a user', max_length=512, upload_to=app.models.image_upload.image_directory_path),
field=models.ImageField(help_text='File uploaded by a user', max_length=512, upload_to=app.models.image_directory_path),
),
]

View file

@ -1,7 +1,7 @@
# Generated by Django 2.1.15 on 2021-06-10 18:50
import app.models.image_upload
import app.models.task
from app.models import image_directory_path
import colorfield.fields
from django.conf import settings
import django.contrib.gis.db.models.fields
@ -60,7 +60,7 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='imageupload',
name='image',
field=models.ImageField(help_text='File uploaded by a user', max_length=512, upload_to=app.models.image_upload.image_directory_path, verbose_name='Image'),
field=models.ImageField(help_text='File uploaded by a user', max_length=512, upload_to=image_directory_path, verbose_name='Image'),
),
migrations.AlterField(
model_name='imageupload',

View file

@ -0,0 +1,16 @@
# Generated by Django 2.2.27 on 2023-03-23 17:10
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('app', '0033_auto_20230307_1532'),
]
operations = [
migrations.DeleteModel(
name='ImageUpload',
),
]

View file

@ -0,0 +1,44 @@
# Generated by Django 2.2.27 on 2023-05-19 15:38
import rasterio
import os
import django.contrib.postgres.fields.jsonb
from django.db import migrations
from webodm import settings
def update_orthophoto_bands_fields(apps, schema_editor):
Task = apps.get_model('app', 'Task')
for t in Task.objects.all():
bands = []
orthophoto_path = os.path.join(settings.MEDIA_ROOT, "project", str(t.project.id), "task", str(t.id), "assets", "odm_orthophoto", "odm_orthophoto.tif")
if os.path.isfile(orthophoto_path):
try:
with rasterio.open(orthophoto_path) as f:
bands = [c.name for c in f.colorinterp]
except Exception as e:
print(e)
print("Updating {} (with orthophoto bands: {})".format(t, str(bands)))
t.orthophoto_bands = bands
t.save()
class Migration(migrations.Migration):
dependencies = [
('app', '0034_delete_imageupload'),
]
operations = [
migrations.AddField(
model_name='task',
name='orthophoto_bands',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, default=list, help_text='List of orthophoto bands', verbose_name='Orthophoto Bands'),
),
migrations.RunPython(update_orthophoto_bands_fields),
]

View file

@ -0,0 +1,50 @@
# Generated by Django 2.2.27 on 2023-08-21 14:50
import os
from django.db import migrations, models
from webodm import settings
def task_path(project_id, task_id, *args):
return os.path.join(settings.MEDIA_ROOT,
"project",
str(project_id),
"task",
str(task_id),
*args)
def update_size(task):
try:
total_bytes = 0
for dirpath, _, filenames in os.walk(task_path(task.project.id, task.id)):
for f in filenames:
fp = os.path.join(dirpath, f)
if not os.path.islink(fp):
total_bytes += os.path.getsize(fp)
task.size = (total_bytes / 1024 / 1024)
task.save()
print("Updated {} with size {}".format(task, task.size))
except Exception as e:
print("Cannot update size for task {}: {}".format(task, str(e)))
def update_task_sizes(apps, schema_editor):
Task = apps.get_model('app', 'Task')
for t in Task.objects.all():
update_size(t)
class Migration(migrations.Migration):
dependencies = [
('app', '0035_task_orthophoto_bands'),
]
operations = [
migrations.AddField(
model_name='task',
name='size',
field=models.FloatField(blank=True, default=0.0, help_text='Size of the task on disk in megabytes', verbose_name='Size'),
),
migrations.RunPython(update_task_sizes),
]
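The `update_size()` helper in this migration walks the task directory and stores its size in megabytes, skipping symlinks so linked assets are not double-counted. The same walk as a standalone function (the fixture below is a temp directory, not a real task folder):

```python
import os
import tempfile

def dir_size_mb(path):
    # Sum real file sizes under path; symlinks are skipped, as in the migration.
    total_bytes = 0
    for dirpath, _, filenames in os.walk(path):
        for f in filenames:
            fp = os.path.join(dirpath, f)
            if not os.path.islink(fp):
                total_bytes += os.path.getsize(fp)
    return total_bytes / 1024 / 1024

fixture = tempfile.mkdtemp()
with open(os.path.join(fixture, "a.bin"), "wb") as f:
    f.write(b"\0" * (1024 * 1024))  # exactly 1 MiB
size = dir_size_mb(fixture)
```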

View file

@ -0,0 +1,35 @@
# Generated by Django 2.2.27 on 2023-08-24 16:35
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
def create_profiles(apps, schema_editor):
User = apps.get_model('auth', 'User')
Profile = apps.get_model('app', 'Profile')
for u in User.objects.all():
p = Profile.objects.create(user=u)
p.save()
print("Created user profile for %s" % u.username)
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('app', '0036_task_size'),
]
operations = [
migrations.CreateModel(
name='Profile',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('quota', models.FloatField(blank=True, default=-1, help_text='Maximum disk quota in megabytes', verbose_name='Quota')),
('user', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.RunPython(create_profiles),
]


@@ -0,0 +1,42 @@
# Generated by Django 2.2.27 on 2023-09-11 19:11
import os
from django.db import migrations
from webodm import settings
def data_path(project_id, task_id, *args):
return os.path.join(settings.MEDIA_ROOT,
"project",
str(project_id),
"task",
str(task_id),
"data",
*args)
def dump_console_outputs(apps, schema_editor):
Task = apps.get_model('app', 'Task')
for t in Task.objects.all():
if t.console_output is not None and len(t.console_output) > 0:
dp = data_path(t.project.id, t.id)
os.makedirs(dp, exist_ok=True)
outfile = os.path.join(dp, "console_output.txt")
with open(outfile, "w", encoding="utf-8") as f:
f.write(t.console_output)
print("Wrote console output for %s to %s" % (t, outfile))
else:
print("No task output for %s" % t)
class Migration(migrations.Migration):
dependencies = [
('app', '0037_profile'),
]
operations = [
migrations.RunPython(dump_console_outputs),
migrations.RemoveField(
model_name='task',
name='console_output',
),
]


@@ -0,0 +1,43 @@
# Generated by Django 2.2.27 on 2023-10-02 10:21
import rasterio
import os
import django.contrib.postgres.fields.jsonb
from django.db import migrations
from webodm import settings
def update_orthophoto_bands_fields(apps, schema_editor):
Task = apps.get_model('app', 'Task')
for t in Task.objects.all():
bands = []
orthophoto_path = os.path.join(settings.MEDIA_ROOT, "project", str(t.project.id), "task", str(t.id), "assets", "odm_orthophoto", "odm_orthophoto.tif")
if os.path.isfile(orthophoto_path):
try:
with rasterio.open(orthophoto_path) as f:
names = [c.name for c in f.colorinterp]
for i, n in enumerate(names):
bands.append({
'name': n,
'description': f.descriptions[i]
})
except Exception as e:
print(e)
print("Updating {} (with orthophoto bands: {})".format(t, str(bands)))
t.orthophoto_bands = bands
t.save()
class Migration(migrations.Migration):
dependencies = [
('app', '0038_remove_task_console_output'),
]
operations = [
migrations.RunPython(update_orthophoto_bands_fields),
]


@@ -1,4 +1,3 @@
from .image_upload import ImageUpload, image_directory_path
from .project import Project
from .task import Task, validate_task_options, gcp_directory_path
from .preset import Preset
@@ -6,4 +5,8 @@ from .theme import Theme
from .setting import Setting
from .plugin_datum import PluginDatum
from .plugin import Plugin
from .profile import Profile
# deprecated
def image_directory_path(image_upload, filename):
raise Exception("Deprecated")


@@ -1,21 +0,0 @@
from .task import Task, assets_directory_path
from django.db import models
from django.utils.translation import gettext_lazy as _
def image_directory_path(image_upload, filename):
return assets_directory_path(image_upload.task.id, image_upload.task.project.id, filename)
class ImageUpload(models.Model):
task = models.ForeignKey(Task, on_delete=models.CASCADE, help_text=_("Task this image belongs to"), verbose_name=_("Task"))
image = models.ImageField(upload_to=image_directory_path, help_text=_("File uploaded by a user"), max_length=512, verbose_name=_("Image"))
def __str__(self):
return self.image.name
def path(self):
return self.image.path
class Meta:
verbose_name = _("Image Upload")
verbose_name_plural = _("Image Uploads")


@@ -0,0 +1,74 @@
import time
from django.contrib.auth.models import User
from django.db import models
from django.utils.translation import gettext_lazy as _
from django.db.models.signals import post_save
from django.dispatch import receiver
from app.models import Task
from django.db.models import Sum
from django.core.cache import cache
from webodm import settings
class Profile(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE)
quota = models.FloatField(default=-1, blank=True, help_text=_("Maximum disk quota in megabytes"), verbose_name=_("Quota"))
def has_quota(self):
return self.quota != -1
def used_quota(self):
q = Task.objects.filter(project__owner=self.user).aggregate(total=Sum('size'))['total']
if q is None:
q = 0
return q
def has_exceeded_quota(self):
if not self.has_quota():
return False
q = self.used_quota()
return q > self.quota
def used_quota_cached(self):
k = f'used_quota_{self.user.id}'
cached = cache.get(k)
if cached is not None:
return cached
v = self.used_quota()
cache.set(k, v, 1800) # 30 minutes
return v
def has_exceeded_quota_cached(self):
if not self.has_quota():
return False
q = self.used_quota_cached()
return q > self.quota
def clear_used_quota_cache(self):
cache.delete(f'used_quota_{self.user.id}')
def get_quota_deadline(self):
return cache.get(f'quota_deadline_{self.user.id}')
def set_quota_deadline(self, hours):
k = f'quota_deadline_{self.user.id}'
seconds = (hours * 60 * 60)
v = time.time() + seconds
cache.set(k, v, int(max(seconds * 10, settings.QUOTA_EXCEEDED_GRACE_PERIOD * 60 * 60)))
return v
def clear_quota_deadline(self):
cache.delete(f'quota_deadline_{self.user.id}')
@receiver(post_save, sender=User)
def create_user_profile(sender, instance, created, **kwargs):
if created:
Profile.objects.create(user=instance)
@receiver(post_save, sender=User)
def save_user_profile(sender, instance, **kwargs):
instance.profile.save()
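The quota bookkeeping in `Profile` above trades freshness for speed by caching `used_quota()` for 30 minutes. That pattern can be sketched in isolation (a plain dict stands in for Django's cache, and `used_quota` is stubbed with a fixed value; all names here are illustrative, not WebODM API):

```python
import time

class FakeCache:
    """Minimal stand-in for django.core.cache (illustrative only)."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.time() > expires:
            del self._store[key]
            return None
        return value
    def set(self, key, value, timeout):
        self._store[key] = (value, time.time() + timeout)
    def delete(self, key):
        self._store.pop(key, None)

cache = FakeCache()

class Profile:
    def __init__(self, user_id, quota=-1):
        self.user_id = user_id
        self.quota = quota       # -1 means "no quota", as in the model above
        self._db_reads = 0       # count expensive aggregate lookups
    def used_quota(self):
        self._db_reads += 1
        return 120.0             # pretend the Sum('size') aggregate returned 120 MB
    def used_quota_cached(self):
        k = f'used_quota_{self.user_id}'
        cached = cache.get(k)
        if cached is not None:
            return cached
        v = self.used_quota()
        cache.set(k, v, 1800)    # 30 minutes
        return v
    def has_exceeded_quota_cached(self):
        if self.quota == -1:
            return False
        return self.used_quota_cached() > self.quota

p = Profile(user_id=1, quota=100)
print(p.has_exceeded_quota_cached())  # first call hits the "database"
print(p.has_exceeded_quota_cached())  # second call is served from cache
print(p._db_reads)                    # → 1
```

This is why `clear_used_quota_cache()` is called whenever task sizes change: without it, the cached value could overstate or understate usage for up to half an hour.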


@@ -48,6 +48,9 @@ class Project(models.Model):
def tasks(self):
return self.task_set.only('id')
def tasks_count(self):
return self.task_set.count()
def get_map_items(self):
return [task.get_map_items() for task in self.task_set.filter(
status=status_codes.COMPLETED


@@ -2,6 +2,7 @@ import logging
import os
import shutil
import time
import struct
import uuid as uuid_module
from app.vendor import zipfly
@@ -21,6 +22,7 @@ from django.contrib.gis.gdal import GDALRaster
from django.contrib.gis.gdal import OGRGeometry
from django.contrib.gis.geos import GEOSGeometry
from django.contrib.postgres import fields
from django.core.files.uploadedfile import InMemoryUploadedFile
from django.core.exceptions import ValidationError, SuspiciousFileOperation
from django.db import models
from django.db import transaction
@@ -45,6 +47,7 @@ from django.utils.translation import gettext_lazy as _, gettext
from functools import partial
import subprocess
from app.classes.console import Console
logger = logging.getLogger('app.logger')
@@ -155,7 +158,7 @@ def resize_image(image_path, resize_to, done=None):
os.rename(resized_image_path, image_path)
logger.info("Resized {} to {}x{}".format(image_path, resized_width, resized_height))
except (IOError, ValueError) as e:
except (IOError, ValueError, struct.error) as e:
logger.warning("Cannot resize {}: {}.".format(image_path, str(e)))
if done is not None:
done()
@@ -246,7 +249,6 @@ class Task(models.Model):
last_error = models.TextField(null=True, blank=True, help_text=_("The last processing error received"), verbose_name=_("Last Error"))
options = fields.JSONField(default=dict, blank=True, help_text=_("Options that are being used to process this task"), validators=[validate_task_options], verbose_name=_("Options"))
available_assets = fields.ArrayField(models.CharField(max_length=80), default=list, blank=True, help_text=_("List of available assets to download"), verbose_name=_("Available Assets"))
console_output = models.TextField(null=False, default="", blank=True, help_text=_("Console output of the processing node"), verbose_name=_("Console Output"))
orthophoto_extent = GeometryField(null=True, blank=True, srid=4326, help_text=_("Extent of the orthophoto"), verbose_name=_("Orthophoto Extent"))
dsm_extent = GeometryField(null=True, blank=True, srid=4326, help_text="Extent of the DSM", verbose_name=_("DSM Extent"))
@@ -277,6 +279,8 @@ class Task(models.Model):
potree_scene = fields.JSONField(default=dict, blank=True, help_text=_("Serialized potree scene information used to save/load measurements and camera view angle"), verbose_name=_("Potree Scene"))
epsg = models.IntegerField(null=True, default=None, blank=True, help_text=_("EPSG code of the dataset (if georeferenced)"), verbose_name="EPSG")
tags = models.TextField(db_index=True, default="", blank=True, help_text=_("Task tags"), verbose_name=_("Tags"))
orthophoto_bands = fields.JSONField(default=list, blank=True, help_text=_("List of orthophoto bands"), verbose_name=_("Orthophoto Bands"))
size = models.FloatField(default=0.0, blank=True, help_text=_("Size of the task on disk in megabytes"), verbose_name=_("Size"))
class Meta:
verbose_name = _("Task")
@@ -287,6 +291,8 @@ class Task(models.Model):
# To help keep track of changes to the project id
self.__original_project_id = self.project.id
self.console = Console(self.data_path("console_output.txt"))
def __str__(self):
name = self.name if self.name is not None else gettext("unnamed")
@@ -310,15 +316,6 @@ class Task(models.Model):
shutil.move(old_task_folder, new_task_folder_parent)
logger.info("Moved task folder from {} to {}".format(old_task_folder, new_task_folder))
with transaction.atomic():
for img in self.imageupload_set.all():
prev_name = img.image.name
img.image.name = assets_directory_path(self.id, new_project_id,
os.path.basename(img.image.name))
logger.info("Changing {} to {}".format(prev_name, img))
img.save()
else:
logger.warning("Project changed for task {}, but either {} doesn't exist, or {} already exists. This doesn't look right, so we will not move any files.".format(self,
old_task_folder,
@@ -360,6 +357,12 @@ class Task(models.Model):
"""
return self.task_path("assets", *args)
def data_path(self, *args):
"""
Path to task data that does not fit in database fields (e.g. console output)
"""
return self.task_path("data", *args)
def task_path(self, *args):
"""
Get path relative to the root task directory
@@ -430,16 +433,6 @@ class Task(models.Model):
logger.info("Duplicating {} to {}".format(self, task))
for img in self.imageupload_set.all():
img.pk = None
img.task = task
prev_name = img.image.name
img.image.name = assets_directory_path(task.id, task.project.id,
os.path.basename(img.image.name))
img.save()
if os.path.isdir(self.task_path()):
try:
# Try to use hard links first
@@ -449,22 +442,24 @@ class Task(models.Model):
shutil.copytree(self.task_path(), task.task_path())
else:
logger.warning("Task {} doesn't have folder, will skip copying".format(self))
self.project.owner.profile.clear_used_quota_cache()
return task
except Exception as e:
logger.warning("Cannot duplicate task: {}".format(str(e)))
return False
def get_asset_file_or_zipstream(self, asset):
def get_asset_file_or_stream(self, asset):
"""
Get a stream to an asset
:param asset: one of ASSETS_MAP keys
:return: (path|stream, is_zipstream:bool)
:return: (path|stream)
"""
if asset in self.ASSETS_MAP:
value = self.ASSETS_MAP[asset]
if isinstance(value, str):
return self.assets_path(value), False
return self.assets_path(value)
elif isinstance(value, dict):
if 'deferred_path' in value and 'deferred_compress_dir' in value:
@@ -474,7 +469,7 @@ class Task(models.Model):
paths = [p for p in paths if os.path.basename(p['fs']) not in value['deferred_exclude_files']]
if len(paths) == 0:
raise FileNotFoundError("No files available for download")
return zipfly.ZipStream(paths), True
return zipfly.ZipStream(paths)
else:
raise FileNotFoundError("{} is not a valid asset (invalid dict values)".format(asset))
else:
@@ -504,7 +499,7 @@ class Task(models.Model):
raise FileNotFoundError("{} is not a valid asset".format(asset))
def handle_import(self):
self.console_output += gettext("Importing assets...") + "\n"
self.console += gettext("Importing assets...") + "\n"
self.save()
zip_path = self.assets_path("all.zip")
@@ -629,7 +624,8 @@ class Task(models.Model):
if not self.uuid and self.pending_action is None and self.status is None:
logger.info("Processing... {}".format(self))
images = [image.path() for image in self.imageupload_set.all()]
images_path = self.task_path()
images = [os.path.join(images_path, i) for i in self.scan_images()]
# Track upload progress, but limit the number of DB updates
# to every 2 seconds (and always record the 100% progress)
@@ -722,7 +718,7 @@ class Task(models.Model):
self.options = list(filter(lambda d: d['name'] != 'rerun-from', self.options))
self.upload_progress = 0
self.console_output = ""
self.console.reset()
self.processing_time = -1
self.status = None
self.last_error = None
@@ -753,10 +749,10 @@ class Task(models.Model):
# Need to update status (first time, queued or running?)
if self.uuid and self.status in [None, status_codes.QUEUED, status_codes.RUNNING]:
# Update task info from processing node
if not self.console_output:
if not self.console.output():
current_lines_count = 0
else:
current_lines_count = len(self.console_output.split("\n"))
current_lines_count = len(self.console.output().split("\n"))
info = self.processing_node.get_task_info(self.uuid, current_lines_count)
@@ -764,7 +760,7 @@ class Task(models.Model):
self.status = info.status.value
if len(info.output) > 0:
self.console_output += "\n".join(info.output) + '\n'
self.console += "\n".join(info.output) + '\n'
# Update running progress
self.running_progress = (info.progress / 100.0) * self.TASK_PROGRESS_LAST_VALUE
@@ -828,6 +824,11 @@ class Task(models.Model):
else:
# FAILED, CANCELED
self.save()
if self.status == status_codes.FAILED:
from app.plugins import signals as plugin_signals
plugin_signals.task_failed.send_robust(sender=self.__class__, task_id=self.id)
else:
# Still waiting...
self.save()
@@ -895,9 +896,11 @@ class Task(models.Model):
self.update_available_assets_field()
self.update_epsg_field()
self.update_orthophoto_bands_field()
self.update_size()
self.potree_scene = {}
self.running_progress = 1.0
self.console_output += gettext("Done!") + "\n"
self.console += gettext("Done!") + "\n"
self.status = status_codes.COMPLETED
self.save()
@@ -916,8 +919,9 @@ class Task(models.Model):
def get_map_items(self):
types = []
if 'orthophoto.tif' in self.available_assets: types.append('orthophoto')
if 'orthophoto.tif' in self.available_assets: types.append('plant')
if 'orthophoto.tif' in self.available_assets:
types.append('orthophoto')
types.append('plant')
if 'dsm.tif' in self.available_assets: types.append('dsm')
if 'dtm.tif' in self.available_assets: types.append('dtm')
@@ -936,7 +940,8 @@ class Task(models.Model):
'public': self.public,
'camera_shots': camera_shots,
'ground_control_points': ground_control_points,
'epsg': self.epsg
'epsg': self.epsg,
'orthophoto_bands': self.orthophoto_bands,
}
}
}
@@ -1010,6 +1015,27 @@ class Task(models.Model):
if commit: self.save()
def update_orthophoto_bands_field(self, commit=False):
"""
Updates the orthophoto bands field with the correct value
:param commit: when True also saves the model, otherwise the user should manually call save()
"""
bands = []
orthophoto_path = self.assets_path(self.ASSETS_MAP['orthophoto.tif'])
if os.path.isfile(orthophoto_path):
with rasterio.open(orthophoto_path) as f:
names = [c.name for c in f.colorinterp]
for i, n in enumerate(names):
bands.append({
'name': n,
'description': f.descriptions[i]
})
self.orthophoto_bands = bands
if commit: self.save()
def delete(self, using=None, keep_parents=False):
task_id = self.id
from app.plugins import signals as plugin_signals
@@ -1026,6 +1052,8 @@ class Task(models.Model):
except FileNotFoundError as e:
logger.warning(e)
self.project.owner.profile.clear_used_quota_cache()
plugin_signals.task_removed.send_robust(sender=self.__class__, task_id=task_id)
def set_failure(self, error_message):
@@ -1122,3 +1150,53 @@ class Task(models.Model):
pass
else:
raise
def scan_images(self):
tp = self.task_path()
try:
return [e.name for e in os.scandir(tp) if e.is_file()]
except:
return []
def get_image_path(self, filename):
p = self.task_path(filename)
return path_traversal_check(p, self.task_path())
def handle_images_upload(self, files):
uploaded = {}
for file in files:
name = file.name
if name is None:
continue
tp = self.task_path()
if not os.path.exists(tp):
os.makedirs(tp, exist_ok=True)
dst_path = self.get_image_path(name)
with open(dst_path, 'wb+') as fd:
if isinstance(file, InMemoryUploadedFile):
for chunk in file.chunks():
fd.write(chunk)
else:
with open(file.temporary_file_path(), 'rb') as f:
shutil.copyfileobj(f, fd)
uploaded[name] = os.path.getsize(dst_path)
return uploaded
def update_size(self, commit=False):
try:
total_bytes = 0
for dirpath, _, filenames in os.walk(self.task_path()):
for f in filenames:
fp = os.path.join(dirpath, f)
if not os.path.islink(fp):
total_bytes += os.path.getsize(fp)
self.size = (total_bytes / 1024 / 1024)
if commit: self.save()
self.project.owner.profile.clear_used_quota_cache()
except Exception as e:
logger.warning("Cannot update size for task {}: {}".format(self, str(e)))
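`update_size` above performs the same walk as the `0036_task_size` data migration: sum file sizes under the task folder, skipping symlinks, and convert bytes to megabytes. The core can be sketched standalone (throwaway temp paths only):

```python
import os
import tempfile

def dir_size_mb(path):
    """Sum file sizes under path (skipping symlinks), in megabytes."""
    total_bytes = 0
    for dirpath, _, filenames in os.walk(path):
        for f in filenames:
            fp = os.path.join(dirpath, f)
            if not os.path.islink(fp):
                total_bytes += os.path.getsize(fp)
    return total_bytes / 1024 / 1024

# Demo against a throwaway directory containing one 1 MiB file
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "data.bin"), "wb") as f:
        f.write(b"\0" * (1024 * 1024))
    print(dir_size_mb(d))  # → 1.0
```

Skipping symlinks matters because task duplication tries hard links first; counting linked files twice would inflate the quota usage reported by `used_quota()`.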


@@ -7,6 +7,8 @@ from django.db import models
from colorfield.fields import ColorField
from django.dispatch import receiver
from django.utils.translation import gettext_lazy as _
from django.core.cache import cache
from django.core.cache.utils import make_template_fragment_key
from webodm import settings
@@ -54,14 +56,5 @@ def theme_post_save(sender, instance, created, **kwargs):
def update_theme_css():
"""
Touch theme.scss to invalidate its cache and force
compressor to regenerate it
"""
theme_file = os.path.join('app', 'static', 'app', 'css', 'theme.scss')
try:
Path(theme_file).touch()
logger.info("Regenerate cache for {}".format(theme_file))
except:
logger.warning("Failed to touch {}".format(theme_file))
key = make_template_fragment_key("theme_css")
cache.delete(key)


@@ -110,7 +110,7 @@ def build_plugins():
# Create entry configuration
entry = {}
for e in plugin.build_jsx_components():
entry[os.path.splitext(os.path.basename(e))[0]] = [os.path.join('.', e)]
entry[os.path.splitext(os.path.basename(e))[0]] = ['./' + e]
wpc_content = tmpl.substitute({
'entry_json': json.dumps(entry)
})
@@ -210,9 +210,12 @@ def get_plugins():
module = importlib.import_module("plugins.{}".format(dir))
plugin = (getattr(module, "Plugin"))()
except (ImportError, AttributeError):
module = importlib.import_module("coreplugins.{}".format(dir))
plugin = (getattr(module, "Plugin"))()
except (ImportError, AttributeError) as plugin_error:
try:
module = importlib.import_module("coreplugins.{}".format(dir))
plugin = (getattr(module, "Plugin"))()
except (ImportError, AttributeError) as coreplugin_error:
raise coreplugin_error from plugin_error
# Check version
manifest = plugin.get_manifest()
@@ -237,7 +240,7 @@ def get_plugins():
plugins.append(plugin)
except Exception as e:
logger.warning("Failed to instantiate plugin {}: {}".format(dir, e))
logger.warning("Failed to instantiate plugin {}: {}: {}".format(dir, e, e.__cause__))
return plugins
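The loader change above chains the `coreplugins` import failure to the original `plugins` one (`raise coreplugin_error from plugin_error`), which is why the `Failed to instantiate plugin` warning can now report both errors via `e.__cause__`. A minimal sketch of that fallback-import pattern (module names here are made up):

```python
import importlib

def load_first(*module_names):
    """Try importing each module in order; if every import fails,
    raise the last error chained to the first so diagnostics show both."""
    first_error = None
    last_error = None
    for name in module_names:
        try:
            return importlib.import_module(name)
        except ImportError as e:
            if first_error is None:
                first_error = e
            last_error = e
    if last_error is None:
        raise ValueError("no module names given")
    raise last_error from first_error

try:
    load_first("plugins.nope", "coreplugins.nope")  # neither module exists
except ImportError as e:
    # __cause__ carries the first failure, mirroring get_plugins()'s warning
    print(type(e).__name__, "caused by", type(e.__cause__).__name__)
```

Without the `from` clause, the original `plugins.<dir>` failure would be silently discarded, making real import errors in a plugin look like a missing core plugin.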
@@ -273,7 +276,7 @@ def get_plugin_by_name(name, only_active=True, refresh_cache_if_none=False):
else:
return res
def get_current_plugin():
def get_current_plugin(only_active=False):
"""
When called from a python module inside a plugin's directory,
it returns the plugin that this python module belongs to
@@ -289,7 +292,7 @@ def get_current_plugin():
parts = relp.split(os.sep)
if len(parts) > 0:
plugin_name = parts[0]
return get_plugin_by_name(plugin_name, only_active=False)
return get_plugin_by_name(plugin_name, only_active=only_active)
return None


@@ -1,152 +0,0 @@
import logging
import shutil
import tempfile
import subprocess
import os
import platform
from webodm import settings
logger = logging.getLogger('app.logger')
class GrassEngine:
def __init__(self):
self.grass_binary = shutil.which('grass7') or \
shutil.which('grass7.bat') or \
shutil.which('grass72') or \
shutil.which('grass72.bat') or \
shutil.which('grass74') or \
shutil.which('grass74.bat') or \
shutil.which('grass76') or \
shutil.which('grass76.bat') or \
shutil.which('grass78') or \
shutil.which('grass78.bat') or \
shutil.which('grass80') or \
shutil.which('grass80.bat')
if self.grass_binary is None:
logger.warning("Could not find a GRASS 7 executable. GRASS scripts will not work.")
def create_context(self, serialized_context = {}):
if self.grass_binary is None: raise GrassEngineException("GRASS engine is unavailable")
return GrassContext(self.grass_binary, **serialized_context)
class GrassContext:
def __init__(self, grass_binary, tmpdir = None, script_opts = {}, location = None, auto_cleanup=True, python_path=None):
self.grass_binary = grass_binary
if tmpdir is None:
tmpdir = os.path.basename(tempfile.mkdtemp('_grass_engine', dir=settings.MEDIA_TMP))
self.tmpdir = tmpdir
self.script_opts = script_opts.copy()
self.location = location
self.auto_cleanup = auto_cleanup
self.python_path = python_path
def get_cwd(self):
return os.path.join(settings.MEDIA_TMP, self.tmpdir)
def add_file(self, filename, source, use_as_location=False):
param = os.path.splitext(filename)[0] # filename without extension
dst_path = os.path.abspath(os.path.join(self.get_cwd(), filename))
with open(dst_path, 'w') as f:
f.write(source)
self.script_opts[param] = dst_path
if use_as_location:
self.set_location(self.script_opts[param])
return dst_path
def add_param(self, param, value):
self.script_opts[param] = value
def set_location(self, location):
"""
:param location: either a "epsg:XXXXX" string or a path to a geospatial file defining the location
"""
if not location.lower().startswith('epsg:'):
location = os.path.abspath(location)
self.location = location
def execute(self, script):
"""
:param script: path to .grass script
:return: script output
"""
if self.location is None: raise GrassEngineException("Location is not set")
script = os.path.abspath(script)
# Make sure working directory exists
if not os.path.exists(self.get_cwd()):
os.mkdir(self.get_cwd())
# Create param list
params = ["{}={}".format(opt,value) for opt,value in self.script_opts.items()]
# Track success, output
success = False
out = ""
err = ""
# Setup env
env = os.environ.copy()
env["LC_ALL"] = "C.UTF-8"
if self.python_path:
sep = ";" if platform.system() == "Windows" else ":"
env["PYTHONPATH"] = "%s%s%s" % (self.python_path, sep, env.get("PYTHONPATH", ""))
# Execute it
logger.info("Executing grass script from {}: {} -c {} location --exec python3 {} {}".format(self.get_cwd(), self.grass_binary, self.location, script, " ".join(params)))
command = [self.grass_binary, '-c', self.location, 'location', '--exec', 'python3', script] + params
if platform.system() == "Windows":
# communicate() hangs on Windows so we use check_output instead
try:
out = subprocess.check_output(command, cwd=self.get_cwd(), env=env).decode('utf-8').strip()
success = True
except:
success = False
err = out
else:
p = subprocess.Popen(command, cwd=self.get_cwd(), env=env, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
out = out.decode('utf-8').strip()
err = err.decode('utf-8').strip()
success = p.returncode == 0
if success:
return out
else:
raise GrassEngineException("Could not execute GRASS script {} from {}: {}".format(script, self.get_cwd(), err))
def serialize(self):
return {
'tmpdir': self.tmpdir,
'script_opts': self.script_opts,
'location': self.location,
'auto_cleanup': self.auto_cleanup,
'python_path': self.python_path,
}
def cleanup(self):
if os.path.exists(self.get_cwd()):
shutil.rmtree(self.get_cwd())
def __del__(self):
if self.auto_cleanup:
self.cleanup()
class GrassEngineException(Exception):
pass
def cleanup_grass_context(serialized_context):
ctx = grass.create_context(serialized_context)
ctx.cleanup()
grass = GrassEngine()


@@ -3,5 +3,6 @@ import django.dispatch
task_completed = django.dispatch.Signal(providing_args=["task_id"])
task_removing = django.dispatch.Signal(providing_args=["task_id"])
task_removed = django.dispatch.Signal(providing_args=["task_id"])
task_failed = django.dispatch.Signal(providing_args=["task_id"])
processing_node_removed = django.dispatch.Signal(providing_args=["processing_node_id"])


@@ -6,7 +6,7 @@ process.env.NODE_PATH = webodmRoot + "node_modules";
require("module").Module._initPaths();
let path = require("path");
let ExtractTextPlugin = require('extract-text-webpack-plugin');
const MiniCssExtractPlugin = require('mini-css-extract-plugin');
module.exports = {
mode: 'production',
@@ -21,8 +21,9 @@ module.exports = {
},
plugins: [
new ExtractTextPlugin('[name].css', {
allChunks: true
new MiniCssExtractPlugin({
filename: '[name].css',
chunkFilename: '[id].css'
})
],
@@ -34,7 +35,7 @@ module.exports = {
use: [
{
loader: 'babel-loader',
query: {
options: {
plugins: [
'@babel/syntax-class-properties',
'@babel/proposal-class-properties'
@@ -49,22 +50,21 @@ module.exports = {
},
{
test: /\.s?css$$/,
use: ExtractTextPlugin.extract({
use: [
{ loader: 'css-loader' },
{
loader: 'sass-loader',
options: {
implementation: require("sass")
}
}
]
})
use: [
MiniCssExtractPlugin.loader,
'css-loader',
'sass-loader',
]
},
{
{
test: /\.(png|jpg|jpeg|svg)/,
loader: "url-loader?limit=100000"
}
use: {
loader: 'url-loader',
options: {
limit: 100000,
},
},
},
]
},


@@ -1,7 +1,5 @@
import inspect
from worker.celery import app
# noinspection PyUnresolvedReferences
from worker.tasks import execute_grass_script
task = app.task


@@ -51,14 +51,13 @@ def export_raster(input, output, **opts):
output_raster = output
jpg_background = 255 # white
# KMZ is special, we just export it as PNG with EPSG:4326
# KMZ is special, we just export it as GeoTIFF
# and then call GDAL to tile/package it
kmz = export_format == "kmz"
if kmz:
export_format = "png"
epsg = 4326
export_format = "gtiff-rgb"
path_base, _ = os.path.splitext(output)
output_raster = path_base + ".png"
output_raster = path_base + ".kmz.tif"
if export_format == "jpg":
driver = "JPEG"
@@ -282,4 +281,4 @@ def export_raster(input, output, **opts):
if kmz:
subprocess.check_output(["gdal_translate", "-of", "KMLSUPEROVERLAY",
"-co", "Name={}".format(name),
"-co", "FORMAT=PNG", output_raster, output])
"-co", "FORMAT=AUTO", output_raster, output])
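With the change above, KMZ export no longer re-projects to a plain PNG; it writes an intermediate `gtiff-rgb` raster named `<base>.kmz.tif` and then calls `gdal_translate` with the `KMLSUPEROVERLAY` driver (`FORMAT=AUTO`) to tile and package it. Building that command can be sketched as follows (no GDAL is invoked here; the paths are illustrative):

```python
import os

def kmz_translate_command(output, name):
    """Build the gdal_translate argv that packages the intermediate
    .kmz.tif raster into a KMZ super-overlay (sketch of the flow above)."""
    path_base, _ = os.path.splitext(output)
    output_raster = path_base + ".kmz.tif"  # intermediate gtiff-rgb export
    return ["gdal_translate", "-of", "KMLSUPEROVERLAY",
            "-co", "Name={}".format(name),
            "-co", "FORMAT=AUTO", output_raster, output]

print(kmz_translate_command("/tmp/odm_orthophoto.kmz", "Orthophoto"))
```

`FORMAT=AUTO` lets the driver pick JPEG or PNG per tile, which is why the hard-coded `FORMAT=PNG` option was dropped.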


@@ -50,11 +50,26 @@ body {
margin-right: 0;
}
.navbar-top-links .dropdown-menu li a {
.navbar-top-links .dropdown-menu li a{
padding: 3px 20px;
min-height: 0;
}
.navbar-top-links .dropdown-menu li div.info-item{
padding: 3px 8px;
min-height: 0;
}
.navbar-top-links .dropdown-menu li div.info-item.quotas{
min-width: 232px;
}
.navbar-top-links .dropdown-menu li .progress{
margin-bottom: 0;
margin-top: 6px;
}
.navbar-top-links .dropdown-menu li a div {
white-space: normal;
}


@@ -1,278 +0,0 @@
/* Primary */
body,
ul#side-menu.nav a,
.console,
.alert,
.form-control,
.dropdown-menu > li > a,
.theme-color-primary,
{
color: theme("primary");
}
.theme-border-primary{
border-color: theme("primary");
}
.tooltip{
.tooltip-inner{
background-color: theme("primary");
}
&.left .tooltip-arrow{ border-left-color: theme("primary"); }
&.top .tooltip-arrow{ border-top-color: theme("primary"); }
&.bottom .tooltip-arrow{ border-bottom-color: theme("primary"); }
&.right .tooltip-arrow{ border-right-color: theme("primary"); }
}
.theme-fill-primary{
fill: theme("primary");
}
.theme-stroke-primary{
stroke: theme("primary");
}
/* Secondary */
body,
.navbar-default,
.console,
.alert,
.modal-content,
.form-control,
.dropdown-menu,
.theme-secondary
{
background-color: theme("secondary");
}
.tooltip > .tooltip-inner{
color: theme("secondary");
}
.alert{
.close:hover, .close:focus{
color: complementary(theme("secondary"));
}
}
.pagination li > a,
.pagination .disabled > a,
.pagination .disabled > a:hover, .pagination .disabled > a:focus{
color: scaleby(theme("primary"), 0.7);
background-color: theme("secondary");
border-color: scaleby(theme("secondary"), 0.7);
}
.pagination li > a{
color: theme("primary");
}
.theme-border-secondary-07{
border-color: scaleby(theme("secondary"), 0.7) !important;
}
.btn-secondary, .btn-secondary:active, .btn-secondary.active, .open>.dropdown-toggle.btn-secondary{
background-color: theme("secondary");
border-color: theme("secondary");
color: theme("primary");
&:hover, &:active, &:focus{
background-color: scalebyiv(theme("secondary"), 0.90);
border-color: scalebyiv(theme("secondary"), 0.90);
color: theme("primary");
}
}
/* Tertiary */
a, a:hover, a:focus{
color: theme("tertiary");
}
.progress-bar-success{
background-color: theme("tertiary");
}
/* Button primary */
#navbar-top .navbar-top-links,{
a:hover,a:focus,.open > a{
background-color: theme("button_primary");
color: theme("secondary");
}
}
#navbar-top ul#side-menu a:focus{
background-color: inherit;
color: inherit;
}
#navbar-top ul#side-menu a:hover, #navbar-top ul#side-menu a.active:hover{
background-color: theme("button_primary");
color: theme("secondary");
}
.btn-primary, .btn-primary:active, .btn-primary.active, .open>.dropdown-toggle.btn-primary{
background-color: theme("button_primary");
border-color: theme("button_primary");
color: theme("secondary");
&:hover, &:active, &:focus, &[disabled]:hover, &[disabled]:focus, &[disabled]:active{
background-color: scalebyiv(theme("button_primary"), 0.90);
border-color: scalebyiv(theme("button_primary"), 0.90);
color: theme("secondary");
}
}
/* Button default */
.btn-default, .btn-default:active, .btn-default.active, .open>.dropdown-toggle.btn-default{
background-color: theme("button_default");
border-color: theme("button_default");
color: theme("secondary");
&:hover, &:active, &:focus, &[disabled]:hover, &[disabled]:focus, &[disabled]:active{
background-color: scalebyiv(theme("button_default"), 0.90);
border-color: scalebyiv(theme("button_default"), 0.90);
color: theme("secondary");
}
}
.pagination>.active>a, .pagination>.active>span, .pagination>.active>a:hover, .pagination>.active>span:hover, .pagination>.active>a:focus, .pagination>.active>span:focus,
.pagination .active > a:hover, .pagination .active > a:focus,
.pagination li > a:hover, .pagination li > a:focus{
background-color: theme("button_default");
color: theme("secondary");
}
/* Button danger */
.btn-danger, .btn-danger:active, .btn-danger.active, .open>.dropdown-toggle.btn-danger{
background-color: theme("button_danger");
border-color: theme("button_danger");
color: theme("secondary");
&:hover, &:active, &:focus, &[disabled]:hover, &[disabled]:focus, &[disabled]:active {
background-color: scalebyiv(theme("button_danger"), 0.90);
border-color: scalebyiv(theme("button_danger"), 0.90);
color: theme("secondary");
}
}
.theme-color-button-danger{
color: theme("button_danger");
}
.theme-color-button-primary{
color: theme("button_primary");
}
/* Header background */
#navbar-top{
background-color: theme("header_background");
}
/* Header primary */
.navbar-default .navbar-link,
#navbar-top .navbar-top-links a.dropdown-toggle{
color: theme("header_primary");
&:hover{
color: theme("secondary");
}
}
/* Border */
.sidebar ul li,
.project-list-item,
#page-wrapper,
table-bordered>thead>tr>th, .table-bordered>thead>tr>th, table-bordered>tbody>tr>th, .table-bordered>tbody>tr>th, table-bordered>tfoot>tr>th, .table-bordered>tfoot>tr>th, table-bordered>thead>tr>td, .table-bordered>thead>tr>td, table-bordered>tbody>tr>td, .table-bordered>tbody>tr>td, table-bordered>tfoot>tr>td, .table-bordered>tfoot>tr>td,
footer,
.modal-content,
.modal-header,
.modal-footer,
.dropdown-menu
{
border-color: theme("border");
}
.dropdown-menu .divider{
background-color: theme("border");
}
.popover-title{
border-bottom-color: theme("border");
}
.theme-border{
border-color: theme("border") !important;
}
/* Highlight */
.task-list-item:nth-child(odd),
.table-striped>tbody>tr:nth-of-type(odd),
select.form-control option[disabled],
.theme-background-highlight{
background-color: theme("highlight");
}
.dropdown-menu > li > a{
&:hover, &:focus{
background-color: theme("highlight");
color: theme("primary");
}
}
pre.prettyprint,
.form-control{
border-color: theme('highlight');
&:focus{
border-color: scalebyiv(theme('highlight'), 0.7);
}
}
/* Dialog warning */
.alert-warning{
border-color: theme("dialog_warning");
}
/* Success */
.task-list-item .status-label.done, .theme-background-success{
background-color: theme("success");
}
/* Failed */
.task-list-item .status-label.error, .theme-background-failed{
background-color: theme("failed");
}
/* ModelView.jsx specific */
.model-view #potree_sidebar_container {
.dropdown-menu > li > a{
color: theme("primary");
}
}
/* MapView.jsx specific */
.leaflet-bar a, .leaflet-control > a{
background-color: theme("secondary") !important;
border-color: theme("secondary") !important;
color: theme("primary") !important;
&:hover{
background-color: scalebyiv(theme("secondary"), 0.90) !important;
border-color: scalebyiv(theme("secondary"), 0.90) !important;
}
}
.leaflet-popup-content-wrapper{
background-color: theme("secondary") !important;
color: theme("primary") !important;
a{
color: theme("tertiary") !important;
}
}
.leaflet-container{
a.leaflet-popup-close-button{
color: theme("primary") !important;
&:hover{
color: complementary(theme("secondary")) !important;
}
}
}
.tag-badge{
background-color: theme("button_default");
border-color: theme("button_default");
color: theme("secondary");
a, a:hover{
color: theme("secondary");
}
}

View file

@@ -8,7 +8,7 @@ import { _, interpolate } from './classes/gettext';
class MapView extends React.Component {
static defaultProps = {
mapItems: [],
selectedMapType: 'orthophoto',
selectedMapType: 'auto',
title: "",
public: false,
shareButtons: true
@@ -16,7 +16,7 @@ class MapView extends React.Component {
static propTypes = {
mapItems: PropTypes.array.isRequired, // list of dictionaries where each dict is a {mapType: 'orthophoto', url: <tiles.json>},
selectedMapType: PropTypes.oneOf(['orthophoto', 'plant', 'dsm', 'dtm']),
selectedMapType: PropTypes.oneOf(['auto', 'orthophoto', 'plant', 'dsm', 'dtm']),
title: PropTypes.string,
public: PropTypes.bool,
shareButtons: PropTypes.bool
@@ -25,9 +25,30 @@ class MapView extends React.Component {
constructor(props){
super(props);
let selectedMapType = props.selectedMapType;
// Automatically select type based on available tiles
// and preference order (below)
if (props.selectedMapType === "auto"){
let preferredTypes = ['orthophoto', 'dsm', 'dtm'];
for (let i = 0; i < this.props.mapItems.length; i++){
let mapItem = this.props.mapItems[i];
for (let j = 0; j < preferredTypes.length; j++){
if (mapItem.tiles.find(t => t.type === preferredTypes[j])){
selectedMapType = preferredTypes[j];
break;
}
}
if (selectedMapType !== "auto") break;
}
}
if (selectedMapType === "auto") selectedMapType = "orthophoto"; // Hope for the best
this.state = {
selectedMapType: props.selectedMapType,
tiles: this.getTilesByMapType(props.selectedMapType)
selectedMapType,
tiles: this.getTilesByMapType(selectedMapType)
};
this.getTilesByMapType = this.getTilesByMapType.bind(this);
@@ -101,7 +122,7 @@ class MapView extends React.Component {
{this.props.title ?
<h3><i className="fa fa-globe"></i> {this.props.title}</h3>
: ""}
<div className="map-container">
<Map
tiles={this.state.tiles}

View file

@@ -10,6 +10,7 @@ import PropTypes from 'prop-types';
import * as THREE from 'THREE';
import $ from 'jquery';
import { _, interpolate } from './classes/gettext';
import { getUnitSystem, setUnitSystem } from './classes/Units';
require('./vendor/OBJLoader');
require('./vendor/MTLLoader');
@@ -298,9 +299,23 @@ class ModelView extends React.Component {
window.viewer = new Potree.Viewer(container);
viewer.setEDLEnabled(true);
viewer.setFOV(60);
viewer.setPointBudget(1*1000*1000);
viewer.setPointBudget(10*1000*1000);
viewer.setEDLEnabled(true);
viewer.loadSettingsFromURL();
const currentUnit = getUnitSystem();
const origSetUnit = viewer.setLengthUnitAndDisplayUnit;
viewer.setLengthUnitAndDisplayUnit = (lengthUnit, displayUnit) => {
if (displayUnit === 'm') setUnitSystem('metric');
else if (displayUnit === 'ft'){
// Potree doesn't have US/international imperial, so
// we default to international unless the user has previously
// selected US
if (currentUnit === 'metric') setUnitSystem("imperial");
else setUnitSystem(currentUnit);
}
origSetUnit.call(viewer, lengthUnit, displayUnit);
};
viewer.loadGUI(() => {
viewer.setLanguage('en');
@@ -335,7 +350,7 @@ class ModelView extends React.Component {
directional.position.z = 99999999999;
viewer.scene.scene.add( directional );
this.pointCloudFilePath(pointCloudPath => {
this.pointCloudFilePath(pointCloudPath =>{
Potree.loadPointCloud(pointCloudPath, "Point Cloud", e => {
if (e.type == "loading_failed"){
this.setState({error: "Could not load point cloud. This task doesn't seem to have one. Try processing the task again."});
@@ -351,6 +366,12 @@ class ModelView extends React.Component {
viewer.fitToScreen();
if (getUnitSystem() === 'metric'){
viewer.setLengthUnitAndDisplayUnit('m', 'm');
}else{
viewer.setLengthUnitAndDisplayUnit('m', 'ft');
}
// Load saved scene (if any)
$.ajax({
type: "GET",
@@ -590,7 +611,10 @@ class ModelView extends React.Component {
}
const isVisible = this.cameraMeshes[0].visible;
this.cameraMeshes.forEach(cam => cam.visible = !isVisible);
this.cameraMeshes.forEach(cam => {
cam.visible = !isVisible;
cam.parent.visible = cam.visible;
});
}
loadGltf = (url, cb) => {
@@ -641,9 +665,10 @@ class ModelView extends React.Component {
return;
}
const offset = {
x: gltf.scene.CESIUM_RTC.center[0],
y: gltf.scene.CESIUM_RTC.center[1]
const offset = {x: 0, y: 0};
if (gltf.scene.CESIUM_RTC && gltf.scene.CESIUM_RTC.center){
offset.x = gltf.scene.CESIUM_RTC.center[0];
offset.y = gltf.scene.CESIUM_RTC.center[1];
}
addObject(gltf.scene, offset);

View file

@@ -18,6 +18,14 @@ class Storage{
console.warn("Failed to call setItem " + key, e);
}
}
static removeItem(key){
try{
localStorage.removeItem(key);
}catch(e){
console.warn("Failed to call removeItem " + key, e);
}
}
}
export default Storage;

View file

@@ -0,0 +1,378 @@
import { _ } from './gettext';
const types = {
LENGTH: 1,
AREA: 2,
VOLUME: 3
};
const units = {
acres: {
factor: (1 / (0.3048 * 0.3048)) / 43560,
abbr: 'ac',
round: 5,
label: _("Acres"),
type: types.AREA
},
acres_us: {
factor: Math.pow(3937 / 1200, 2) / 43560,
abbr: 'ac (US)',
round: 5,
label: _("Acres"),
type: types.AREA
},
feet: {
factor: 1 / 0.3048,
abbr: 'ft',
round: 4,
label: _("Feet"),
type: types.LENGTH
},
feet_us:{
factor: 3937 / 1200,
abbr: 'ft (US)',
round: 4,
label: _("Feet"),
type: types.LENGTH
},
hectares: {
factor: 0.0001,
abbr: 'ha',
round: 4,
label: _("Hectares"),
type: types.AREA
},
meters: {
factor: 1,
abbr: 'm',
round: 3,
label: _("Meters"),
type: types.LENGTH
},
kilometers: {
factor: 0.001,
abbr: 'km',
round: 5,
label: _("Kilometers"),
type: types.LENGTH
},
centimeters: {
factor: 100,
abbr: 'cm',
round: 1,
label: _("Centimeters"),
type: types.LENGTH
},
miles: {
factor: (1 / 0.3048) / 5280,
abbr: 'mi',
round: 5,
label: _("Miles"),
type: types.LENGTH
},
miles_us: {
factor: (3937 / 1200) / 5280,
abbr: 'mi (US)',
round: 5,
label: _("Miles"),
type: types.LENGTH
},
sqfeet: {
factor: 1 / (0.3048 * 0.3048),
abbr: 'ft²',
round: 2,
label: _("Square Feet"),
type: types.AREA
},
sqfeet_us: {
factor: Math.pow(3937 / 1200, 2),
abbr: 'ft² (US)',
round: 2,
label: _("Square Feet"),
type: types.AREA
},
sqmeters: {
factor: 1,
abbr: 'm²',
round: 2,
label: _("Square Meters"),
type: types.AREA
},
sqkilometers: {
factor: 0.000001,
abbr: 'km²',
round: 5,
label: _("Square Kilometers"),
type: types.AREA
},
sqmiles: {
factor: Math.pow((1 / 0.3048) / 5280, 2),
abbr: 'mi²',
round: 5,
label: _("Square Miles"),
type: types.AREA
},
sqmiles_us: {
factor: Math.pow((3937 / 1200) / 5280, 2),
abbr: 'mi² (US)',
round: 5,
label: _("Square Miles"),
type: types.AREA
},
cbmeters:{
factor: 1,
abbr: 'm³',
round: 4,
label: _("Cubic Meters"),
type: types.VOLUME
},
cbyards:{
factor: Math.pow(1/(0.3048*3), 3),
abbr: 'yd³',
round: 4,
label: _("Cubic Yards"),
type: types.VOLUME
},
cbyards_us:{
factor: Math.pow(3937/3600, 3),
abbr: 'yd³ (US)',
round: 4,
label: _("Cubic Yards"),
type: types.VOLUME
}
};
class ValueUnit{
constructor(value, unit){
this.value = value;
this.unit = unit;
}
toString(opts = {}){
const mul = Math.pow(10, opts.precision !== undefined ? opts.precision : this.unit.round);
const rounded = (Math.round(this.value * mul) / mul).toString();
let withCommas = "";
let parts = rounded.split(".");
parts[0] = parts[0].replace(/\B(?=(\d{3})+(?!\d))/g, ",");
withCommas = parts.join(".");
return `${withCommas} ${this.unit.abbr}`;
}
}
class NanUnit{
constructor(){
this.value = NaN;
this.unit = units.meters; // Doesn't matter
}
toString(){
return "NaN";
}
}
class UnitSystem{
lengthUnit(meters, opts = {}){ throw new Error("Not implemented"); }
areaUnit(sqmeters, opts = {}){ throw new Error("Not implemented"); }
volumeUnit(cbmeters, opts = {}){ throw new Error("Not implemented"); }
getName(){ throw new Error("Not implemented"); }
area(sqmeters, opts = {}){
sqmeters = parseFloat(sqmeters);
if (isNaN(sqmeters)) return new NanUnit();
const unit = this.areaUnit(sqmeters, opts);
const val = unit.factor * sqmeters;
return new ValueUnit(val, unit);
}
length(meters, opts = {}){
meters = parseFloat(meters);
if (isNaN(meters)) return new NanUnit();
const unit = this.lengthUnit(meters, opts);
const val = unit.factor * meters;
return new ValueUnit(val, unit);
}
volume(cbmeters, opts = {}){
cbmeters = parseFloat(cbmeters);
if (isNaN(cbmeters)) return new NanUnit();
const unit = this.volumeUnit(cbmeters, opts);
const val = unit.factor * cbmeters;
return new ValueUnit(val, unit);
}
};
function toMetric(valueUnit, unit){
let value = NaN;
if (typeof valueUnit === "object" && unit === undefined){
value = valueUnit.value;
unit = valueUnit.unit;
}else{
value = parseFloat(valueUnit);
}
if (isNaN(value)) return new NanUnit();
const val = value / unit.factor;
if (unit.type === types.LENGTH){
return new ValueUnit(val, units.meters);
}else if (unit.type === types.AREA){
return new ValueUnit(val, units.sqmeters);
}else if (unit.type === types.VOLUME){
return new ValueUnit(val, units.cbmeters);
}else{
throw new Error(`Unrecognized unit type: ${unit.type}`);
}
}
class MetricSystem extends UnitSystem{
getName(){
return _("Metric");
}
lengthUnit(meters, opts = {}){
if (opts.fixedUnit) return units.meters;
if (meters < 1) return units.centimeters;
else if (meters >= 1000) return units.kilometers;
else return units.meters;
}
areaUnit(sqmeters, opts = {}){
if (opts.fixedUnit) return units.sqmeters;
if (sqmeters >= 10000 && sqmeters < 1000000) return units.hectares;
else if (sqmeters >= 1000000) return units.sqkilometers;
return units.sqmeters;
}
volumeUnit(cbmeters, opts = {}){
return units.cbmeters;
}
}
class ImperialSystem extends UnitSystem{
getName(){
return _("Imperial");
}
feet(){
return units.feet;
}
sqfeet(){
return units.sqfeet;
}
miles(){
return units.miles;
}
sqmiles(){
return units.sqmiles;
}
acres(){
return units.acres;
}
cbyards(){
return units.cbyards;
}
lengthUnit(meters, opts = {}){
if (opts.fixedUnit) return this.feet();
const feet = this.feet().factor * meters;
if (feet >= 5280) return this.miles();
else return this.feet();
}
areaUnit(sqmeters, opts = {}){
if (opts.fixedUnit) return this.sqfeet();
const sqfeet = this.sqfeet().factor * sqmeters;
if (sqfeet >= 43560 && sqfeet < 27878400) return this.acres();
else if (sqfeet >= 27878400) return this.sqmiles();
else return this.sqfeet();
}
volumeUnit(cbmeters, opts = {}){
return this.cbyards();
}
}
class ImperialUSSystem extends ImperialSystem{
getName(){
return _("Imperial (US)");
}
feet(){
return units.feet_us;
}
sqfeet(){
return units.sqfeet_us;
}
miles(){
return units.miles_us;
}
sqmiles(){
return units.sqmiles_us;
}
acres(){
return units.acres_us;
}
cbyards(){
return units.cbyards_us;
}
}
const systems = {
metric: new MetricSystem(),
imperial: new ImperialSystem(),
imperialUS: new ImperialUSSystem()
}
// Expose to allow every part of the app to access this information
function getUnitSystem(){
return localStorage.getItem("unit_system") || "metric";
}
function setUnitSystem(system){
let prevSystem = getUnitSystem();
localStorage.setItem("unit_system", system);
if (prevSystem !== system){
document.dispatchEvent(new CustomEvent("onUnitSystemChanged", { detail: system }));
}
}
function onUnitSystemChanged(callback){
document.addEventListener("onUnitSystemChanged", callback);
}
function offUnitSystemChanged(callback){
document.removeEventListener("onUnitSystemChanged", callback);
}
function unitSystem(){
return systems[getUnitSystem()];
}
export {
systems,
types,
toMetric,
unitSystem,
getUnitSystem,
setUnitSystem,
onUnitSystemChanged,
offUnitSystemChanged
};
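As a quick illustration of the conversion contract introduced by this new module, here is a self-contained sketch (it does not import the file above; `length` and `toMetric` mirror the real names, everything else is stubbed): each unit stores a `factor` that converts from meters, so converting back is a division by that same factor.

```javascript
// Minimal stand-ins for two unit descriptors from the module above.
const meters = { factor: 1, abbr: 'm', round: 3 };
const feet = { factor: 1 / 0.3048, abbr: 'ft', round: 4 };

// length(m, unit): scale a metric value into the target unit.
function length(m, unit) {
  return { value: unit.factor * m, unit };
}

// toMetric: invert the conversion by dividing by the same factor.
function toMetric(valueUnit) {
  return { value: valueUnit.value / valueUnit.unit.factor, unit: meters };
}

const d = length(100, feet);        // 100 m expressed in feet
console.log(d.value.toFixed(4));    // "328.0840"
console.log(toMetric(d).value);     // ≈ 100 (back to meters)
```

The real `ValueUnit.toString()` additionally rounds to the unit's `round` digits and adds thousands separators; the factor arithmetic is the same.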

View file

@@ -93,6 +93,20 @@ export default {
saveAs: function(text, filename){
var blob = new Blob([text], {type: "text/plain;charset=utf-8"});
FileSaver.saveAs(blob, filename);
},
// http://stackoverflow.com/questions/15900485/correct-way-to-convert-size-in-bytes-to-kb-mb-gb-in-javascript
bytesToSize: function(bytes, decimals = 2){
if (bytes == 0) return '0 bytes';
var k = 1000; // decimal prefixes; use 1024 for binary
var dm = decimals < 0 ? 0 : decimals;
var sizes = ['bytes', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB'];
var i = Math.floor(Math.log(bytes) / Math.log(k));
return parseFloat((bytes / Math.pow(k, i)).toFixed(dm)) + ' ' + sizes[i];
},
isMobile: function(){
return navigator.userAgent.match(/(iPad)|(iPhone)|(iPod)|(android)|(webOS)/i);
}
};
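For reference, a standalone sketch of the same bytes-to-human-size idea (decimal, base-1000 prefixes; this is an illustration, not the repo's exact helper):

```javascript
// Convert a byte count into a human-readable string using decimal prefixes.
function bytesToSize(bytes, decimals = 2) {
  if (bytes === 0) return '0 bytes';
  const k = 1000; // decimal; use 1024 for binary prefixes (KiB, MiB, ...)
  const sizes = ['bytes', 'KB', 'MB', 'GB', 'TB'];
  // Index of the largest prefix not exceeding the value.
  const i = Math.floor(Math.log(bytes) / Math.log(k));
  return parseFloat((bytes / Math.pow(k, i)).toFixed(decimals)) + ' ' + sizes[i];
}

console.log(bytesToSize(1500));    // "1.5 KB"
console.log(bytesToSize(1234567)); // "1.23 MB"
```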

View file

@@ -21,7 +21,7 @@ export default {
}).fail(error => {
console.warn(error);
if (errorCount++ < 10) setTimeout(() => check(), 2000);
else cb(JSON.stringify(error));
else cb(error.statusText);
});
};

View file

@@ -15,13 +15,6 @@ export default class ApiFactory{
// are more robust as we can detect more easily if
// things break
// TODO: we should consider refactoring this code
// to use functions instead of events. Originally
// we chose to use events because that would have
// decreased coupling, but since all API pubsub activity
// evolved to require a call to the PluginsAPI object, we might have
// added a bunch of complexity for no real advantage here.
const addEndpoint = (obj, eventName, preTrigger = () => {}) => {
const emitResponse = response => {
// Timeout needed for modules that have no dependencies
@@ -99,6 +92,26 @@ export default class ApiFactory{
obj = Object.assign(obj, api.helpers);
}
// Handle synchronous function on/off/export
(api.functions || []).forEach(func => {
let callbacks = [];
obj[func] = (...args) => {
for (let i = 0; i < callbacks.length; i++){
if ((callbacks[i])(...args)) return true;
}
return false;
};
const onName = "on" + func[0].toUpperCase() + func.slice(1);
const offName = "off" + func[0].toUpperCase() + func.slice(1);
obj[onName] = f => {
callbacks.push(f);
};
obj[offName] = f => {
callbacks = callbacks.filter(cb => cb !== f);
};
});
return obj;
}
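The `functions` mechanism added above is a first-match-wins dispatch: plugins register handlers, and calling the function tries each handler until one returns `true`. A minimal standalone sketch of that pattern (the helper name is illustrative, not part of the plugin API):

```javascript
// Attach a dispatchable function plus its on/off registration pair to obj.
function makeFunctionEndpoint(obj, func) {
  let callbacks = [];
  obj[func] = (...args) => {
    // First handler that returns true "consumes" the call.
    for (let i = 0; i < callbacks.length; i++) {
      if (callbacks[i](...args)) return true;
    }
    return false;
  };
  const cap = func[0].toUpperCase() + func.slice(1);
  obj["on" + cap] = f => callbacks.push(f);
  obj["off" + cap] = f => { callbacks = callbacks.filter(cb => cb !== f); };
}

const api = {};
makeFunctionEndpoint(api, "handleClick");
api.onHandleClick(evt => evt === "consume"); // handler returns true to stop
console.log(api.handleClick("consume"));     // true (handled)
console.log(api.handleClick("ignore"));      // false (no handler claimed it)
```

This is why Map's `handleClick` endpoint below can let one plugin claim a click while others stay untouched.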

View file

@@ -19,7 +19,11 @@ export default {
endpoints: [
["willAddControls", leafletPreCheck],
["didAddControls", layersControlPreCheck],
["addActionButton", leafletPreCheck],
["addActionButton", leafletPreCheck]
],
functions: [
"handleClick"
]
};

View file

@@ -12,7 +12,7 @@ if (!Object.values) {
}
// These options do not apply to WebODM and can cause confusion
const OPTS_BLACKLIST = ['build-overviews', 'orthophoto-no-tiled', 'orthophoto-compression', 'orthophoto-png', 'orthophoto-kmz', 'pc-copc', 'pc-las', 'pc-ply', 'pc-csv', 'pc-ept', 'cog'];
const OPTS_BLACKLIST = ['build-overviews', 'orthophoto-no-tiled', 'orthophoto-compression', 'orthophoto-png', 'orthophoto-kmz', 'pc-copc', 'pc-las', 'pc-ply', 'pc-csv', 'pc-ept', 'cog', 'gltf'];
class EditPresetDialog extends React.Component {
static defaultProps = {

View file

@@ -85,6 +85,18 @@ class EditTaskForm extends React.Component {
this.state.selectedPreset;
}
checkFilesCount(filesCount){
if (!this.state.selectedNode) return true;
if (filesCount === 0) return true;
if (this.state.selectedNode.max_images === null) return true;
return this.state.selectedNode.max_images >= filesCount;
}
selectedNodeMaxImages(){
if (!this.state.selectedNode) return null;
return this.state.selectedNode.max_images;
}
notifyFormLoaded(){
if (this.props.onFormLoaded && this.formReady()) this.props.onFormLoaded();
}
@@ -115,8 +127,6 @@ class EditTaskForm extends React.Component {
return;
}
let now = new Date();
let nodes = json.map(node => {
return {
id: node.id,
@ -124,6 +134,7 @@ class EditTaskForm extends React.Component {
label: `${node.label} (queue: ${node.queue_count})`,
options: node.available_options,
queue_count: node.queue_count,
max_images: node.max_images,
enabled: node.online,
url: `http://${node.hostname}:${node.port}`
};

View file

@@ -53,7 +53,7 @@ class EditTaskPanel extends React.Component {
this.setState({saving: false});
this.props.onSave(json);
}).fail(() => {
this.setState({saving: false, error: _("Could not update task information. Plese try again.")});
this.setState({saving: false, error: _("Could not update task information. Please try again.")});
});
}

View file

@@ -3,20 +3,29 @@ import PropTypes from 'prop-types';
import '../css/Histogram.scss';
import d3 from 'd3';
import { _ } from '../classes/gettext';
import { onUnitSystemChanged, offUnitSystemChanged } from '../classes/Units';
export default class Histogram extends React.Component {
static defaultProps = {
width: 280,
colorMap: null,
unitForward: value => value,
unitBackward: value => value,
onUpdate: null,
loading: false,
min: null,
max: null
};
static propTypes = {
statistics: PropTypes.object.isRequired,
colorMap: PropTypes.array,
unitForward: PropTypes.func,
unitBackward: PropTypes.func,
width: PropTypes.number,
onUpdate: PropTypes.func,
loading: PropTypes.bool
loading: PropTypes.bool,
min: PropTypes.number,
max: PropTypes.number
}
constructor(props){
@@ -53,11 +62,19 @@ export default class Histogram extends React.Component {
this.rangeX = [minX, maxX];
this.rangeY = [minY, maxY];
let min = minX;
let max = maxX;
if (this.props.min !== null && this.props.max !== null){
min = this.props.min;
max = this.props.max;
}
const st = {
min: minX.toFixed(3),
max: maxX.toFixed(3),
minInput: minX.toFixed(3),
maxInput: maxX.toFixed(3)
min: min,
max: max,
minInput: this.props.unitForward(min).toFixed(3),
maxInput: this.props.unitForward(max).toFixed(3)
};
if (!this.state){
@@ -101,11 +118,14 @@ export default class Histogram extends React.Component {
let x = d3.scale.linear()
.domain(this.rangeX)
.range([0, width]);
let tickFormat = x => {
return this.props.unitForward(x).toFixed(0);
};
svg.append("g")
.attr("class", "x axis theme-fill-primary")
.attr("transform", "translate(0," + (height - 5) + ")")
.call(d3.svg.axis().scale(x).tickValues(this.rangeX).orient("bottom"));
.call(d3.svg.axis().scale(x).tickValues(this.rangeX).tickFormat(tickFormat).orient("bottom"));
// add the y Axis
let y = d3.scale.linear()
@@ -183,7 +203,7 @@ export default class Histogram extends React.Component {
maxLine.setAttribute('x2', newX);
if (prevX !== newX){
self.setState({max: (self.rangeX[0] + ((self.rangeX[1] - self.rangeX[0]) / width) * newX).toFixed(3)});
self.setState({max: (self.rangeX[0] + ((self.rangeX[1] - self.rangeX[0]) / width) * newX)});
}
}
};
@@ -201,7 +221,7 @@ export default class Histogram extends React.Component {
minLine.setAttribute('x2', newX);
if (prevX !== newX){
self.setState({min: (self.rangeX[0] + ((self.rangeX[1] - self.rangeX[0]) / width) * newX).toFixed(3)});
self.setState({min: (self.rangeX[0] + ((self.rangeX[1] - self.rangeX[0]) / width) * newX)});
}
}
};
@@ -234,11 +254,28 @@ export default class Histogram extends React.Component {
componentDidMount(){
this.redraw();
onUnitSystemChanged(this.handleUnitSystemChanged);
}
componentWillUnmount(){
offUnitSystemChanged(this.handleUnitSystemChanged);
}
handleUnitSystemChanged = e => {
this.redraw();
this.setState({
minInput: this.props.unitForward(this.state.min).toFixed(3),
maxInput: this.props.unitForward(this.state.max).toFixed(3)
});
}
componentDidUpdate(prevProps, prevState){
if (prevState.min !== this.state.min) this.state.minInput = this.state.min;
if (prevState.max !== this.state.max) this.state.maxInput = this.state.max;
if (prevState.min !== this.state.min || prevState.max !== this.state.max){
this.setState({
minInput: this.props.unitForward(this.state.min).toFixed(3),
maxInput: this.props.unitForward(this.state.max).toFixed(3)
});
}
if (prevState.min !== this.state.min ||
prevState.max !== this.state.max ||
@@ -277,28 +314,44 @@ export default class Histogram extends React.Component {
handleChangeMax = (e) => {
this.setState({maxInput: e.target.value});
const val = parseFloat(e.target.value);
if (val >= this.state.min && val <= this.rangeX[1]){
this.setState({max: val});
}
}
handleMaxBlur = (e) => {
let val = parseFloat(e.target.value);
if (!isNaN(val)){
val = this.props.unitBackward(val);
val = Math.max(this.state.min, Math.min(this.rangeX[1], val));
this.setState({max: val, maxInput: val.toFixed(3)});
}
}
handleMaxKeyDown = (e) => {
if (e.key === 'Enter') this.handleMaxBlur(e);
}
handleChangeMin = (e) => {
this.setState({minInput: e.target.value});
const val = parseFloat(e.target.value);
if (val <= this.state.max && val >= this.rangeX[0]){
this.setState({min: val});
}
}
handleMinBlur = (e) => {
let val = parseFloat(e.target.value);
if (!isNaN(val)){
val = this.props.unitBackward(val);
val = Math.max(this.rangeX[0], Math.min(this.state.max, val));
this.setState({min: val, minInput: val.toFixed(3)});
}
}
handleMinKeyDown = (e) => {
if (e.key === 'Enter') this.handleMinBlur(e);
}
render(){
return (<div className={"histogram " + (this.props.loading ? "disabled" : "")}>
<div ref={(domNode) => { this.hgContainer = domNode; }}>
</div>
<label>{_("Min:")}</label> <input onChange={this.handleChangeMin} type="number" className="form-control min-max" size={5} value={this.state.minInput} />
<label>{_("Max:")}</label> <input onChange={this.handleChangeMax} type="number" className="form-control min-max" size={5} value={this.state.maxInput} />
<label>{_("Min:")}</label> <input onKeyDown={this.handleMinKeyDown} onBlur={this.handleMinBlur} onChange={this.handleChangeMin} type="number" className="form-control min-max" size={5} value={this.state.minInput} />
<label>{_("Max:")}</label> <input onKeyDown={this.handleMaxKeyDown} onBlur={this.handleMaxBlur} onChange={this.handleChangeMax} type="number" className="form-control min-max" size={5} value={this.state.maxInput} />
</div>);
}
}
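The new `unitForward`/`unitBackward` props form a display↔storage round trip: the histogram keeps metric values in state and converts only at the input boundary. A sketch of that contract (the feet factor matches the one used elsewhere in this changeset; the prop wiring itself is in Map.jsx below):

```javascript
// unitForward: metric (storage) -> display units; unitBackward: the inverse.
const unitForward = m => m / 0.3048;    // meters shown as feet
const unitBackward = ft => ft * 0.3048; // typed feet stored as meters

// What handleMinBlur/handleMaxBlur do conceptually: parse the typed display
// value, convert it back to metric for state, then re-render via unitForward.
const typed = "328.084";                        // user types a value in feet
const metric = unitBackward(parseFloat(typed)); // ≈ 100 m kept in state
console.log(unitForward(metric).toFixed(3));    // "328.084" shown again
```

Keeping state metric means a unit-system switch only has to re-run `unitForward` on the inputs, which is exactly what `handleUnitSystemChanged` above does.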

View file

@@ -53,7 +53,8 @@ class ImportTaskPanel extends React.Component {
clickable: this.uploadButton,
chunkSize: 2147483647,
timeout: 2147483647,
chunking: true,
chunkSize: 16000000, // 16MB
headers: {
[csrf.header]: csrf.token
}
@@ -69,6 +70,7 @@ class ImportTaskPanel extends React.Component {
this.setState({uploading: false, progress: 0, totalBytes: 0, totalBytesSent: 0});
})
.on("uploadprogress", (file, progress, bytesSent) => {
if (progress == 100) return; // Workaround for chunked upload progress bar jumping around
this.setState({
progress,
totalBytes: file.size,

View file

@@ -65,7 +65,6 @@ export default class LayersControlLayer extends React.Component {
exportLoading: false,
error: ""
};
this.rescale = params.rescale || "";
}
@@ -134,7 +133,7 @@ export default class LayersControlLayer extends React.Component {
// Check if bands need to be switched
const algo = this.getAlgorithm(e.target.value);
if (algo && algo['filters'].indexOf(bands) === -1) bands = algo['filters'][0]; // Pick first
if (algo && algo['filters'].indexOf(bands) === -1 && bands !== "auto") bands = algo['filters'][0]; // Pick first
this.setState({formula: e.target.value, bands});
}
@@ -170,7 +169,14 @@ export default class LayersControlLayer extends React.Component {
// Update rescale values
const { statistics } = this.tmeta;
if (statistics && statistics["1"]){
this.rescale = `${statistics["1"]["min"]},${statistics["1"]["max"]}`;
let min = Infinity;
let max = -Infinity;
for (let b in statistics){
min = Math.min(min, statistics[b]["percentiles"][0]);
max = Math.max(max, statistics[b]["percentiles"][1]);
}
this.rescale = `${min},${max}`;
}
this.updateLayer();
@@ -262,7 +268,7 @@ export default class LayersControlLayer extends React.Component {
render(){
const { colorMap, bands, hillshade, formula, histogramLoading, exportLoading } = this.state;
const { meta, tmeta } = this;
const { color_maps, algorithms } = tmeta;
const { color_maps, algorithms, auto_bands } = tmeta;
const algo = this.getAlgorithm(formula);
let cmapValues = null;
@@ -270,6 +276,16 @@ export default class LayersControlLayer extends React.Component {
cmapValues = (color_maps.find(c => c.key === colorMap) || {}).color_map;
}
let hmin = null;
let hmax = null;
if (this.rescale){
let parts = decodeURIComponent(this.rescale).split(",");
if (parts.length === 2 && parts[0] && parts[1]){
hmin = parseFloat(parts[0]);
hmax = parseFloat(parts[1]);
}
}
return (<div className="layers-control-layer">
{!this.props.overlay ? <ExpandButton bind={[this, 'expanded']} /> : <div className="overlayIcon"><i className={meta.icon || "fa fa-vector-square fa-fw"}></i></div>}<Checkbox bind={[this, 'visible']}/>
<a title={meta.name} className="layer-label" href="javascript:void(0);" onClick={this.handleLayerClick}>{meta.name}</a>
@@ -278,8 +294,12 @@ export default class LayersControlLayer extends React.Component {
<div className="layer-expanded">
<Histogram width={274}
loading={histogramLoading}
statistics={tmeta.statistics}
statistics={tmeta.statistics}
unitForward={meta.unitForward}
unitBackward={meta.unitBackward}
colorMap={cmapValues}
min={hmin}
max={hmax}
onUpdate={this.handleHistogramUpdate} />
<ErrorMessage bind={[this, "error"]} />
@@ -298,13 +318,17 @@ export default class LayersControlLayer extends React.Component {
{bands !== "" && algo ?
<div className="row form-group form-inline">
<label className="col-sm-3 control-label">{_("Filter:")}</label>
<label className="col-sm-3 control-label">{_("Bands:")}</label>
<div className="col-sm-9 ">
{histogramLoading ?
<i className="fa fa-circle-notch fa-spin fa-fw" /> :
<select className="form-control" value={bands} onChange={this.handleSelectBands}>
[<select key="sel" className="form-control" value={bands} onChange={this.handleSelectBands} title={auto_bands.filter !== "" && bands == "auto" ? auto_bands.filter : ""}>
<option key="auto" value="auto">{_("Automatic")}</option>
{algo.filters.map(f => <option key={f} value={f}>{f}</option>)}
</select>}
</select>,
bands == "auto" && !auto_bands.match ?
<i key="ico" style={{marginLeft: '4px'}} title={interpolate(_("Not every band for %(name)s could be automatically identified."), {name: algo.id}) + "\n" + _("Your sensor might not have the proper bands for using this algorithm.")} className="fa fa-exclamation-circle info-button"></i>
: ""]}
</div>
</div> : ""}

View file

@@ -4,8 +4,6 @@ import '../css/Map.scss';
import 'leaflet/dist/leaflet.css';
import Leaflet from 'leaflet';
import async from 'async';
import '../vendor/leaflet/L.Control.MousePosition.css';
import '../vendor/leaflet/L.Control.MousePosition';
import '../vendor/leaflet/Leaflet.Autolayers/css/leaflet.auto-layers.css';
import '../vendor/leaflet/Leaflet.Autolayers/leaflet-autolayers';
// import '../vendor/leaflet/L.TileLayer.NoGap';
@@ -26,8 +24,11 @@ import LayersControl from './LayersControl';
import update from 'immutability-helper';
import Utils from '../classes/Utils';
import '../vendor/leaflet/Leaflet.Ajax';
import '../vendor/leaflet/Leaflet.Awesome-markers';
import 'rbush';
import '../vendor/leaflet/leaflet-markers-canvas';
import { _ } from '../classes/gettext';
import UnitSelector from './UnitSelector';
import { unitSystem, toMetric } from '../classes/Units';
class Map extends React.Component {
static defaultProps = {
@@ -93,6 +94,16 @@ class Map extends React.Component {
return "";
}
hasBands = (bands, orthophoto_bands) => {
if (!orthophoto_bands) return false;
for (let i = 0; i < bands.length; i++){
if (orthophoto_bands.find(b => b.description !== null && b.description.toLowerCase() === bands[i].toLowerCase()) === undefined) return false;
}
return true;
}
loadImageryLayers(forceAddLayers = false){
// Cancel previous requests
if (this.tileJsonRequests) {
@@ -124,9 +135,30 @@ class Map extends React.Component {
const { url, meta, type } = tile;
let metaUrl = url + "metadata";
let unitForward = value => value;
let unitBackward = value => value;
if (type == "plant") metaUrl += "?formula=NDVI&bands=RGN&color_map=rdylgn";
if (type == "dsm" || type == "dtm") metaUrl += "?hillshade=6&color_map=viridis";
if (type == "plant"){
if (meta.task && meta.task.orthophoto_bands && meta.task.orthophoto_bands.length === 2){
// Single band, probably a thermal dataset; in any case we can't render NDVI
// because it requires 3 bands
metaUrl += "?formula=Celsius&bands=L&color_map=magma";
}else if (meta.task && meta.task.orthophoto_bands){
let formula = this.hasBands(["red", "green", "nir"], meta.task.orthophoto_bands) ? "NDVI" : "VARI";
metaUrl += `?formula=${formula}&bands=auto&color_map=rdylgn`;
}else{
// This should never happen?
metaUrl += "?formula=NDVI&bands=RGN&color_map=rdylgn";
}
}else if (type == "dsm" || type == "dtm"){
metaUrl += "?hillshade=6&color_map=viridis";
unitForward = value => {
return unitSystem().length(value, { fixedUnit: true }).value;
};
unitBackward = value => {
return toMetric(value).value;
};
}
this.tileJsonRequests.push($.getJSON(metaUrl)
.done(mres => {
@@ -145,7 +177,22 @@ class Map extends React.Component {
const params = Utils.queryParams({search: tileUrl.slice(tileUrl.indexOf("?"))});
if (statistics["1"]){
// Add rescale
params["rescale"] = encodeURIComponent(`${statistics["1"]["min"]},${statistics["1"]["max"]}`);
let min = Infinity;
let max = -Infinity;
if (type === 'plant'){
// percentile
for (let b in statistics){
min = Math.min(min, statistics[b]["percentiles"][0]);
max = Math.max(max, statistics[b]["percentiles"][1]);
}
}else{
// min/max
for (let b in statistics){
min = Math.min(min, statistics[b]["min"]);
max = Math.max(max, statistics[b]["max"]);
}
}
params["rescale"] = encodeURIComponent(`${min},${max}`);
}else{
console.warn("Cannot find min/max statistics for dataset, setting to -1,1");
params["rescale"] = encodeURIComponent("-1,1");
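The rescale value computed above is the envelope across all bands, folding each band's statistics into a running min/max (a bare `Math.min(x)` would just return `x`, which is why the accumulator must be passed in). A standalone sketch of that aggregation (the statistics shape is assumed from the tiler metadata used here):

```javascript
// Aggregate a global "min,max" rescale string across per-band statistics.
function rescaleRange(statistics, usePercentiles) {
  let min = Infinity, max = -Infinity;
  for (let b in statistics) {
    const s = statistics[b];
    // Fold each band into the running envelope.
    min = Math.min(min, usePercentiles ? s.percentiles[0] : s.min);
    max = Math.max(max, usePercentiles ? s.percentiles[1] : s.max);
  }
  return `${min},${max}`;
}

const stats = {
  "1": { min: -2, max: 5, percentiles: [-1, 4] },
  "2": { min: 0, max: 9, percentiles: [1, 8] }
};
console.log(rescaleRange(stats, false)); // "-2,9"  (absolute min/max)
console.log(rescaleRange(stats, true));  // "-1,8"  (percentile clipping)
```

Percentile clipping (used for `plant` layers) trims outliers so the color map isn't stretched by a few extreme pixels; DSM/DTM layers keep the absolute range.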
@@ -171,6 +218,8 @@ class Map extends React.Component {
// Associate metadata with this layer
meta.name = name + ` (${this.typeToHuman(type)})`;
meta.metaUrl = metaUrl;
meta.unitForward = unitForward;
meta.unitBackward = unitBackward;
layer[Symbol.for("meta")] = meta;
layer[Symbol.for("tile-meta")] = mres;
@@ -228,41 +277,45 @@ class Map extends React.Component {
// Add camera shots layer if available
if (meta.task && meta.task.camera_shots && !this.addedCameraShots){
const shotsLayer = new L.GeoJSON.AJAX(meta.task.camera_shots, {
style: function (feature) {
return {
opacity: 1,
fillOpacity: 0.7,
color: "#000000"
}
},
pointToLayer: function (feature, latlng) {
return new L.CircleMarker(latlng, {
color: '#3498db',
fillColor: '#3498db',
fillOpacity: 0.9,
radius: 10,
weight: 1
});
},
onEachFeature: function (feature, layer) {
if (feature.properties && feature.properties.filename) {
let root = null;
const lazyrender = () => {
if (!root) root = document.createElement("div");
ReactDOM.render(<ImagePopup task={meta.task} feature={feature}/>, root);
return root;
}
layer.bindPopup(L.popup(
{
lazyrender,
maxHeight: 450,
minWidth: 320
}));
}
}
var camIcon = L.icon({
iconUrl: "/static/app/js/icons/marker-camera.png",
iconSize: [41, 46],
iconAnchor: [17, 46],
});
const shotsLayer = new L.MarkersCanvas();
$.getJSON(meta.task.camera_shots)
.done((shots) => {
if (shots.type === 'FeatureCollection'){
let markers = [];
shots.features.forEach(s => {
let marker = L.marker(
[s.geometry.coordinates[1], s.geometry.coordinates[0]],
{ icon: camIcon }
);
markers.push(marker);
if (s.properties && s.properties.filename){
let root = null;
const lazyrender = () => {
if (!root) root = document.createElement("div");
ReactDOM.render(<ImagePopup task={meta.task} feature={s}/>, root);
return root;
}
marker.bindPopup(L.popup(
{
lazyrender,
maxHeight: 450,
minWidth: 320
}));
}
});
shotsLayer.addMarkers(markers, this.map);
}
});
shotsLayer[Symbol.for("meta")] = {name: name + " " + _("(Cameras)"), icon: "fa fa-camera fa-fw"};
this.setState(update(this.state, {
@@ -274,44 +327,45 @@ class Map extends React.Component {
// Add ground control points layer if available
if (meta.task && meta.task.ground_control_points && !this.addedGroundControlPoints){
const gcpMarker = L.AwesomeMarkers.icon({
icon: 'dot-circle',
markerColor: 'blue',
prefix: 'fa'
const gcpIcon = L.icon({
iconUrl: "/static/app/js/icons/marker-gcp.png",
iconSize: [41, 46],
iconAnchor: [17, 46],
});
const gcpLayer = new L.MarkersCanvas();
$.getJSON(meta.task.ground_control_points)
.done((gcps) => {
if (gcps.type === 'FeatureCollection'){
let markers = [];
const gcpLayer = new L.GeoJSON.AJAX(meta.task.ground_control_points, {
style: function (feature) {
return {
opacity: 1,
fillOpacity: 0.7,
color: "#000000"
}
},
pointToLayer: function (feature, latlng) {
return new L.marker(latlng, {
icon: gcpMarker
});
},
onEachFeature: function (feature, layer) {
if (feature.properties && feature.properties.observations) {
// TODO!
let root = null;
const lazyrender = () => {
gcps.features.forEach(gcp => {
let marker = L.marker(
[gcp.geometry.coordinates[1], gcp.geometry.coordinates[0]],
{ icon: gcpIcon }
);
markers.push(marker);
if (gcp.properties && gcp.properties.observations){
let root = null;
const lazyrender = () => {
if (!root) root = document.createElement("div");
ReactDOM.render(<GCPPopup task={meta.task} feature={feature}/>, root);
ReactDOM.render(<GCPPopup task={meta.task} feature={gcp}/>, root);
return root;
}
}
layer.bindPopup(L.popup(
{
lazyrender,
maxHeight: 450,
minWidth: 320
}));
marker.bindPopup(L.popup(
{
lazyrender,
maxHeight: 450,
minWidth: 320
}));
}
});
gcpLayer.addMarkers(markers, this.map);
}
});
});
gcpLayer[Symbol.for("meta")] = {name: name + " " + _("(GCPs)"), icon: "far fa-dot-circle fa-fw"};
this.setState(update(this.state, {
@@ -341,7 +395,7 @@ class Map extends React.Component {
this.map = Leaflet.map(this.container, {
scrollWheelZoom: true,
positionControl: true,
positionControl: false,
zoomControl: false,
minZoom: 0,
maxZoom: 24
@@ -353,12 +407,23 @@
PluginsAPI.Map.triggerWillAddControls({
map: this.map,
tiles
tiles,
mapView: this
});
let scaleControl = Leaflet.control.scale({
maxWidth: 250,
}).addTo(this.map);
const UnitsCtrl = Leaflet.Control.extend({
options: {
position: 'bottomleft'
},
onAdd: function () {
this.container = Leaflet.DomUtil.create('div', 'leaflet-control-units-selection leaflet-control');
Leaflet.DomEvent.disableClickPropagation(this.container);
ReactDOM.render(<UnitSelector />, this.container);
return this.container;
}
});
new UnitsCtrl().addTo(this.map);
//add zoom control with your options
let zoomControl = Leaflet.control.zoom({
@@ -384,14 +449,14 @@
const customLayer = L.layerGroup();
customLayer.on("add", a => {
const defaultCustomBm = window.localStorage.getItem('lastCustomBasemap') || 'https://a.tile.openstreetmap.org/{z}/{x}/{y}.png';
const defaultCustomBm = window.localStorage.getItem('lastCustomBasemap') || 'https://tile.openstreetmap.org/{z}/{x}/{y}.png';
let url = window.prompt([_('Enter a tile URL template. Valid coordinates are:'),
_('{z}, {x}, {y} for Z/X/Y tile scheme'),
_('{-y} for flipped TMS-style Y coordinates'),
'',
_('Example:'),
'https://a.tile.openstreetmap.org/{z}/{x}/{y}.png'].join("\n"), defaultCustomBm);
'https://tile.openstreetmap.org/{z}/{x}/{y}.png'].join("\n"), defaultCustomBm);
if (url){
customLayer.clearLayers();
@@ -467,7 +532,11 @@ _('Example:'),
});
new AddOverlayCtrl().addTo(this.map);
this.map.fitWorld();
this.map.fitBounds([
[13.772919746115805,
45.664640939831735],
[13.772825784981254,
45.664591558975154]]);
this.map.attributionControl.setPrefix("");
this.setState({showLoading: true});
@@ -476,6 +545,8 @@ _('Example:'),
this.map.fitBounds(this.mapBounds);
this.map.on('click', e => {
if (PluginsAPI.Map.handleClick(e)) return;
// Find first tile layer at the selected coordinates
for (let layer of this.state.imageryLayers){
if (layer._map && layer.options.bounds.contains(e.latlng)){
@@ -529,7 +600,6 @@ _('Example:'),
tiles: tiles,
controls:{
autolayers: this.autolayers,
scale: scaleControl,
zoom: zoomControl
}
});
@@ -579,7 +649,7 @@ _('Example:'),
<div style={{height: "100%"}} className="map">
<ErrorMessage bind={[this, 'error']} />
<div className="opacity-slider theme-secondary hidden-xs">
{_("Opacity:")} <input type="range" step="1" value={this.state.opacity} onChange={this.updateOpacity} />
<div className="opacity-slider-label">{_("Opacity:")}</div> <input type="range" step="1" value={this.state.opacity} onChange={this.updateOpacity} />
</div>
<Standby
@@ -600,6 +670,7 @@ _('Example:'),
ref={(ref) => { this.shareButton = ref; }}
task={this.state.singleTask}
linksTarget="map"
queryParams={{t: this.props.mapType}}
/>
: ""}
<SwitchModeButton


@@ -124,11 +124,24 @@ class NewTaskPanel extends React.Component {
}
render() {
let filesCountOk = true;
if (this.taskForm && !this.taskForm.checkFilesCount(this.props.filesCount)) filesCountOk = false;
return (
<div className="new-task-panel theme-background-highlight">
<div className="form-horizontal">
<div className={this.state.inReview ? "disabled" : ""}>
<p>{interpolate(_("%(count)s files selected. Please check these additional options:"), { count: this.props.filesCount})}</p>
{!filesCountOk ?
<div className="alert alert-warning">
{interpolate(_("Number of files selected exceeds the maximum of %(count)s allowed on this processing node."), { count: this.taskForm.selectedNodeMaxImages() })}
<button onClick={this.props.onCancel} type="button" className="btn btn-xs btn-primary redo">
<span><i className="glyphicon glyphicon-remove-circle"></i> {_("Cancel")}</span>
</button>
</div>
: ""}
<EditTaskForm
selectedNode={Storage.getItem("last_processing_node") || "auto"}
onFormLoaded={this.handleFormTaskLoaded}
@@ -186,7 +199,7 @@ class NewTaskPanel extends React.Component {
{this.state.loading ?
<button type="submit" className="btn btn-primary" disabled={true}><i className="fa fa-circle-notch fa-spin fa-fw"></i>{_("Loading…")}</button>
:
<button type="submit" className="btn btn-primary" onClick={this.save} disabled={this.props.filesCount <= 1}><i className="glyphicon glyphicon-saved"></i> {!this.state.inReview ? _("Review") : _("Start Processing")}</button>
<button type="submit" className="btn btn-primary" onClick={this.save} disabled={this.props.filesCount < 1 || !filesCountOk}><i className="glyphicon glyphicon-saved"></i> {!this.state.inReview ? _("Review") : _("Start Processing")}</button>
}
</div>
</div>


@@ -146,8 +146,16 @@ class Paginator extends React.Component {
}
if (itemsPerPage && totalItems > itemsPerPage){
const numPages = Math.ceil(totalItems / itemsPerPage),
pages = [...Array(numPages).keys()]; // [0, 1, 2, ...numPages]
const numPages = Math.ceil(totalItems / itemsPerPage);
const MAX_PAGE_BUTTONS = 7;
let rangeStart = Math.max(1, currentPage - Math.floor(MAX_PAGE_BUTTONS / 2));
let rangeEnd = rangeStart + Math.min(numPages, MAX_PAGE_BUTTONS);
if (rangeEnd > numPages){
rangeStart -= rangeEnd - numPages - 1;
rangeEnd -= rangeEnd - numPages - 1
}
let pages = [...Array(rangeEnd - rangeStart).keys()].map(i => i + rangeStart - 1);
paginator = (
<ul className="pagination pagination-sm">
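The new page-window computation above can be sketched as a standalone helper (a hypothetical function for illustration; the component computes this inline, taking a 1-based `currentPage` and producing 0-based page indices):

```javascript
// Hypothetical standalone version of the Paginator windowing logic:
// show at most maxButtons page buttons (0-based indices), centered on
// the 1-based currentPage and clamped to the last page.
function pageWindow(currentPage, numPages, maxButtons = 7) {
  let rangeStart = Math.max(1, currentPage - Math.floor(maxButtons / 2));
  let rangeEnd = rangeStart + Math.min(numPages, maxButtons);
  if (rangeEnd > numPages) {
    const shift = rangeEnd - numPages - 1;
    rangeStart -= shift;
    rangeEnd -= shift;
  }
  return [...Array(rangeEnd - rangeStart).keys()].map(i => i + rangeStart - 1);
}
```

With 100 pages this yields indices 0–6 on page 1 and 93–99 on page 100, so the button row never exceeds seven entries.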


@@ -106,6 +106,13 @@ class ProcessingNodeOption extends React.Component {
}
}
handleHelp = e => {
e.preventDefault();
if (window.__taskOptionsDocsLink){
window.open(window.__taskOptionsDocsLink + "#" + encodeURIComponent(this.props.name), "task-options")
}
}
render() {
let inputControl = "";
let warningMsg = "";
@@ -152,7 +159,7 @@
let loadFileControl = "";
if (this.supportsFileAPI() && this.props.domain === 'json'){
loadFileControl = ([
<button key="btn" type="file" className="btn glyphicon glyphicon-import btn-primary" data-toggle="tooltip" data-placement="left" title={_("Click to import a .JSON file")} onClick={() => this.loadFile()}></button>,
<button key="btn" type="file" className="btn glyphicon glyphicon-import btn-primary" data-toggle="tooltip" data-placement="left" title={_("Click to import a JSON file")} onClick={() => this.loadFile()}></button>,
<input key="file-ctrl" className="file-control" type="file"
accept="text/plain,application/json,application/geo+json,.geojson"
onChange={this.handleFileSelect}
@@ -168,7 +175,7 @@
return (
<div className="processing-node-option form-inline form-group form-horizontal" ref={this.setTooltips}>
<label>{this.props.name} {(!this.isEnumType() && this.props.domain ? `(${this.props.domain})` : "")} <i data-toggle="tooltip" data-placement="bottom" title={this.props.help} onClick={e => e.preventDefault()} className="fa fa-info-circle info-button"></i></label><br/>
<label>{this.props.name} {(!this.isEnumType() && this.props.domain ? `(${this.props.domain})` : "")} <i data-toggle="tooltip" data-placement="bottom" title={this.props.help} onClick={this.handleHelp} className="fa fa-info-circle info-button help-button"></i></label><br/>
{inputControl}
{loadFileControl}


@@ -60,6 +60,7 @@ class ProjectListItem extends React.Component {
this.toggleTaskList = this.toggleTaskList.bind(this);
this.closeUploadError = this.closeUploadError.bind(this);
this.cancelUpload = this.cancelUpload.bind(this);
this.handleCancel = this.handleCancel.bind(this);
this.handleTaskSaved = this.handleTaskSaved.bind(this);
this.viewMap = this.viewMap.bind(this);
this.handleDelete = this.handleDelete.bind(this);
@@ -143,6 +144,7 @@
autoProcessQueue: false,
createImageThumbnails: false,
clickable: this.uploadButton,
maxFilesize: 131072, // 128G
chunkSize: 2147483647,
timeout: 2147483647,
@@ -191,7 +193,7 @@
.on("complete", (file) => {
// Retry
const retry = () => {
const MAX_RETRIES = 10;
const MAX_RETRIES = 20;
if (file.retries < MAX_RETRIES){
// Update progress
@@ -207,7 +209,9 @@
file.deltaBytesSent = 0;
file.trackedBytesSent = 0;
file.retries++;
this.dz.processQueue();
setTimeout(() => {
this.dz.processQueue();
}, 5000 * file.retries);
}else{
throw new Error(interpolate(_('Cannot upload %(filename)s, exceeded max retries (%(max_retries)s)'), {filename: file.name, max_retries: MAX_RETRIES}));
}
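The retry path above now waits `5000 * file.retries` ms before reprocessing the queue, i.e. a linear backoff with a cap of 20 attempts. As a standalone sketch (hypothetical helper; the component schedules this inline via Dropzone's queue):

```javascript
// Linear backoff for upload retries, mirroring the logic above:
// retries = attempts already made; returns null once MAX_RETRIES is reached.
const MAX_RETRIES = 20;
const BASE_DELAY_MS = 5000;

function nextRetryDelay(retries) {
  if (retries >= MAX_RETRIES) return null; // give up
  return BASE_DELAY_MS * (retries + 1);    // 5s, 10s, 15s, ...
}
```

Spacing retries apart like this gives a flaky network or a busy server time to recover instead of hammering it with immediate re-uploads.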
@@ -215,11 +219,19 @@
try{
if (file.status === "error"){
if ((file.size / 1024 / 1024) > this.dz.options.maxFilesize) {
// Delete from upload queue
this.setUploadState({
totalCount: this.state.upload.totalCount - 1,
totalBytes: this.state.upload.totalBytes - file.size
});
throw new Error(interpolate(_('Cannot upload %(filename)s, file is too large! Default MaxFileSize is %(maxFileSize)s MB!'), { filename: file.name, maxFileSize: this.dz.options.maxFilesize }));
}
retry();
}else{
// Check response
let response = JSON.parse(file.xhr.response);
if (response.success){
if (response.success && response.uploaded && response.uploaded[file.name] === file.size){
// Update progress by removing the tracked progress and
// use the file size as the true number of bytes
let totalBytesSent = this.state.upload.totalBytesSent + file.size;
@@ -239,13 +251,19 @@
}
}
}catch(e){
this.setUploadState({error: `${e.message}`, uploading: false});
this.dz.cancelUpload();
if (this.manuallyCanceled){
// Manually canceled, ignore error
this.setUploadState({uploading: false});
}else{
this.setUploadState({error: `${e.message}`, uploading: false});
}
if (this.dz.files.length) this.dz.cancelUpload();
}
})
.on("queuecomplete", () => {
const remainingFilesCount = this.state.upload.totalCount - this.state.upload.uploadedCount;
if (remainingFilesCount === 0){
if (remainingFilesCount === 0 && this.state.upload.uploadedCount > 0){
// All files have uploaded!
this.setUploadState({uploading: false});
@@ -266,7 +284,6 @@
}else if (this.dz.getQueuedFiles() === 0){
// Done but didn't upload all?
this.setUploadState({
totalCount: this.state.upload.totalCount - remainingFilesCount,
uploading: false,
error: interpolate(_('%(count)s files cannot be uploaded. As a reminder, only images (.jpg, .tif, .png) and GCP files (.txt) can be uploaded. Try again.'), { count: remainingFilesCount })
});
@@ -323,10 +340,26 @@
this.setUploadState({error: ""});
}
cancelUpload(e){
cancelUpload(){
this.dz.removeAllFiles(true);
}
handleCancel(){
this.manuallyCanceled = true;
this.cancelUpload();
if (this.dz._taskInfo && this.dz._taskInfo.id !== undefined){
$.ajax({
url: `/api/projects/${this.state.data.id}/tasks/${this.dz._taskInfo.id}/remove/`,
contentType: 'application/json',
dataType: 'json',
type: 'POST'
});
}
setTimeout(() => {
this.manuallyCanceled = false;
}, 500);
}
taskDeleted(){
this.refresh();
}
@@ -400,6 +433,20 @@
this.editProjectDialog.show();
}
handleHideProject = (deleteWarning, deleteAction) => {
return () => {
if (window.confirm(deleteWarning)){
this.setState({error: "", refreshing: true});
deleteAction()
.fail(e => {
this.setState({error: e.message || (e.responseJSON || {}).detail || e.responseText || _("Could not delete item")});
}).always(() => {
this.setState({refreshing: false});
});
}
}
}
updateProject(project){
return $.ajax({
url: `/api/projects/${this.state.data.id}/edit/`,
@@ -605,7 +652,7 @@
<button disabled={this.state.upload.error !== ""}
type="button"
className={"btn btn-danger btn-sm " + (!this.state.upload.uploading ? "hide" : "")}
onClick={this.cancelUpload}>
onClick={this.handleCancel}>
<i className="glyphicon glyphicon-remove-circle"></i>
Cancel Upload
</button>
@@ -683,6 +730,12 @@
</a>]
: ""}
{!canEdit && !data.owned ?
[<i key="edit-icon" className='far fa-eye-slash'></i>
,<a key="edit-text" href="javascript:void(0);" onClick={this.handleHideProject(deleteWarning, this.handleDelete)}> {_("Delete")}
</a>]
: ""}
</div>
</div>
<i className="drag-drop-icon fa fa-inbox"></i>


@@ -12,7 +12,8 @@ class ShareButton extends React.Component {
static propTypes = {
task: PropTypes.object.isRequired,
linksTarget: PropTypes.oneOf(['map', '3d']).isRequired,
popupPlacement: PropTypes.string
popupPlacement: PropTypes.string,
queryParams: PropTypes.object
}
constructor(props){
@@ -45,6 +46,7 @@
taskChanged={this.handleTaskChanged}
placement={this.props.popupPlacement}
linksTarget={this.props.linksTarget}
queryParams={this.props.queryParams}
/>;
return (


@@ -15,7 +15,8 @@ class SharePopup extends React.Component{
task: PropTypes.object.isRequired,
linksTarget: PropTypes.oneOf(['map', '3d']).isRequired,
placement: PropTypes.string,
taskChanged: PropTypes.func
taskChanged: PropTypes.func,
queryParams: PropTypes.object
};
static defaultProps = {
placement: 'top',
@@ -38,7 +39,11 @@
}
getRelShareLink = () => {
return `/public/task/${this.props.task.id}/${this.props.linksTarget}/`;
let url = `/public/task/${this.props.task.id}/${this.props.linksTarget}/`;
if (this.props.queryParams){
url += Utils.toSearchQuery(this.props.queryParams);
}
return url;
}
componentDidMount(){
@@ -86,8 +91,8 @@
}
render(){
const shareLink = Utils.absoluteUrl(this.state.relShareLink);
const iframeUrl = Utils.absoluteUrl(`public/task/${this.state.task.id}/iframe/${this.props.linksTarget}/`);
const shareLink = Utils.absoluteUrl(this.getRelShareLink());
const iframeUrl = Utils.absoluteUrl(`public/task/${this.state.task.id}/iframe/${this.props.linksTarget}/${Utils.toSearchQuery(this.props.queryParams)}`);
const iframeCode = `<iframe scrolling="no" title="WebODM" width="61.8033%" height="360" frameBorder="0" src="${iframeUrl}"></iframe>`;
return (<div onMouseDown={e => { e.stopPropagation(); }} className={"sharePopup " + this.props.placement}>
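`Utils.toSearchQuery` is used above to append `queryParams` to both the share link and the iframe URL. The real implementation in `classes/Utils` is not part of this diff; a minimal sketch of what such a helper might look like:

```javascript
// Hypothetical toSearchQuery: build "?k=v&..." from a params object,
// or return "" when there is nothing to serialize.
function toSearchQuery(params) {
  if (!params || Object.keys(params).length === 0) return "";
  const pairs = Object.keys(params).map(
    k => `${encodeURIComponent(k)}=${encodeURIComponent(params[k])}`
  );
  return "?" + pairs.join("&");
}
```

Returning an empty string for the no-params case lets callers append the result unconditionally, as the render method above does.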


@@ -3,6 +3,7 @@ import '../css/TaskList.scss';
import TaskListItem from './TaskListItem';
import PropTypes from 'prop-types';
import $ from 'jquery';
import HistoryNav from '../classes/HistoryNav';
import { _, interpolate } from '../classes/gettext';
class TaskList extends React.Component {
@@ -19,6 +20,8 @@
constructor(props){
super(props);
this.historyNav = new HistoryNav(props.history);
this.state = {
tasks: [],
error: "",
@@ -54,6 +57,10 @@
this.taskListRequest =
$.getJSON(this.props.source, json => {
if (json.length === 1){
this.historyNav.addToQSList("project_task_expanded", json[0].id);
}
this.setState({
tasks: json
});
@@ -67,7 +74,7 @@
.always(() => {
this.setState({
loading: false
})
});
});
}


@@ -14,6 +14,7 @@ import PipelineSteps from '../classes/PipelineSteps';
import Css from '../classes/Css';
import Tags from '../classes/Tags';
import Trans from './Trans';
import Utils from '../classes/Utils';
import { _, interpolate } from '../classes/gettext';
class TaskListItem extends React.Component {
@@ -265,7 +266,7 @@ class TaskListItem extends React.Component {
<li>${_("Not enough overlap between images")}</li>
<li>${_("Images might be too blurry (common with phone cameras)")}</li>
<li>${_("The min-num-features task option is set too low, try increasing it by 25%")}</li>
</ul>`, link: `<a href='https://help.dronedeploy.com/hc/en-us/articles/1500004964282-Making-Successful-Maps' target='_blank'>${_("here")}</a>`})});
</ul>`, link: `<a href='https://docs.webodm.net/references/create-successful-maps' target='_blank'>${_("here")}</a>`})});
}else if (line.indexOf("Illegal instruction") !== -1 ||
line.indexOf("Child returned 132") !== -1){
this.setState({friendlyTaskError: interpolate(_("It looks like this computer might be too old. WebODM requires a computer with a 64-bit CPU supporting MMX, SSE, SSE2, SSE3 and SSSE3 instruction set support or higher. You can still run WebODM if you compile your own docker images. See %(link)s for more information."), { link: `<a href='https://github.com/OpenDroneMap/WebODM#common-troubleshooting'>${_("this page")}</a>` } )});
@@ -572,6 +573,15 @@
<td><strong>{_("Reconstructed Points:")}</strong></td>
<td>{stats.pointcloud.points.toLocaleString()}</td>
</tr>}
{task.size > 0 &&
<tr>
<td><strong>{_("Disk Usage:")}</strong></td>
<td>{Utils.bytesToSize(task.size * 1024 * 1024)}</td>
</tr>}
<tr>
<td><strong>{_("Task ID:")}</strong></td>
<td>{task.id}</td>
</tr>
<tr>
<td><strong>{_("Task Output:")}</strong></td>
<td><div className="btn-group btn-toggle">
@@ -596,17 +606,17 @@
/> : ""}
{showOrthophotoMissingWarning ?
<div className="task-warning"><i className="fa fa-warning"></i> <span>{_("An orthophoto could not be generated. To generate one, make sure GPS information is embedded in the EXIF tags of your images, or use a Ground Control Points (GCP) file.")}</span></div> : ""}
<div className="task-warning"><i className="fa fa-exclamation-triangle"></i> <span>{_("An orthophoto could not be generated. To generate one, make sure GPS information is embedded in the EXIF tags of your images, or use a Ground Control Points (GCP) file.")}</span></div> : ""}
{showMemoryErrorWarning ?
<div className="task-warning"><i className="fa fa-support"></i> <Trans params={{ memlink: `<a href="${memoryErrorLink}" target='_blank'>${_("enough RAM allocated")}</a>`, cloudlink: `<a href='https://www.opendronemap.org/webodm/lightning/' target='_blank'>${_("cloud processing node")}</a>` }}>{_("It looks like your processing node ran out of memory. If you are using docker, make sure that your docker environment has %(memlink)s. Alternatively, make sure you have enough physical RAM, reduce the number of images, make your images smaller, or reduce the max-concurrency parameter from the task's options. You can also try to use a %(cloudlink)s.")}</Trans></div> : ""}
<div className="task-warning"><i className="fa fa-support"></i> <Trans params={{ memlink: `<a href="${memoryErrorLink}" target='_blank'>${_("enough RAM allocated")}</a>`, cloudlink: `<a href='https://webodm.net' target='_blank'>${_("cloud processing node")}</a>` }}>{_("It looks like your processing node ran out of memory. If you are using docker, make sure that your docker environment has %(memlink)s. Alternatively, make sure you have enough physical RAM, reduce the number of images, make your images smaller, or reduce the max-concurrency parameter from the task's options. You can also try to use a %(cloudlink)s.")}</Trans></div> : ""}
{showTaskWarning ?
<div className="task-warning"><i className="fa fa-support"></i> <span dangerouslySetInnerHTML={{__html: this.state.friendlyTaskError}} /></div> : ""}
{showExitedWithCodeOneHints ?
<div className="task-warning"><i className="fa fa-info-circle"></i> <div className="inline">
<Trans params={{link1: `<a href="https://www.dronedb.app/" target="_blank">DroneDB</a>`, link2: `<a href="https://drive.google.com/drive/u/0/" target="_blank">Google Drive</a>`, open_a_topic: `<a href="http://community.opendronemap.org/c/webodm" target="_blank">${_("open a topic")}</a>`, }}>{_("\"Process exited with code 1\" means that part of the processing failed. Sometimes it's a problem with the dataset, sometimes it can be solved by tweaking the Task Options and sometimes it might be a bug! If you need help, upload your images somewhere like %(link1)s or %(link2)s and %(open_a_topic)s on our community forum, making sure to include a copy of your task's output. Our awesome contributors will try to help you!")}</Trans> <i className="far fa-smile"></i>
<Trans params={{link: `<a href="${window.__taskOptionsDocsLink}" target="_blank">${window.__taskOptionsDocsLink.replace("https://", "")}</a>` }}>{_("\"Process exited with code 1\" means that part of the processing failed. Sometimes it's a problem with the dataset, sometimes it can be solved by tweaking the Task Options. Check the documentation at %(link)s")}</Trans>
</div>
</div>
: ""}


@@ -0,0 +1,33 @@
import React from 'react';
import PropTypes from 'prop-types';
import { systems, getUnitSystem, setUnitSystem } from '../classes/Units';
import '../css/UnitSelector.scss';
class UnitSelector extends React.Component {
static propTypes = {
}
constructor(props){
super(props);
this.state = {
system: getUnitSystem()
}
}
handleChange = e => {
this.setState({system: e.target.value});
setUnitSystem(e.target.value);
};
render() {
return (
<select className="unit-selector" value={this.state.system} onChange={this.handleChange}>
{Object.keys(systems).map(k =>
<option value={k} key={k}>{systems[k].getName()}</option>)}
</select>
);
}
}
export default UnitSelector;


@@ -2,6 +2,7 @@ import '../css/UploadProgressBar.scss';
import React from 'react';
import PropTypes from 'prop-types';
import { _, interpolate } from '../classes/gettext';
import Utils from '../classes/Utils';
class UploadProgressBar extends React.Component {
static propTypes = {
@@ -11,22 +12,12 @@
totalCount: PropTypes.number // number of files
}
// http://stackoverflow.com/questions/15900485/correct-way-to-convert-size-in-bytes-to-kb-mb-gb-in-javascript
bytesToSize(bytes, decimals = 2){
if(bytes == 0) return '0 byte';
var k = 1000; // or 1024 for binary
var dm = decimals || 3;
var sizes = ['bytes', 'Kb', 'Mb', 'Gb', 'Tb', 'Pb', 'Eb', 'Zb', 'Yb'];
var i = Math.floor(Math.log(bytes) / Math.log(k));
return parseFloat((bytes / Math.pow(k, i)).toFixed(dm)) + ' ' + sizes[i];
}
render() {
let percentage = (this.props.progress !== undefined ?
this.props.progress :
0).toFixed(2);
let bytes = this.props.totalBytesSent !== undefined && this.props.totalBytes !== undefined ?
' ' + interpolate(_("remaining to upload: %(bytes)s"), { bytes: this.bytesToSize(this.props.totalBytes - this.props.totalBytesSent)}) :
' ' + interpolate(_("remaining to upload: %(bytes)s"), { bytes: Utils.bytesToSize(this.props.totalBytes - this.props.totalBytesSent)}) :
"";
let active = percentage < 100 ? "active" : "";
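The inline `bytesToSize` above was removed in favor of the shared `Utils.bytesToSize`, whose implementation is not shown in this diff. A sketch of such a helper (note that the removed version mixed a decimal base of 1000 with `Kb`-style labels and ignored a `decimals` argument of 0):

```javascript
// Hypothetical bytesToSize along the lines of the shared Utils helper:
// binary base (1024) with conventional KB/MB/... labels.
function bytesToSize(bytes, decimals = 2) {
  if (bytes === 0) return "0 bytes";
  const k = 1024;
  const sizes = ["bytes", "KB", "MB", "GB", "TB", "PB"];
  const i = Math.floor(Math.log(bytes) / Math.log(k)); // unit index
  return parseFloat((bytes / Math.pow(k, i)).toFixed(decimals)) + " " + sizes[i];
}
```

`parseFloat` strips trailing zeros from `toFixed`, so 1536 bytes renders as "1.5 KB" rather than "1.50 KB".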


@@ -0,0 +1,10 @@
import React from 'react';
import { shallow } from 'enzyme';
import UnitSelector from '../UnitSelector';
describe('<UnitSelector />', () => {
it('renders without exploding', () => {
const wrapper = shallow(<UnitSelector />);
expect(wrapper.exists()).toBe(true);
})
});


@@ -0,0 +1,122 @@
import { systems, toMetric } from '../../classes/Units';
describe('Metric system', () => {
it('should display units properly', () => {
const { metric } = systems;
const lengths = [
[1, "1 m"],
[0.01, "1 cm"],
[0.0154, "1.5 cm"],
[0.99, "99 cm"],
[0.995555, "99.6 cm"],
[1.01, "1.01 m"],
[999, "999 m"],
[1000, "1 km"],
[1001, "1.001 km"],
[1000010, "1,000.01 km"],
[1000012.349, "1,000.01235 km"],
];
lengths.forEach(l => {
expect(metric.length(l[0]).toString()).toBe(l[1]);
});
const areas = [
[1, "1 m²"],
[9999, "9,999 m²"],
[10000, "1 ha"],
[11005, "1.1005 ha"],
[11005, "1.1005 ha"],
[999999, "99.9999 ha"],
[1000000, "1 km²"],
[1000000000, "1,000 km²"],
[1000255558, "1,000.25556 km²"]
];
areas.forEach(a => {
expect(metric.area(a[0]).toString()).toBe(a[1]);
});
const volumes = [
[1, "1 m³"],
[9000, "9,000 m³"],
[9000.25559, "9,000.2556 m³"],
];
volumes.forEach(v => {
expect(metric.volume(v[0]).toString()).toBe(v[1]);
});
expect(metric.area(11005.09, { fixedUnit: true }).toString({precision: 1})).toBe("11,005.1 m²");
})
});
describe('Imperial systems', () => {
it('should display units properly', () => {
const { imperial, imperialUS } = systems;
const lengths = [
[1, "3.2808 ft", "3.2808 ft (US)"],
[0.01, "0.0328 ft", "0.0328 ft (US)"],
[0.0154, "0.0505 ft", "0.0505 ft (US)"],
[1609, "5,278.8714 ft", "5,278.8608 ft (US)"],
[1609.344, "1 mi", "5,279.9894 ft (US)"],
[1609.3472187, "1 mi", "1 mi (US)"],
[3218.69, "2 mi", "2 mi (US)"]
];
lengths.forEach(l => {
expect(imperial.length(l[0]).toString()).toBe(l[1]);
expect(imperialUS.length(l[0]).toString()).toBe(l[2]);
});
const areas = [
[1, "10.76 ft²", "10.76 ft² (US)"],
[9999, "2.47081 ac", "2.4708 ac (US)"],
[4046.86, "1 ac", "43,559.86 ft² (US)"],
[4046.87261, "1 ac", "1 ac (US)"],
[2587398.1, "639.35999 ac", "639.35744 ac (US)"],
[2.59e+6, "1 mi²", "1 mi² (US)"]
];
areas.forEach(a => {
expect(imperial.area(a[0]).toString()).toBe(a[1]);
expect(imperialUS.area(a[0]).toString()).toBe(a[2]);
});
const volumes = [
[1, "1.308 yd³", "1.3079 yd³ (US)"],
[1000, "1,307.9506 yd³", "1,307.9428 yd³ (US)"]
];
volumes.forEach(v => {
expect(imperial.volume(v[0]).toString()).toBe(v[1]);
expect(imperialUS.volume(v[0]).toString()).toBe(v[2]);
});
expect(imperial.area(9999, { fixedUnit: true }).toString({precision: 1})).toBe("107,628.3 ft²");
});
});
describe('Metric conversion', () => {
it('should convert units properly', () => {
const { metric, imperial } = systems;
const km = metric.length(2000);
const mi = imperial.length(3220);
expect(km.unit.abbr).toBe("km");
expect(km.value).toBe(2);
expect(mi.unit.abbr).toBe("mi");
expect(Math.round(mi.value)).toBe(2)
expect(toMetric(km).toString()).toBe("2,000 m");
expect(toMetric(mi).toString()).toBe("3,220 m");
expect(toMetric(km).value).toBe(2000);
expect(toMetric(mi).value).toBe(3220);
});
});
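The expectations above pin down the `Units` module's formatting and conversion behavior. As a rough illustration of the unit-selection idea only (a hypothetical simplification, not the real `classes/Units` code, which also handles precision options and the imperial systems):

```javascript
// Pick an appropriate metric length unit (cm / m / km) for a value in
// meters and format it with en-US digit grouping.
function formatMetricLength(meters, decimals = 2) {
  const units = [["km", 1000], ["m", 1], ["cm", 0.01]];
  for (const [abbr, size] of units) {
    if (meters >= size || abbr === "cm") {
      // parseFloat drops the trailing zeros introduced by toFixed
      const value = parseFloat((meters / size).toFixed(decimals));
      return `${value.toLocaleString("en-US")} ${abbr}`;
    }
  }
}
```

The real module goes further (e.g. rendering up to five decimals, as in "1,000.01235 km" above), but the threshold walk from largest to smallest unit captures the core selection logic the tests exercise.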


@@ -11,6 +11,11 @@
margin-left: -100px;
z-index: 999;
padding-bottom: 6px;
.opacity-slider-label{
display: inline-block;
position: relative;
top: 2px;
}
}
.leaflet-touch .leaflet-control-layers-toggle, .leaflet-control-layers-toggle{


@@ -41,4 +41,9 @@
opacity: 0.8;
pointer-events:none;
}
button.redo{
margin-top: 0;
margin-left: 10px;
}
}

Wyświetl plik

@@ -29,4 +29,8 @@
padding: 2px 4px 2px 4px;
margin-top: 12px;
}
.help-button:hover{
cursor: pointer;
}
}


@@ -0,0 +1,4 @@
.unit-selector{
font-size: 14px;
padding: 5px;
}

Binary file not shown. Size: 5.0 KiB

Binary file not shown. Size: 6.0 KiB


@@ -8,6 +8,16 @@ import { setLocale } from './translations/functions';
// Main is always executed first in the page
// Silence annoying React deprecation notice of useful functionality
const originalError = console.error;
console.error = function(...args) {
let message = args[0];
if (typeof message === 'string' && message.indexOf('Warning: A future version of React will block javascript:') !== -1) {
return;
}
originalError.apply(console, args);
};
// We share some objects to avoid having to include them
// as a dependency in each component (adds too much space overhead)
window.ReactDOM = ReactDOM;


@@ -1,93 +1,93 @@
// Auto-generated with extract_odm_strings.py, do not edit!
_("Skip normalization of colors across all images. Useful when processing radiometric data. Default: %(default)s");
_("Displays version number and exits. ");
_("Automatically set a boundary using camera shot locations to limit the area of the reconstruction. This can help remove far away background artifacts (sky, background landscapes, etc.). See also --boundary. Default: %(default)s");
_("The maximum vertex count of the output mesh. Default: %(default)s");
_("URL to a ClusterODM instance for distributing a split-merge workflow on multiple nodes in parallel. Default: %(default)s");
_("Radius of the overlap between submodels. After grouping images into clusters, images that are closer than this radius to a cluster are added to the cluster. This is done to ensure that neighboring submodels overlap. Default: %(default)s");
_("Save the georeferenced point cloud in Cloud Optimized Point Cloud (COPC) format. Default: %(default)s");
_("Choose the algorithm for extracting keypoints and computing descriptors. Can be one of: %(choices)s. Default: %(default)s");
_("Matcher algorithm, Fast Library for Approximate Nearest Neighbors or Bag of Words. FLANN is slower, but more stable. BOW is faster, but can sometimes miss valid matches. BRUTEFORCE is very slow but robust. Can be one of: %(choices)s. Default: %(default)s");
_("Set a value in meters for the GPS Dilution of Precision (DOP) information for all images. If your images are tagged with high precision GPS information (RTK), this value will be automatically set accordingly. You can use this option to manually set it in case the reconstruction fails. Lowering this option can sometimes help control bowling-effects over large areas. Default: %(default)s");
_("Maximum number of frames to extract from video files for processing. Set to 0 for no limit. Default: %(default)s");
_("Generate single file Binary glTF (GLB) textured models. Default: %(default)s");
_("Simple Morphological Filter elevation scalar parameter. Default: %(default)s");
_("Permanently delete all previous results and rerun the processing pipeline.");
_("Filters the point cloud by keeping only a single point around a radius N (in meters). This can be useful to limit the output resolution of the point cloud and remove duplicate points. Set to 0 to disable sampling. Default: %(default)s");
_("Octree depth used in the mesh reconstruction, increase to get more vertices, recommended values are 8-12. Default: %(default)s");
_("Export the georeferenced point cloud in CSV format. Default: %(default)s");
_("Turn off camera parameter optimization during bundle adjustment. This can be sometimes useful for improving results that exhibit doming/bowling or when images are taken with a rolling shutter camera. Default: %(default)s");
_("Skip generation of a full 3D model. This can save time if you only need 2D results such as orthophotos and DEMs. Default: %(default)s");
_("Generate static tiles for orthophotos and DEMs that are suitable for viewers like Leaflet or OpenLayers. Default: %(default)s");
_("Copy output results to this folder after processing.");
_("Ignore Ground Sampling Distance (GSD). GSD caps the maximum resolution of image outputs and resizes images when necessary, resulting in faster processing and lower memory usage. Since GSD is an estimate, sometimes ignoring it can result in slightly better image output quality. Default: %(default)s");
_("Use this tag if you have a GCP File but want to use the EXIF information for georeferencing instead. Default: %(default)s");
_("Use this tag to build a DSM (Digital Surface Model, ground + objects) using a progressive morphological filter. Check the --dem* parameters for finer tuning. Default: %(default)s");
_("Skip the blending of colors near seams. Default: %(default)s");
_("Perform image matching with the nearest images based on GPS EXIF data. Set to 0 to match by triangulation. Default: %(default)s");
_("End processing at this stage. Can be one of: %(choices)s. Default: %(default)s");
_("Export the georeferenced point cloud in Entwine Point Tile (EPT) format. Default: %(default)s");
_("Skip generation of the orthophoto. This can save time if you only need 3D results or DEMs. Default: %(default)s");
_("Rerun this stage only and stop. Can be one of: %(choices)s. Default: %(default)s");
_("Skip generation of PDF report. This can save time if you don't need a report. Default: %(default)s");
_("Choose the structure from motion algorithm. For aerial datasets, if camera GPS positions and angles are available, triangulation can generate better results. For planar scenes captured at fixed altitude with nadir-only images, planar can be much faster. Can be one of: %(choices)s. Default: %(default)s");
_("Delete heavy intermediate files to optimize disk space usage. This affects the ability to restart the pipeline from an intermediate stage, but allows datasets to be processed on machines that don't have sufficient disk space available. Default: %(default)s");
_("When processing multispectral datasets, ODM will automatically align the images for each band. If the images have been postprocessed and are already aligned, use this option. Default: %(default)s");
_("Run local bundle adjustment for every image added to the reconstruction and a global adjustment every 100 images. Speeds up reconstruction for very large datasets. Default: %(default)s");
_("Automatically compute image masks using AI to remove the background. Experimental. Default: %(default)s");
_("Use this tag to build a DTM (Digital Terrain Model, ground only) using a simple morphological filter. Check the --dem* and --smrf* parameters for finer tuning. Default: %(default)s");
_("Filters the point cloud by removing points that deviate more than N standard deviations from the local mean. Set to 0 to disable filtering. Default: %(default)s");
_("GeoJSON polygon limiting the area of the reconstruction. Can be specified either as path to a GeoJSON file or as a JSON string representing the contents of a GeoJSON file. Default: %(default)s");
_("Automatically crop image outputs by creating a smooth buffer around the dataset boundaries, shrunk by N meters. Use 0 to disable cropping. Default: %(default)s");
_("show this help message and exit");
_("Set point cloud quality. Higher quality generates better, denser point clouds, but requires more memory and takes longer. Each step up in quality increases processing time roughly by a factor of 4x. Can be one of: %(choices)s. Default: %(default)s");
_("Choose what to merge in the merge step in a split dataset. By default all available outputs are merged. Options: %(choices)s. Default: %(default)s");
_("Keep faces in the mesh that are not seen in any camera. Default: %(default)s");
_("Set this parameter if you want a striped GeoTIFF. Default: %(default)s");
_("Path to the image groups file that controls how images should be split into groups. The file needs to use the following format: image_name group_name. Default: %(default)s");
_("Automatically compute image masks using AI to remove the sky. Experimental. Default: %(default)s");
_("Specify the distance between camera shot locations and the outer edge of the boundary when computing the boundary with --auto-boundary. Set to 0 to automatically choose a value. Default: %(default)s");
_("Reduce the memory usage needed for depthmap fusion by splitting large scenes into tiles. Turn this on if your machine doesn't have much RAM and/or you've set --pc-quality to high or ultra. Experimental. Default: %(default)s");
_("Use the camera parameters computed from another dataset instead of calculating them. Can be specified either as path to a cameras.json file or as a JSON string representing the contents of a cameras.json file. Default: %(default)s");
_("Generate OBJs that have a single material and a single texture file instead of multiple ones. Default: %(default)s");
_("Use images' GPS EXIF data for reconstruction, even if there are GCPs present. This flag is useful if you have high precision GPS measurements. If there are no GCPs, this flag does nothing. Default: %(default)s");
_("Simple Morphological Filter slope parameter (rise over run). Default: %(default)s");
_("Build orthophoto overviews for faster display in programs such as QGIS. Default: %(default)s");
_("Generates a polygon around the cropping area that cuts the orthophoto around the edges of features. This polygon can be useful for stitching seamless mosaics with multiple overlapping orthophotos. Default: %(default)s");
_("Turn on rolling shutter correction. If the camera has a rolling shutter and the images were taken in motion, you can turn on this option to improve the accuracy of the results. See also --rolling-shutter-readout. Default: %(default)s");
_("Skip alignment of submodels in split-merge. Useful if GPS is good enough on very large datasets. Default: %(default)s");
_("Geometric estimates improve the accuracy of the point cloud by computing geometrically consistent depthmaps but may not be usable in larger datasets. This flag disables geometric estimates. Default: %(default)s");
_("The maximum number of processes to use across the various processing stages. Peak memory requirement is ~1GB per thread and 2 megapixel image resolution. Default: %(default)s");
_("Path to a GeoTIFF DEM or a LAS/LAZ point cloud that the reconstruction outputs should be automatically aligned to. Experimental. Default: %(default)s");
_("DSM/DTM resolution in cm / pixel. Note that this value is capped to 2x the ground sampling distance (GSD) estimate. To remove the cap, check --ignore-gsd also. Default: %(default)s");
_("Minimum number of features to extract per image. More features can be useful for finding more matches between images, potentially allowing the reconstruction of areas with little overlap or insufficient features. More features also slow down processing. Default: %(default)s");
_("Rerun processing from this stage. Can be one of: %(choices)s. Default: %(default)s");
_("Perform ground rectification on the point cloud. This means that wrongly classified ground points will be re-classified and gaps will be filled. Useful for generating DTMs. Default: %(default)s");
_("Create Cloud-Optimized GeoTIFFs instead of normal GeoTIFFs. Default: %(default)s");
_("Generate OGC 3D Tiles outputs. Default: %(default)s");
_("Override the rolling shutter readout time for your camera sensor (in milliseconds), instead of using the rolling shutter readout database. Note that not all cameras are present in the database. Set to 0 to use the database value. Default: %(default)s");
_("Set this parameter if you want to generate a PNG rendering of the orthophoto. Default: %(default)s");
_("Path to the file containing the ground control points used for georeferencing. The file needs to use the following format: EPSG:<code> or <+proj definition> geo_x geo_y geo_z im_x im_y image_name [gcp_name] [extra1] [extra2]. Default: %(default)s");
_("Set feature extraction quality. Higher quality generates better features, but requires more memory and takes longer. Can be one of: %(choices)s. Default: %(default)s");
_("Use a full 3D mesh to compute the orthophoto instead of a 2.5D mesh. This option is a bit faster and provides similar results in planar areas. Default: %(default)s");
_("Orthophoto resolution in cm / pixel. Note that this value is capped by a ground sampling distance (GSD) estimate. To remove the cap, check --ignore-gsd also. Default: %(default)s");
_("The maximum output resolution of extracted video frames in pixels. Default: %(default)s");
_("Do not use GPU acceleration, even if it's available. Default: %(default)s");
_("Decimate the points before generating the DEM. 1 is no decimation (full quality). 100 decimates ~99%% of the points. Useful for speeding up generation of DEM results in very large datasets. Default: %(default)s");
_("Average number of images per submodel. When splitting a large dataset into smaller submodels, images are grouped into clusters. This value regulates the number of images that each cluster should have on average. Default: %(default)s");
_("Skips dense reconstruction and 3D model generation. It generates an orthophoto directly from the sparse reconstruction. If you just need an orthophoto and do not need a full 3D model, turn on this option. Default: %(default)s");
_("Number of steps used to fill areas with gaps. Set to 0 to disable gap filling. Starting with a radius equal to the output resolution, N different DEMs are generated with progressively bigger radius using the inverse distance weighted (IDW) algorithm and merged together. Remaining gaps are then merged using nearest neighbor interpolation. Default: %(default)s");
_("Build orthophoto overviews for faster display in programs such as QGIS. Default: %(default)s");
_("Classify the point cloud outputs. You can control the behavior of this option by tweaking the --dem-* parameters. Default: %(default)s");
_("Export the georeferenced point cloud in LAS format. Default: %(default)s");
_("Set the radiometric calibration to perform on images. When processing multispectral and thermal images you should set this option to obtain reflectance/temperature values (otherwise you will get digital number values). [camera] applies black level, vignetting, row gradient gain/exposure compensation (if appropriate EXIF tags are found) and computes absolute temperature values. [camera+sun] is experimental, applies all the corrections of [camera], plus compensates for spectral radiance registered via a downwelling light sensor (DLS) taking into consideration the angle of the sun. Can be one of: %(choices)s. Default: %(default)s");
_("Computes a Euclidean raster map for each DEM. The map reports the distance from each cell to the nearest NODATA value (before any hole filling takes place). This can be useful to isolate the areas that have been filled. Default: %(default)s");
_("Set a camera projection type. Manually setting a value can help improve geometric undistortion. By default the application tries to determine a lens type from the images metadata. Can be one of: %(choices)s. Default: %(default)s");
_("Path to the image geolocation file containing the camera center coordinates used for georeferencing. If you don't have values for yaw/pitch/roll you can set them to 0. The file needs to use the following format: EPSG:<code> or <+proj definition> image_name geo_x geo_y geo_z [yaw (degrees)] [pitch (degrees)] [roll (degrees)] [horz accuracy (meters)] [vert accuracy (meters)]. Default: %(default)s");
_("Name of dataset (i.e. subfolder name within project folder). Default: %(default)s");
_("Simple Morphological Filter elevation threshold parameter (meters). Default: %(default)s");
_("Simple Morphological Filter window radius parameter (meters). Default: %(default)s");
_("When processing multispectral datasets, you can specify the name of the primary band that will be used for reconstruction. It's recommended to choose a band which has sharp details and is in focus. Default: %(default)s");
_("Set this parameter if you want to generate a Google Earth (KMZ) rendering of the orthophoto. Default: %(default)s");
_("Set the compression to use for orthophotos. Can be one of: %(choices)s. Default: %(default)s");
_("Path to a GeoTIFF DEM or a LAS/LAZ point cloud that the reconstruction outputs should be automatically aligned to. Experimental. Default: %(default)s");
_("Skip generation of the orthophoto. This can save time if you only need 3D results or DEMs. Default: %(default)s");
_("Automatically compute image masks using AI to remove the sky. Experimental. Default: %(default)s");
_("Generate static tiles for orthophotos and DEMs that are suitable for viewers like Leaflet or OpenLayers. Default: %(default)s");
_("Path to the project folder. Your project folder should contain subfolders for each dataset. Each dataset should have an \"images\" folder.");
_("Rerun processing from this stage. Can be one of: %(choices)s. Default: %(default)s");
_("Decimate the points before generating the DEM. 1 is no decimation (full quality). 100 decimates ~99%% of the points. Useful for speeding up generation of DEM results in very large datasets. Default: %(default)s");
_("Generates a polygon around the cropping area that cuts the orthophoto around the edges of features. This polygon can be useful for stitching seamless mosaics with multiple overlapping orthophotos. Default: %(default)s");
_("Set a value in meters for the GPS Dilution of Precision (DOP) information for all images. If your images are tagged with high precision GPS information (RTK), this value will be automatically set accordingly. You can use this option to manually set it in case the reconstruction fails. Lowering this option can sometimes help control bowling-effects over large areas. Default: %(default)s");
_("Keep faces in the mesh that are not seen in any camera. Default: %(default)s");
_("Simple Morphological Filter slope parameter (rise over run). Default: %(default)s");
_("Set point cloud quality. Higher quality generates better, denser point clouds, but requires more memory and takes longer. Each step up in quality increases processing time roughly by a factor of 4x. Can be one of: %(choices)s. Default: %(default)s");
_("Set the compression to use for orthophotos. Can be one of: %(choices)s. Default: %(default)s");
_("Use this tag to build a DTM (Digital Terrain Model, ground only) using a simple morphological filter. Check the --dem* and --smrf* parameters for finer tuning. Default: %(default)s");
_("URL to a ClusterODM instance for distributing a split-merge workflow on multiple nodes in parallel. Default: %(default)s");
_("Rerun this stage only and stop. Can be one of: %(choices)s. Default: %(default)s");
_("Save the georeferenced point cloud in Cloud Optimized Point Cloud (COPC) format. Default: %(default)s");
_("Skip generation of a full 3D model. This can save time if you only need 2D results such as orthophotos and DEMs. Default: %(default)s");
_("Ignore Ground Sampling Distance (GSD). A memory- and processor-hungry change relative to the default behavior if set to true. Ordinarily, GSD estimates are used to cap the maximum resolution of image outputs and to resize images when necessary, resulting in faster processing and lower memory usage. Since GSD is an estimate, sometimes ignoring it can result in slightly better image output quality. Never set --ignore-gsd to true unless you are positive you need it, and even then: do not use it. Default: %(default)s");
_("Generate single file Binary glTF (GLB) textured models. Default: %(default)s");
_("Displays the version number and exits.");
_("Average number of images per submodel. When splitting a large dataset into smaller submodels, images are grouped into clusters. This value regulates the number of images that each cluster should have on average. Default: %(default)s");
_("Perform ground rectification on the point cloud. This means that wrongly classified ground points will be re-classified and gaps will be filled. Useful for generating DTMs. Default: %(default)s");
_("Automatically crop image outputs by creating a smooth buffer around the dataset boundaries, shrunk by N meters. Use 0 to disable cropping. Default: %(default)s");
_("Use images' GPS EXIF data for reconstruction, even if there are GCPs present. This flag is useful if you have high precision GPS measurements. If there are no GCPs, this flag does nothing. Default: %(default)s");
_("Export the georeferenced point cloud in LAS format. Default: %(default)s");
_("Choose the structure from motion algorithm. For aerial datasets, if camera GPS positions and angles are available, triangulation can generate better results. For planar scenes captured at fixed altitude with nadir-only images, planar can be much faster. Can be one of: %(choices)s. Default: %(default)s");
_("Skip generation of PDF report. This can save time if you don't need a report. Default: %(default)s");
_("Turn off camera parameter optimization during bundle adjustment. This can be sometimes useful for improving results that exhibit doming/bowling or when images are taken with a rolling shutter camera. Default: %(default)s");
_("Filters the point cloud by keeping only a single point around a radius N (in meters). This can be useful to limit the output resolution of the point cloud and remove duplicate points. Set to 0 to disable sampling. Default: %(default)s");
_("Name of dataset (i.e. subfolder name within project folder). Default: %(default)s");
_("Use this tag to build a DSM (Digital Surface Model, ground + objects) using a progressive morphological filter. Check the --dem* parameters for finer tuning. Default: %(default)s");
_("GeoJSON polygon limiting the area of the reconstruction. Can be specified either as path to a GeoJSON file or as a JSON string representing the contents of a GeoJSON file. Default: %(default)s");
_("When processing multispectral datasets, you can specify the name of the primary band that will be used for reconstruction. It's recommended to choose a band which has sharp details and is in focus. Default: %(default)s");
_("Path to the image geolocation file containing the camera center coordinates used for georeferencing. If you don't have values for yaw/pitch/roll you can set them to 0. The file needs to use the following format: EPSG:<code> or <+proj definition> image_name geo_x geo_y geo_z [yaw (degrees)] [pitch (degrees)] [roll (degrees)] [horz accuracy (meters)] [vert accuracy (meters)]. Default: %(default)s");
_("Do not use GPU acceleration, even if it's available. Default: %(default)s");
_("Geometric estimates improve the accuracy of the point cloud by computing geometrically consistent depthmaps but may not be usable in larger datasets. This flag disables geometric estimates. Default: %(default)s");
_("Filters the point cloud by removing points that deviate more than N standard deviations from the local mean. Set to 0 to disable filtering. Default: %(default)s");
_("Set the radiometric calibration to perform on images. When processing multispectral and thermal images you should set this option to obtain reflectance/temperature values (otherwise you will get digital number values). [camera] applies black level, vignetting, row gradient gain/exposure compensation (if appropriate EXIF tags are found) and computes absolute temperature values. [camera+sun] is experimental, applies all the corrections of [camera], plus compensates for spectral radiance registered via a downwelling light sensor (DLS) taking into consideration the angle of the sun. Can be one of: %(choices)s. Default: %(default)s");
_("Octree depth used in the mesh reconstruction, increase to get more vertices, recommended values are 8-12. Default: %(default)s");
_("Generate OGC 3D Tiles outputs. Default: %(default)s");
_("Minimum number of features to extract per image. More features can be useful for finding more matches between images, potentially allowing the reconstruction of areas with little overlap or insufficient features. More features also slow down processing. Default: %(default)s");
_("Run local bundle adjustment for every image added to the reconstruction and a global adjustment every 100 images. Speeds up reconstruction for very large datasets. Default: %(default)s");
_("Skips dense reconstruction and 3D model generation. It generates an orthophoto directly from the sparse reconstruction. If you just need an orthophoto and do not need a full 3D model, turn on this option. Default: %(default)s");
_("Create Cloud-Optimized GeoTIFFs instead of normal GeoTIFFs. Default: %(default)s");
_("Choose what to merge in the merge step in a split dataset. By default all available outputs are merged. Options: %(choices)s. Default: %(default)s");
_("Export the georeferenced point cloud in Entwine Point Tile (EPT) format. Default: %(default)s");
_("Delete heavy intermediate files to optimize disk space usage. This affects the ability to restart the pipeline from an intermediate stage, but allows datasets to be processed on machines that don't have sufficient disk space available. Default: %(default)s");
_("Matcher algorithm, Fast Library for Approximate Nearest Neighbors or Bag of Words. FLANN is slower, but more stable. BOW is faster, but can sometimes miss valid matches. BRUTEFORCE is very slow but robust. Can be one of: %(choices)s. Default: %(default)s");
_("Automatically set a boundary using camera shot locations to limit the area of the reconstruction. This can help remove far away background artifacts (sky, background landscapes, etc.). See also --boundary. Default: %(default)s");
_("Skip normalization of colors across all images. Useful when processing radiometric data. Default: %(default)s");
_("Simple Morphological Filter window radius parameter (meters). Default: %(default)s");
_("Turn on rolling shutter correction. If the camera has a rolling shutter and the images were taken in motion, you can turn on this option to improve the accuracy of the results. See also --rolling-shutter-readout. Default: %(default)s");
_("When processing multispectral datasets, ODM will automatically align the images for each band. If the images have been postprocessed and are already aligned, use this option. Default: %(default)s");
_("The maximum vertex count of the output mesh. Default: %(default)s");
_("Permanently delete all previous results and rerun the processing pipeline.");
_("Export the georeferenced point cloud in CSV format. Default: %(default)s");
_("Simple Morphological Filter elevation threshold parameter (meters). Default: %(default)s");
_("Maximum number of frames to extract from video files for processing. Set to 0 for no limit. Default: %(default)s");
_("Specify the distance between camera shot locations and the outer edge of the boundary when computing the boundary with --auto-boundary. Set to 0 to automatically choose a value. Default: %(default)s");
_("Set this parameter if you want to generate a Google Earth (KMZ) rendering of the orthophoto. Default: %(default)s");
_("Path to the image groups file that controls how images should be split into groups. The file needs to use the following format: image_name group_name. Default: %(default)s");
_("Radius of the overlap between submodels. After grouping images into clusters, images that are closer than this radius to a cluster are added to the cluster. This is done to ensure that neighboring submodels overlap. Default: %(default)s");
_("Override the rolling shutter readout time for your camera sensor (in milliseconds), instead of using the rolling shutter readout database. Note that not all cameras are present in the database. Set to 0 to use the database value. Default: %(default)s");
_("Choose the algorithm for extracting keypoints and computing descriptors. Can be one of: %(choices)s. Default: %(default)s");
_("Number of steps used to fill areas with gaps. Set to 0 to disable gap filling. Starting with a radius equal to the output resolution, N different DEMs are generated with progressively bigger radius using the inverse distance weighted (IDW) algorithm and merged together. Remaining gaps are then merged using nearest neighbor interpolation. Default: %(default)s");
_("Automatically compute image masks using AI to remove the background. Experimental. Default: %(default)s");
_("Set feature extraction quality. Higher quality generates better features, but requires more memory and takes longer. Can be one of: %(choices)s. Default: %(default)s");
_("Simple Morphological Filter elevation scalar parameter. Default: %(default)s");
_("Perform image matching with the nearest images based on GPS EXIF data. Set to 0 to match by triangulation. Default: %(default)s");
_("Computes a Euclidean raster map for each DEM. The map reports the distance from each cell to the nearest NODATA value (before any hole filling takes place). This can be useful to isolate the areas that have been filled. Default: %(default)s");
_("Skip alignment of submodels in split-merge. Useful if GPS is good enough on very large datasets. Default: %(default)s");
_("Path to the file containing the ground control points used for georeferencing. The file needs to use the following format: EPSG:<code> or <+proj definition> geo_x geo_y geo_z im_x im_y image_name [gcp_name] [extra1] [extra2]. Default: %(default)s");
_("DSM/DTM resolution in cm / pixel. Note that this value is capped by a ground sampling distance (GSD) estimate. Default: %(default)s");
_("End processing at this stage. Can be one of: %(choices)s. Default: %(default)s");
_("Set this parameter if you want a striped GeoTIFF. Default: %(default)s");
_("Set this parameter if you want to generate a PNG rendering of the orthophoto. Default: %(default)s");
_("Generate OBJs that have a single material and a single texture file instead of multiple ones. Default: %(default)s");
_("Do not attempt to merge partial reconstructions. This can happen when images do not have sufficient overlap or are isolated. Default: %(default)s");
_("Use the camera parameters computed from another dataset instead of calculating them. Can be specified either as path to a cameras.json file or as a JSON string representing the contents of a cameras.json file. Default: %(default)s");
_("show this help message and exit");
_("Use this tag if you have a GCP File but want to use the EXIF information for georeferencing instead. Default: %(default)s");
_("The maximum output resolution of extracted video frames in pixels. Default: %(default)s");
_("Orthophoto resolution in cm / pixel. Note that this value is capped by a ground sampling distance (GSD) estimate. Default: %(default)s");
_("Perform image matching with the nearest N images based on image filename order. Can speed up processing of sequential images, such as those extracted from video. It is applied only on non-georeferenced datasets. Set to 0 to disable. Default: %(default)s");
_("Use a full 3D mesh to compute the orthophoto instead of a 2.5D mesh. This option is a bit faster and provides similar results in planar areas. Default: %(default)s");
_("Set a camera projection type. Manually setting a value can help improve geometric undistortion. By default the application tries to determine a lens type from the images metadata. Can be one of: %(choices)s. Default: %(default)s");
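The `_()` entries above are translatable help strings extracted from ODM's argument parser; the `%(choices)s` and `%(default)s` tokens are named placeholders filled in at display time. A minimal sketch of that interpolation step, assuming a hypothetical `interpolate` helper (this is not WebODM's actual implementation):

```javascript
// Hypothetical sketch: fill %(name)s-style placeholders in a translated
// help string. Unknown keys are left untouched.
function interpolate(template, params) {
  return template.replace(/%\((\w+)\)s/g, (match, key) =>
    key in params ? String(params[key]) : match
  );
}

const msg = "End processing at this stage. Can be one of: %(choices)s. Default: %(default)s";
console.log(interpolate(msg, { choices: "dataset, split, merge", default: "odm_postprocess" }));
// → "End processing at this stage. Can be one of: dataset, split, merge. Default: odm_postprocess"
```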


@@ -45,6 +45,7 @@ _("Navigation cube");
_("Remove all clipping volumes");
_("Compass");
_("Camera Animation");
_("Remove last camera animation");
_("Point budget");
_("Point size");
_("Minimum size");


@@ -2199,6 +2199,8 @@ var Dropzone = function (_Emitter) {
}, {
key: "cancelUpload",
value: function cancelUpload(file) {
if (file === undefined) return;
if (file.status === Dropzone.UPLOADING) {
var groupedFiles = this._getFilesWithXhr(file.xhr);
for (var _iterator19 = groupedFiles, _isArray19 = true, _i20 = 0, _iterator19 = _isArray19 ? _iterator19 : _iterator19[Symbol.iterator]();;) {
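The hunk above adds a guard so that `cancelUpload` called without a file returns early instead of reading `.status` off `undefined`. A standalone sketch of that guard pattern (illustrative names, not the Dropzone source):

```javascript
// Illustrative guard: bail out before touching properties of a missing file.
const UPLOADING = "uploading";

function cancelUpload(file) {
  if (file === undefined) return "no-op"; // the guard added in the diff
  if (file.status === UPLOADING) return "aborted";
  return "ignored";
}

console.log(cancelUpload(undefined));             // "no-op", no TypeError
console.log(cancelUpload({ status: UPLOADING })); // "aborted"
```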


@@ -1,9 +0,0 @@
.leaflet-container .leaflet-control-mouseposition {
background-color: rgba(255, 255, 255, 0.7);
box-shadow: 0 0 5px #bbb;
padding: 0 5px;
margin:0;
color: #333;
font: 11px/1.5 "Helvetica Neue", Arial, Helvetica, sans-serif;
}


@@ -1,48 +0,0 @@
L.Control.MousePosition = L.Control.extend({
options: {
position: 'bottomleft',
separator: ' : ',
emptyString: 'Unavailable',
lngFirst: false,
numDigits: 5,
lngFormatter: undefined,
latFormatter: undefined,
prefix: ""
},
onAdd: function (map) {
this._container = L.DomUtil.create('div', 'leaflet-control-mouseposition');
L.DomEvent.disableClickPropagation(this._container);
map.on('mousemove', this._onMouseMove, this);
this._container.innerHTML=this.options.emptyString;
return this._container;
},
onRemove: function (map) {
map.off('mousemove', this._onMouseMove)
},
_onMouseMove: function (e) {
var lng = this.options.lngFormatter ? this.options.lngFormatter(e.latlng.lng) : L.Util.formatNum(e.latlng.lng, this.options.numDigits);
var lat = this.options.latFormatter ? this.options.latFormatter(e.latlng.lat) : L.Util.formatNum(e.latlng.lat, this.options.numDigits);
var value = this.options.lngFirst ? lng + this.options.separator + lat : lat + this.options.separator + lng;
var prefixAndValue = this.options.prefix + ' ' + value;
this._container.innerHTML = prefixAndValue;
}
});
L.Map.mergeOptions({
positionControl: false
});
L.Map.addInitHook(function () {
if (this.options.positionControl) {
this.positionControl = new L.Control.MousePosition();
this.addControl(this.positionControl);
}
});
L.control.mousePosition = function (options) {
return new L.Control.MousePosition(options);
};
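The deleted control above builds its label in `_onMouseMove` by rounding each coordinate and joining the pair in the configured order. A standalone sketch of just that logic, with hypothetical helper names and no Leaflet dependency:

```javascript
// Round to a fixed number of digits (what L.Util.formatNum does in Leaflet).
function formatNum(num, digits) {
  const pow = Math.pow(10, digits);
  return Math.round(num * pow) / pow;
}

// Join lat/lng in the configured order, mirroring the deleted _onMouseMove.
function positionLabel(latlng, { lngFirst = false, separator = " : ", numDigits = 5 } = {}) {
  const lng = formatNum(latlng.lng, numDigits);
  const lat = formatNum(latlng.lat, numDigits);
  return lngFirst ? lng + separator + lat : lat + separator + lng;
}

console.log(positionLabel({ lat: 41.123456, lng: -81.654321 }));
// → "41.12346 : -81.65432"
```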

Binary file not shown (before: 14 KiB).

Binary file not shown (before: 30 KiB).

Binary file not shown (before: 7.8 KiB).

Binary file not shown (before: 535 B).

Binary file not shown (before: 1.4 KiB).

Binary file not shown (before: 40 KiB).

Binary file not shown (before: 65 KiB).


@@ -1,127 +0,0 @@
/*
Leaflet.AwesomeMarkers, a plugin that adds colorful iconic markers for Leaflet, based on the Font Awesome icons
(c) 2012-2013, Lennard Voogdt
http://leafletjs.com
https://github.com/lvoogdt
*/
/*global L*/
import "./leaflet.awesome-markers.css";
(function (window, document, undefined) {
"use strict";
/*
* Leaflet.AwesomeMarkers assumes that you have already included the Leaflet library.
*/
L.AwesomeMarkers = {};
L.AwesomeMarkers.version = '2.0.1';
L.AwesomeMarkers.Icon = L.Icon.extend({
options: {
iconSize: [35, 45],
iconAnchor: [17, 42],
popupAnchor: [1, -32],
shadowAnchor: [10, 12],
shadowSize: [36, 16],
className: 'awesome-marker',
prefix: 'glyphicon',
spinClass: 'fa-spin',
extraClasses: '',
icon: 'home',
markerColor: 'blue',
iconColor: 'white'
},
initialize: function (options) {
options = L.Util.setOptions(this, options);
},
createIcon: function () {
var div = document.createElement('div'),
options = this.options;
if (options.icon) {
div.innerHTML = this._createInner();
}
if (options.bgPos) {
div.style.backgroundPosition =
(-options.bgPos.x) + 'px ' + (-options.bgPos.y) + 'px';
}
this._setIconStyles(div, 'icon-' + options.markerColor);
return div;
},
_createInner: function() {
var iconClass, iconSpinClass = "", iconColorClass = "", iconColorStyle = "", options = this.options;
if(options.icon.slice(0,options.prefix.length+1) === options.prefix + "-") {
iconClass = options.icon;
} else {
iconClass = options.prefix + "-" + options.icon;
}
if(options.spin && typeof options.spinClass === "string") {
iconSpinClass = options.spinClass;
}
if(options.iconColor) {
if(options.iconColor === 'white' || options.iconColor === 'black') {
iconColorClass = "icon-" + options.iconColor;
} else {
iconColorStyle = "style='color: " + options.iconColor + "' ";
}
}
return "<i " + iconColorStyle + "class='" + options.extraClasses + " " + options.prefix + " " + iconClass + " " + iconSpinClass + " " + iconColorClass + "'></i>";
},
_setIconStyles: function (img, name) {
var options = this.options,
size = L.point(options[name === 'shadow' ? 'shadowSize' : 'iconSize']),
anchor;
if (name === 'shadow') {
anchor = L.point(options.shadowAnchor || options.iconAnchor);
} else {
anchor = L.point(options.iconAnchor);
}
if (!anchor && size) {
anchor = size.divideBy(2, true);
}
img.className = 'awesome-marker-' + name + ' ' + options.className;
if (anchor) {
img.style.marginLeft = (-anchor.x) + 'px';
img.style.marginTop = (-anchor.y) + 'px';
}
if (size) {
img.style.width = size.x + 'px';
img.style.height = size.y + 'px';
}
},
createShadow: function () {
var div = document.createElement('div');
this._setIconStyles(div, 'shadow');
return div;
}
});
L.AwesomeMarkers.icon = function (options) {
return new L.AwesomeMarkers.Icon(options);
};
}(this, document));
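The `_createInner()` method above composes the marker's `<i>` element class string from the icon name, prefix, spin, and color options. A standalone sketch of that same logic, with Leaflet stripped out so it runs anywhere (the function name `buildIconHtml` is illustrative, not part of the plugin):

```javascript
// Standalone sketch of _createInner(): builds the inner <i> markup for a
// marker from AwesomeMarkers-style options. Mirrors the branching in the
// plugin source above; not a drop-in replacement.
function buildIconHtml(options) {
  var iconClass, iconSpinClass = "", iconColorClass = "", iconColorStyle = "";
  // Accept either a fully prefixed name ("fa-gear") or a bare name ("gear").
  if (options.icon.slice(0, options.prefix.length + 1) === options.prefix + "-") {
    iconClass = options.icon;
  } else {
    iconClass = options.prefix + "-" + options.icon;
  }
  if (options.spin && typeof options.spinClass === "string") {
    iconSpinClass = options.spinClass;
  }
  if (options.iconColor) {
    if (options.iconColor === "white" || options.iconColor === "black") {
      iconColorClass = "icon-" + options.iconColor; // covered by the sprite CSS
    } else {
      iconColorStyle = "style='color: " + options.iconColor + "' "; // inline style otherwise
    }
  }
  return "<i " + iconColorStyle + "class='" + options.extraClasses + " " +
    options.prefix + " " + iconClass + " " + iconSpinClass + " " +
    iconColorClass + "'></i>";
}
```

Note the two-tier color handling: `white` and `black` map to CSS classes (the stylesheet below defines `.icon-white`), while any other color falls back to an inline `style` attribute.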

View file

@@ -1,124 +0,0 @@
/*
Author: L. Voogdt
License: MIT
Version: 1.0
*/
/* Marker setup */
.awesome-marker {
background: url('images/markers-soft.png') no-repeat 0 0;
width: 35px;
height: 46px;
position:absolute;
left:0;
top:0;
display: block;
text-align: center;
}
.awesome-marker-shadow {
background: url('images/markers-shadow.png') no-repeat 0 0;
width: 36px;
height: 16px;
}
/* Retina displays */
@media (min--moz-device-pixel-ratio: 1.5),(-o-min-device-pixel-ratio: 3/2),
(-webkit-min-device-pixel-ratio: 1.5),(min-device-pixel-ratio: 1.5),(min-resolution: 1.5dppx) {
.awesome-marker {
background-image: url('images/markers-soft@2x.png');
background-size: 720px 46px;
}
.awesome-marker-shadow {
background-image: url('images/markers-shadow@2x.png');
background-size: 35px 16px;
}
}
.awesome-marker i {
color: #333;
margin-top: 10px;
display: inline-block;
font-size: 14px;
}
.awesome-marker .icon-white {
color: #fff;
}
/* Colors */
.awesome-marker-icon-red {
background-position: 0 0;
}
.awesome-marker-icon-darkred {
background-position: -180px 0;
}
.awesome-marker-icon-lightred {
background-position: -360px 0;
}
.awesome-marker-icon-orange {
background-position: -36px 0;
}
.awesome-marker-icon-beige {
background-position: -396px 0;
}
.awesome-marker-icon-green {
background-position: -72px 0;
}
.awesome-marker-icon-darkgreen {
background-position: -252px 0;
}
.awesome-marker-icon-lightgreen {
background-position: -432px 0;
}
.awesome-marker-icon-blue {
background-position: -108px 0;
}
.awesome-marker-icon-darkblue {
background-position: -216px 0;
}
.awesome-marker-icon-lightblue {
background-position: -468px 0;
}
.awesome-marker-icon-purple {
background-position: -144px 0;
}
.awesome-marker-icon-darkpurple {
background-position: -288px 0;
}
.awesome-marker-icon-pink {
background-position: -504px 0;
}
.awesome-marker-icon-cadetblue {
background-position: -324px 0;
}
.awesome-marker-icon-white {
background-position: -574px 0;
}
.awesome-marker-icon-gray {
background-position: -648px 0;
}
.awesome-marker-icon-lightgray {
background-position: -612px 0;
}
.awesome-marker-icon-black {
background-position: -682px 0;
}
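The color classes above are offsets into the 720px-wide `markers-soft.png` sprite: the first fifteen colors sit in 36px columns, while white/lightgray/gray/black use the ad-hoc offsets listed at the end of the stylesheet. A small standalone lookup (helper name illustrative) that reproduces those `background-position` values:

```javascript
// Sketch: map a marker color to its background-position in markers-soft.png,
// using the exact offsets from the stylesheet above.
var MARKER_OFFSETS = {
  red: 0, orange: -36, green: -72, blue: -108, purple: -144,
  darkred: -180, darkblue: -216, darkgreen: -252, darkpurple: -288,
  cadetblue: -324, lightred: -360, beige: -396, lightgreen: -432,
  lightblue: -468, pink: -504, white: -574, lightgray: -612,
  gray: -648, black: -682
};
function markerBackgroundPosition(color) {
  var x = MARKER_OFFSETS[color];
  if (x === undefined) x = MARKER_OFFSETS.blue; // plugin defaults markerColor to 'blue'
  return x + "px 0";
}
```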

View file

@@ -0,0 +1,493 @@
/*https://github.com/francoisromain/leaflet-markers-canvas/blob/master/licence.md*/
(function (global, factory) {
typeof exports === 'object' && typeof module !== 'undefined' ? factory(require('leaflet'), require('rbush')) :
typeof define === 'function' && define.amd ? define(['leaflet', 'rbush'], factory) :
(global = global || self, factory(global.L, global.RBush));
}(this, (function (L, RBush) { 'use strict';
L = L && Object.prototype.hasOwnProperty.call(L, 'default') ? L['default'] : L;
RBush = RBush && Object.prototype.hasOwnProperty.call(RBush, 'default') ? RBush['default'] : RBush;
var markersCanvas = {
// * * * * * * * * * * * * * * * * * * * * * * * * * * * *
//
// private: properties
//
// * * * * * * * * * * * * * * * * * * * * * * * * * * * *
_map: null,
_canvas: null,
_context: null,
// leaflet markers (used to getBounds)
_markers: [],
// visible markers
_markersTree: null,
// every marker positions (even out of the canvas)
_positionsTree: null,
// icon images index
_icons: {},
// * * * * * * * * * * * * * * * * * * * * * * * * * * * *
//
// public: global
//
// * * * * * * * * * * * * * * * * * * * * * * * * * * * *
addTo: function addTo(map) {
map.addLayer(this);
return this;
},
getBounds: function getBounds() {
var bounds = new L.LatLngBounds();
this._markers.forEach(function (marker) {
bounds.extend(marker.getLatLng());
});
return bounds;
},
redraw: function redraw() {
this._redraw(true);
},
clear: function clear() {
this._positionsTree = new RBush();
this._markersTree = new RBush();
this._markers = [];
this._redraw(true);
},
// * * * * * * * * * * * * * * * * * * * * * * * * * * * *
//
// public: markers
//
// * * * * * * * * * * * * * * * * * * * * * * * * * * * *
addMarker: function addMarker(marker, map) {
var ref = this._addMarker(marker, map);
var markerBox = ref.markerBox;
var positionBox = ref.positionBox;
var isVisible = ref.isVisible;
if (markerBox && isVisible) {
this._markersTree.insert(markerBox);
}
if (positionBox) {
this._positionsTree.insert(positionBox);
}
},
// add multiple markers (better for rBush performance)
addMarkers: function addMarkers(markers, map) {
if (!this._markersTree) this._markersTree = new RBush();
if (!this._positionsTree) this._positionsTree = new RBush();
var this$1 = this;
var markerBoxes = [];
var positionBoxes = [];
markers.forEach(function (marker) {
var ref = this$1._addMarker(marker, map);
var markerBox = ref.markerBox;
var positionBox = ref.positionBox;
var isVisible = ref.isVisible;
if (markerBox && isVisible) {
markerBoxes.push(markerBox);
}
if (positionBox) {
positionBoxes.push(positionBox);
}
});
this._markersTree.load(markerBoxes);
this._positionsTree.load(positionBoxes);
},
removeMarker: function removeMarker(marker) {
var latLng = marker.getLatLng();
var isVisible = this._map.getBounds().contains(latLng);
var positionBox = {
minX: latLng.lng,
minY: latLng.lat,
maxX: latLng.lng,
maxY: latLng.lat,
marker: marker,
};
this._positionsTree.remove(positionBox, function (a, b) {
return a.marker._leaflet_id === b.marker._leaflet_id;
});
if (isVisible) {
this._redraw(true);
}
},
// remove multiple markers (better for rBush performance)
removeMarkers: function removeMarkers(markers) {
var this$1 = this;
var hasChanged = false;
markers.forEach(function (marker) {
var latLng = marker.getLatLng();
var isVisible = this$1._map.getBounds().contains(latLng);
var positionBox = {
minX: latLng.lng,
minY: latLng.lat,
maxX: latLng.lng,
maxY: latLng.lat,
marker: marker,
};
this$1._positionsTree.remove(positionBox, function (a, b) {
return a.marker._leaflet_id === b.marker._leaflet_id;
});
if (isVisible) {
hasChanged = true;
}
});
if (hasChanged) {
this._redraw(true);
}
},
// * * * * * * * * * * * * * * * * * * * * * * * * * * * *
//
// leaflet: default methods
//
// * * * * * * * * * * * * * * * * * * * * * * * * * * * *
initialize: function initialize(options) {
L.Util.setOptions(this, options);
},
// called by Leaflet on `map.addLayer`
onAdd: function onAdd(map) {
this._map = map;
if (!this._canvas) this._initCanvas();
this.getPane().appendChild(this._canvas);
map.on("moveend", this._reset, this);
map.on("resize", this._reset, this);
map.on("click", this._fire, this);
map.on("mousemove", this._fire, this);
if (map._zoomAnimated) {
map.on("zoomanim", this._animateZoom, this);
}
this._reset();
},
// called by Leaflet
onRemove: function onRemove(map) {
this.getPane().removeChild(this._canvas);
map.off("click", this._fire, this);
map.off("mousemove", this._fire, this);
map.off("moveend", this._reset, this);
map.off("resize", this._reset, this);
if (map._zoomAnimated) {
map.off("zoomanim", this._animateZoom, this);
}
},
setOptions: function setOptions(options) {
L.Util.setOptions(this, options);
return this.redraw();
},
// * * * * * * * * * * * * * * * * * * * * * * * * * * * *
//
// private: global methods
//
// * * * * * * * * * * * * * * * * * * * * * * * * * * * *
_initCanvas: function _initCanvas() {
var ref = this._map.getSize();
var x = ref.x;
var y = ref.y;
var isAnimated = this._map.options.zoomAnimation && L.Browser.any3d;
this._canvas = L.DomUtil.create(
"canvas",
"leaflet-markers-canvas-layer leaflet-layer"
);
this._canvas.width = x;
this._canvas.height = y;
this._context = this._canvas.getContext("2d");
L.DomUtil.addClass(
this._canvas,
("leaflet-zoom-" + (isAnimated ? "animated" : "hide"))
);
},
// * * * * * * * * * * * * * * * * * * * * * * * * * * * *
//
// private: marker methods
//
// * * * * * * * * * * * * * * * * * * * * * * * * * * * *
_addMarker: function _addMarker(marker, map) {
if (marker.options.pane !== "markerPane" || !marker.options.icon) {
console.error("This is not a marker", marker);
return { markerBox: null, positionBox: null, isVisible: null };
}
// required for pop-up and tooltip
marker._map = map;
// add _leaflet_id property
L.Util.stamp(marker);
var latLng = marker.getLatLng();
var isVisible = map.getBounds().contains(latLng);
var ref = map.latLngToContainerPoint(latLng);
var x = ref.x;
var y = ref.y;
var ref$1 = marker.options.icon.options;
var iconSize = ref$1.iconSize;
var iconAnchor = ref$1.iconAnchor;
var markerBox = {
minX: x - iconAnchor[0],
minY: y - iconAnchor[1],
maxX: x + iconSize[0] - iconAnchor[0],
maxY: y + iconSize[1] - iconAnchor[1],
marker: marker,
};
var positionBox = {
minX: latLng.lng,
minY: latLng.lat,
maxX: latLng.lng,
maxY: latLng.lat,
marker: marker,
};
if (isVisible) {
this._drawMarker(marker, { x: x, y: y });
}
this._markers.push(marker);
return { markerBox: markerBox, positionBox: positionBox, isVisible: isVisible };
},
_drawMarker: function _drawMarker(marker, ref) {
if (!this._map) return;
var this$1 = this;
var x = ref.x;
var y = ref.y;
var ref$1 = marker.options.icon.options;
var iconUrl = ref$1.iconUrl;
if (marker.image) {
this._drawImage(marker, { x: x, y: y });
} else if (this._icons[iconUrl]) {
marker.image = this._icons[iconUrl].image;
if (this._icons[iconUrl].isLoaded) {
this._drawImage(marker, { x: x, y: y });
} else {
this._icons[iconUrl].elements.push({ marker: marker, x: x, y: y });
}
} else {
var image = new Image();
image.src = iconUrl;
marker.image = image;
this._icons[iconUrl] = {
image: image,
isLoaded: false,
elements: [{ marker: marker, x: x, y: y }],
};
image.onload = function () {
this$1._icons[iconUrl].isLoaded = true;
this$1._icons[iconUrl].elements.forEach(function (ref) {
var marker = ref.marker;
var x = ref.x;
var y = ref.y;
this$1._drawImage(marker, { x: x, y: y });
});
};
}
},
_drawImage: function _drawImage(marker, ref) {
var x = ref.x;
var y = ref.y;
var ref$1 = marker.options.icon.options;
var rotationAngle = ref$1.rotationAngle;
var iconAnchor = ref$1.iconAnchor;
var iconSize = ref$1.iconSize;
var angle = rotationAngle || 0;
this._context.save();
this._context.translate(x, y);
this._context.rotate((angle * Math.PI) / 180);
this._context.drawImage(
marker.image,
-iconAnchor[0],
-iconAnchor[1],
iconSize[0],
iconSize[1]
);
this._context.restore();
},
_redraw: function _redraw(clear) {
var this$1 = this;
if (clear) {
this._context.clearRect(0, 0, this._canvas.width, this._canvas.height);
}
if (!this._map || !this._positionsTree) { return; }
var mapBounds = this._map.getBounds();
var mapBoundsBox = {
minX: mapBounds.getWest(),
minY: mapBounds.getSouth(),
maxX: mapBounds.getEast(),
maxY: mapBounds.getNorth(),
};
// draw only visible markers
var markers = [];
this._positionsTree.search(mapBoundsBox).forEach(function (ref) {
var marker = ref.marker;
var latLng = marker.getLatLng();
var ref$1 = this$1._map.latLngToContainerPoint(latLng);
var x = ref$1.x;
var y = ref$1.y;
var ref$2 = marker.options.icon.options;
var iconSize = ref$2.iconSize;
var iconAnchor = ref$2.iconAnchor;
var markerBox = {
minX: x - iconAnchor[0],
minY: y - iconAnchor[1],
maxX: x + iconSize[0] - iconAnchor[0],
maxY: y + iconSize[1] - iconAnchor[1],
marker: marker,
};
markers.push(markerBox);
this$1._drawMarker(marker, { x: x, y: y });
});
this._markersTree.clear();
this._markersTree.load(markers);
},
// * * * * * * * * * * * * * * * * * * * * * * * * * * * *
//
// private: event methods
//
// * * * * * * * * * * * * * * * * * * * * * * * * * * * *
_reset: function _reset() {
var topLeft = this._map.containerPointToLayerPoint([0, 0]);
L.DomUtil.setPosition(this._canvas, topLeft);
var ref = this._map.getSize();
var x = ref.x;
var y = ref.y;
this._canvas.width = x;
this._canvas.height = y;
this._redraw();
},
_fire: function _fire(event) {
if (!this._markersTree) { return; }
var ref = event.containerPoint;
var x = ref.x;
var y = ref.y;
var markers = this._markersTree.search({
minX: x,
minY: y,
maxX: x,
maxY: y,
});
if (markers && markers.length) {
this._map._container.style.cursor = "pointer";
var marker = markers[0].marker;
if (event.type === "click") {
if (marker.listens("click")) {
marker.fire("click");
}
}
if (event.type === "mousemove") {
if (this._mouseOverMarker && this._mouseOverMarker !== marker) {
if (this._mouseOverMarker.listens("mouseout")) {
this._mouseOverMarker.fire("mouseout");
}
}
if (!this._mouseOverMarker || this._mouseOverMarker !== marker) {
this._mouseOverMarker = marker;
if (marker.listens("mouseover")) {
marker.fire("mouseover");
}
}
}
} else {
this._map._container.style.cursor = "";
if (event.type === "mousemove" && this._mouseOverMarker) {
if (this._mouseOverMarker.listens("mouseout")) {
this._mouseOverMarker.fire("mouseout");
}
delete this._mouseOverMarker;
}
}
},
_animateZoom: function _animateZoom(event) {
var scale = this._map.getZoomScale(event.zoom);
var offset = this._map._latLngBoundsToNewLayerBounds(
this._map.getBounds(),
event.zoom,
event.center
).min;
L.DomUtil.setTransform(this._canvas, offset, scale);
},
};
L.MarkersCanvas = L.Layer.extend(markersCanvas);
})));
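Both `_addMarker()` and `_redraw()` above derive an axis-aligned `markerBox` from the icon geometry, and `_fire()` hit-tests the cursor against those boxes via rbush. The arithmetic, extracted into a standalone sketch (Leaflet and rbush removed; function names illustrative):

```javascript
// Standalone sketch of the markerBox arithmetic in _addMarker()/_redraw():
// given a marker's container-pixel position plus its icon geometry, compute
// the bounding box that the plugin inserts into the rbush index.
function markerBoxFor(x, y, iconSize, iconAnchor) {
  return {
    minX: x - iconAnchor[0],               // anchor point sits inside the icon
    minY: y - iconAnchor[1],
    maxX: x + iconSize[0] - iconAnchor[0],
    maxY: y + iconSize[1] - iconAnchor[1],
  };
}

// Point-in-box test: what the rbush search in _fire() answers for each box.
function hitTest(box, px, py) {
  return px >= box.minX && px <= box.maxX && py >= box.minY && py <= box.maxY;
}
```

With the AwesomeMarkers defaults (`iconSize: [35, 45]`, `iconAnchor: [17, 42]`) a marker drawn at container point (100, 200) occupies the box {83, 158} to {118, 203}, so a click at (100, 200) hits it while one at (130, 200) does not.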

Some files were not shown because too many files have changed in this diff.