Merge pull request #384 from pierotofy/plugins

Plugins Support
pull/408/head v0.5.0
Piero Toffanin 2018-03-02 18:08:07 -05:00 committed by GitHub
commit d89f24fe03
No key found in the database for this signature
GPG key ID: 4AEE18F83AFDEB23
160 changed files with 2026 additions and 394 deletions

1
.env
View file

@ -6,3 +6,4 @@ WO_SSL_KEY=
WO_SSL_CERT=
WO_SSL_INSECURE_PORT_REDIRECT=80
WO_DEBUG=YES
WO_BROKER=redis://broker

1
.gitignore vendored
View file

@ -75,6 +75,7 @@ target/
# celery beat schedule file
celerybeat-schedule
celerybeat.pid
# dotenv
.env

View file

@ -1,4 +1,4 @@
-FROM python:3.5
+FROM python:3.6
MAINTAINER Piero Toffanin <pt@masseranolabs.com>
ENV PYTHONUNBUFFERED 1
@ -8,7 +8,7 @@ ENV PYTHONPATH $PYTHONPATH:/webodm
RUN mkdir /webodm
WORKDIR /webodm
-RUN curl --silent --location https://deb.nodesource.com/setup_6.x | bash -
+RUN curl --silent --location https://deb.nodesource.com/setup_8.x | bash -
RUN apt-get -qq install -y nodejs
# Configure use of testing branch of Debian
@ -36,7 +36,7 @@ WORKDIR /webodm/nodeodm/external/node-OpenDroneMap
RUN npm install --quiet
WORKDIR /webodm
-RUN npm install --quiet -g webpack && npm install --quiet && webpack
+RUN npm install --quiet -g webpack@3.11.0 && npm install --quiet && webpack
RUN python manage.py collectstatic --noinput
RUN rm /webodm/webodm/secret_key.py

View file

@ -21,6 +21,7 @@ A free, user-friendly, extendable application and [API](http://docs.webodm.org)
* [Getting Help](#getting-help)
* [Support the Project](#support-the-project)
* [Become a Contributor](#become-a-contributor)
* [Architecture Overview](#architecture-overview)
* [Run the docker version as a Linux Service](#run-the-docker-version-as-a-linux-service)
* [Run it natively](#run-it-natively)
@ -86,7 +87,7 @@ You **will not be able to distribute a single job across multiple processing nod
If you want to run WebODM in production, make sure to pass the `--no-debug` flag while starting WebODM:
```bash
-./webodm.sh down && ./webodm.sh start --no-debug
+./webodm.sh restart --no-debug
```
This will disable the `DEBUG` flag from `webodm/settings.py` within the docker container. This is [really important](https://docs.djangoproject.com/en/1.11/ref/settings/#std:setting-DEBUG).
@ -100,7 +101,7 @@ WebODM has the ability to automatically request and install a SSL certificate vi
- Run the following:
```bash
-./webodm.sh down && ./webodm.sh start --ssl --hostname webodm.myorg.com
+./webodm.sh restart --ssl --hostname webodm.myorg.com
```
That's it! The certificate will automatically renew when needed.
@ -112,7 +113,7 @@ If you want to specify your own key/certificate pair, simply pass the `--ssl-key
When using Docker, all processing results are stored in a docker volume and are not available on the host filesystem. If you want to store your files on the host filesystem instead of a docker volume, you need to pass a path via the `--media-dir` option:
```bash
-./webodm.sh down && ./webodm.sh start --media-dir /home/user/webodm_data
+./webodm.sh restart --media-dir /home/user/webodm_data
```
Note that existing task results will not be available after the change. Refer to the [Migrate Data Volumes](https://docs.docker.com/engine/tutorials/dockervolumes/#backup-restore-or-migrate-data-volumes) section of the Docker documentation for information on migrating existing task results.
@ -123,7 +124,7 @@ Sympthoms | Possible Solutions
--------- | ------------------
While starting WebODM you get: `from six.moves import _thread as thread ImportError: cannot import name _thread` | Try running: `sudo pip install --ignore-installed six`
While starting WebODM you get: `'WaitNamedPipe','The system cannot find the file specified.'` | 1. Make sure you have enabled VT-x virtualization in the BIOS.<br/>2. Try to downgrade your version of Python to 2.7
-While Accessing the WebODM interface you get: `OperationalError at / could not translate host name “db” to address: Name or service not known` or `ProgrammingError at / relation “auth_user” does not exist` | Try restarting your computer, then type: `./webodm.sh down && ./webodm.sh start`
+While Accessing the WebODM interface you get: `OperationalError at / could not translate host name “db” to address: Name or service not known` or `ProgrammingError at / relation “auth_user” does not exist` | Try restarting your computer, then type: `./webodm.sh restart`
Task output or console shows one of the following:<ul><li>`MemoryError`</li><li>`Killed`</li></ul> | Make sure that your Docker environment has enough RAM allocated: [MacOS Instructions](http://stackoverflow.com/a/39720010), [Windows Instructions](https://docs.docker.com/docker-for-windows/#advanced)
After an update, you get: `django.contrib.auth.models.DoesNotExist: Permission matching query does not exist.` | Try to remove your WebODM folder and start from a fresh git clone
Task fails with `Process exited with code null`, no task console output - OR - console output shows `Illegal Instruction` | If the computer running node-opendronemap is using an old or 32bit CPU, you need to compile [OpenDroneMap](https://github.com/OpenDroneMap/OpenDroneMap) from sources and setup node-opendronemap natively. You cannot use docker. Docker images work with CPUs with 64-bit extensions, MMX, SSE, SSE2, SSE3 and SSSE3 instruction set support or higher.
@ -191,7 +192,7 @@ Developer, I'm looking to build an app that will stay behind a firewall and just
- [ ] Volumetric Measurements
- [X] Cluster management and setup.
- [ ] Mission Planner
-- [ ] Plugins/Webhooks System
+- [X] Plugins/Webhooks System
- [X] API
- [X] Documentation
- [ ] Android Mobile App
@ -239,6 +240,17 @@ When your first pull request is accepted, don't forget to fill [this form](https
<img src="https://user-images.githubusercontent.com/1951843/36511023-344f86b2-1733-11e8-8cae-236645db407b.png" alt="T-Shirt" width="50%">
## Architecture Overview
WebODM is built with scalability and performance in mind. While the default setup places all databases and applications on the same machine, users can separate its components for increased performance (ex. place a Celery worker on a separate machine for running background tasks).
![Architecture](https://user-images.githubusercontent.com/1951843/36916884-3a269a7a-1e23-11e8-997a-a57cd6ca7950.png)
A few things to note:
* We use Celery workers to do background tasks such as resizing images and processing task results, but we use an ad-hoc scheduling mechanism to communicate with node-OpenDroneMap (which processes the orthophotos, 3D models, etc.). The choice to use two separate systems for task scheduling is due to the flexibility that an ad-hoc mechanism gives us for certain operations (capture task output, persistent data and ability to restart tasks mid-way, communication via REST calls, etc.).
* If loaded on multiple machines, Celery workers should all share their `app/media` directory with the Django application (via network shares). You can manage workers via `./worker.sh`
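A minimal sketch of the Celery pattern the bullets above describe (the broker URL mirrors `WO_BROKER=redis://broker` from `.env`; the task body and names are illustrative, not WebODM's actual `worker/tasks.py`):

```python
# Hypothetical sketch: a Celery app bound to the Redis broker, one background
# task, and how the web application would enqueue it without blocking.
from celery import Celery

app = Celery('worker', broker='redis://broker')  # WO_BROKER in .env

@app.task
def process_task(task_id):
    # Runs on a Celery worker (started via ./worker.sh), e.g. to resize
    # images or to collect results from a processing node.
    print("Processing task {}".format(task_id))

# From the Django side the call is fire-and-forget:
# process_task.delay(some_task_id)
```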
## Run the docker version as a Linux Service
If you wish to run the docker version with auto start/monitoring/stop, etc, as a systemd style Linux Service, a systemd unit file is included in the service folder of the repo.
@ -288,6 +300,7 @@ To run WebODM, you will need to install:
* GDAL (>= 2.1)
* Node.js (>= 6.0)
* Nginx (Linux/MacOS) - OR - Apache + mod_wsgi (Windows)
* Redis (>= 2.6)
On Linux, make sure you have:
@ -329,17 +342,29 @@ ALTER SYSTEM SET postgis.enable_outdb_rasters TO True;
ALTER SYSTEM SET postgis.gdal_enabled_drivers TO 'GTiff';
```
Start the redis broker:
```bash
redis-server
```
Then:
```bash
pip install -r requirements.txt
-sudo npm install -g webpack
+sudo npm install -g webpack@3.11.0
npm install
webpack
python manage.py collectstatic --noinput
chmod +x start.sh && ./start.sh --no-gunicorn
```
Finally, start at least one celery worker:
```bash
./worker.sh start
```
The `start.sh` script will use Django's built-in server if you pass the `--no-gunicorn` parameter. This is good for testing, but bad for production.
In production, if you have nginx installed, modify the configuration file in `nginx/nginx.conf` to match your system's configuration and just run `start.sh` without parameters.
@ -372,5 +397,6 @@ python --version
pip --version
npm --version
gdalinfo --version
redis-server --version
```
Should all work without errors.

View file

@ -1 +0,0 @@
0.4.1

View file

@ -1,24 +1,20 @@
import mimetypes
import os
from wsgiref.util import FileWrapper
from django.contrib.gis.db.models import GeometryField
from django.contrib.gis.db.models.functions import Envelope
from django.core.exceptions import ObjectDoesNotExist, SuspiciousFileOperation, ValidationError
from django.db import transaction
from django.db.models.functions import Cast
from django.http import HttpResponse
from wsgiref.util import FileWrapper
from rest_framework import status, serializers, viewsets, filters, exceptions, permissions, parsers
from rest_framework.decorators import detail_route
from rest_framework.permissions import IsAuthenticatedOrReadOnly
from rest_framework.response import Response
from rest_framework.decorators import detail_route
from rest_framework.views import APIView
-from nodeodm import status_codes
+from app import models, pending_actions
from .common import get_and_check_project, get_tile_json, path_traversal_check
from app import models, scheduler, pending_actions
from nodeodm.models import ProcessingNode
from worker import tasks as worker_tasks
from .common import get_and_check_project, get_tile_json, path_traversal_check
class TaskIDsSerializer(serializers.BaseSerializer):
@ -84,8 +80,8 @@ class TaskViewSet(viewsets.ViewSet):
task.last_error = None
task.save()
-# Call the scheduler (speed things up)
+# Process task right away
-scheduler.process_pending_tasks(background=True)
+worker_tasks.process_task.delay(task.id)
return Response({'success': True})
@ -149,7 +145,8 @@ class TaskViewSet(viewsets.ViewSet):
raise exceptions.ValidationError(detail="Cannot create task, you need at least 2 images")
with transaction.atomic():
-task = models.Task.objects.create(project=project)
+task = models.Task.objects.create(project=project,
pending_action=pending_actions.RESIZE if 'resize_to' in request.data else None)
for image in files:
models.ImageUpload.objects.create(task=task, image=image)
@ -159,6 +156,8 @@ class TaskViewSet(viewsets.ViewSet):
serializer.is_valid(raise_exception=True)
serializer.save()
worker_tasks.process_task.delay(task.id)
return Response(serializer.data, status=status.HTTP_201_CREATED)
@ -180,8 +179,8 @@ class TaskViewSet(viewsets.ViewSet):
serializer.is_valid(raise_exception=True)
serializer.save()
-# Call the scheduler (speed things up)
+# Process task right away
-scheduler.process_pending_tasks(background=True)
+worker_tasks.process_task.delay(task.id)
return Response(serializer.data)

View file

@ -1,40 +0,0 @@
from threading import Thread
import logging
from django import db
from app.testwatch import testWatch
logger = logging.getLogger('app.logger')
def background(func):
"""
Adds background={True|False} param to any function
so that we can call update_nodes_info(background=True) from the outside
"""
def wrapper(*args,**kwargs):
background = kwargs.get('background', False)
if 'background' in kwargs: del kwargs['background']
if background:
if testWatch.hook_pre(func, *args, **kwargs): return
# Create a function that closes all
# db connections at the end of the thread
# This is necessary to make sure we don't leave
# open connections lying around.
def execute_and_close_db():
ret = None
try:
ret = func(*args, **kwargs)
finally:
db.connections.close_all()
testWatch.hook_post(func, *args, **kwargs)
return ret
t = Thread(target=execute_and_close_db)
t.daemon = True
t.start()
return t
else:
return func(*args, **kwargs)
return wrapper

View file

@ -1,5 +1,6 @@
import os
import kombu
from django.contrib.auth.models import Permission
from django.contrib.auth.models import User, Group
from django.core.exceptions import ObjectDoesNotExist
@ -7,12 +8,14 @@ from django.core.files import File
from django.db.utils import ProgrammingError
from guardian.shortcuts import assign_perm
from worker import tasks as worker_tasks
from app.models import Preset
from app.models import Theme
from app.plugins import register_plugins
from nodeodm.models import ProcessingNode
# noinspection PyUnresolvedReferences
from webodm.settings import MEDIA_ROOT
-from . import scheduler, signals
+from . import signals
import logging
from .models import Task, Setting
from webodm import settings
@ -21,12 +24,14 @@ from webodm.wsgi import booted
def boot():
# booted is a shared memory variable to keep track of boot status
-# as multiple workers could trigger the boot sequence twice
+# as multiple gunicorn workers could trigger the boot sequence twice
if not settings.DEBUG and booted.value: return
booted.value = True
logger = logging.getLogger('app.logger')
logger.info("Booting WebODM {}".format(settings.VERSION))
if settings.DEBUG:
logger.warning("Debug mode is ON (for development this is OK)")
@ -57,17 +62,16 @@ def boot():
# Add default presets
Preset.objects.get_or_create(name='DSM + DTM', system=True,
-options=[{'name': 'dsm', 'value': True}, {'name': 'dtm', 'value': True}])
+options=[{'name': 'dsm', 'value': True}, {'name': 'dtm', 'value': True}, {'name': 'mesh-octree-depth', 'value': 11}])
Preset.objects.get_or_create(name='Fast Orthophoto', system=True,
options=[{'name': 'fast-orthophoto', 'value': True}])
Preset.objects.get_or_create(name='High Quality', system=True,
options=[{'name': 'dsm', 'value': True},
{'name': 'skip-resize', 'value': True},
{'name': 'mesh-octree-depth', 'value': "12"},
{'name': 'use-25dmesh', 'value': True},
{'name': 'min-num-features', 'value': 8000},
{'name': 'dem-resolution', 'value': "0.04"},
-{'name': 'orthophoto-resolution', 'value': "60"},
+{'name': 'orthophoto-resolution', 'value': "40"},
])
-Preset.objects.get_or_create(name='Default', system=True, options=[{'name': 'dsm', 'value': True}])
+Preset.objects.get_or_create(name='Default', system=True, options=[{'name': 'dsm', 'value': True}, {'name': 'mesh-octree-depth', 'value': 11}])
# Add settings
default_theme, created = Theme.objects.get_or_create(name='Default')
@ -87,11 +91,14 @@ def boot():
# Unlock any Task that might have been locked
Task.objects.filter(processing_lock=True).update(processing_lock=False)
-if not settings.TESTING:
+register_plugins()
# Setup and start scheduler
-scheduler.setup()
+if not settings.TESTING:
try:
worker_tasks.update_nodes_info.delay()
except kombu.exceptions.OperationalError as e:
logger.error("Cannot connect to celery broker at {}. Make sure that your redis-server is running at that address: {}".format(settings.CELERY_BROKER_URL, str(e)))
scheduler.update_nodes_info(background=True)
except ProgrammingError:
logger.warning("Could not touch the database. If running a migration, this is expected.")

View file

@ -0,0 +1,6 @@
+proj=utm +zone=15 +ellps=WGS84 +datum=WGS84 +units=m +no_defs
576529.22 5188003.22 0 4 6 tiny_drone_image.JPG
576529.25 5188003.25 0 7.75 8.25 tiny_drone_image.JPG
576529.22 5188003.22 0 4 6 tiny_drone_image_2.jpg
576529.27 5188003.27 0 8.19 8.42 tiny_drone_image_2.jpg
576529.27 5188003.27 0 8 8 missing_image.jpg

View file

@ -0,0 +1,4 @@
<O_O>
1 2 3 4 5 6
1 hello 3 hello 5 6

View file

@ -0,0 +1,25 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.11.7 on 2018-02-19 19:46
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('app', '0016_public_task_uuids'),
]
operations = [
migrations.AddField(
model_name='task',
name='resize_to',
field=models.IntegerField(default=-1, help_text='When set to a value different than -1, indicates that the images for this task have been / will be resized to the size specified here before processing.'),
),
migrations.AlterField(
model_name='task',
name='pending_action',
field=models.IntegerField(blank=True, choices=[(1, 'CANCEL'), (2, 'REMOVE'), (3, 'RESTART'), (4, 'RESIZE')], db_index=True, help_text='A requested action to be performed on the task. The selected action will be performed by the worker at the next iteration.', null=True),
),
]

View file

@ -32,7 +32,7 @@ class Project(models.Model):
super().delete(*args)
else:
# Need to remove all tasks before we can remove this project
-# which will be deleted on the scheduler after pending actions
+# which will be deleted by workers after pending actions
# have been completed
self.task_set.update(pending_action=pending_actions.REMOVE)
self.deleting = True

View file

@ -4,6 +4,12 @@ import shutil
import zipfile
import uuid as uuid_module
import json
from shlex import quote
import piexif
import re
from PIL import Image
from django.contrib.gis.gdal import GDALRaster
from django.contrib.gis.gdal import OGRGeometry
from django.contrib.gis.geos import GEOSGeometry
@ -22,6 +28,11 @@ from nodeodm.models import ProcessingNode
from webodm import settings
from .project import Project
from functools import partial
from multiprocessing import cpu_count
from concurrent.futures import ThreadPoolExecutor
import subprocess
logger = logging.getLogger('app.logger')
@ -57,6 +68,47 @@ def validate_task_options(value):
raise ValidationError("Invalid options")
def resize_image(image_path, resize_to):
try:
im = Image.open(image_path)
path, ext = os.path.splitext(image_path)
resized_image_path = os.path.join(path + '.resized' + ext)
width, height = im.size
max_side = max(width, height)
if max_side < resize_to:
logger.warning('You asked to make {} bigger ({} --> {}), but we are not going to do that.'.format(image_path, max_side, resize_to))
im.close()
return {'path': image_path, 'resize_ratio': 1}
ratio = float(resize_to) / float(max_side)
resized_width = int(width * ratio)
resized_height = int(height * ratio)
im.thumbnail((resized_width, resized_height), Image.LANCZOS)
if 'exif' in im.info:
exif_dict = piexif.load(im.info['exif'])
exif_dict['Exif'][piexif.ExifIFD.PixelXDimension] = resized_width
exif_dict['Exif'][piexif.ExifIFD.PixelYDimension] = resized_height
im.save(resized_image_path, "JPEG", exif=piexif.dump(exif_dict), quality=100)
else:
im.save(resized_image_path, "JPEG", quality=100)
im.close()
# Delete original image, rename resized image to original
os.remove(image_path)
os.rename(resized_image_path, image_path)
logger.info("Resized {} to {}x{}".format(image_path, resized_width, resized_height))
except IOError as e:
logger.warning("Cannot resize {}: {}.".format(image_path, str(e)))
return None
return {'path': image_path, 'resize_ratio': ratio}
class Task(models.Model):
ASSETS_MAP = {
'all.zip': 'all.zip',
@ -85,6 +137,7 @@ class Task(models.Model):
(pending_actions.CANCEL, 'CANCEL'),
(pending_actions.REMOVE, 'REMOVE'),
(pending_actions.RESTART, 'RESTART'),
(pending_actions.RESIZE, 'RESIZE'),
)
id = models.UUIDField(primary_key=True, default=uuid_module.uuid4, unique=True, serialize=False, editable=False)
@ -109,9 +162,10 @@ class Task(models.Model):
# mission
created_at = models.DateTimeField(default=timezone.now, help_text="Creation date")
-pending_action = models.IntegerField(choices=PENDING_ACTIONS, db_index=True, null=True, blank=True, help_text="A requested action to be performed on the task. The selected action will be performed by the scheduler at the next iteration.")
+pending_action = models.IntegerField(choices=PENDING_ACTIONS, db_index=True, null=True, blank=True, help_text="A requested action to be performed on the task. The selected action will be performed by the worker at the next iteration.")
public = models.BooleanField(default=False, help_text="A flag indicating whether this task is available to the public")
resize_to = models.IntegerField(default=-1, help_text="When set to a value different than -1, indicates that the images for this task have been / will be resized to the size specified here before processing.")
def __init__(self, *args, **kwargs):
@ -172,9 +226,14 @@ class Task(models.Model):
"""
Get a path relative to the place where assets are stored
"""
return self.task_path("assets", *args)
def task_path(self, *args):
"""
Get path relative to the root task directory
"""
return os.path.join(settings.MEDIA_ROOT,
assets_directory_path(self.id, self.project.id, ""),
"assets",
*args)
def is_asset_available_slow(self, asset):
@ -221,12 +280,18 @@ class Task(models.Model):
def process(self):
"""
This method contains the logic for processing tasks asynchronously
-from a background thread or from the scheduler. Here tasks that are
+from a background thread or from a worker. Here tasks that are
ready to be processed execute some logic. This could be communication
with a processing node or executing a pending action.
"""
try:
if self.pending_action == pending_actions.RESIZE:
resized_images = self.resize_images()
self.resize_gcp(resized_images)
self.pending_action = None
self.save()
if self.auto_processing_node and not self.status in [status_codes.FAILED, status_codes.CANCELED]:
# No processing node assigned and need to auto assign
if self.processing_node is None:
@ -507,7 +572,6 @@ class Task(models.Model):
except FileNotFoundError as e:
logger.warning(e)
def set_failure(self, error_message):
logger.error("FAILURE FOR {}: {}".format(self, error_message))
self.last_error = error_message
@ -515,8 +579,61 @@ class Task(models.Model):
self.pending_action = None
self.save()
def find_all_files_matching(self, regex):
directory = full_task_directory_path(self.id, self.project.id)
return [os.path.join(directory, f) for f in os.listdir(directory) if
re.match(regex, f, re.IGNORECASE)]
def resize_images(self):
"""
Destructively resize this task's JPG images while retaining EXIF tags.
Resulting images are always converted to JPG.
TODO: add support for tiff files
:return list containing paths of resized images and resize ratios
"""
if self.resize_to < 0:
logger.warning("We were asked to resize images to {}, this might be an error.".format(self.resize_to))
return []
images_path = self.find_all_files_matching(r'.*\.jpe?g$')
with ThreadPoolExecutor(max_workers=cpu_count()) as executor:
resized_images = list(filter(lambda i: i is not None, executor.map(
partial(resize_image, resize_to=self.resize_to),
images_path)))
return resized_images
def resize_gcp(self, resized_images):
"""
Destructively change this task's GCP file (if any)
by resizing the location of GCP entries.
:param resized_images: list of objects having "path" and "resize_ratio" keys
for example [{'path': 'path/to/DJI_0018.jpg', 'resize_ratio': 0.25}, ...]
:return: path to changed GCP file or None if no GCP file was found/changed
"""
gcp_path = self.find_all_files_matching(r'.*\.txt$')
if len(gcp_path) == 0: return None
# Assume we only have a single GCP file per task
gcp_path = gcp_path[0]
resize_script_path = os.path.join(settings.BASE_DIR, 'app', 'scripts', 'resize_gcp.js')
dict = {}
for ri in resized_images:
dict[os.path.basename(ri['path'])] = ri['resize_ratio']
try:
new_gcp_content = subprocess.check_output("node {} {} '{}'".format(quote(resize_script_path), quote(gcp_path), json.dumps(dict)), shell=True)
with open(gcp_path, 'w') as f:
f.write(new_gcp_content.decode('utf-8'))
logger.info("Resized GCP file {}".format(gcp_path))
return gcp_path
except subprocess.CalledProcessError as e:
logger.warning("Could not resize GCP file {}: {}".format(gcp_path, str(e)))
return None
class Meta:
permissions = (
('view_task', 'Can view task'),
)

View file

@ -1,3 +1,4 @@
CANCEL = 1
REMOVE = 2
RESTART = 3
RESIZE = 4

View file

@ -0,0 +1,4 @@
from .plugin_base import PluginBase
from .menu import Menu
from .mount_point import MountPoint
from .functions import *

View file

@ -0,0 +1,111 @@
import os
import logging
import importlib
import django
import json
from django.conf.urls import url
from functools import reduce
from webodm import settings
logger = logging.getLogger('app.logger')
def register_plugins():
for plugin in get_active_plugins():
plugin.register()
logger.info("Registered {}".format(plugin))
def get_url_patterns():
"""
@return the patterns to expose the /public directory of each plugin (if needed)
"""
url_patterns = []
for plugin in get_active_plugins():
for mount_point in plugin.mount_points():
url_patterns.append(url('^plugins/{}/{}'.format(plugin.get_name(), mount_point.url),
mount_point.view,
*mount_point.args,
**mount_point.kwargs))
if plugin.has_public_path():
url_patterns.append(url('^plugins/{}/(.*)'.format(plugin.get_name()),
django.views.static.serve,
{'document_root': plugin.get_path("public")}))
return url_patterns
plugins = None
def get_active_plugins():
# Cache plugins search
global plugins
if plugins != None: return plugins
plugins = []
plugins_path = get_plugins_path()
for dir in [d for d in os.listdir(plugins_path) if os.path.isdir(plugins_path)]:
# Each plugin must have a manifest.json and a plugin.py
plugin_path = os.path.join(plugins_path, dir)
manifest_path = os.path.join(plugin_path, "manifest.json")
pluginpy_path = os.path.join(plugin_path, "plugin.py")
disabled_path = os.path.join(plugin_path, "disabled")
# Do not load test plugin unless we're in test mode
if os.path.basename(plugin_path) == 'test' and not settings.TESTING:
continue
if not os.path.isfile(manifest_path) or not os.path.isfile(pluginpy_path):
logger.warning("Found invalid plugin in {}".format(plugin_path))
continue
# Plugins that have a "disabled" file are disabled
if os.path.isfile(disabled_path):
continue
# Read manifest
with open(manifest_path) as manifest_file:
manifest = json.load(manifest_file)
if 'webodmMinVersion' in manifest:
min_version = manifest['webodmMinVersion']
if versionToInt(min_version) > versionToInt(settings.VERSION):
logger.warning("In {} webodmMinVersion is set to {} but WebODM version is {}. Plugin will not be loaded. Update WebODM.".format(manifest_path, min_version, settings.VERSION))
continue
# Instantiate the plugin
try:
module = importlib.import_module("plugins.{}".format(dir))
cls = getattr(module, "Plugin")
plugins.append(cls())
except Exception as e:
logger.warning("Failed to instantiate plugin {}: {}".format(dir, e))
return plugins
def get_plugins_path():
current_path = os.path.dirname(os.path.realpath(__file__))
return os.path.abspath(os.path.join(current_path, "..", "..", "plugins"))
def versionToInt(version):
"""
Converts a WebODM version string (major.minor.build) to a integer value
for comparison
>>> versionToInt("1.2.3")
100203
>>> versionToInt("1")
100000
>>> versionToInt("1.2.3.4")
100203
>>> versionToInt("wrong")
-1
"""
try:
return sum([reduce(lambda mult, ver: mult * ver, i) for i in zip([100000, 100, 1], map(int, version.split(".")))])
except:
return -1
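A sketch of how these helpers might be wired into Django's URL configuration (assumed wiring, not shown in this diff): `get_url_patterns()` already returns ready-made `url()` entries, so the project `urls.py` only needs to append them.

```python
# Hypothetical webodm/urls.py excerpt (assumption: the actual wiring may differ).
from django.conf.urls import url
from app.plugins import get_url_patterns

urlpatterns = [
    # ... regular application routes ...
]

# Mount each plugin's views and its /public assets under /plugins/<name>/
urlpatterns += get_url_patterns()
```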

View file

@ -0,0 +1,22 @@
class Menu:
def __init__(self, label, link = "javascript:void(0)", css_icon = 'fa fa-caret-right fa-fw', submenu = []):
"""
Create a menu
:param label: text shown in entry
:param css_icon: class used for showing an icon (for example, "fa fa-wrench")
:param link: link of entry (use "#" or "javascript:void(0);" for no action)
:param submenu: list of Menu items
"""
super().__init__()
self.label = label
self.css_icon = css_icon
self.link = link
self.submenu = submenu
if (self.has_submenu()):
self.link = "#"
def has_submenu(self):
return len(self.submenu) > 0

View file

@ -0,0 +1,17 @@
import re
class MountPoint:
def __init__(self, url, view, *args, **kwargs):
"""
:param url: path to mount this view to, relative to plugins directory
:param view: Django view
:param args: extra args to pass to url() call
:param kwargs: extra kwargs to pass to url() call
"""
super().__init__()
self.url = re.sub(r'^/+', '', url) # remove leading slashes
self.view = view
self.args = args
self.kwargs = kwargs

View file

@ -0,0 +1,85 @@
import logging, os, sys
from abc import ABC
logger = logging.getLogger('app.logger')
class PluginBase(ABC):
def __init__(self):
self.name = self.get_module_name().split(".")[-2]
def register(self):
pass
def get_path(self, *paths):
"""
Gets the path of the directory of the plugin, optionally chained with paths
:return: path
"""
return os.path.join(os.path.dirname(sys.modules[self.get_module_name()].__file__), *paths)
def get_name(self):
"""
:return: Name of current module (reflects the directory in which this plugin is stored)
"""
return self.name
def get_module_name(self):
return self.__class__.__module__
def get_include_js_urls(self):
return [self.public_url(js_file) for js_file in self.include_js_files()]
def get_include_css_urls(self):
return [self.public_url(css_file) for css_file in self.include_css_files()]
def public_url(self, path):
"""
:param path: unix-style path
:return: Path that can be accessed via a URL (from the browser), relative to plugins/<yourplugin>/public
"""
return "/plugins/{}/{}".format(self.get_name(), path)
def template_path(self, path):
"""
:param path: unix-style path
:return: path used to reference Django templates for a plugin
"""
return "plugins/{}/templates/{}".format(self.get_name(), path)
def has_public_path(self):
return os.path.isdir(self.get_path("public"))
def include_js_files(self):
"""
Should be overriden by plugins to communicate
which JS files should be included in the WebODM interface
All paths are relative to a plugin's /public folder.
"""
return []
def include_css_files(self):
"""
Should be overriden by plugins to communicate
which CSS files should be included in the WebODM interface
All paths are relative to a plugin's /public folder.
"""
return []
def main_menu(self):
"""
Should be overriden by plugins that want to add
items to the side menu.
:return: [] of Menu objects
"""
return []
def mount_points(self):
"""
Should be overriden by plugins that want to connect
custom Django views
:return: [] of MountPoint objects
"""
return []
def __str__(self):
return "[{}]".format(self.get_module_name())

View file

@ -0,0 +1,5 @@
{% extends "app/logged_in_base.html" %}
{% block content %}
Hello World! Override me.
{% endblock %}

View file

@ -1,96 +0,0 @@
import logging
import traceback
from multiprocessing.dummy import Pool as ThreadPool
from threading import Lock
from apscheduler.schedulers import SchedulerAlreadyRunningError, SchedulerNotRunningError
from apscheduler.schedulers.background import BackgroundScheduler
from django import db
from django.db.models import Q, Count
from webodm import settings
from app.models import Task, Project
from nodeodm import status_codes
from nodeodm.models import ProcessingNode
from app.background import background
logger = logging.getLogger('app.logger')
scheduler = BackgroundScheduler({
'apscheduler.job_defaults.coalesce': 'true',
'apscheduler.job_defaults.max_instances': '3',
})
@background
def update_nodes_info():
processing_nodes = ProcessingNode.objects.all()
for processing_node in processing_nodes:
processing_node.update_node_info()
tasks_mutex = Lock()
@background
def process_pending_tasks():
tasks = []
try:
tasks_mutex.acquire()
# All tasks that have a processing node assigned
# Or that need one assigned (via auto)
# or tasks that need a status update
# or tasks that have a pending action
# and that are not locked (being processed by another thread)
tasks = Task.objects.filter(Q(processing_node__isnull=True, auto_processing_node=True) |
Q(Q(status=None) | Q(status__in=[status_codes.QUEUED, status_codes.RUNNING]), processing_node__isnull=False) |
Q(pending_action__isnull=False)).exclude(Q(processing_lock=True))
for task in tasks:
task.processing_lock = True
task.save()
finally:
tasks_mutex.release()
def process(task):
try:
task.process()
except Exception as e:
logger.error("Uncaught error! This is potentially bad. Please report it to http://github.com/OpenDroneMap/WebODM/issues: {} {}".format(e, traceback.format_exc()))
if settings.TESTING: raise e
finally:
# Might have been deleted
if task.pk is not None:
task.processing_lock = False
task.save()
db.connections.close_all()
if tasks.count() > 0:
pool = ThreadPool(tasks.count())
pool.map(process, tasks, chunksize=1)
pool.close()
pool.join()
def cleanup_projects():
# Delete all projects that are marked for deletion
# and that have no tasks left
total, count_dict = Project.objects.filter(deleting=True).annotate(
tasks_count=Count('task')
).filter(tasks_count=0).delete()
if total > 0 and 'app.Project' in count_dict:
logger.info("Deleted {} projects".format(count_dict['app.Project']))
def setup():
try:
scheduler.start()
scheduler.add_job(update_nodes_info, 'interval', seconds=30)
scheduler.add_job(process_pending_tasks, 'interval', seconds=5)
scheduler.add_job(cleanup_projects, 'interval', seconds=60)
except SchedulerAlreadyRunningError:
logger.warning("Scheduler already running (this is OK while testing)")
def teardown():
logger.info("Stopping scheduler...")
try:
scheduler.shutdown()
logger.info("Scheduler stopped")
except SchedulerNotRunningError:
logger.warning("Scheduler not running")

View file

@ -0,0 +1,25 @@
#!/usr/bin/env node
const fs = require('fs');
const Gcp = require('../static/app/js/classes/Gcp');
const argv = process.argv.slice(2);
function die(s){
console.log(s);
process.exit(1);
}
if (argv.length != 2){
die(`Usage: ./resize_gcp.js <path/to/gcp_file.txt> <JSON encoded image-->ratio map>`);
}
const [inputFile, jsonMap] = argv;
if (!fs.existsSync(inputFile)){
die('File does not exist: ' + inputFile);
}
const originalGcp = new Gcp(fs.readFileSync(inputFile, 'utf8'));
try{
const map = JSON.parse(jsonMap);
const newGcp = originalGcp.resize(map, true);
console.log(newGcp.toString());
}catch(e){
die("Not a valid JSON string: " + jsonMap);
}

View file

@ -260,3 +260,8 @@ footer{
border-top-width: 1px;
}
}
.full-height{
height: calc(100vh - 110px);
padding-bottom: 12px;
}

View file

@ -9,6 +9,9 @@ ul#side-menu.nav a,
{
color: theme("primary");
}
.theme-border-primary{
border-color: theme("primary");
}
.tooltip{
.tooltip-inner{
background-color: theme("primary");
@ -162,6 +165,9 @@ footer,
.popover-title{
border-bottom-color: theme("border");
}
.theme-border{
border-color: theme("border");
}
/* Highlight */
.task-list-item:nth-child(odd),

Binary file not shown.

After

Width:  |  Height:  |  Size: 3.5 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.4 KiB

View file

@ -0,0 +1,58 @@
class Gcp{
constructor(text){
this.text = text;
}
// Scale the image location of GPCs
// according to the values specified in the map
// @param imagesRatioMap {Object} object in which keys are image names and values are scaling ratios
// example: {'DJI_0018.jpg': 0.5, 'DJI_0019.JPG': 0.25}
// @return {Gcp} a new GCP object
resize(imagesRatioMap, muteWarnings = false){
// Make sure dict is all lower case and values are floats
let ratioMap = {};
for (let k in imagesRatioMap) ratioMap[k.toLowerCase()] = parseFloat(imagesRatioMap[k]);
const lines = this.text.split(/\r?\n/);
let output = "";
if (lines.length > 0){
output += lines[0] + '\n'; // coordinate system description
for (let i = 1; i < lines.length; i++){
let line = lines[i].trim();
if (line !== ""){
let parts = line.split(/\s+/);
if (parts.length >= 6){
let [x, y, z, px, py, imagename, ...extracols] = parts;
let ratio = ratioMap[imagename.toLowerCase()];
px = parseFloat(px);
py = parseFloat(py);
if (ratio !== undefined){
px *= ratio;
py *= ratio;
}else{
if (!muteWarnings) console.warn(`${imagename} not found in ratio map. Are you missing some images?`);
}
let extra = extracols.length > 0 ? ' ' + extracols.join(' ') : '';
output += `${x} ${y} ${z} ${px.toFixed(2)} ${py.toFixed(2)} ${imagename}${extra}\n`;
}else{
if (!muteWarnings) console.warn(`Invalid GCP format at line ${i}: ${line}`);
output += line + '\n';
}
}
}
}
return new Gcp(output);
}
toString(){
return this.text;
}
}
module.exports = Gcp;

View file

@ -1,6 +1,7 @@
const CANCEL = 1,
REMOVE = 2,
-RESTART = 3;
+RESTART = 3,
RESIZE = 4;
let pendingActions = {
[CANCEL]: {
@ -11,6 +12,9 @@ let pendingActions = {
},
[RESTART]: {
descr: "Restarting..."
},
[RESIZE]: {
descr: "Resizing images..."
}
};
@ -18,6 +22,7 @@ export default {
CANCEL: CANCEL,
REMOVE: REMOVE,
RESTART: RESTART,
RESIZE: RESIZE,
description: function(pendingAction) {
if (pendingActions[pendingAction]) return pendingActions[pendingAction].descr;

View file

@ -0,0 +1,26 @@
const dict = [
{k: 'NO', v: 0, human: "No"}, // Don't resize
{k: 'YES', v: 1, human: "Yes"}, // Resize on server
{k: 'YESINBROWSER', v: 2, human: "Yes (In browser)"} // Resize on browser
];
const exp = {
all: () => dict.map(d => d.v),
fromString: (s) => {
let v = parseInt(s);
if (!isNaN(v) && v >= 0 && v <= 2) return v;
else return 0;
},
toHuman: (v) => {
for (let i in dict){
if (dict[i].v === v) return dict[i].human;
}
throw new Error("Invalid value: " + v);
}
};
dict.forEach(en => {
exp[en.k] = en.v;
});
export default exp;

View file

@ -63,6 +63,23 @@ export default {
parser.href = href;
return `${parser.protocol}//${parser.host}/${path}`;
},
assert: function(condition, message) {
if (!condition) {
message = message || "Assertion failed";
if (typeof Error !== "undefined") {
throw new Error(message);
}
throw message; // Fallback
}
},
getCurrentScriptDir: function(){
let scripts= document.getElementsByTagName('script');
let path= scripts[scripts.length-1].src.split('?')[0]; // remove any ?query
let mydir= path.split('/').slice(0, -1).join('/')+'/'; // remove last filename part of path
return mydir;
}
};

View file

@ -0,0 +1,30 @@
import { EventEmitter } from 'fbemitter';
import ApiFactory from './ApiFactory';
import Map from './Map';
import $ from 'jquery';
import SystemJS from 'SystemJS';
if (!window.PluginsAPI){
const events = new EventEmitter();
const factory = new ApiFactory(events);
SystemJS.config({
baseURL: '/plugins',
map: {
css: '/static/app/js/vendor/css.js'
},
meta: {
'*.css': { loader: 'css' }
}
});
window.PluginsAPI = {
Map: factory.create(Map),
SystemJS,
events
};
}
export default window.PluginsAPI;

View file

@ -0,0 +1,53 @@
import SystemJS from 'SystemJS';
export default class ApiFactory{
// @param events {EventEmitter}
constructor(events){
this.events = events;
}
// @param api {Object}
create(api){
// Adds two functions to obj
// - eventName
// - triggerEventName
// We could just use events, but methods
// are more robust as we can detect more easily if
// things break
const addEndpoint = (obj, eventName, preTrigger = () => {}) => {
obj[eventName] = (callbackOrDeps, callbackOrUndef) => {
if (Array.isArray(callbackOrDeps)){
// Deps
// Load dependencies, then raise event as usual
// by appending the dependencies to the argument list
this.events.addListener(`${api.namespace}::${eventName}`, (...args) => {
Promise.all(callbackOrDeps.map(dep => SystemJS.import(dep)))
.then((...deps) => {
callbackOrUndef(...(Array.from(args).concat(...deps)));
});
});
}else{
// Callback
this.events.addListener(`${api.namespace}::${eventName}`, callbackOrDeps);
}
}
const triggerEventName = "trigger" + eventName[0].toUpperCase() + eventName.slice(1);
obj[triggerEventName] = (...args) => {
preTrigger(...args);
this.events.emit(`${api.namespace}::${eventName}`, ...args);
};
}
const obj = {};
api.endpoints.forEach(endpoint => {
if (!Array.isArray(endpoint)) endpoint = [endpoint];
addEndpoint(obj, ...endpoint);
});
return obj;
}
}

View file

@ -0,0 +1,17 @@
import Utils from '../Utils';
const { assert } = Utils;
const leafletPreCheck = (options) => {
assert(options.map !== undefined);
};
export default {
namespace: "Map",
endpoints: [
["willAddControls", leafletPreCheck],
["didAddControls", leafletPreCheck]
]
};

View file

@ -1,12 +1,8 @@
import React from 'react';
import ReactDOMServer from 'react-dom/server';
import ReactDOM from 'react-dom';
import '../css/Map.scss';
import 'leaflet/dist/leaflet.css';
import Leaflet from 'leaflet';
import async from 'async';
import 'leaflet-measure/dist/leaflet-measure.css';
import 'leaflet-measure/dist/leaflet-measure';
import '../vendor/leaflet/L.Control.MousePosition.css';
import '../vendor/leaflet/L.Control.MousePosition';
import '../vendor/leaflet/Leaflet.Autolayers/css/leaflet.auto-layers.css';
@ -17,6 +13,7 @@ import SwitchModeButton from './SwitchModeButton';
import ShareButton from './ShareButton';
import AssetDownloads from '../classes/AssetDownloads';
import PropTypes from 'prop-types';
import PluginsAPI from '../classes/plugins/API';
class Map extends React.Component {
static defaultProps = {
@ -174,16 +171,22 @@ class Map extends React.Component {
this.map = Leaflet.map(this.container, {
scrollWheelZoom: true,
-positionControl: true
+positionControl: true,
zoomControl: false
});
-const measureControl = Leaflet.control.measure({
+PluginsAPI.Map.triggerWillAddControls({
-primaryLengthUnit: 'meters',
+map: this.map
secondaryLengthUnit: 'feet',
primaryAreaUnit: 'sqmeters',
secondaryAreaUnit: 'acres'
});
measureControl.addTo(this.map);
Leaflet.control.scale({
maxWidth: 250,
}).addTo(this.map);
//add zoom control with your options
Leaflet.control.zoom({
position:'bottomleft'
}).addTo(this.map);
if (showBackground) {
this.basemaps = {
@ -216,10 +219,6 @@ class Map extends React.Component {
}).addTo(this.map);
this.map.fitWorld();
Leaflet.control.scale({
maxWidth: 250,
}).addTo(this.map);
this.map.attributionControl.setPrefix("");
this.loadImageryLayers(true).then(() => {
@ -236,6 +235,13 @@ class Map extends React.Component {
}
});
});
// PluginsAPI.events.addListener('Map::AddPanel', (e) => {
// console.log("Received response: " + e);
// });
PluginsAPI.Map.triggerDidAddControls({
map: this.map
});
}
componentDidUpdate(prevProps) {
@ -269,6 +275,7 @@ class Map extends React.Component {
return (
<div style={{height: "100%"}} className="map">
<ErrorMessage bind={[this, 'error']} />
<div
style={{height: "100%"}}
ref={(domNode) => (this.container = domNode)}

View file

@ -3,6 +3,7 @@ import React from 'react';
import EditTaskForm from './EditTaskForm';
import PropTypes from 'prop-types';
import Storage from '../classes/Storage';
import ResizeModes from '../classes/ResizeModes';
class NewTaskPanel extends React.Component {
static defaultProps = {
@ -25,14 +26,14 @@ class NewTaskPanel extends React.Component {
this.state = {
name: props.name,
editTaskFormLoaded: false,
-resize: Storage.getItem('do_resize') !== null ? Storage.getItem('do_resize') == "1" : true,
+resizeMode: Storage.getItem('resize_mode') === null ? ResizeModes.YES : ResizeModes.fromString(Storage.getItem('resize_mode')),
resizeSize: parseInt(Storage.getItem('resize_size')) || 2048
};
this.save = this.save.bind(this);
this.handleFormTaskLoaded = this.handleFormTaskLoaded.bind(this);
this.getTaskInfo = this.getTaskInfo.bind(this);
-this.setResize = this.setResize.bind(this);
+this.setResizeMode = this.setResizeMode.bind(this);
this.handleResizeSizeChange = this.handleResizeSizeChange.bind(this);
}
@ -40,7 +41,7 @@ class NewTaskPanel extends React.Component {
e.preventDefault();
this.taskForm.saveLastPresetToStorage();
Storage.setItem('resize_size', this.state.resizeSize);
-Storage.setItem('do_resize', this.state.resize ? "1" : "0");
+Storage.setItem('resize_mode', this.state.resizeMode);
if (this.props.onSave) this.props.onSave(this.getTaskInfo());
}
@ -54,13 +55,14 @@ class NewTaskPanel extends React.Component {
getTaskInfo(){
return Object.assign(this.taskForm.getTaskInfo(), {
-resizeTo: (this.state.resize && this.state.resizeSize > 0) ? this.state.resizeSize : null
+resizeSize: this.state.resizeSize,
resizeMode: this.state.resizeMode
});
}
-setResize(flag){
+setResizeMode(v){
return e => {
-this.setState({resize: flag});
+this.setState({resizeMode: v});
}
}
@ -91,23 +93,19 @@ class NewTaskPanel extends React.Component {
<div className="col-sm-10">
<div className="btn-group">
<button type="button" className="btn btn-default dropdown-toggle" data-toggle="dropdown">
-{this.state.resize ?
+{ResizeModes.toHuman(this.state.resizeMode)} <span className="caret"></span>
"Yes" : "Skip"} <span className="caret"></span>
</button>
<ul className="dropdown-menu">
-<li>
+{ResizeModes.all().map(mode =>
<li key={mode}>
<a href="javascript:void(0);"
-onClick={this.setResize(true)}>
+onClick={this.setResizeMode(mode)}>
-<i style={{opacity: this.state.resize ? 1 : 0}} className="fa fa-check"></i> Yes</a>
+<i style={{opacity: this.state.resizeMode === mode ? 1 : 0}} className="fa fa-check"></i> {ResizeModes.toHuman(mode)}</a>
</li>
<li>
<a href="javascript:void(0);"
onClick={this.setResize(false)}>
<i style={{opacity: !this.state.resize ? 1 : 0}} className="fa fa-check"></i> Skip</a>
</li>
)}
</ul>
</div>
-<div className={"resize-control " + (!this.state.resize ? "hide" : "")}>
+<div className={"resize-control " + (this.state.resizeMode === ResizeModes.NO ? "hide" : "")}>
<input
type="number"
step="100"

View file

@ -11,6 +11,8 @@ import Dropzone from '../vendor/dropzone';
import csrf from '../django/csrf'; import csrf from '../django/csrf';
import HistoryNav from '../classes/HistoryNav'; import HistoryNav from '../classes/HistoryNav';
import PropTypes from 'prop-types'; import PropTypes from 'prop-types';
import ResizeModes from '../classes/ResizeModes';
import Gcp from '../classes/Gcp';
import $ from 'jquery'; import $ from 'jquery';
class ProjectListItem extends React.Component { class ProjectListItem extends React.Component {
@ -115,6 +117,37 @@ class ProjectListItem extends React.Component {
headers: { headers: {
[csrf.header]: csrf.token [csrf.header]: csrf.token
},
transformFile: (file, done) => {
// Resize image?
if ((this.dz.options.resizeWidth || this.dz.options.resizeHeight) && file.type.match(/image.*/)) {
return this.dz.resizeImage(file, this.dz.options.resizeWidth, this.dz.options.resizeHeight, this.dz.options.resizeMethod, done);
// Resize GCP? This should always be executed last (we sort in transformstart)
} else if (this.dz.options.resizeWidth && file.type.match(/text.*/)){
// Read GCP content
const fileReader = new FileReader();
fileReader.onload = (e) => {
const originalGcp = new Gcp(e.target.result);
const resizedGcp = originalGcp.resize(this.dz._resizeMap);
// Create new GCP file
let gcp = new Blob([resizedGcp.toString()], {type: "text/plain"});
gcp.lastModifiedDate = file.lastModifiedDate;
gcp.lastModified = file.lastModified;
gcp.name = file.name;
gcp.previewElement = file.previewElement;
gcp.previewTemplate = file.previewTemplate;
gcp.processing = file.processing;
gcp.status = file.status;
gcp.upload = file.upload;
gcp.upload.total = gcp.size; // not a typo
gcp.webkitRelativePath = file.webkitRelativePath;
done(gcp);
};
fileReader.readAsText(file);
} else {
return done(file);
}
} }
}); });
@ -129,9 +162,19 @@ class ProjectListItem extends React.Component {
totalCount: this.state.upload.totalCount + files.length totalCount: this.state.upload.totalCount + files.length
}); });
}) })
.on("transformcompleted", (total) => { .on("transformcompleted", (file, total) => {
if (this.dz._resizeMap) this.dz._resizeMap[file.name] = this.dz._taskInfo.resizeSize / Math.max(file.width, file.height);
this.setUploadState({resizedImages: total}); this.setUploadState({resizedImages: total});
}) })
.on("transformstart", (files) => {
if (this.dz.options.resizeWidth){
// Sort so that a GCP file is always last
files.sort(f => f.type.match(/text.*/) ? 1 : -1)
// Create filename --> resize ratio dict
this.dz._resizeMap = {};
}
})
.on("transformend", () => { .on("transformend", () => {
this.setUploadState({resizing: false, uploading: true}); this.setUploadState({resizing: false, uploading: true});
}) })
@ -180,6 +223,10 @@ class ProjectListItem extends React.Component {
if (!formData.has || !formData.has("options")) formData.append("options", JSON.stringify(taskInfo.options)); if (!formData.has || !formData.has("options")) formData.append("options", JSON.stringify(taskInfo.options));
if (!formData.has || !formData.has("processing_node")) formData.append("processing_node", taskInfo.selectedNode.id); if (!formData.has || !formData.has("processing_node")) formData.append("processing_node", taskInfo.selectedNode.id);
if (!formData.has || !formData.has("auto_processing_node")) formData.append("auto_processing_node", taskInfo.selectedNode.key == "auto"); if (!formData.has || !formData.has("auto_processing_node")) formData.append("auto_processing_node", taskInfo.selectedNode.key == "auto");
if (taskInfo.resizeMode === ResizeModes.YES){
if (!formData.has || !formData.has("resize_to")) formData.append("resize_to", taskInfo.resizeSize);
}
}); });
} }
} }
@ -225,8 +272,8 @@ class ProjectListItem extends React.Component {
this.dz._taskInfo = taskInfo; // Allow us to access the task info from dz this.dz._taskInfo = taskInfo; // Allow us to access the task info from dz
// Update dropzone settings // Update dropzone settings
if (taskInfo.resizeTo !== null){ if (taskInfo.resizeMode === ResizeModes.YESINBROWSER){
this.dz.options.resizeWidth = taskInfo.resizeTo; this.dz.options.resizeWidth = taskInfo.resizeSize;
this.dz.options.resizeQuality = 1.0; this.dz.options.resizeQuality = 1.0;
this.setUploadState({resizing: true, editing: false}); this.setUploadState({resizing: true, editing: false});
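Note: when images are resized in the browser, the uploaded GCP file has to be rescaled as well. The `transformcompleted` handler above records a per-image ratio (`resizeSize / max(width, height)`) in `dz._resizeMap`, and `Gcp.resize()` multiplies each GCP row's pixel coordinates by that ratio before upload. A minimal Python sketch of that scaling step follows; it assumes the simple `x y z px py imagename [...]` row layout used by the test fixtures, and the sample values other than the halved pixel coordinates are illustrative, not taken from the repository.

```python
# Sketch: scale the pixel columns (px, py) of a GCP file by each image's
# resize ratio, leaving geographic coordinates untouched. Not the actual Gcp
# class -- just the arithmetic it performs.
def resize_gcp(gcp_text, resize_map):
    lines = gcp_text.strip().splitlines()
    out = [lines[0]]  # first row is the projection header, copied as-is
    for line in lines[1:]:
        parts = line.split()
        if len(parts) < 6:
            out.append(line)  # malformed rows are passed through unchanged
            continue
        x, y, z, px, py, image, *extras = parts
        ratio = resize_map.get(image.lower(), 1.0)  # 1.0 -> image was not resized
        out.append(" ".join([x, y, z,
                             str(float(px) * ratio), str(float(py) * ratio),
                             image] + extras))
    return "\n".join(out)


# An image shrunk to half size halves its pixel coordinates (4.0 -> 2.0, 6.0 -> 3.0),
# while the projected coordinates stay untouched.
gcp = "WGS84 UTM 16N\n576529.22 4184870.48 280.0 4.0 6.0 tiny_drone_image.JPG"
print(resize_gcp(gcp, {"tiny_drone_image.jpg": 0.5}))
```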


@ -287,7 +287,6 @@ class TaskListItem extends React.Component {
const restartAction = this.genActionApiCall("restart", { const restartAction = this.genActionApiCall("restart", {
success: () => { success: () => {
if (this.console) this.console.clear();
this.setState({time: -1}); this.setState({time: -1});
}, },
defaultError: "Cannot restart task." defaultError: "Cannot restart task."
@ -351,7 +350,7 @@ class TaskListItem extends React.Component {
let status = statusCodes.description(task.status); let status = statusCodes.description(task.status);
if (status === "") status = "Uploading images"; if (status === "") status = "Uploading images";
if (!task.processing_node) status = ""; if (!task.processing_node) status = "Waiting for a node...";
if (task.pending_action !== null) status = pendingActions.description(task.pending_action); if (task.pending_action !== null) status = pendingActions.description(task.pending_action);
let expanded = ""; let expanded = "";


@ -1,4 +1,6 @@
.map{ .map{
position: relative;
.leaflet-popup-content{ .leaflet-popup-content{
.title{ .title{
font-weight: bold; font-weight: bold;
@ -24,7 +26,7 @@
.shareButton{ .shareButton{
z-index: 2000; z-index: 2000;
bottom: -11px; bottom: 11px;
right: 38px; right: 38px;
} }
} }


@ -1,10 +1,6 @@
@import '../vendor/potree/js/potree.css'; @import '../vendor/potree/js/potree.css';
@import '../vendor/potree/js/jquery-ui.css'; @import '../vendor/potree/js/jquery-ui.css';
[data-modelview]{
height: calc(100vh - 100px);
padding-bottom: 12px;
}
.model-view{ .model-view{
position: relative; position: relative;
height: 100%; height: 100%;


@ -3,7 +3,7 @@
display: block; display: block;
&.top{ &.top{
top: -32px; top: -54px;
} }
&.bottom{ &.bottom{
top: 32px; top: 32px;


@ -2,6 +2,6 @@
border-width: 1px; border-width: 1px;
position: absolute; position: absolute;
z-index: 2000; z-index: 2000;
bottom: -22px; bottom: 22px;
right: 12px; right: 12px;
} }


@ -1,6 +1,7 @@
import '../css/main.scss'; import '../css/main.scss';
import './django/csrf'; import './django/csrf';
import ReactDOM from 'react-dom'; import ReactDOM from 'react-dom';
import PluginsAPI from './classes/plugins/API';
// Main is always executed first in the page // Main is always executed first in the page


@ -0,0 +1,10 @@
// Define a mock for System.JS
export default {
import: function(dep){
throw new Error("Not implemented")
},
config: function(conf){
// Nothing
}
}


@ -0,0 +1,89 @@
var waitSeconds = 100;
var head = document.getElementsByTagName('head')[0];
var isWebkit = !!window.navigator.userAgent.match(/AppleWebKit\/([^ ;]*)/);
var webkitLoadCheck = function(link, callback) {
setTimeout(function() {
for (var i = 0; i < document.styleSheets.length; i++) {
var sheet = document.styleSheets[i];
if (sheet.href == link.href)
return callback();
}
webkitLoadCheck(link, callback);
}, 10);
};
var cssIsReloadable = function cssIsReloadable(links) {
// Css loaded on the page initially should be skipped by the first
// systemjs load, and marked for reload
var reloadable = true;
forEach(links, function(link) {
if(!link.hasAttribute('data-systemjs-css')) {
reloadable = false;
link.setAttribute('data-systemjs-css', '');
}
});
return reloadable;
}
var findExistingCSS = function findExistingCSS(url){
// Search for existing link to reload
var links = head.getElementsByTagName('link')
return filter(links, function(link){ return link.href === url; });
}
var noop = function() {};
var loadCSS = function(url, existingLinks) {
return new Promise(function(resolve, reject) {
var timeout = setTimeout(function() {
reject('Unable to load CSS');
}, waitSeconds * 1000);
var _callback = function(error) {
clearTimeout(timeout);
link.onload = link.onerror = noop;
setTimeout(function() {
if (error)
reject(error);
else
resolve('');
}, 7);
};
var link = document.createElement('link');
link.type = 'text/css';
link.rel = 'stylesheet';
link.href = url;
link.setAttribute('data-systemjs-css', '');
if (!isWebkit) {
link.onload = function() {
_callback();
}
} else {
webkitLoadCheck(link, _callback);
}
link.onerror = function(event) {
_callback(event.error || new Error('Error loading CSS file.'));
};
if (existingLinks.length)
head.insertBefore(link, existingLinks[0]);
else
head.appendChild(link);
})
// Remove the old link regardless of loading outcome
.then(function(result){
forEach(existingLinks, function(link){link.parentElement.removeChild(link);})
return result;
}, function(err){
forEach(existingLinks, function(link){link.parentElement.removeChild(link);})
throw err;
})
};
exports.fetch = function(load) {
// dont reload styles loaded in the head
var links = findExistingCSS(load.address);
if(!cssIsReloadable(links))
return '';
return loadCSS(load.address, links);
};


@ -0,0 +1,70 @@
/*
* Base CSS Plugin Class
*/
function CSSPluginBase(compileCSS) {
this.compileCSS = compileCSS;
this.translate = function(load, opts) {
var loader = this;
if (loader.builder && loader.buildCSS === false) {
load.metadata.build = false;
return;
}
var path = this._nodeRequire && this._nodeRequire('path');
return Promise.resolve(compileCSS.call(loader, load.source, load.address, load.metadata.loaderOptions || {}))
.then(function(result) {
load.metadata.style = result.css;
load.metadata.styleSourceMap = result.map;
if (result.moduleFormat)
load.metadata.format = result.moduleFormat;
return result.moduleSource || '';
});
};
}
var isWin = typeof process != 'undefined' && process.platform.match(/^win/);
function toFileURL(path) {
return 'file://' + (isWin ? '/' : '') + path.replace(/\\/g, '/');
}
var builderPromise;
function getBuilder(loader) {
if (builderPromise)
return builderPromise;
return builderPromise = loader['import']('./css-plugin-base-builder.js', module.id);
}
CSSPluginBase.prototype.bundle = function(loads, compileOpts, outputOpts) {
var loader = this;
return getBuilder(loader)
.then(function(builder) {
return builder.bundle.call(loader, loads, compileOpts, outputOpts);
});
};
CSSPluginBase.prototype.listAssets = function(loads, opts) {
var loader = this;
return getBuilder(loader)
.then(function(builder) {
return builder.listAssets.call(loader, loads, opts);
});
};
/*
* <style> injection browser plugin
*/
// NB hot reloading support here
CSSPluginBase.prototype.instantiate = function(load) {
if (this.builder || typeof document === 'undefined')
return;
var style = document.createElement('style');
style.type = 'text/css';
style.innerHTML = load.metadata.style;
document.head.appendChild(style);
};
module.exports = CSSPluginBase;

app/static/app/js/vendor/css.js (new vendored file, 162 lines added)

@ -0,0 +1,162 @@
if (typeof window !== 'undefined') {
var waitSeconds = 100;
var head = document.getElementsByTagName('head')[0];
var isWebkit = !!window.navigator.userAgent.match(/AppleWebKit\/([^ ;]*)/);
var webkitLoadCheck = function(link, callback) {
setTimeout(function() {
for (var i = 0; i < document.styleSheets.length; i++) {
var sheet = document.styleSheets[i];
if (sheet.href == link.href)
return callback();
}
webkitLoadCheck(link, callback);
}, 10);
};
var cssIsReloadable = function cssIsReloadable(links) {
// Css loaded on the page initially should be skipped by the first
// systemjs load, and marked for reload
var reloadable = true;
forEach(links, function(link) {
if(!link.hasAttribute('data-systemjs-css')) {
reloadable = false;
link.setAttribute('data-systemjs-css', '');
}
});
return reloadable;
}
var findExistingCSS = function findExistingCSS(url){
// Search for existing link to reload
var links = head.getElementsByTagName('link')
return filter(links, function(link){ return link.href === url; });
}
var noop = function() {};
var loadCSS = function(url, existingLinks) {
return new Promise(function(resolve, reject) {
var timeout = setTimeout(function() {
reject('Unable to load CSS');
}, waitSeconds * 1000);
var _callback = function(error) {
clearTimeout(timeout);
link.onload = link.onerror = noop;
setTimeout(function() {
if (error)
reject(error);
else
resolve('');
}, 7);
};
var link = document.createElement('link');
link.type = 'text/css';
link.rel = 'stylesheet';
link.href = url;
link.setAttribute('data-systemjs-css', '');
if (!isWebkit) {
link.onload = function() {
_callback();
}
} else {
webkitLoadCheck(link, _callback);
}
link.onerror = function(event) {
_callback(event.error || new Error('Error loading CSS file.'));
};
if (existingLinks.length)
head.insertBefore(link, existingLinks[0]);
else
head.appendChild(link);
})
// Remove the old link regardless of loading outcome
.then(function(result){
forEach(existingLinks, function(link){link.parentElement.removeChild(link);})
return result;
}, function(err){
forEach(existingLinks, function(link){link.parentElement.removeChild(link);})
throw err;
})
};
exports.fetch = function(load) {
// dont reload styles loaded in the head
var links = findExistingCSS(load.address);
if(!cssIsReloadable(links))
return '';
return loadCSS(load.address, links);
};
}
else {
var builderPromise;
function getBuilder(loader) {
if (builderPromise)
return builderPromise;
return builderPromise = System['import']('./css-plugin-base.js', module.id)
.then(function(CSSPluginBase) {
return new CSSPluginBase(function compile(source, address) {
return {
css: source,
map: null,
moduleSource: null,
moduleFormat: null
};
});
});
}
exports.cssPlugin = true;
exports.fetch = function(load, fetch) {
if (!this.builder)
return '';
return fetch(load);
};
exports.translate = function(load, opts) {
if (!this.builder)
return '';
var loader = this;
return getBuilder(loader).then(function(builder) {
return builder.translate.call(loader, load, opts);
});
};
exports.instantiate = function(load, opts) {
if (!this.builder)
return;
var loader = this;
return getBuilder(loader).then(function(builder) {
return builder.instantiate.call(loader, load, opts);
});
};
exports.bundle = function(loads, compileOpts, outputOpts) {
var loader = this;
return getBuilder(loader).then(function(builder) {
return builder.bundle.call(loader, loads, compileOpts, outputOpts);
});
};
exports.listAssets = function(loads, opts) {
var loader = this;
return getBuilder(loader).then(function(builder) {
return builder.listAssets.call(loader, loads, opts);
});
};
}
// Because IE8?
function filter(arrayLike, func) {
var arr = []
forEach(arrayLike, function(item){
if(func(item))
arr.push(item);
});
return arr;
}
// Because IE8?
function forEach(arrayLike, func){
for (var i = 0; i < arrayLike.length; i++) {
func(arrayLike[i])
}
}


@ -2496,27 +2496,22 @@ var Dropzone = function (_Emitter) {
// Modified for WebODM // Modified for WebODM
_this17.emit("transformstart", files); _this17.emit("transformstart", files);
// Process in batches based on the available number of cores var process = function(i){
var stride = Math.max(1, (navigator.hardwareConcurrency || 2) - 1); if (files[i]){
_this17.options.transformFile.call(_this17, files[i], function (transformedFile) {
var process = function(i, s){ transformedFiles[i] = transformedFile;
if (files[i + s]){ _this17.emit("transformcompleted", files[i], doneCounter + 1);
_this17.options.transformFile.call(_this17, files[i + s], function (transformedFile) {
transformedFiles[i + s] = transformedFile;
_this17.emit("transformcompleted", doneCounter + 1);
if (++doneCounter === files.length) { if (++doneCounter === files.length) {
_this17.emit("transformend", files); _this17.emit("transformend", files);
done(transformedFiles); done(transformedFiles);
}else{ }else{
process(i + stride, s); process(i + 1);
} }
}); });
} }
} }
for (var s = 0; s < stride; s++){ process(0);
process(0, s);
}
} }
// Takes care of adding other input elements of the form to the AJAX request // Takes care of adding other input elements of the form to the AJAX request

File diff suppressed because one or more lines are too long


@ -7,7 +7,7 @@
<h3><i class="fa fa-cube"></i> {{title}}</h3> <h3><i class="fa fa-cube"></i> {{title}}</h3>
<div data-modelview <div data-modelview class="full-height"
{% for key, value in params %} {% for key, value in params %}
data-{{key}}="{{value}}" data-{{key}}="{{value}}"
{% endfor %} {% endfor %}


@ -0,0 +1,11 @@
{% extends "app/base.html" %}
{% load settings %}
{% block page-wrapper %}
<div style="text-align: center;">
<h3>404 Page Not Found</h3>
<h5>Are you sure the address is correct?</h5>
<img src="/static/app/img/404.png" alt="404"/>
</div>
{{ SETTINGS.theme.html_after_header|safe }}
{% endblock %}


@ -0,0 +1,11 @@
{% extends "app/base.html" %}
{% load settings %}
{% block page-wrapper %}
<div style="text-align: center;">
<h3>500 Internal Server Error</h3>
<h5>Something happened. The server logs contain more information.</h5>
<img src="/static/app/img/500.png" alt="500"/>
</div>
{{ SETTINGS.theme.html_after_header|safe }}
{% endblock %}


@ -1,7 +1,7 @@
<!DOCTYPE html> <!DOCTYPE html>
<html lang="en"> <html lang="en">
<head> <head>
{% load i18n static settings compress %} {% load i18n static settings compress plugins %}
<meta charset="UTF-8"> <meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta http-equiv="X-UA-Compatible" content="IE=edge">
@ -18,10 +18,16 @@
<script src="{% static 'app/js/vendor/modernizr-2.8.3.min.js' %}"></script> <script src="{% static 'app/js/vendor/modernizr-2.8.3.min.js' %}"></script>
<script src="{% static 'app/js/vendor/jquery-1.11.2.min.js' %}"></script> <script src="{% static 'app/js/vendor/jquery-1.11.2.min.js' %}"></script>
<script src="{% static 'app/js/vendor/system.js' %}"></script>
{% load render_bundle from webpack_loader %} {% load render_bundle from webpack_loader %}
{% render_bundle 'main' %} {% render_bundle 'main' %}
{% autoescape off %}
{% get_plugins_js_includes %}
{% get_plugins_css_includes %}
{% endautoescape %}
<title>{{title|default:"Login"}} - {{ SETTINGS.app_name }}</title> <title>{{title|default:"Login"}} - {{ SETTINGS.app_name }}</title>
{% compress css %} {% compress css %}
@ -67,7 +73,7 @@
</button> </button>
{% block navbar-top-links %}{% endblock %} {% block navbar-top-links %}{% endblock %}
<a class="navbar-brand" href="/"><img src="{% settings_image_url 'app_logo_36' %}" alt="{{ SETTINGS.app_name }}" /></a> <a class="navbar-brand" href="/"><img src="{% settings_image_url 'app_logo_36' %}" alt="{{ SETTINGS.app_name }}" /></a>
<a class="navbar-link" href="/"><p class="navbar-text">{{ SETTINGS.app_name }}</a></p> <a class="navbar-link" href="/"><p class="navbar-text">{{ SETTINGS.app_name }}</p></a>
</div> </div>
{% block navbar-sidebar %}{% endblock %} {% block navbar-sidebar %}{% endblock %}


@ -1,5 +0,0 @@
{% extends "app/base.html" %}
{% block content %}
{{ hello }}
{% endblock %}


@ -226,11 +226,27 @@
<!--<li> <!--<li>
<a href="#"><i class="fa fa-plane fa-fw"></i> Mission Planner</a> <a href="#"><i class="fa fa-plane fa-fw"></i> Mission Planner</a>
</li> --> </li> -->
{% load processingnode_extras %} {% load processingnode_extras plugins %}
{% can_view_processing_nodes as view_nodes %} {% can_view_processing_nodes as view_nodes %}
{% can_add_processing_nodes as add_nodes %} {% can_add_processing_nodes as add_nodes %}
{% get_visible_processing_nodes as nodes %} {% get_visible_processing_nodes as nodes %}
{% get_plugins_main_menus as plugin_menus %}
{% for menu in plugin_menus %}
<li>
<a href="{{menu.link}}"><i class="{{menu.css_icon}}"></i> {{menu.label}}{% if menu.has_submenu %}<span class="fa arrow"></span>{% endif %}</a>
{% if menu.has_submenu %}
<ul class="nav nav-second-level">
{% for menu in menu.submenu %}
<li>
<a href="{{menu.link}}"><i class="{{menu.css_icon}}"></i> {{menu.label}}{% if menu.has_submenu %}<span class="fa arrow"></span>{% endif %}</a>
</li>
{% endfor %}
</ul>
{% endif %}
</li>
{% endfor %}
{% if view_nodes %} {% if view_nodes %}
<li> <li>
@ -282,7 +298,6 @@
<!-- /.sidebar-collapse --> <!-- /.sidebar-collapse -->
</div> </div>
<!-- /.navbar-static-side --> <!-- /.navbar-static-side -->
</nav>
{% endblock %} {% endblock %}
{% block page-wrapper %} {% block page-wrapper %}


@ -6,7 +6,7 @@
<h3><i class="fa fa-cube"></i> {{title}}</h3> <h3><i class="fa fa-cube"></i> {{title}}</h3>
<div data-modelview <div data-modelview class="full-height"
{% for key, value in params %} {% for key, value in params %}
data-{{key}}="{{value}}" data-{{key}}="{{value}}"
{% endfor %} {% endfor %}


@ -4,7 +4,7 @@
{% load render_bundle from webpack_loader %} {% load render_bundle from webpack_loader %}
{% render_bundle 'ModelView' attrs='async' %} {% render_bundle 'ModelView' attrs='async' %}
<div data-modelview <div data-modelview class="full-height"
{% for key, value in params %} {% for key, value in params %}
data-{{key}}="{{value}}" data-{{key}}="{{value}}"
{% endfor %} {% endfor %}


@ -0,0 +1,22 @@
from django import template
from app.plugins import get_active_plugins
import itertools
register = template.Library()
@register.simple_tag(takes_context=False)
def get_plugins_js_includes():
# Flatten all urls for all plugins
js_urls = list(itertools.chain(*[plugin.get_include_js_urls() for plugin in get_active_plugins()]))
return "\n".join(map(lambda url: "<script src='{}'></script>".format(url), js_urls))
@register.simple_tag(takes_context=False)
def get_plugins_css_includes():
# Flatten all urls for all plugins
css_urls = list(itertools.chain(*[plugin.get_include_css_urls() for plugin in get_active_plugins()]))
return "\n".join(map(lambda url: "<link href='{}' rel='stylesheet' type='text/css'>".format(url), css_urls))
@register.assignment_tag()
def get_plugins_main_menus():
# Flatten list of menus
return list(itertools.chain(*[plugin.main_menu() for plugin in get_active_plugins()]))
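Note: these template tags flatten, across all active plugins, the JS/CSS include URLs and the main-menu entries consumed by `base.html` and the sidebar template. A hedged sketch of the per-plugin contract they expect follows; the method names come from the tags, the menu fields mirror the sidebar template (`link`, `label`, `css_icon`, `submenu`), and the concrete class is hypothetical.

```python
# Illustrative sketch of the contract the template tags above consume.
# Method and field names follow the diff; the plugin class itself is hypothetical.
class Menu:
    def __init__(self, label, link, css_icon="fa fa-plug", submenu=None):
        self.label = label
        self.link = link
        self.css_icon = css_icon
        self.submenu = submenu or []

    @property
    def has_submenu(self):
        return len(self.submenu) > 0


class HypotheticalPlugin:
    def get_include_js_urls(self):
        return ["/plugins/test/test.js"]

    def get_include_css_urls(self):
        return ["/plugins/test/test.css"]

    def main_menu(self):
        return [Menu("Test", "/plugins/test/menu_url/", "test-icon")]
```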


@ -182,14 +182,16 @@ class TestApi(BootTestCase):
res = client.post('/api/projects/{}/tasks/{}/cancel/'.format(project.id, task.id)) res = client.post('/api/projects/{}/tasks/{}/cancel/'.format(project.id, task.id))
self.assertTrue(res.data["success"]) self.assertTrue(res.data["success"])
task.refresh_from_db() task.refresh_from_db()
self.assertTrue(task.last_error is None)
self.assertTrue(task.pending_action == pending_actions.CANCEL) # Task should have failed to be canceled
self.assertTrue("has no processing node or UUID" in task.last_error)
res = client.post('/api/projects/{}/tasks/{}/restart/'.format(project.id, task.id)) res = client.post('/api/projects/{}/tasks/{}/restart/'.format(project.id, task.id))
self.assertTrue(res.data["success"]) self.assertTrue(res.data["success"])
task.refresh_from_db() task.refresh_from_db()
self.assertTrue(task.last_error is None)
self.assertTrue(task.pending_action == pending_actions.RESTART) # Task should have failed to be restarted
self.assertTrue("has no processing node" in task.last_error)
# Cannot cancel, restart or delete a task for which we don't have permission # Cannot cancel, restart or delete a task for which we don't have permission
for action in ['cancel', 'remove', 'restart']: for action in ['cancel', 'remove', 'restart']:
@ -199,10 +201,9 @@ class TestApi(BootTestCase):
# Can delete # Can delete
res = client.post('/api/projects/{}/tasks/{}/remove/'.format(project.id, task.id)) res = client.post('/api/projects/{}/tasks/{}/remove/'.format(project.id, task.id))
self.assertTrue(res.data["success"]) self.assertTrue(res.data["success"])
task.refresh_from_db() self.assertFalse(Task.objects.filter(id=task.id).exists())
self.assertTrue(task.last_error is None)
self.assertTrue(task.pending_action == pending_actions.REMOVE)
task = Task.objects.create(project=project)
temp_project = Project.objects.create(owner=user) temp_project = Project.objects.create(owner=user)
# We have permissions to do anything on a project that we own # We have permissions to do anything on a project that we own


@ -53,7 +53,7 @@ class TestApiPreset(BootTestCase):
self.assertTrue(res.status_code == status.HTTP_200_OK) self.assertTrue(res.status_code == status.HTTP_200_OK)
# Only ours and global presets are available # Only ours and global presets are available
self.assertTrue(len(res.data) == 6) self.assertTrue(len(res.data) == 7)
self.assertTrue('My Local Preset' in [preset['name'] for preset in res.data]) self.assertTrue('My Local Preset' in [preset['name'] for preset in res.data])
self.assertTrue('High Quality' in [preset['name'] for preset in res.data]) self.assertTrue('High Quality' in [preset['name'] for preset in res.data])
self.assertTrue('Global Preset #1' in [preset['name'] for preset in res.data]) self.assertTrue('Global Preset #1' in [preset['name'] for preset in res.data])


@ -1,22 +1,18 @@
import os import os
import subprocess
import time import time
import shutil
import logging import logging
from datetime import timedelta from datetime import timedelta
import json import json
import requests import requests
from PIL import Image
from django.contrib.auth.models import User from django.contrib.auth.models import User
from django.contrib.gis.gdal import GDALRaster
from django.contrib.gis.gdal import OGRGeometry
from rest_framework import status from rest_framework import status
from rest_framework.test import APIClient from rest_framework.test import APIClient
from app import pending_actions import worker
from app import scheduler
from django.utils import timezone from django.utils import timezone
from app.models import Project, Task, ImageUpload from app.models import Project, Task, ImageUpload
from app.models.task import task_directory_path, full_task_directory_path from app.models.task import task_directory_path, full_task_directory_path
@ -24,6 +20,7 @@ from app.tests.classes import BootTransactionTestCase
from nodeodm import status_codes from nodeodm import status_codes
from nodeodm.models import ProcessingNode, OFFLINE_MINUTES from nodeodm.models import ProcessingNode, OFFLINE_MINUTES
from app.testwatch import testWatch from app.testwatch import testWatch
from .utils import start_processing_node, clear_test_media_root
# We need to test the task API in a TransactionTestCase because # We need to test the task API in a TransactionTestCase because
# task processing happens on a separate thread, and normal TestCases # task processing happens on a separate thread, and normal TestCases
@ -34,26 +31,10 @@ logger = logging.getLogger('app.logger')
DELAY = 2 # time to sleep for during process launch, background processing, etc. DELAY = 2 # time to sleep for during process launch, background processing, etc.
def start_processing_node(*args):
current_dir = os.path.dirname(os.path.realpath(__file__))
node_odm = subprocess.Popen(['node', 'index.js', '--port', '11223', '--test'] + list(args), shell=False,
cwd=os.path.join(current_dir, "..", "..", "nodeodm", "external", "node-OpenDroneMap"))
time.sleep(DELAY) # Wait for the server to launch
return node_odm
class TestApiTask(BootTransactionTestCase): class TestApiTask(BootTransactionTestCase):
def setUp(self): def setUp(self):
super().setUp() super().setUp()
clear_test_media_root()
# We need to clear previous media_root content
# This points to the test directory, but just in case
# we double check that the directory is indeed a test directory
if "_test" in settings.MEDIA_ROOT:
if os.path.exists(settings.MEDIA_ROOT):
logger.info("Cleaning up {}".format(settings.MEDIA_ROOT))
shutil.rmtree(settings.MEDIA_ROOT)
else:
logger.warning("We did not remove MEDIA_ROOT because we couldn't find a _test suffix in its path.")
def test_task(self): def test_task(self):
client = APIClient() client = APIClient()
@ -87,11 +68,15 @@ class TestApiTask(BootTransactionTestCase):
image1 = open("app/fixtures/tiny_drone_image.jpg", 'rb') image1 = open("app/fixtures/tiny_drone_image.jpg", 'rb')
image2 = open("app/fixtures/tiny_drone_image_2.jpg", 'rb') image2 = open("app/fixtures/tiny_drone_image_2.jpg", 'rb')
img1 = Image.open("app/fixtures/tiny_drone_image.jpg")
# Not authenticated? # Not authenticated?
res = client.post("/api/projects/{}/tasks/".format(project.id), { res = client.post("/api/projects/{}/tasks/".format(project.id), {
'images': [image1, image2] 'images': [image1, image2]
}, format="multipart") }, format="multipart")
self.assertTrue(res.status_code == status.HTTP_403_FORBIDDEN); self.assertTrue(res.status_code == status.HTTP_403_FORBIDDEN);
image1.seek(0)
image2.seek(0)
client.login(username="testuser", password="test1234") client.login(username="testuser", password="test1234")
@ -100,12 +85,16 @@ class TestApiTask(BootTransactionTestCase):
'images': [image1, image2] 'images': [image1, image2]
}, format="multipart") }, format="multipart")
self.assertTrue(res.status_code == status.HTTP_404_NOT_FOUND) self.assertTrue(res.status_code == status.HTTP_404_NOT_FOUND)
image1.seek(0)
image2.seek(0)
# Cannot create a task for a project for which we have no access to # Cannot create a task for a project for which we have no access to
res = client.post("/api/projects/{}/tasks/".format(other_project.id), { res = client.post("/api/projects/{}/tasks/".format(other_project.id), {
'images': [image1, image2] 'images': [image1, image2]
}, format="multipart") }, format="multipart")
self.assertTrue(res.status_code == status.HTTP_404_NOT_FOUND) self.assertTrue(res.status_code == status.HTTP_404_NOT_FOUND)
image1.seek(0)
image2.seek(0)
# Cannot create a task without images # Cannot create a task without images
res = client.post("/api/projects/{}/tasks/".format(project.id), { res = client.post("/api/projects/{}/tasks/".format(project.id), {
@ -118,6 +107,7 @@ class TestApiTask(BootTransactionTestCase):
'images': image1 'images': image1
}, format="multipart") }, format="multipart")
self.assertTrue(res.status_code == status.HTTP_400_BAD_REQUEST) self.assertTrue(res.status_code == status.HTTP_400_BAD_REQUEST)
image1.seek(0)
# Normal case with images[], name and processing node parameter # Normal case with images[], name and processing node parameter
res = client.post("/api/projects/{}/tasks/".format(project.id), { res = client.post("/api/projects/{}/tasks/".format(project.id), {
@ -129,6 +119,67 @@ class TestApiTask(BootTransactionTestCase):
multiple_param_task = Task.objects.latest('created_at') multiple_param_task = Task.objects.latest('created_at')
self.assertTrue(multiple_param_task.name == 'test_task') self.assertTrue(multiple_param_task.name == 'test_task')
self.assertTrue(multiple_param_task.processing_node.id == pnode.id) self.assertTrue(multiple_param_task.processing_node.id == pnode.id)
image1.seek(0)
image2.seek(0)
# Uploaded images should be the same size as originals
with Image.open(multiple_param_task.task_path("tiny_drone_image.jpg")) as im:
self.assertTrue(im.size == img1.size)
# Normal case with images[], GCP, name and processing node parameter and resize_to option
gcp = open("app/fixtures/gcp.txt", 'r')
res = client.post("/api/projects/{}/tasks/".format(project.id), {
'images': [image1, image2, gcp],
'name': 'test_task',
'processing_node': pnode.id,
'resize_to': img1.size[0] / 2.0
}, format="multipart")
self.assertTrue(res.status_code == status.HTTP_201_CREATED)
resized_task = Task.objects.latest('created_at')
image1.seek(0)
image2.seek(0)
gcp.seek(0)
# Uploaded images should have been resized
with Image.open(resized_task.task_path("tiny_drone_image.jpg")) as im:
self.assertTrue(im.size[0] == img1.size[0] / 2.0)
# GCP should have been scaled
with open(resized_task.task_path("gcp.txt")) as f:
lines = list(map(lambda l: l.strip(), f.readlines()))
[x, y, z, px, py, imagename, *extras] = lines[1].split(' ')
self.assertTrue(imagename == "tiny_drone_image.JPG") # case insensitive
self.assertTrue(float(px) == 2.0) # scaled by half
self.assertTrue(float(py) == 3.0) # scaled by half
self.assertTrue(float(x) == 576529.22) # Didn't change
[x, y, z, px, py, imagename, *extras] = lines[5].split(' ')
self.assertTrue(imagename == "missing_image.jpg")
self.assertTrue(float(px) == 8.0) # Didn't change
self.assertTrue(float(py) == 8.0) # Didn't change
# Case with malformed GCP file option
with open("app/fixtures/gcp_malformed.txt", 'r') as malformed_gcp:
res = client.post("/api/projects/{}/tasks/".format(project.id), {
'images': [image1, image2, malformed_gcp],
'name': 'test_task',
'processing_node': pnode.id,
'resize_to': img1.size[0] / 2.0
}, format="multipart")
self.assertTrue(res.status_code == status.HTTP_201_CREATED)
malformed_gcp_task = Task.objects.latest('created_at')
# We just pass it along, it will get errored out during processing
# But we shouldn't fail.
with open(malformed_gcp_task.task_path("gcp_malformed.txt")) as f:
lines = list(map(lambda l: l.strip(), f.readlines()))
self.assertTrue(lines[1] == "<O_O>")
image1.seek(0)
image2.seek(0)
# Cannot create a task with images[], name, but invalid processing node parameter # Cannot create a task with images[], name, but invalid processing node parameter
res = client.post("/api/projects/{}/tasks/".format(project.id), { res = client.post("/api/projects/{}/tasks/".format(project.id), {
@ -137,6 +188,8 @@ class TestApiTask(BootTransactionTestCase):
'processing_node': 9999 'processing_node': 9999
}, format="multipart") }, format="multipart")
self.assertTrue(res.status_code == status.HTTP_400_BAD_REQUEST) self.assertTrue(res.status_code == status.HTTP_400_BAD_REQUEST)
image1.seek(0)
image2.seek(0)
# Normal case with images[] parameter # Normal case with images[] parameter
res = client.post("/api/projects/{}/tasks/".format(project.id), { res = client.post("/api/projects/{}/tasks/".format(project.id), {
@ -144,6 +197,8 @@ class TestApiTask(BootTransactionTestCase):
'auto_processing_node': 'false' 'auto_processing_node': 'false'
}, format="multipart") }, format="multipart")
self.assertTrue(res.status_code == status.HTTP_201_CREATED) self.assertTrue(res.status_code == status.HTTP_201_CREATED)
image1.seek(0)
image2.seek(0)
# Should have returned the id of the newly created task # Should have returned the id of the newly created task
task = Task.objects.latest('created_at') task = Task.objects.latest('created_at')
@ -193,8 +248,6 @@ class TestApiTask(BootTransactionTestCase):
}) })
self.assertTrue(res.status_code == status.HTTP_404_NOT_FOUND) self.assertTrue(res.status_code == status.HTTP_404_NOT_FOUND)
testWatch.clear()
# No UUID at this point # No UUID at this point
self.assertTrue(len(task.uuid) == 0) self.assertTrue(len(task.uuid) == 0)
@ -204,8 +257,8 @@ class TestApiTask(BootTransactionTestCase):
}) })
self.assertTrue(res.status_code == status.HTTP_200_OK) self.assertTrue(res.status_code == status.HTTP_200_OK)
# On update scheduler.processing_pending_tasks should have been called in the background # On update worker.tasks.process_pending_tasks should have been called in the background
testWatch.wait_until_call("app.scheduler.process_pending_tasks", timeout=5) # (during tests this is sync)
# Processing should have started and a UUID is assigned # Processing should have started and a UUID is assigned
task.refresh_from_db() task.refresh_from_db()
@ -226,7 +279,7 @@ class TestApiTask(BootTransactionTestCase):
time.sleep(DELAY) time.sleep(DELAY)
# Calling process pending tasks should finish the process # Calling process pending tasks should finish the process
scheduler.process_pending_tasks() worker.tasks.process_pending_tasks()
task.refresh_from_db() task.refresh_from_db()
self.assertTrue(task.status == status_codes.COMPLETED) self.assertTrue(task.status == status_codes.COMPLETED)
@ -295,16 +348,15 @@ class TestApiTask(BootTransactionTestCase):
testWatch.clear() testWatch.clear()
res = client.post("/api/projects/{}/tasks/{}/restart/".format(project.id, task.id)) res = client.post("/api/projects/{}/tasks/{}/restart/".format(project.id, task.id))
self.assertTrue(res.status_code == status.HTTP_200_OK) self.assertTrue(res.status_code == status.HTTP_200_OK)
testWatch.wait_until_call("app.scheduler.process_pending_tasks", timeout=5) # process_task is called in the background
task.refresh_from_db() task.refresh_from_db()
self.assertTrue(task.status in [status_codes.RUNNING, status_codes.COMPLETED]) self.assertTrue(task.status in [status_codes.RUNNING, status_codes.COMPLETED])
# Cancel a task # Cancel a task
testWatch.clear()
res = client.post("/api/projects/{}/tasks/{}/cancel/".format(project.id, task.id)) res = client.post("/api/projects/{}/tasks/{}/cancel/".format(project.id, task.id))
self.assertTrue(res.status_code == status.HTTP_200_OK) self.assertTrue(res.status_code == status.HTTP_200_OK)
testWatch.wait_until_call("app.scheduler.process_pending_tasks", timeout=5) # task is processed right away
# Should have been canceled # Should have been canceled
task.refresh_from_db() task.refresh_from_db()
@ -313,7 +365,7 @@ class TestApiTask(BootTransactionTestCase):
# Remove a task # Remove a task
res = client.post("/api/projects/{}/tasks/{}/remove/".format(project.id, task.id)) res = client.post("/api/projects/{}/tasks/{}/remove/".format(project.id, task.id))
self.assertTrue(res.status_code == status.HTTP_200_OK) self.assertTrue(res.status_code == status.HTTP_200_OK)
testWatch.wait_until_call("app.scheduler.process_pending_tasks", 2, timeout=5) # task is processed right away
# Has been removed along with assets # Has been removed along with assets
self.assertFalse(Task.objects.filter(pk=task.id).exists()) self.assertFalse(Task.objects.filter(pk=task.id).exists())
@ -322,10 +374,10 @@ class TestApiTask(BootTransactionTestCase):
task_assets_path = os.path.join(settings.MEDIA_ROOT, task_directory_path(task.id, task.project.id)) task_assets_path = os.path.join(settings.MEDIA_ROOT, task_directory_path(task.id, task.project.id))
self.assertFalse(os.path.exists(task_assets_path)) self.assertFalse(os.path.exists(task_assets_path))
testWatch.clear() # Stop processing node
testWatch.intercept("app.scheduler.process_pending_tasks") node_odm.terminate()
# Create a task, then kill the processing node # Create a task
res = client.post("/api/projects/{}/tasks/".format(project.id), { res = client.post("/api/projects/{}/tasks/".format(project.id), {
'images': [image1, image2], 'images': [image1, image2],
'name': 'test_task_offline', 'name': 'test_task_offline',
@ -334,13 +386,8 @@ class TestApiTask(BootTransactionTestCase):
}, format="multipart") }, format="multipart")
self.assertTrue(res.status_code == status.HTTP_201_CREATED) self.assertTrue(res.status_code == status.HTTP_201_CREATED)
task = Task.objects.get(pk=res.data['id']) task = Task.objects.get(pk=res.data['id'])
image1.seek(0)
# Stop processing node image2.seek(0)
node_odm.terminate()
task.refresh_from_db()
self.assertTrue(task.last_error is None)
scheduler.process_pending_tasks()
# Processing should fail and set an error # Processing should fail and set an error
task.refresh_from_db() task.refresh_from_db()
@ -354,23 +401,20 @@ class TestApiTask(BootTransactionTestCase):
res = client.post("/api/projects/{}/tasks/{}/restart/".format(project.id, task.id)) res = client.post("/api/projects/{}/tasks/{}/restart/".format(project.id, task.id))
self.assertTrue(res.status_code == status.HTTP_200_OK) self.assertTrue(res.status_code == status.HTTP_200_OK)
task.refresh_from_db() task.refresh_from_db()
self.assertTrue(task.pending_action == pending_actions.RESTART)
# After processing, the task should have restarted, and have no UUID or status # After processing, the task should have restarted, and have no UUID or status
scheduler.process_pending_tasks()
task.refresh_from_db()
self.assertTrue(task.status is None) self.assertTrue(task.status is None)
self.assertTrue(len(task.uuid) == 0) self.assertTrue(len(task.uuid) == 0)
# Another step and it should have acquired a UUID # Another step and it should have acquired a UUID
scheduler.process_pending_tasks() worker.tasks.process_pending_tasks()
task.refresh_from_db() task.refresh_from_db()
self.assertTrue(task.status in [status_codes.RUNNING, status_codes.COMPLETED]) self.assertTrue(task.status in [status_codes.RUNNING, status_codes.COMPLETED])
self.assertTrue(len(task.uuid) > 0) self.assertTrue(len(task.uuid) > 0)
# Another step and it should be completed # Another step and it should be completed
time.sleep(DELAY) time.sleep(DELAY)
scheduler.process_pending_tasks() worker.tasks.process_pending_tasks()
task.refresh_from_db() task.refresh_from_db()
self.assertTrue(task.status == status_codes.COMPLETED) self.assertTrue(task.status == status_codes.COMPLETED)
@ -388,12 +432,9 @@ class TestApiTask(BootTransactionTestCase):
# 3. Restart the task # 3. Restart the task
res = client.post("/api/projects/{}/tasks/{}/restart/".format(project.id, task.id)) res = client.post("/api/projects/{}/tasks/{}/restart/".format(project.id, task.id))
self.assertTrue(res.status_code == status.HTTP_200_OK) self.assertTrue(res.status_code == status.HTTP_200_OK)
task.refresh_from_db()
self.assertTrue(task.pending_action == pending_actions.RESTART)
# 4. Check that the rerun_from parameter has been cleared # 4. Check that the rerun_from parameter has been cleared
# but the other parameters are still set # but the other parameters are still set
scheduler.process_pending_tasks()
task.refresh_from_db() task.refresh_from_db()
self.assertTrue(len(task.uuid) == 0) self.assertTrue(len(task.uuid) == 0)
self.assertTrue(len(list(filter(lambda d: d['name'] == 'rerun-from', task.options))) == 0) self.assertTrue(len(list(filter(lambda d: d['name'] == 'rerun-from', task.options))) == 0)
@ -404,7 +445,7 @@ class TestApiTask(BootTransactionTestCase):
raise requests.exceptions.ConnectTimeout("Simulated timeout") raise requests.exceptions.ConnectTimeout("Simulated timeout")
testWatch.intercept("nodeodm.api_client.task_output", connTimeout) testWatch.intercept("nodeodm.api_client.task_output", connTimeout)
scheduler.process_pending_tasks() worker.tasks.process_pending_tasks()
# Timeout errors should be handled by retrying again at a later time # Timeout errors should be handled by retrying again at a later time
# and not fail # and not fail
@ -440,9 +481,9 @@ class TestApiTask(BootTransactionTestCase):
}, format="multipart") }, format="multipart")
self.assertTrue(res.status_code == status.HTTP_201_CREATED) self.assertTrue(res.status_code == status.HTTP_201_CREATED)
scheduler.process_pending_tasks() worker.tasks.process_pending_tasks()
time.sleep(DELAY) time.sleep(DELAY)
scheduler.process_pending_tasks() worker.tasks.process_pending_tasks()
task = Task.objects.get(pk=res.data['id']) task = Task.objects.get(pk=res.data['id'])
self.assertTrue(task.status == status_codes.COMPLETED) self.assertTrue(task.status == status_codes.COMPLETED)
@ -476,6 +517,7 @@ class TestApiTask(BootTransactionTestCase):
image1.close() image1.close()
image2.close() image2.close()
gcp.close()
node_odm.terminate() node_odm.terminate()
def test_task_auto_processing_node(self): def test_task_auto_processing_node(self):
@ -492,7 +534,7 @@ class TestApiTask(BootTransactionTestCase):
task.last_error = "Test error" task.last_error = "Test error"
task.save() task.save()
scheduler.process_pending_tasks() worker.tasks.process_pending_tasks()
# A processing node should not have been assigned # A processing node should not have been assigned
task.refresh_from_db() task.refresh_from_db()
@ -502,19 +544,19 @@ class TestApiTask(BootTransactionTestCase):
task.last_error = None task.last_error = None
task.save() task.save()
scheduler.process_pending_tasks() worker.tasks.process_pending_tasks()
# A processing node should not have been assigned because no processing nodes are online # A processing node should not have been assigned because no processing nodes are online
task.refresh_from_db() task.refresh_from_db()
self.assertTrue(task.processing_node is None) self.assertTrue(task.processing_node is None)
# Bring a proessing node online # Bring a processing node online
pnode.last_refreshed = timezone.now() pnode.last_refreshed = timezone.now()
pnode.save() pnode.save()
self.assertTrue(pnode.is_online()) self.assertTrue(pnode.is_online())
# A processing node has been assigned # A processing node has been assigned
scheduler.process_pending_tasks() worker.tasks.process_pending_tasks()
task.refresh_from_db() task.refresh_from_db()
self.assertTrue(task.processing_node.id == pnode.id) self.assertTrue(task.processing_node.id == pnode.id)
@ -533,13 +575,13 @@ class TestApiTask(BootTransactionTestCase):
task.status = None task.status = None
task.save() task.save()
scheduler.process_pending_tasks() worker.tasks.process_pending_tasks()
# Processing node is now cleared and a new one will be assigned on the next tick # Processing node is now cleared and a new one will be assigned on the next tick
task.refresh_from_db() task.refresh_from_db()
self.assertTrue(task.processing_node is None) self.assertTrue(task.processing_node is None)
scheduler.process_pending_tasks() worker.tasks.process_pending_tasks()
task.refresh_from_db() task.refresh_from_db()
self.assertTrue(task.processing_node.id == another_pnode.id) self.assertTrue(task.processing_node.id == another_pnode.id)
@ -555,7 +597,7 @@ class TestApiTask(BootTransactionTestCase):
pnode.save() pnode.save()
self.assertTrue(pnode.is_online()) self.assertTrue(pnode.is_online())
scheduler.process_pending_tasks() worker.tasks.process_pending_tasks()
# A processing node should not have been assigned because we asked # A processing node should not have been assigned because we asked
# not to via auto_processing_node = false # not to via auto_processing_node = false


@ -4,7 +4,6 @@ from rest_framework import status
from app.models import Project, Task from app.models import Project, Task
from .classes import BootTestCase from .classes import BootTestCase
from app import scheduler
from django.core.exceptions import ValidationError from django.core.exceptions import ValidationError
class TestApp(BootTestCase): class TestApp(BootTestCase):
@ -199,17 +198,3 @@ class TestApp(BootTestCase):
task.options = [{'name': 'test', 'value': 1}, {"invalid": 1}] task.options = [{'name': 'test', 'value': 1}, {"invalid": 1}]
self.assertRaises(ValidationError, task.save) self.assertRaises(ValidationError, task.save)
def test_scheduler(self):
self.assertTrue(scheduler.setup() is None)
# Can call update_nodes_info()
self.assertTrue(scheduler.update_nodes_info() is None)
# Can call function in background
self.assertTrue(scheduler.update_nodes_info(background=True).join() is None)
self.assertTrue(scheduler.teardown() is None)


@ -0,0 +1,42 @@
from django.test import Client
from rest_framework import status
from .classes import BootTestCase
class TestPlugins(BootTestCase):
def setUp(self):
pass
def tearDown(self):
pass
def test_core_plugins(self):
client = Client()
# We can access public files core plugins (without auth)
res = client.get('/plugins/test/file.txt')
self.assertEqual(res.status_code, status.HTTP_200_OK)
# We mounted an endpoint
res = client.get('/plugins/test/app_mountpoint/')
self.assertEqual(res.status_code, status.HTTP_200_OK)
self.assertTemplateUsed(res, 'plugins/test/templates/app.html')
# It uses regex properly
res = client.get('/plugins/test/app_mountpoint/a')
self.assertEqual(res.status_code, status.HTTP_404_NOT_FOUND)
# Querying a page should show the included CSS/JS files
client.login(username='testuser', password='test1234')
res = client.get('/dashboard/')
self.assertEqual(res.status_code, status.HTTP_200_OK)
self.assertContains(res, "<link href='/plugins/test/test.css' rel='stylesheet' type='text/css'>", html=True)
self.assertContains(res, "<script src='/plugins/test/test.js'></script>", html=True)
# And our menu entry
self.assertContains(res, '<li><a href="/plugins/test/menu_url/"><i class="test-icon"></i> Test</a></li>', html=True)
# TODO:
# test API endpoints
# test python hooks


@ -1,5 +1,4 @@
from django.test import TestCase from django.test import TestCase
from app.testwatch import TestWatch from app.testwatch import TestWatch
@ -9,7 +8,6 @@ def test(a, b):
class TestTestWatch(TestCase): class TestTestWatch(TestCase):
def test_methods(self): def test_methods(self):
tw = TestWatch() tw = TestWatch()
self.assertTrue(tw.get_calls_count("app.tests.test_testwatch.test") == 0) self.assertTrue(tw.get_calls_count("app.tests.test_testwatch.test") == 0)
self.assertTrue(tw.get_calls_count("app.tests.test_testwatch.nonexistent") == 0) self.assertTrue(tw.get_calls_count("app.tests.test_testwatch.nonexistent") == 0)
@ -53,5 +51,3 @@ class TestTestWatch(TestCase):
self.assertFalse(d['a']) self.assertFalse(d['a'])
self.assertTrue(d['b']) self.assertTrue(d['b'])


@ -1,11 +1,9 @@
from django.contrib.auth.models import User, Group from django.contrib.auth.models import User
from django.test import Client from django.test import Client
from rest_framework import status
from app.models import Project, Task from app.models import Project
from .classes import BootTestCase from .classes import BootTestCase
from app import scheduler
from django.core.exceptions import ValidationError
class TestWelcome(BootTestCase): class TestWelcome(BootTestCase):


@ -0,0 +1,51 @@
import worker
from app import pending_actions
from app.models import Project
from app.models import Task
from nodeodm.models import ProcessingNode
from .classes import BootTestCase
from .utils import start_processing_node
class TestWorker(BootTestCase):
def setUp(self):
super().setUp()
def tearDown(self):
pass
def test_worker_tasks(self):
project = Project.objects.get(name="User Test Project")
pnode = ProcessingNode.objects.create(hostname="localhost", port=11223)
self.assertTrue(pnode.api_version is None)
pnserver = start_processing_node()
worker.tasks.update_nodes_info()
pnode.refresh_from_db()
self.assertTrue(pnode.api_version is not None)
# Create task
task = Task.objects.create(project=project)
# Delete project
project.deleting = True
project.save()
worker.tasks.cleanup_projects()
# Task and project should still be here (since task still exists)
self.assertTrue(Task.objects.filter(pk=task.id).exists())
self.assertTrue(Project.objects.filter(pk=project.id).exists())
# Remove task
task.delete()
worker.tasks.cleanup_projects()
# Task and project should have been removed (now that task count is zero)
self.assertFalse(Task.objects.filter(pk=task.id).exists())
self.assertFalse(Project.objects.filter(pk=project.id).exists())
pnserver.terminate()
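Note: this test exercises `worker.tasks.update_nodes_info`, `process_pending_tasks` and `cleanup_projects`, which replace the old in-process `app.scheduler` with Celery tasks dispatched over the Redis broker (`WO_BROKER=redis://broker`). A hedged sketch of how such tasks could be declared is below; only the task names and the broker setting come from the diff, while the bodies and module layout are assumptions.

```python
# Hypothetical sketch of worker/tasks.py: Celery tasks replacing the old
# scheduler thread. Only the task names and WO_BROKER come from the diff.
import os
from celery import Celery

app = Celery("worker", broker=os.environ.get("WO_BROKER", "redis://broker"))

@app.task
def update_nodes_info():
    # Refresh API version / queue information for every registered processing node.
    ...

@app.task
def process_pending_tasks():
    # Assign online nodes to pending tasks and advance their processing state.
    ...

@app.task
def cleanup_projects():
    # Remove projects flagged with deleting=True once they no longer have tasks.
    ...
```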

app/tests/utils.py (new file, 29 lines added)

@ -0,0 +1,29 @@
import os
import shutil
import time
import subprocess
import logging
from webodm import settings
logger = logging.getLogger('app.logger')
def start_processing_node(*args):
current_dir = os.path.dirname(os.path.realpath(__file__))
node_odm = subprocess.Popen(['node', 'index.js', '--port', '11223', '--test'] + list(args), shell=False,
cwd=os.path.join(current_dir, "..", "..", "nodeodm", "external", "node-OpenDroneMap"))
time.sleep(2) # Wait for the server to launch
return node_odm
# We need to clear previous media_root content
# This points to the test directory, but just in case
# we double check that the directory is indeed a test directory
def clear_test_media_root():
if "_test" in settings.MEDIA_ROOT:
if os.path.exists(settings.MEDIA_ROOT):
logger.info("Cleaning up {}".format(settings.MEDIA_ROOT))
shutil.rmtree(settings.MEDIA_ROOT)
else:
logger.warning("We did not remove MEDIA_ROOT because we couldn't find a _test suffix in its path.")


@ -1,7 +1,6 @@
import time import time
import logging import logging
from webodm import settings from webodm import settings
logger = logging.getLogger('app.logger') logger = logging.getLogger('app.logger')
@ -10,26 +9,32 @@ class TestWatch:
def __init__(self): def __init__(self):
self.clear() self.clear()
def func_to_name(f):
return "{}.{}".format(f.__module__, f.__name__)
def clear(self): def clear(self):
self._calls = {} self._calls = {}
self._intercept_list = {} self._intercept_list = {}
def func_to_name(f):
return "{}.{}".format(f.__module__, f.__name__)
def intercept(self, fname, f = None): def intercept(self, fname, f = None):
self._intercept_list[fname] = f if f is not None else True self._intercept_list[fname] = f if f is not None else True
def execute_intercept_function_replacement(self, fname, *args, **kwargs): def intercept_list_has(self, fname):
if fname in self._intercept_list and callable(self._intercept_list[fname]): return fname in self._intercept_list
(self._intercept_list[fname])(*args, **kwargs)
def should_prevent_execution(self, func): def execute_intercept_function_replacement(self, fname, *args, **kwargs):
return TestWatch.func_to_name(func) in self._intercept_list if self.intercept_list_has(fname) and callable(self._intercept_list[fname]):
(self._intercept_list[fname])(*args, **kwargs)
def get_calls(self, fname): def get_calls(self, fname):
return self._calls[fname] if fname in self._calls else [] return self._calls[fname] if fname in self._calls else []
def set_calls(self, fname, value):
self._calls[fname] = value
def should_prevent_execution(self, func):
return self.intercept_list_has(TestWatch.func_to_name(func))
def get_calls_count(self, fname): def get_calls_count(self, fname):
return len(self.get_calls(fname)) return len(self.get_calls(fname))
@ -48,10 +53,13 @@ class TestWatch:
def log_call(self, func, *args, **kwargs): def log_call(self, func, *args, **kwargs):
fname = TestWatch.func_to_name(func) fname = TestWatch.func_to_name(func)
self.manual_log_call(fname, *args, **kwargs)
def manual_log_call(self, fname, *args, **kwargs):
logger.info("{} called".format(fname)) logger.info("{} called".format(fname))
list = self._calls[fname] if fname in self._calls else [] list = self.get_calls(fname)
list.append({'f': fname, 'args': args, 'kwargs': kwargs}) list.append({'f': fname, 'args': args, 'kwargs': kwargs})
self._calls[fname] = list self.set_calls(fname, list)
def hook_pre(self, func, *args, **kwargs): def hook_pre(self, func, *args, **kwargs):
if settings.TESTING and self.should_prevent_execution(func): if settings.TESTING and self.should_prevent_execution(func):
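Note: the TestWatch refactor splits `manual_log_call` out of `log_call` and adds `intercept_list_has`/`set_calls`, so tests can record calls by name and intercept functions with a replacement (as the API tests above do for `nodeodm.api_client.task_output`). A short usage sketch follows; it runs only inside the project's test environment, and the recorded function name in the second half is made up for illustration.

```python
# Usage sketch for the refactored TestWatch (method names taken from the diff).
import requests
from app.testwatch import TestWatch

tw = TestWatch()

def simulated_timeout(*args, **kwargs):
    raise requests.exceptions.ConnectTimeout("Simulated timeout")

# Replace a function by name for the duration of a test.
tw.intercept("nodeodm.api_client.task_output", simulated_timeout)
assert tw.intercept_list_has("nodeodm.api_client.task_output")

# Record a call manually and query it back.
tw.manual_log_call("app.tests.example", 1, flag=True)
assert tw.get_calls_count("app.tests.example") == 1
```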


@ -1,6 +1,10 @@
import sys
from django.conf.urls import url, include from django.conf.urls import url, include
from django.shortcuts import render_to_response
from django.template import RequestContext
from .views import app as app_views, public as public_views from .views import app as app_views, public as public_views
from .plugins import get_url_patterns
from app.boot import boot from app.boot import boot
from webodm import settings from webodm import settings
@ -14,16 +18,24 @@ urlpatterns = [
url(r'^3d/project/(?P<project_pk>[^/.]+)/task/(?P<task_pk>[^/.]+)/$', app_views.model_display, name='model_display'), url(r'^3d/project/(?P<project_pk>[^/.]+)/task/(?P<task_pk>[^/.]+)/$', app_views.model_display, name='model_display'),
url(r'^public/task/(?P<task_pk>[^/.]+)/map/$', public_views.map, name='public_map'), url(r'^public/task/(?P<task_pk>[^/.]+)/map/$', public_views.map, name='public_map'),
url(r'^public/task/(?P<task_pk>[^/.]+)/iframe/map/$', public_views.map_iframe, name='public_map'), url(r'^public/task/(?P<task_pk>[^/.]+)/iframe/map/$', public_views.map_iframe, name='public_iframe_map'),
url(r'^public/task/(?P<task_pk>[^/.]+)/3d/$', public_views.model_display, name='public_map'), url(r'^public/task/(?P<task_pk>[^/.]+)/3d/$', public_views.model_display, name='public_3d'),
url(r'^public/task/(?P<task_pk>[^/.]+)/iframe/3d/$', public_views.model_display_iframe, name='public_map'), url(r'^public/task/(?P<task_pk>[^/.]+)/iframe/3d/$', public_views.model_display_iframe, name='public_iframe_3d'),
url(r'^public/task/(?P<task_pk>[^/.]+)/json/$', public_views.task_json, name='public_map'), url(r'^public/task/(?P<task_pk>[^/.]+)/json/$', public_views.task_json, name='public_json'),
url(r'^processingnode/([\d]+)/$', app_views.processing_node, name='processing_node'), url(r'^processingnode/([\d]+)/$', app_views.processing_node, name='processing_node'),
url(r'^api/', include("app.api.urls")), url(r'^api/', include("app.api.urls")),
] ]
# TODO: is there a way to place plugins /public directories
# into the static build directories and let nginx serve them?
urlpatterns += get_url_patterns()
handler404 = app_views.handler404
handler500 = app_views.handler500
# Test cases call boot() independently # Test cases call boot() independently
if not settings.TESTING: # Also don't execute boot with celery workers
if not settings.WORKER_RUNNING and not settings.TESTING:
boot() boot()
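Note: `get_url_patterns()` mounts each active plugin under `/plugins/<name>/` (public files plus any app endpoints such as `/plugins/test/app_mountpoint/`, per the plugin tests above). The helper below is a purely hypothetical illustration of how such patterns could be assembled; only `get_active_plugins()` and the `/plugins/<name>/` layout come from the diff, and the `get_name()`/`get_path()` accessors are invented for the sketch.

```python
# Hypothetical sketch of assembling per-plugin URL patterns (Django 1.11 style).
# plugin.get_name() and plugin.get_path() are illustrative accessors, not the
# actual plugin API.
from django.conf.urls import url
from django.views.static import serve
from app.plugins import get_active_plugins

def sketch_url_patterns():
    patterns = []
    for plugin in get_active_plugins():
        # Serve each plugin's public/ directory under /plugins/<name>/...
        patterns.append(url(r'^plugins/{}/(.*)$'.format(plugin.get_name()),
                            serve, {'document_root': plugin.get_path("public")}))
    return patterns
```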


@ -134,3 +134,10 @@ def welcome(request):
'title': 'Welcome', 'title': 'Welcome',
'firstuserform': fuf 'firstuserform': fuf
}) })
def handler404(request):
return render(request, '404.html', status=404)
def handler500(request):
return render(request, '500.html', status=500)
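Note: together with the `handler404`/`handler500` assignments in `urls.py` and the templates added earlier, these views are what Django renders when `DEBUG` is off. A small hedged check with the test client (the URL is arbitrary):

```python
# Sketch: with DEBUG disabled, an unknown URL should render the custom 404 page.
from django.test import TestCase, override_settings

class Custom404Test(TestCase):
    @override_settings(DEBUG=False)
    def test_unknown_url_uses_custom_template(self):
        res = self.client.get("/this/page/does/not/exist/")
        self.assertContains(res, "404 Page Not Found", status_code=404)
```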


@ -1,6 +1,6 @@
version: '2' version: '2'
services: services:
webapp: webapp:
entrypoint: /bin/bash -c "chmod +x /webodm/*.sh && /bin/bash -c \"/webodm/wait-for-postgres.sh db /webodm/start.sh --create-default-pnode --setup-devenv\"" entrypoint: /bin/bash -c "chmod +x /webodm/*.sh && /bin/bash -c \"/webodm/wait-for-postgres.sh db /webodm/wait-for-it.sh broker:6379 -- /webodm/start.sh --create-default-pnode --setup-devenv\""
volumes: volumes:
- .:/webodm - .:/webodm


@ -5,7 +5,7 @@
version: '2' version: '2'
services: services:
webapp: webapp:
entrypoint: /bin/bash -c "chmod +x /webodm/*.sh && /bin/bash -c \"/webodm/wait-for-postgres.sh db /webodm/start.sh --create-default-pnode\"" entrypoint: /bin/bash -c "chmod +x /webodm/*.sh && /bin/bash -c \"/webodm/wait-for-postgres.sh db /webodm/wait-for-it.sh broker:6379 -- /webodm/start.sh --create-default-pnode\""
depends_on: depends_on:
- node-odm-1 - node-odm-1
node-odm-1: node-odm-1:

View file

@ -3,9 +3,7 @@
version: '2' version: '2'
volumes: volumes:
dbdata: dbdata:
driver: local
appmedia: appmedia:
driver: local
services: services:
db: db:
image: opendronemap/webodm_db image: opendronemap/webodm_db
@ -17,15 +15,33 @@ services:
webapp: webapp:
image: opendronemap/webodm_webapp image: opendronemap/webodm_webapp
container_name: webapp container_name: webapp
entrypoint: /bin/bash -c "chmod +x /webodm/*.sh && /bin/bash -c \"/webodm/wait-for-postgres.sh db /webodm/start.sh\"" entrypoint: /bin/bash -c "chmod +x /webodm/*.sh && /bin/bash -c \"/webodm/wait-for-postgres.sh db /webodm/wait-for-it.sh broker:6379 -- /webodm/start.sh\""
volumes: volumes:
- ${WO_MEDIA_DIR}:/webodm/app/media - ${WO_MEDIA_DIR}:/webodm/app/media
ports: ports:
- "${WO_PORT}:8000" - "${WO_PORT}:8000"
depends_on: depends_on:
- db - db
- broker
- worker
environment: environment:
- WO_PORT - WO_PORT
- WO_HOST - WO_HOST
- WO_DEBUG - WO_DEBUG
- WO_BROKER
restart: on-failure:10 restart: on-failure:10
  broker:
    image: redis
    container_name: broker
  worker:
    image: opendronemap/webodm_webapp
    container_name: worker
    entrypoint: /bin/bash -c "/webodm/wait-for-postgres.sh db /webodm/wait-for-it.sh broker:6379 -- /webodm/worker.sh start"
    volumes:
      - ${WO_MEDIA_DIR}:/webodm/app/media
    depends_on:
      - db
      - broker
    environment:
      - WO_BROKER
      - WO_DEBUG
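The new `broker` (Redis) and `worker` services pair with the `WO_BROKER` variable added to `.env`. A Celery application would typically read that variable for its broker URL; the following is a minimal sketch under that assumption (module name and layout are illustrative, not necessarily what `worker.sh` starts).

```python
# celery_app.py (illustrative): wire Celery to the WO_BROKER setting.
import os

from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'webodm.settings')

app = Celery('webodm', broker=os.environ.get('WO_BROKER', 'redis://localhost'))
app.autodiscover_tasks()  # picks up tasks from the installed Django apps

if __name__ == '__main__':
    app.start()
```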

View file

@ -2,7 +2,8 @@ module.exports = {
roots: ["./app/static/app/js"], roots: ["./app/static/app/js"],
moduleNameMapper: { moduleNameMapper: {
"^.*\\.s?css$": "<rootDir>/app/static/app/js/tests/mocks/empty.scss.js", "^.*\\.s?css$": "<rootDir>/app/static/app/js/tests/mocks/empty.scss.js",
"jquery": "<rootDir>/app/static/app/js/vendor/jquery-1.11.2.min.js" "jquery": "<rootDir>/app/static/app/js/vendor/jquery-1.11.2.min.js",
"SystemJS": "<rootDir>/app/static/app/js/tests/mocks/system.js"
}, },
setupFiles: ["<rootDir>/app/static/app/js/tests/setup/shims.js", setupFiles: ["<rootDir>/app/static/app/js/tests/setup/shims.js",
"<rootDir>/app/static/app/js/tests/setup/setupTests.js", "<rootDir>/app/static/app/js/tests/setup/setupTests.js",

View file

@ -28,7 +28,7 @@ if [ $? -eq 0 ]; then
fi fi
# Generate/update certificate # Generate/update certificate
certbot certonly --tls-sni-01-port 8000 --work-dir ./letsencrypt --config-dir ./letsencrypt --logs-dir ./letsencrypt --standalone -d $DOMAIN --register-unsafely-without-email --agree-tos --keep certbot certonly --tls-sni-01-port 8000 --http-01-port 8080 --work-dir ./letsencrypt --config-dir ./letsencrypt --logs-dir ./letsencrypt --standalone -d $DOMAIN --register-unsafely-without-email --agree-tos --keep
# Create ssl dir if necessary # Create ssl dir if necessary
if [ ! -e ssl/ ]; then if [ ! -e ssl/ ]; then

View file

@ -11,7 +11,7 @@ from guardian.models import UserObjectPermissionBase
from .api_client import ApiClient from .api_client import ApiClient
import json import json
from django.db.models import signals from django.db.models import signals
from datetime import datetime, timedelta from datetime import timedelta
from .exceptions import ProcessingError, ProcessingTimeout from .exceptions import ProcessingError, ProcessingTimeout
import simplejson import simplejson

View file

@ -1,6 +1,6 @@
{ {
"name": "WebODM", "name": "WebODM",
"version": "0.4.2", "version": "0.5.0",
"description": "Open Source Drone Image Processing", "description": "Open Source Drone Image Processing",
"main": "index.js", "main": "index.js",
"scripts": { "scripts": {
@ -35,6 +35,7 @@
"enzyme": "^3.3.0", "enzyme": "^3.3.0",
"enzyme-adapter-react-16": "^1.1.1", "enzyme-adapter-react-16": "^1.1.1",
"extract-text-webpack-plugin": "^3.0.0", "extract-text-webpack-plugin": "^3.0.0",
"fbemitter": "^2.1.1",
"file-loader": "^0.9.0", "file-loader": "^0.9.0",
"gl-matrix": "^2.3.2", "gl-matrix": "^2.3.2",
"history": "^4.7.2", "history": "^4.7.2",
@ -42,7 +43,6 @@
"jest": "^21.0.1", "jest": "^21.0.1",
"json-loader": "^0.5.4", "json-loader": "^0.5.4",
"leaflet": "^1.0.1", "leaflet": "^1.0.1",
"leaflet-measure": "^2.0.5",
"node-sass": "^3.10.1", "node-sass": "^3.10.1",
"object.values": "^1.0.3", "object.values": "^1.0.3",
"proj4": "^2.4.3", "proj4": "^2.4.3",

View file

@ -0,0 +1 @@
from .plugin import *

View file

@ -0,0 +1,13 @@
{
"name": "Area/Length Measurements",
"webodmMinVersion": "0.5.0",
"description": "A plugin to compute area and length measurements on Leaflet",
"version": "0.1.0",
"author": "Piero Toffanin",
"email": "pt@masseranolabs.com",
"repository": "https://github.com/OpenDroneMap/WebODM",
"tags": ["area", "length", "measurements"],
"homepage": "https://github.com/OpenDroneMap/WebODM",
"experimental": false,
"deprecated": false
}
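The `webodmMinVersion` field suggests the plugin loader checks each manifest against the running WebODM version before enabling a plugin. Here is a sketch of such a check under that assumption; the real loader may behave differently.

```python
# Illustrative manifest compatibility check.
import json
import os

def read_manifest(plugin_dir):
    with open(os.path.join(plugin_dir, 'manifest.json')) as f:
        return json.load(f)

def version_tuple(version):
    return tuple(int(part) for part in version.split('.'))

def is_compatible(manifest, webodm_version):
    required = manifest.get('webodmMinVersion', '0.0.0')
    return version_tuple(required) <= version_tuple(webodm_version)

# Example: is_compatible(read_manifest('path/to/measure'), '0.5.0') -> True
```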

View file

@ -0,0 +1,5 @@
from app.plugins import PluginBase
class Plugin(PluginBase):
    def include_js_files(self):
        return ['main.js']
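`include_js_files()` is how a plugin asks for its scripts to be loaded on the relevant page; presumably the app maps those file names to URLs under the plugin's public path, as the `/plugins/posm-gcpi/...` paths later in this diff suggest. A small sketch of that idea, where `plugin.get_name()` is an assumed helper:

```python
# Illustrative only: turn include_js_files() entries into servable URLs.
def get_include_js_urls(plugin):
    return ['/plugins/{}/{}'.format(plugin.get_name(), js)
            for js in plugin.include_js_files()]
```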

Binary image files not shown (12 files: 397 B, 762 B, 387 B, 692 B, 326 B, 462 B, 192 B, 277 B, 491 B, 1003 B, 279 B, 460 B).
View file

@ -0,0 +1 @@
.leaflet-control-measure h3,.leaflet-measure-resultpopup h3{margin:0 0 12px 0;padding-bottom:10px;line-height:1em;font-weight:normal;font-size:1.1em;border-bottom:solid 1px #DDD}.leaflet-control-measure p,.leaflet-measure-resultpopup p{margin:10px 0 0;line-height:1em}.leaflet-control-measure p:first-child,.leaflet-measure-resultpopup p:first-child{margin-top:0}.leaflet-control-measure a,.leaflet-measure-resultpopup a{color:#5E66CC;text-decoration:none}.leaflet-control-measure a:hover,.leaflet-measure-resultpopup a:hover{opacity:0.5;text-decoration:none}.leaflet-control-measure .tasks,.leaflet-measure-resultpopup .tasks{margin:12px 0 0;padding:10px 0 0;border-top:solid 1px #DDD;list-style:none;list-style-image:none}.leaflet-control-measure .tasks li,.leaflet-measure-resultpopup .tasks li{display:inline;margin:0 10px 0 0}.leaflet-control-measure .tasks li:last-child,.leaflet-measure-resultpopup .tasks li:last-child{margin-right:0}.leaflet-control-measure .coorddivider,.leaflet-measure-resultpopup .coorddivider{color:#999}.leaflet-control-measure{background:#fff;border-radius:5px;box-shadow:0 1px 5px rgba(0,0,0,0.4)}.leaflet-control-measure .leaflet-control-measure-toggle,.leaflet-control-measure .leaflet-control-measure-toggle:hover{display:block;width:36px;height:36px;background-position:50% 50%;background-repeat:no-repeat;background-image:url(images/rulers.png);border-radius:5px;text-indent:100%;white-space:nowrap;overflow:hidden}.leaflet-retina .leaflet-control-measure .leaflet-control-measure-toggle,.leaflet-retina .leaflet-control-measure .leaflet-control-measure-toggle:hover{background-image:url(images/rulers_@2X.png);background-size:16px 16px}.leaflet-touch .leaflet-control-measure .leaflet-control-measure-toggle,.leaflet-touch .leaflet-control-measure .leaflet-control-measure-toggle:hover{width:44px;height:44px}.leaflet-control-measure .startprompt h3{margin-bottom:10px}.leaflet-control-measure .startprompt .tasks{margin-top:0;padding-top:0;border-top:0}.leaflet-control-measure .leaflet-control-measure-interaction{padding:10px 12px}.leaflet-control-measure .results .group{margin-top:10px;padding-top:10px;border-top:dotted 1px #eaeaea}.leaflet-control-measure .results .group:first-child{margin-top:0;padding-top:0;border-top:0}.leaflet-control-measure .results .heading{margin-right:5px;color:#999}.leaflet-control-measure a.start{padding-left:18px;background-repeat:no-repeat;background-position:0% 50%;background-image:url(images/start.png)}.leaflet-retina .leaflet-control-measure a.start{background-image:url(images/start_@2X.png);background-size:12px 12px}.leaflet-control-measure a.cancel{padding-left:18px;background-repeat:no-repeat;background-position:0% 50%;background-image:url(images/cancel.png)}.leaflet-retina .leaflet-control-measure a.cancel{background-image:url(images/cancel_@2X.png);background-size:12px 12px}.leaflet-control-measure a.finish{padding-left:18px;background-repeat:no-repeat;background-position:0% 50%;background-image:url(images/check.png)}.leaflet-retina .leaflet-control-measure a.finish{background-image:url(images/check_@2X.png);background-size:12px 12px}.leaflet-measure-resultpopup a.zoomto{padding-left:18px;background-repeat:no-repeat;background-position:0% 50%;background-image:url(images/focus.png)}.leaflet-retina .leaflet-measure-resultpopup a.zoomto{background-image:url(images/focus_@2X.png);background-size:12px 12px}.leaflet-measure-resultpopup a.deletemarkup{padding-left:18px;background-repeat:no-repeat;background-position:0% 
50%;background-image:url(images/trash.png)}.leaflet-retina .leaflet-measure-resultpopup a.deletemarkup{background-image:url(images/trash_@2X.png);background-size:11px 12px}

File diff suppressed because one or more lines are too long

View file

@ -0,0 +1,11 @@
PluginsAPI.Map.willAddControls([
    'measure/leaflet-measure.css',
    'measure/leaflet-measure.min.js'
], function(options){
    L.control.measure({
        primaryLengthUnit: 'meters',
        secondaryLengthUnit: 'feet',
        primaryAreaUnit: 'sqmeters',
        secondaryAreaUnit: 'acres'
    }).addTo(options.map);
});

View file

@ -0,0 +1 @@
from .plugin import *

View file

@ -0,0 +1,13 @@
{
"name": "POSM GCP Interface",
"webodmMinVersion": "0.5.0",
"description": "A plugin to create GCP files from images",
"version": "0.1.0",
"author": "Piero Toffanin",
"email": "pt@masseranolabs.com",
"repository": "https://github.com/OpenDroneMap/WebODM",
"tags": ["gcp", "posm"],
"homepage": "https://github.com/OpenDroneMap/WebODM",
"experimental": true,
"deprecated": false
}

View file

@ -0,0 +1,14 @@
from app.plugins import PluginBase, Menu, MountPoint
from django.shortcuts import render
class Plugin(PluginBase):
    def main_menu(self):
        return [Menu("GCP Interface", self.public_url(""), "fa fa-map-marker fa-fw")]

    def mount_points(self):
        return [
            MountPoint('$', lambda request: render(request, self.template_path("app.html"), {'title': 'GCP Editor'}))
        ]

View file

@ -0,0 +1,35 @@
{
"main.css": "static/css/main.d9d37f4b.css",
"main.css.map": "static/css/main.d9d37f4b.css.map",
"main.js": "static/js/main.ce50390f.js",
"main.js.map": "static/js/main.ce50390f.js.map",
"static/media/add.png": "static/media/add.5a2714f3.png",
"static/media/add@2x.png": "static/media/add@2x.b53b9f2d.png",
"static/media/add_point.png": "static/media/add_point.e65f1d0c.png",
"static/media/add_point@2x.png": "static/media/add_point@2x.bf317640.png",
"static/media/add_point_green.png": "static/media/add_point_green.013c6b67.png",
"static/media/add_point_green@2x.png": "static/media/add_point_green@2x.1dd546dd.png",
"static/media/add_point_yellow.png": "static/media/add_point_yellow.a6d933c3.png",
"static/media/add_point_yellow@2x.png": "static/media/add_point_yellow@2x.5b290820.png",
"static/media/close.png": "static/media/close.729ab67b.png",
"static/media/close@2x.png": "static/media/close@2x.c65c9577.png",
"static/media/fit_markers.png": "static/media/fit_markers.be9754ad.png",
"static/media/fit_markers@2x.png": "static/media/fit_markers@2x.cf8c8fad.png",
"static/media/gcp-green.png": "static/media/gcp-green.cfc5c722.png",
"static/media/gcp-yellow.png": "static/media/gcp-yellow.3793065e.png",
"static/media/gcp.png": "static/media/gcp.44ed9ab1.png",
"static/media/layers-2x.png": "static/media/layers-2x.4f0283c6.png",
"static/media/layers.png": "static/media/layers.a6137456.png",
"static/media/loading.gif": "static/media/loading.e56d6770.gif",
"static/media/loading@2x.gif": "static/media/loading@2x.0ab4b1d1.gif",
"static/media/logo.png": "static/media/logo.b38a9426.png",
"static/media/marker-icon.png": "static/media/marker-icon.2273e3d8.png",
"static/media/point_icon.png": "static/media/point_icon.e206131a.png",
"static/media/point_icon@2x.png": "static/media/point_icon@2x.dd1da9a3.png",
"static/media/polygon_icon.png": "static/media/polygon_icon.83cffeed.png",
"static/media/polygon_icon@2x.png": "static/media/polygon_icon@2x.53277be6.png",
"static/media/providers.png": "static/media/providers.ad5af2f5.png",
"static/media/providers@2x.png": "static/media/providers@2x.51ed570c.png",
"static/media/search.png": "static/media/search.57a8b421.png",
"static/media/search@2x.png": "static/media/search@2x.44cf1bbe.png"
}

Binary file not shown (1.1 KiB).

View file

@ -0,0 +1,17 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width,initial-scale=1">
<link rel="shortcut icon" href="/plugins/posm-gcpi/favicon.ico">
<title>GCPi</title>
<link href="/plugins/posm-gcpi/static/css/main.d9d37f4b.css" rel="stylesheet">
<style type="text/css">
.header .logo{ zoom: 0.5; }
</style>
</head>
<body>
<div id="root"></div>
<script type="text/javascript" src="/plugins/posm-gcpi/static/js/main.ce50390f.js"></script>
</body>
</html>

Some files were not shown because too many files have changed in this diff.