Mirror of https://github.com/OpenDroneMap/ODM
Merge pull request #1057 from pierotofy/multispec

Multispectral support, TIFF support, split-merge speed-ups

Former-commit-id: 48a5369f86
Refs: pull/1161/head, v0.9.8
commit 14bb2cec73
```diff
@@ -24,4 +24,5 @@ ceres-solver.tar.gz
 *.pyc
 opencv.zip
 settings.yaml
 docker.settings.yaml
+.setupdevenv
```
README.md
```diff
@@ -1,7 +1,7 @@
 # ODM
 
 ![](https://raw.githubusercontent.com/OpenDroneMap/OpenDroneMap/master/img/odm_image.png)
 
+For documentation, see https://docs.opendronemap.org and Quickstart below
 
 ## What is it?
 
 ODM is an open source command line toolkit for processing aerial drone imagery. Typical drones use simple point-and-shoot cameras, so the images from drones, while from a different perspective, are similar to any pictures taken from point-and-shoot cameras, i.e. non-metric imagery. OpenDroneMap turns those simple images into three dimensional geographic data that can be used in combination with other geographic datasets.
```
````diff
@@ -22,16 +22,15 @@ In a word, ODM is a toolchain for processing raw civilian UAS imagery to other u
 
 ODM now includes state-of-the-art 3D reconstruction work by Michael Waechter, Nils Moehrle, and Michael Goesele. See their publication at [http://www.gcc.tu-darmstadt.de/media/gcc/papers/Waechter-2014-LTB.pdf](http://www.gcc.tu-darmstadt.de/media/gcc/papers/Waechter-2014-LTB.pdf).
 
-For Docs, see Quickstart below and also https://docs.opendronemap.org
 
-## QUICKSTART
+## Quickstart
 
 ### Docker (All platforms)
 
 The easiest way to run ODM is through Docker. If you don't have it installed,
 see the [Docker Ubuntu installation tutorial](https://docs.docker.com/engine/installation/linux/ubuntulinux/) and follow the
 instructions through "Create a Docker group". The Docker image workflow
-has equivalent procedures for Mac OS X and Windows found at [docs.docker.com](docs.docker.com). Then run the following command which will build a pre-built image and run on images found in `$(pwd)/images` (you can change this if you need to, see the [wiki](https://github.com/OpenDroneMap/OpenDroneMap/wiki/Docker) for more detailed instructions.
+has equivalent procedures for Mac OS X and Windows found at [docs.docker.com](https://docs.docker.com). Then run the following command which will build a pre-built image and run on images found in `$(pwd)/images` (you can change this if you need to, see the [wiki](https://github.com/OpenDroneMap/OpenDroneMap/wiki/Docker) for more detailed instructions.
 
 ```
 docker run -it --rm \
````
```diff
@@ -43,11 +42,10 @@ docker run -it --rm \
 
 ### Native Install (Ubuntu 16.04)
 
-** Please note that we need help getting ODM updated to work for 16.10+. Look at [#659](https://github.com/OpenDroneMap/OpenDroneMap/issues/659) or drop into the [gitter](https://gitter.im/OpenDroneMap/OpenDroneMap) for more info.
+** Please note that we need help getting ODM updated to work for 16.10+. Look at [#659](https://github.com/OpenDroneMap/OpenDroneMap/issues/659).
 
 
-**[Download the latest release here](https://github.com/OpenDroneMap/OpenDroneMap/releases)**
-Current version: 0.3.1 (this software is in beta)
+**[Download the latest release here](https://github.com/OpenDroneMap/ODM/archive/master.zip)**
 
 1. Extract and enter the OpenDroneMap directory
 2. Run `bash configure.sh install`
```
```diff
@@ -147,8 +145,6 @@ When the process finishes, the results will be organized as follows:
 
 Any file ending in .obj or .ply can be opened and viewed in [MeshLab](http://meshlab.sourceforge.net/) or similar software. That includes `opensfm/depthmaps/merged.ply`, `odm_meshing/odm_mesh.ply`, `odm_texturing/odm_textured_model[_geo].obj`, or `odm_georeferencing/odm_georeferenced_model.ply`. Below is an example textured mesh:
 
 ![](https://raw.githubusercontent.com/alexhagiopol/OpenDroneMap/feature-better-docker/toledo_dataset_example_mesh.jpg)
 
 You can also view the orthophoto GeoTIFF in [QGIS](http://www.qgis.org/) or other mapping software:
 
 ![](https://raw.githubusercontent.com/OpenDroneMap/OpenDroneMap/master/img/bellus_map.png)
```
````diff
@@ -233,28 +229,34 @@ Experimental flags need to be enabled in Docker to use the ```--squash``` flag.
 
 After this, you must restart docker by typing ```sudo service docker restart``` into your Linux terminal.
 
 
 ## User Interface
 
 A web interface and API to OpenDroneMap is currently under active development in the [WebODM](https://github.com/OpenDroneMap/WebODM) repository.
 
 ## Video Support
 
 Currently we have an experimental feature that uses ORB_SLAM to render a textured mesh from video. It is only supported on Ubuntu 14.04 on machines with X11 support. See the [wiki](https://github.com/OpenDroneMap/OpenDroneMap/wiki/Reconstruction-from-Video) for details on installation and use.
 
 ## Examples
 
 Coming soon...
 
 ## Documentation:
 
-For documentation, everything is being moved to [http://docs.opendronemap.org/](http://docs.opendronemap.org/) but you can also take a look at our [wiki](https://github.com/OpenDroneMap/OpenDroneMap/wiki). Check those places first if you are having problems. There's also help at [community forum](http://community.opendronemap.org/), and if you still need help and think you've found a bug or need an enhancement, look through the issue queue or create one.
+For documentation, see http://docs.opendronemap.org/ and https://github.com/OpenDroneMap/ODM/wiki. Check those places first if you are having problems. There's also help at the [community forum](https://community.opendronemap.org/), and if you still need help and think you've found a bug or need an enhancement, look through the issue queue or create one.
 
 ## Developers
 
 Help improve our software!
 
 [![Join the chat at https://gitter.im/OpenDroneMap/OpenDroneMap](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/OpenDroneMap/OpenDroneMap?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
 
 For Linux users, the easiest way to modify the software is to make sure docker is installed, clone the repository and then run from a shell:
 
 ```bash
 $ DATA=/path/to/datasets ./start-dev-env.sh
 ```
 
 Where `/path/to/datasets` is a directory where you can place test datasets (it can also point to an empty directory if you don't have test datasets).
 
 You can now make changes to the ODM source. When you are ready to test the changes you can simply invoke:
 
 ```bash
 (odmdev) [user:/code] master+* ± ./run.sh --project-path /datasets mydataset
 ```
 
 If you have questions, join the developer's chat at https://community.opendronemap.org/c/developers-chat/21
 
 1. Try to keep commits clean and simple
 2. Submit a pull request with detailed changes and test results
 3. Have fun!
````
```diff
@@ -129,7 +129,7 @@ endforeach()
 
 externalproject_add(mve
   GIT_REPOSITORY https://github.com/OpenDroneMap/mve.git
-  GIT_TAG 070
+  GIT_TAG 098
   UPDATE_COMMAND ""
   SOURCE_DIR ${SB_SOURCE_DIR}/elibs/mve
   CONFIGURE_COMMAND ""
```
```diff
@@ -168,3 +168,16 @@ externalproject_add(dem2points
   BUILD_COMMAND make
   INSTALL_COMMAND ""
 )
+
+externalproject_add(lastools
+  GIT_REPOSITORY https://github.com/LAStools/LAStools.git
+  GIT_TAG 2ef44281645999ec7217facec84a5913bbbbe165
+  SOURCE_DIR ${SB_SOURCE_DIR}/lastools
+  CONFIGURE_COMMAND ""
+  CMAKE_COMMAND ""
+  CMAKE_GENERATOR ""
+  UPDATE_COMMAND ""
+  BUILD_IN_SOURCE 1
+  BUILD_COMMAND make -C LASlib -j$(nproc) CXXFLAGS='-std=c++11' && make -C src -j$(nproc) CXXFLAGS='-std=c++11' lasmerge
+  INSTALL_COMMAND mv ${SB_SOURCE_DIR}/lastools/bin/lasmerge ${SB_INSTALL_DIR}/bin
+)
```
```diff
@@ -8,7 +8,8 @@ ExternalProject_Add(${_proj_name}
   STAMP_DIR ${_SB_BINARY_DIR}/stamp
   #--Download step--------------
   DOWNLOAD_DIR ${SB_DOWNLOAD_DIR}
-  URL https://github.com/connormanning/entwine/archive/01ff206ca15c5001150f3de9fb202491c388e63c.zip
+  GIT_REPOSITORY https://github.com/connormanning/entwine/
+  GIT_TAG 2.1.0
   #--Update/Patch step----------
   UPDATE_COMMAND ""
   #--Configure step-------------
```
```diff
@@ -8,7 +8,8 @@ ExternalProject_Add(${_proj_name}
   STAMP_DIR ${_SB_BINARY_DIR}/stamp
   #--Download step--------------
   DOWNLOAD_DIR ${SB_DOWNLOAD_DIR}/${_proj_name}
-  URL https://github.com/OpenDroneMap/mvs-texturing/archive/master.zip
+  GIT_REPOSITORY https://github.com/OpenDroneMap/mvs-texturing
+  GIT_TAG master
   #--Update/Patch step----------
   UPDATE_COMMAND ""
   #--Configure step-------------
```
```diff
@@ -9,7 +9,7 @@ ExternalProject_Add(${_proj_name}
   #--Download step--------------
   DOWNLOAD_DIR ${SB_DOWNLOAD_DIR}
   GIT_REPOSITORY https://github.com/OpenDroneMap/OpenSfM/
-  GIT_TAG 090
+  GIT_TAG 098
   #--Update/Patch step----------
   UPDATE_COMMAND git submodule update --init --recursive
   #--Configure step-------------
```
VERSION
```diff
@@ -1 +1 @@
-0.9.1
+0.9.8
```
```diff
@@ -1 +0,0 @@
-db8d210f5994e4e1782de7fd7d51241aa5e82d3e
```

BIN img/odm_icon.png
(Binary file is not shown. Before: width | height | size: 27 KiB)

```diff
@@ -1 +0,0 @@
-113f8182f61db99c24d194ddfba0f6b0a07fe272
```

```diff
@@ -1 +0,0 @@
-2d1bc1543ef974c0c6a41db46c8c813c6a3b5692
```

```diff
@@ -1 +0,0 @@
-fd7c406148027f41dc619fd833b9f55f9973a202
```

```diff
@@ -1 +0,0 @@
-715b5748537e5b44e0631d83da797e764531df8c
```
index.html
```diff
@@ -1,11 +0,0 @@
-<!DOCTYPE html>
-<html lang="en">
-<head>
-    <meta charset="UTF-8">
-    <meta http-equiv="Refresh" content="0; url=https://opendronemap.org" />
-    <title>OpenDroneMap</title>
-</head>
-<body>
-    The project has moved to <a href="https://opendronemap.org">https://opendronemap.org</a>!
-</body>
-</html>
```
```diff
@@ -11,6 +11,8 @@ add_definitions(-Wall -Wextra)
 # Find pcl at the location specified by PCL_DIR
 find_package(VTK 6.0 REQUIRED)
 find_package(PCL 1.8 HINTS "${PCL_DIR}/share/pcl-1.8" REQUIRED)
+find_package(GDAL REQUIRED)
+include_directories(${GDAL_INCLUDE_DIR})
 
 # Find OpenCV at the default location
 find_package(OpenCV HINTS "${OPENCV_DIR}" REQUIRED)
```
```diff
@@ -31,4 +33,7 @@ aux_source_directory("./src" SRC_LIST)
 
 # Add executable
 add_executable(${PROJECT_NAME} ${SRC_LIST})
-target_link_libraries(odm_orthophoto ${PCL_COMMON_LIBRARIES} ${PCL_IO_LIBRARIES} ${PCL_SURFACE_LIBRARIES} ${OpenCV_LIBS})
+set_target_properties(${PROJECT_NAME} PROPERTIES
+    CXX_STANDARD 11
+)
+target_link_libraries(odm_orthophoto ${PCL_COMMON_LIBRARIES} ${PCL_IO_LIBRARIES} ${PCL_SURFACE_LIBRARIES} ${OpenCV_LIBS} ${GDAL_LIBRARY})
```
(Diff is too large to display.)
```diff
@@ -20,37 +20,28 @@
 // OpenCV
 #include <opencv2/core/core.hpp>
 
+// GDAL
+#include "gdal_priv.h"
+#include "cpl_conv.h" // for CPLMalloc()
+
 // Logger
 #include "Logger.hpp"
 
-/*!
- * \brief The WorldPoint struct encapsulates world coordinates used for the ortho photo boundary.
- *        Points are separated into integer and fractional parts for high numerical stability.
- */
-struct WorldPoint
-{
-    int eastInteger_;       /**< The integer part of the east point. */
-    float eastFractional_;  /**< The fractional part of the east point. */
-    int northInteger_;      /**< The integer part of the north point. */
-    float northFractional_; /**< The fractional part of the north point. */
-
-    /*!
-     * \brief Overloads operator '<<' for WorldPoint.
-     *
-     * \param os The output stream in which the WorldPoint should be printed.
-     * \param worldPoint The WorldPoint to be printed.
-     * \return A reference to the given output stream.
-     */
-    friend std::ostream & operator<< (std::ostream &os, const WorldPoint &worldPoint);
-
-    /*!
-     * \brief Overloads operator '>>' for WorldPoint.
-     *
-     * \param is The input stream from which the WorldPoint should be extracted.
-     * \param worldPoint The modified WorldPoint.
-     * \return A reference to the given input stream.
-     */
-    friend std::istream & operator>> (std::istream &os, WorldPoint &worldPoint);
+struct Bounds{
+    float xMin;
+    float xMax;
+    float yMin;
+    float yMax;
+
+    Bounds() : xMin(0), xMax(0), yMin(0), yMax(0) {}
+    Bounds(float xMin, float xMax, float yMin, float yMax) :
+        xMin(xMin), xMax(xMax), yMin(yMin), yMax(yMax){}
+    Bounds(const Bounds &b) {
+        xMin = b.xMin;
+        xMax = b.xMax;
+        yMin = b.yMin;
+        yMax = b.yMax;
+    }
 };
 
 /*!
```
```diff
@@ -76,53 +67,34 @@ public:
 int run(int argc, char* argv[]);
 
 private:
 
 /*!
  * \brief parseArguments Parses command line arguments.
  *
  * \param argc Application argument count.
  * \param argv Argument values.
  */
+int width, height;
 void parseArguments(int argc, char* argv[]);
 
 /*!
  * \brief printHelp Prints help, explaining usage. Can be shown by calling the program with argument: "-help".
  */
 void printHelp();
 
 /*!
  * \brief Create the ortho photo using the current settings.
  */
 void createOrthoPhoto();
 
-/*!
- * \brief Adjusts the boundary points according to the given georef system.
- */
-void adjustBoundsForGeoRef();
-
-/*!
- * \brief Adjusts the boundary points assuming the world points are relative to the local coordinate system.
- */
-void adjustBoundsForLocal();
-
 /*!
- * \brief Adjusts the boundary points so that the entire model fits inside the photo.
+ * \brief Compute the boundary points so that the entire model fits inside the photo.
  *
  * \param mesh The model which decides the boundary.
  */
-void adjustBoundsForEntireModel(const pcl::TextureMesh &mesh);
+Bounds computeBoundsForModel(const pcl::TextureMesh &mesh);
 
 /*!
  * \brief Creates a transformation which aligns the area for the orthophoto.
  */
 Eigen::Transform<float, 3, Eigen::Affine> getROITransform(float xMin, float yMin) const;
 
 /*!
  * \brief Reads a transformation matrix from a file
  * @param transformFile_
  * @return
  */
 Eigen::Transform<float, 3, Eigen::Affine> readTransform(std::string transformFile_) const;
+
+template <typename T>
+void initBands(int count);
+
+template <typename T>
+void initAlphaBand();
+
+template <typename T>
+void finalizeAlphaBand();
+
+void saveTIFF(const std::string &filename, GDALDataType dataType);
 
 /*!
  * \brief Renders a triangle into the ortho photo.
```
```diff
@@ -135,6 +107,7 @@ private:
  * \param uvs Contains the texture coordinates for the active material.
  * \param faceIndex The index of the face.
  */
+template <typename T>
 void drawTexturedTriangle(const cv::Mat &texture, const pcl::Vertices &polygon, const pcl::PointCloud<pcl::PointXYZ>::Ptr &meshCloud, const std::vector<Eigen::Vector2f> &uvs, size_t faceIndex);
 
 /*!
```
```diff
@@ -146,6 +119,7 @@ private:
  * \param t The v texture-coordinate, multiplied with the number of rows in the texture.
  * \param texture The texture from which to get the color.
  **/
+template <typename T>
 void renderPixel(int row, int col, float u, float v, const cv::Mat &texture);
 
 /*!
```
```diff
@@ -186,7 +160,7 @@ private:
  * \param mesh The model.
  * \return True if model was loaded successfully.
  */
-bool loadObjFile(std::string inputFile, pcl::TextureMesh &mesh);
+bool loadObjFile(std::string inputFile, pcl::TextureMesh &mesh, std::vector<pcl::MTLReader> &companions);
 
 /*!
  * \brief Function is copied straight from the function in the pcl::io module.
```
```diff
@@ -194,38 +168,25 @@ private:
 bool readHeader (const std::string &file_name, pcl::PCLPointCloud2 &cloud,
                  Eigen::Vector4f &origin, Eigen::Quaternionf &orientation,
                  int &file_version, int &data_type, unsigned int &data_idx,
-                 const int offset);
+                 const int offset,
+                 std::vector<pcl::MTLReader> &companions);
 
 Logger log_;                     /**< Logging object. */
 
 std::string inputFile_;          /**< Path to the textured mesh as an obj-file. */
 std::string inputGeoRefFile_;    /**< Path to the georeference system file. */
+std::string inputTransformFile_;
+std::vector<std::string> inputFiles;
 std::string outputFile_;         /**< Path to the destination file. */
 std::string outputCornerFile_;   /**< Path to the output corner file. */
 std::string logFile_;            /**< Path to the log file. */
+std::string bandsOrder;
 
 float resolution_;               /**< The number of pixels per meter in the ortho photo. */
 
+bool transformOverride_;
 bool boundaryDefined_;           /**< True if the user has defined a boundary. */
+std::vector<void *> bands;
+std::vector<GDALColorInterp> colorInterps;
+void *alphaBand;                 // Keep alpha band separate
+int currentBandIndex;
 
-WorldPoint worldPoint1_;         /**< The first boundary point for the ortho photo, in world coordinates. */
-WorldPoint worldPoint2_;         /**< The second boundary point for the ortho photo, in world coordinates. */
-WorldPoint worldPoint3_;         /**< The third boundary point for the ortho photo, in world coordinates. */
-WorldPoint worldPoint4_;         /**< The fourth boundary point for the ortho photo, in world coordinates. */
-
-Eigen::Vector2f boundaryPoint1_; /**< The first boundary point for the ortho photo, in local coordinates. */
-Eigen::Vector2f boundaryPoint2_; /**< The second boundary point for the ortho photo, in local coordinates. */
-Eigen::Vector2f boundaryPoint3_; /**< The third boundary point for the ortho photo, in local coordinates. */
-Eigen::Vector2f boundaryPoint4_; /**< The fourth boundary point for the ortho photo, in local coordinates. */
-
 cv::Mat photo_;                  /**< The ortho photo as an OpenCV matrix, CV_8UC3. */
 cv::Mat depth_;                  /**< The depth of the ortho photo as an OpenCV matrix, CV_32F. */
 
 bool multiMaterial_;             /**< True if the mesh has multiple materials. **/
 
+std::vector<pcl::MTLReader> companions_; /**< Materials (used by loadOBJFile). **/
 };
 
 /*!
```
```diff
@@ -242,14 +242,14 @@ def config():
 
     parser.add_argument('--mesh-size',
                         metavar='<positive integer>',
-                        default=100000,
+                        default=200000,
                         type=int,
                         help=('The maximum vertex count of the output mesh. '
                               'Default: %(default)s'))
 
     parser.add_argument('--mesh-octree-depth',
                         metavar='<positive integer>',
-                        default=9,
+                        default=11,
                         type=int,
                         help=('Oct-tree depth used in the mesh reconstruction, '
                               'increase to get more vertices, recommended '
```
```diff
@@ -464,7 +464,7 @@ def config():
                         metavar='<float>',
                         type=float,
                         default=5,
-                        help='DSM/DTM resolution in cm / pixel.'
+                        help='DSM/DTM resolution in cm / pixel. Note that this value is capped by a ground sampling distance (GSD) estimate. To remove the cap, check --ignore-gsd also.'
                              '\nDefault: %(default)s')
 
     parser.add_argument('--dem-decimation',
```
```diff
@@ -489,7 +489,7 @@ def config():
                         metavar='<float > 0.0>',
                         default=5,
                         type=float,
-                        help=('Orthophoto resolution in cm / pixel.\n'
+                        help=('Orthophoto resolution in cm / pixel. Note that this value is capped by a ground sampling distance (GSD) estimate. To remove the cap, check --ignore-gsd also.\n'
                               'Default: %(default)s'))
 
     parser.add_argument('--orthophoto-no-tiled',
```
```diff
@@ -497,6 +497,12 @@ def config():
                         default=False,
                         help='Set this parameter if you want a stripped geoTIFF.\n'
                              'Default: %(default)s')
 
+    parser.add_argument('--orthophoto-png',
+                        action='store_true',
+                        default=False,
+                        help='Set this parameter if you want to generate a PNG rendering of the orthophoto.\n'
+                             'Default: %(default)s')
+
     parser.add_argument('--orthophoto-compression',
                         metavar='<string>',
```
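As a side note, the new `--orthophoto-png` option follows the standard argparse boolean-switch pattern. A standalone sketch (this builds its own parser, it is not ODM's actual parser object):

```python
import argparse

# store_true makes the option a presence-only switch that defaults to False,
# matching the --orthophoto-png definition in the diff above.
parser = argparse.ArgumentParser()
parser.add_argument('--orthophoto-png',
                    action='store_true',
                    default=False,
                    help='Set this parameter if you want to generate a PNG rendering of the orthophoto.')

with_flag = parser.parse_args(['--orthophoto-png'])
without_flag = parser.parse_args([])
print(with_flag.orthophoto_png, without_flag.orthophoto_png)  # True False
```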
```diff
@@ -46,7 +46,7 @@ odm_modules_src_path = os.path.join(root_path, "modules")
 settings_path = os.path.join(root_path, 'settings.yaml')
 
 # Define supported image extensions
-supported_extensions = {'.jpg','.jpeg','.png'}
+supported_extensions = {'.jpg','.jpeg','.png', '.tif', '.tiff'}
 
 # Define the number of cores
 num_cores = multiprocessing.cpu_count()
```
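The widened `supported_extensions` set (now including TIFF for multispectral inputs) can be exercised with a small helper. The helper itself is hypothetical, not part of ODM; it just mirrors a case-insensitive extension check:

```python
import os

# Mirrors the supported_extensions set from the diff above.
supported_extensions = {'.jpg', '.jpeg', '.png', '.tif', '.tiff'}

def is_supported_image(path):
    """Return True if the file's extension (case-insensitive) is supported."""
    _, ext = os.path.splitext(path)
    return ext.lower() in supported_extensions

files = ['a.JPG', 'b.tiff', 'notes.txt', 'c.png']
valid = [f for f in files if is_supported_image(f)]
print(valid)  # ['a.JPG', 'b.tiff', 'c.png']
```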
```diff
@@ -17,7 +17,8 @@ class Cropper:
         return os.path.join(self.storage_dir, '{}.{}'.format(self.files_prefix, suffix))
 
     @staticmethod
-    def crop(gpkg_path, geotiff_path, gdal_options, keep_original=True):
+    def crop(gpkg_path, geotiff_path, gdal_options, keep_original=True, warp_options=[]):
         if not os.path.exists(gpkg_path) or not os.path.exists(geotiff_path):
             log.ODM_WARNING("Either {} or {} does not exist, will skip cropping.".format(gpkg_path, geotiff_path))
             return geotiff_path
```
```diff
@@ -44,12 +44,14 @@ class Cropper:
             'geotiffInput': original_geotiff,
             'geotiffOutput': geotiff_path,
             'options': ' '.join(map(lambda k: '-co {}={}'.format(k, gdal_options[k]), gdal_options)),
+            'warpOptions': ' '.join(warp_options),
             'max_memory': get_max_memory()
         }
 
         run('gdalwarp -cutline {gpkg_path} '
             '-crop_to_cutline '
             '{options} '
+            '{warpOptions} '
             '{geotiffInput} '
             '{geotiffOutput} '
             '--config GDAL_CACHEMAX {max_memory}%'.format(**kwargs))
```
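To see what the new `warpOptions` slot contributes, here is a sketch that only assembles the `gdalwarp` command string the way `Cropper.crop` does. The paths and the `max_memory` value are hypothetical stand-ins, and nothing is executed (no GDAL required):

```python
# Hypothetical inputs; in ODM these come from the pipeline and get_max_memory().
gdal_options = {'COMPRESS': 'LZW'}
warp_options = ['-dstalpha']  # the new parameter: extra gdalwarp switches

kwargs = {
    'gpkg_path': 'crop.gpkg',
    'geotiffInput': 'ortho.original.tif',
    'geotiffOutput': 'ortho.tif',
    'options': ' '.join('-co {}={}'.format(k, gdal_options[k]) for k in gdal_options),
    'warpOptions': ' '.join(warp_options),
    'max_memory': 50,
}

cmd = ('gdalwarp -cutline {gpkg_path} '
       '-crop_to_cutline '
       '{options} '
       '{warpOptions} '
       '{geotiffInput} '
       '{geotiffOutput} '
       '--config GDAL_CACHEMAX {max_memory}%'.format(**kwargs))
print(cmd)
```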
```diff
@@ -1,5 +1,16 @@
 import os
 from opendm import log
 from opendm import system
 from opendm.cropper import Cropper
 from opendm.concurrency import get_max_memory
+import math
+import numpy as np
+import rasterio
+import fiona
+from scipy import ndimage
+from rasterio.transform import Affine, rowcol
+from rasterio.mask import mask
+from opendm import io
 
 def get_orthophoto_vars(args):
     return {
```
```diff
@@ -20,4 +31,251 @@ def build_overviews(orthophoto_file):
     system.run('gdaladdo -ro -r average '
                '--config BIGTIFF_OVERVIEW IF_SAFER '
                '--config COMPRESS_OVERVIEW JPEG '
-              '{orthophoto} 2 4 8 16'.format(**kwargs))
+               '{orthophoto} 2 4 8 16'.format(**kwargs))
+
+def generate_png(orthophoto_file):
+    log.ODM_INFO("Generating PNG")
+    base, ext = os.path.splitext(orthophoto_file)
+    orthophoto_png = base + '.png'
+
+    system.run('gdal_translate -of png "%s" "%s" '
+               '--config GDAL_CACHEMAX %s%% ' % (orthophoto_file, orthophoto_png, get_max_memory()))
+
+
+def post_orthophoto_steps(args, bounds_file_path, orthophoto_file):
+    if args.crop > 0:
+        Cropper.crop(bounds_file_path, orthophoto_file, get_orthophoto_vars(args), warp_options=['-dstalpha'])
+
+    if args.build_overviews:
+        build_overviews(orthophoto_file)
+
+    if args.orthophoto_png:
+        generate_png(orthophoto_file)
```
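The overview levels passed to `gdaladdo` above (2 4 8 16) are decimation factors: each divides both raster dimensions. A quick sketch of the overview sizes they would produce for a hypothetical 8000x6000 orthophoto (the size is made up for illustration):

```python
# Each gdaladdo overview factor divides both raster dimensions (integer floor).
width, height = 8000, 6000
overviews = [(width // f, height // f) for f in (2, 4, 8, 16)]
print(overviews)  # [(4000, 3000), (2000, 1500), (1000, 750), (500, 375)]
```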
```diff
+def compute_mask_raster(input_raster, vector_mask, output_raster, blend_distance=20, only_max_coords_feature=False):
+    if not os.path.exists(input_raster):
+        log.ODM_WARNING("Cannot mask raster, %s does not exist" % input_raster)
+        return
+
+    if not os.path.exists(vector_mask):
+        log.ODM_WARNING("Cannot mask raster, %s does not exist" % vector_mask)
+        return
+
+    log.ODM_INFO("Computing mask raster: %s" % output_raster)
+
+    with rasterio.open(input_raster, 'r') as rast:
+        with fiona.open(vector_mask) as src:
+            burn_features = src
+
+            if only_max_coords_feature:
+                max_coords_count = 0
+                max_coords_feature = None
+                for feature in src:
+                    if feature is not None:
+                        # No complex shapes
+                        if len(feature['geometry']['coordinates'][0]) > max_coords_count:
+                            max_coords_count = len(feature['geometry']['coordinates'][0])
+                            max_coords_feature = feature
+                if max_coords_feature is not None:
+                    burn_features = [max_coords_feature]
+
+            shapes = [feature["geometry"] for feature in burn_features]
+            out_image, out_transform = mask(rast, shapes, nodata=0)
+
+            if blend_distance > 0:
+                if out_image.shape[0] >= 4:
+                    # alpha_band = rast.dataset_mask()
+                    alpha_band = out_image[-1]
+                    dist_t = ndimage.distance_transform_edt(alpha_band)
+                    dist_t[dist_t <= blend_distance] /= blend_distance
+                    dist_t[dist_t > blend_distance] = 1
+                    np.multiply(alpha_band, dist_t, out=alpha_band, casting="unsafe")
+                else:
+                    log.ODM_WARNING("%s does not have an alpha band, cannot blend cutline!" % input_raster)
+
+            with rasterio.open(output_raster, 'w', **rast.profile) as dst:
+                dst.colorinterp = rast.colorinterp
+                dst.write(out_image)
+
+            return output_raster
```
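The `only_max_coords_feature` branch above keeps only the polygon whose outer ring has the most vertices. The same selection can be sketched with plain dicts (made-up geometries standing in for fiona features):

```python
# Two fake GeoJSON-style features: a triangle ring (4 coords, closed) and a
# square ring (5 coords, closed). The selection keeps the one with more coords.
features = [
    {'geometry': {'coordinates': [[(0, 0), (1, 0), (1, 1), (0, 0)]]}},
    {'geometry': {'coordinates': [[(0, 0), (2, 0), (2, 2), (0, 2), (0, 0)]]}},
]

max_coords_count = 0
max_coords_feature = None
for feature in features:
    if feature is not None:
        n = len(feature['geometry']['coordinates'][0])
        if n > max_coords_count:
            max_coords_count = n
            max_coords_feature = feature

print(max_coords_count)  # 5
```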
```diff
+def feather_raster(input_raster, output_raster, blend_distance=20):
+    if not os.path.exists(input_raster):
+        log.ODM_WARNING("Cannot feather raster, %s does not exist" % input_raster)
+        return
+
+    log.ODM_INFO("Computing feather raster: %s" % output_raster)
+
+    with rasterio.open(input_raster, 'r') as rast:
+        out_image = rast.read()
+        if blend_distance > 0:
+            if out_image.shape[0] >= 4:
+                alpha_band = out_image[-1]
+                dist_t = ndimage.distance_transform_edt(alpha_band)
+                dist_t[dist_t <= blend_distance] /= blend_distance
+                dist_t[dist_t > blend_distance] = 1
+                np.multiply(alpha_band, dist_t, out=alpha_band, casting="unsafe")
+            else:
+                log.ODM_WARNING("%s does not have an alpha band, cannot feather raster!" % input_raster)
+
+        with rasterio.open(output_raster, 'w', **rast.profile) as dst:
+            dst.colorinterp = rast.colorinterp
+            dst.write(out_image)
+
+        return output_raster
```
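The feathering math above can be checked in isolation on a toy alpha strip: the Euclidean distance transform gives each opaque pixel its distance to the nearest transparent pixel, and the scaling ramps alpha from 0 at the edge to full opacity `blend_distance` pixels in. The 1x6 strip here is an invented example, not ODM data:

```python
import numpy as np
from scipy import ndimage

blend_distance = 2
alpha = np.ones((1, 6), dtype=np.float64)  # fully opaque strip...
alpha[0, 0] = 0                            # ...with one transparent edge pixel

# Same three steps as feather_raster applies to its alpha band.
dist_t = ndimage.distance_transform_edt(alpha)
dist_t[dist_t <= blend_distance] /= blend_distance
dist_t[dist_t > blend_distance] = 1
feathered = alpha * dist_t
# feathered row is now [0, 0.5, 1, 1, 1, 1]: a linear ramp over 2 pixels.
```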
```diff
+def merge(input_ortho_and_ortho_cuts, output_orthophoto, orthophoto_vars={}):
+    """
+    Based on https://github.com/mapbox/rio-merge-rgba/
+    Merge orthophotos around cutlines using a blend buffer.
+    """
+    inputs = []
+    bounds = None
+    precision = 7
+
+    for o, c in input_ortho_and_ortho_cuts:
+        if not io.file_exists(o):
+            log.ODM_WARNING("%s does not exist. Will skip from merged orthophoto." % o)
+            continue
+        if not io.file_exists(c):
+            log.ODM_WARNING("%s does not exist. Will skip from merged orthophoto." % c)
+            continue
+        inputs.append((o, c))
+
+    if len(inputs) == 0:
+        log.ODM_WARNING("No input orthophotos, skipping merge.")
+        return
+
+    with rasterio.open(inputs[0][0]) as first:
+        res = first.res
+        dtype = first.dtypes[0]
+        profile = first.profile
+        num_bands = first.meta['count'] - 1  # minus alpha
+        colorinterp = first.colorinterp
+
+    log.ODM_INFO("%s valid orthophoto rasters to merge" % len(inputs))
+    sources = [(rasterio.open(o), rasterio.open(c)) for o, c in inputs]
+
+    # scan input files.
+    # while we're at it, validate assumptions about inputs
+    xs = []
+    ys = []
+    for src, _ in sources:
+        left, bottom, right, top = src.bounds
+        xs.extend([left, right])
+        ys.extend([bottom, top])
+        if src.profile["count"] < 4:
+            raise ValueError("Inputs must be at least 4-band rasters")
+    dst_w, dst_s, dst_e, dst_n = min(xs), min(ys), max(xs), max(ys)
+    log.ODM_INFO("Output bounds: %r %r %r %r" % (dst_w, dst_s, dst_e, dst_n))
+
+    output_transform = Affine.translation(dst_w, dst_n)
+    output_transform *= Affine.scale(res[0], -res[1])
+
+    # Compute output array shape. We guarantee it will cover the output
+    # bounds completely.
+    output_width = int(math.ceil((dst_e - dst_w) / res[0]))
+    output_height = int(math.ceil((dst_n - dst_s) / res[1]))
+
+    # Adjust bounds to fit.
+    dst_e, dst_s = output_transform * (output_width, output_height)
+    log.ODM_INFO("Output width: %d, height: %d" % (output_width, output_height))
+    log.ODM_INFO("Adjusted bounds: %r %r %r %r" % (dst_w, dst_s, dst_e, dst_n))
+
+    profile["transform"] = output_transform
+    profile["height"] = output_height
+    profile["width"] = output_width
+    profile["tiled"] = orthophoto_vars.get('TILED', 'YES') == 'YES'
+    profile["blockxsize"] = orthophoto_vars.get('BLOCKXSIZE', 512)
+    profile["blockysize"] = orthophoto_vars.get('BLOCKYSIZE', 512)
+    profile["compress"] = orthophoto_vars.get('COMPRESS', 'LZW')
+    profile["predictor"] = orthophoto_vars.get('PREDICTOR', '2')
+    profile["bigtiff"] = orthophoto_vars.get('BIGTIFF', 'IF_SAFER')
+    profile.update()
```
||||
# create destination file
|
||||
with rasterio.open(output_orthophoto, "w", **profile) as dstrast:
|
||||
dstrast.colorinterp = colorinterp
|
||||
for idx, dst_window in dstrast.block_windows():
|
||||
left, bottom, right, top = dstrast.window_bounds(dst_window)
|
||||
|
||||
blocksize = dst_window.width
|
||||
dst_rows, dst_cols = (dst_window.height, dst_window.width)
|
||||
|
||||
# initialize array destined for the block
|
||||
dst_count = first.count
|
||||
dst_shape = (dst_count, dst_rows, dst_cols)
|
||||
|
||||
dstarr = np.zeros(dst_shape, dtype=dtype)
|
||||
|
||||
# First pass, write all rasters naively without blending
|
||||
for src, _ in sources:
|
||||
src_window = tuple(zip(rowcol(
|
||||
src.transform, left, top, op=round, precision=precision
|
||||
), rowcol(
|
||||
src.transform, right, bottom, op=round, precision=precision
|
||||
)))
|
||||
|
||||
temp = np.zeros(dst_shape, dtype=dtype)
|
||||
temp = src.read(
|
||||
out=temp, window=src_window, boundless=True, masked=False
|
||||
)
|
||||
|
||||
# pixels without data yet are available to write
|
||||
write_region = np.logical_and(
|
||||
(dstarr[-1] == 0), (temp[-1] != 0) # 0 is nodata
|
||||
)
|
||||
np.copyto(dstarr, temp, where=write_region)
|
||||
|
||||
# check if dest has any nodata pixels available
|
||||
if np.count_nonzero(dstarr[-1]) == blocksize:
|
||||
break
|
||||
|
||||
# Second pass, write all feathered rasters
|
||||
# blending the edges
|
||||
for src, _ in sources:
|
||||
src_window = tuple(zip(rowcol(
|
||||
src.transform, left, top, op=round, precision=precision
|
||||
), rowcol(
|
||||
src.transform, right, bottom, op=round, precision=precision
|
||||
)))
|
||||
|
||||
temp = np.zeros(dst_shape, dtype=dtype)
|
||||
temp = src.read(
|
||||
out=temp, window=src_window, boundless=True, masked=False
|
||||
)
|
||||
|
||||
where = temp[-1] != 0
|
||||
for b in range(0, num_bands):
|
||||
blended = temp[-1] / 255.0 * temp[b] + (1 - temp[-1] / 255.0) * dstarr[b]
|
||||
np.copyto(dstarr[b], blended, casting='unsafe', where=where)
|
||||
dstarr[-1][where] = 255.0
|
||||
|
||||
# check if dest has any nodata pixels available
|
||||
if np.count_nonzero(dstarr[-1]) == blocksize:
|
||||
break
|
||||
|
||||
# Third pass, write cut rasters
|
||||
# blending the cutlines
|
||||
for _, cut in sources:
|
||||
src_window = tuple(zip(rowcol(
|
||||
cut.transform, left, top, op=round, precision=precision
|
||||
), rowcol(
|
||||
cut.transform, right, bottom, op=round, precision=precision
|
||||
)))
|
||||
|
||||
temp = np.zeros(dst_shape, dtype=dtype)
|
||||
temp = cut.read(
|
||||
out=temp, window=src_window, boundless=True, masked=False
|
||||
)
|
||||
|
||||
# For each band, average alpha values between
|
||||
# destination raster and cut raster
|
||||
for b in range(0, num_bands):
|
||||
blended = temp[-1] / 255.0 * temp[b] + (1 - temp[-1] / 255.0) * dstarr[b]
|
||||
np.copyto(dstarr[b], blended, casting='unsafe', where=temp[-1]!=0)
|
||||
|
||||
dstrast.write(dstarr, window=dst_window)
|
||||
|
||||
return output_orthophoto
|
||||
|
|
|
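The second and third passes above both apply the same normalized alpha blend: the incoming pixel is weighted by its 0-255 alpha and the destination keeps the remainder. A standalone sketch of that formula (illustrative only, not part of the patch; `feather_blend` is an invented helper name):

```python
import numpy as np

def feather_blend(dst_band, src_band, src_alpha):
    """Blend a source band over a destination band using the source's
    0-255 alpha channel, as in the feathered merge passes above."""
    w = src_alpha / 255.0  # weight of the incoming pixel
    return w * src_band + (1.0 - w) * dst_band

dst = np.array([100.0, 100.0, 100.0])
src = np.array([200.0, 200.0, 200.0])
alpha = np.array([0.0, 127.5, 255.0])  # transparent, half, opaque

print(feather_blend(dst, src, alpha))  # [100. 150. 200.]
```

With alpha 0 the destination pixel is untouched, with 255 the source wins outright, and intermediate values feather the seam between overlapping orthophotos.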
@@ -17,7 +17,8 @@ class OSFMContext:
         self.opensfm_project_path = opensfm_project_path

     def run(self, command):
-        system.run('%s/bin/opensfm %s "%s"' %
+        # Use Python 2.x by default, otherwise OpenSfM uses Python 3.x
+        system.run('/usr/bin/env python2 %s/bin/opensfm %s "%s"' %
             (context.opensfm_path, command, self.opensfm_project_path))

     def is_reconstruction_done(self):
@@ -41,17 +42,17 @@ class OSFMContext:
             log.ODM_WARNING('Found a valid OpenSfM reconstruction file in: %s' % reconstruction_file)

         # Check that a reconstruction file has been created
-        if not io.file_exists(reconstruction_file):
+        if not self.reconstructed():
             log.ODM_ERROR("The program could not process this dataset using the current settings. "
                           "Check that the images have enough overlap, "
                           "that there are enough recognizable features "
                           "and that the images are in focus. "
                           "You could also try to increase the --min-num-features parameter."
                           "The program will now exit.")
-            raise Exception("Reconstruction could not be generated")
+            exit(1)

-    def setup(self, args, images_path, photos, gcp_path=None, append_config = [], rerun=False):
+    def setup(self, args, images_path, photos, reconstruction, append_config = [], rerun=False):
         """
         Setup a OpenSfM project
         """
@@ -91,22 +92,33 @@ class OSFMContext:
         except Exception as e:
             log.ODM_WARNING("Cannot set camera_models_overrides.json: %s" % str(e))

+        use_bow = False
+
+        matcher_neighbors = args.matcher_neighbors
+        if matcher_neighbors != 0 and reconstruction.multi_camera is not None:
+            matcher_neighbors *= len(reconstruction.multi_camera)
+            log.ODM_INFO("Increasing matcher neighbors to %s to accomodate multi-camera setup" % matcher_neighbors)
+            log.ODM_INFO("Multi-camera setup, using BOW matching")
+            use_bow = True
+
         # create config file for OpenSfM
         config = [
             "use_exif_size: no",
             "feature_process_size: %s" % args.resize_to,
             "feature_min_frames: %s" % args.min_num_features,
             "processes: %s" % args.max_concurrency,
-            "matching_gps_neighbors: %s" % args.matcher_neighbors,
+            "matching_gps_neighbors: %s" % matcher_neighbors,
             "matching_gps_distance: %s" % args.matcher_distance,
             "depthmap_method: %s" % args.opensfm_depthmap_method,
             "depthmap_resolution: %s" % args.depthmap_resolution,
             "depthmap_min_patch_sd: %s" % args.opensfm_depthmap_min_patch_sd,
             "depthmap_min_consistent_views: %s" % args.opensfm_depthmap_min_consistent_views,
             "optimize_camera_parameters: %s" % ('no' if args.use_fixed_camera_params or args.cameras else 'yes'),
-            "undistorted_image_format: png", # mvs-texturing exhibits artifacts with JPG
+            "undistorted_image_format: tif",
+            "bundle_outlier_filtering_type: AUTO",
+            "align_orientation_prior: vertical",
+            "triangulation_type: ROBUST",
+            "bundle_common_position_constraints: %s" % ('no' if reconstruction.multi_camera is None else 'yes'),
         ]

         if args.camera_lens != 'auto':
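To see why the hunk above multiplies `matching_gps_neighbors`: each physical capture location of a multi-camera rig contributes one image per band, so the GPS neighbor count must grow with the band count to keep the same spatial coverage. A minimal sketch of that scaling (the value 8 and the band list are invented for illustration):

```python
def scale_matcher_neighbors(matcher_neighbors, multi_camera):
    """Multiply the GPS matching neighbor count by the number of bands
    so each physical capture location still yields enough candidates."""
    if matcher_neighbors != 0 and multi_camera is not None:
        matcher_neighbors *= len(multi_camera)
    return matcher_neighbors

# A hypothetical three-band rig with 8 neighbors configured:
bands = [{'name': 'rgb'}, {'name': 'nir'}, {'name': 'rededge'}]
print(scale_matcher_neighbors(8, bands))  # 24
print(scale_matcher_neighbors(0, bands))  # 0 (disabled stays disabled)
```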
@@ -114,12 +126,16 @@ class OSFMContext:

         if not has_gps:
             log.ODM_INFO("No GPS information, using BOW matching")
+            use_bow = True
+
+        if use_bow:
             config.append("matcher_type: WORDS")

         if has_alt:
             log.ODM_INFO("Altitude data detected, enabling it for GPS alignment")
             config.append("use_altitude_tag: yes")
+
+        gcp_path = reconstruction.gcp.gcp_path
         if has_alt or gcp_path:
             config.append("align_method: auto")
         else:
@@ -152,6 +168,13 @@ class OSFMContext:
     def get_config_file_path(self):
         return io.join_paths(self.opensfm_project_path, 'config.yaml')

+    def reconstructed(self):
+        if not io.file_exists(self.path("reconstruction.json")):
+            return False
+
+        with open(self.path("reconstruction.json"), 'r') as f:
+            return f.readline().strip() != "[]"
+
     def extract_metadata(self, rerun=False):
         metadata_dir = self.path("exif")
         if not io.dir_exists(metadata_dir) or rerun:
@@ -3,6 +3,9 @@ from opendm import system
 from opendm import log
 from opendm import context
 from opendm.system import run
+from opendm import entwine
+from opendm import io
+from pipes import quote

 def filter(input_point_cloud, output_point_cloud, standard_deviation=2.5, meank=16, confidence=None, sample_radius=0, verbose=False):
     """
@@ -100,6 +103,50 @@ def get_extent(input_point_cloud):
         raise Exception("Cannot compute bounds for %s (invalid keys) %s" % (input_point_cloud, str(bounds)))

     os.remove(json_file)
-    return bounds
+    return bounds
+
+
+def merge(input_point_cloud_files, output_file, rerun=False):
+    num_files = len(input_point_cloud_files)
+    if num_files == 0:
+        log.ODM_WARNING("No input point cloud files to process")
+        return
+
+    if rerun and io.file_exists(output_file):
+        log.ODM_WARNING("Removing previous point cloud: %s" % output_file)
+        os.remove(output_file)
+
+    kwargs = {
+        'all_inputs': " ".join(map(quote, input_point_cloud_files)),
+        'output': output_file
+    }
+
+    system.run('lasmerge -i {all_inputs} -o "{output}"'.format(**kwargs))
+
+
+def post_point_cloud_steps(args, tree):
+    # XYZ point cloud output
+    if args.pc_csv:
+        log.ODM_INFO("Creating geo-referenced CSV file (XYZ format)")
+
+        system.run("pdal translate -i \"{}\" "
+            "-o \"{}\" "
+            "--writers.text.format=csv "
+            "--writers.text.order=\"X,Y,Z\" "
+            "--writers.text.keep_unspecified=false ".format(
+                tree.odm_georeferencing_model_laz,
+                tree.odm_georeferencing_xyz_file))
+
+    # LAS point cloud output
+    if args.pc_las:
+        log.ODM_INFO("Creating geo-referenced LAS file")
+
+        system.run("pdal translate -i \"{}\" "
+            "-o \"{}\" ".format(
+                tree.odm_georeferencing_model_laz,
+                tree.odm_georeferencing_model_las))
+
+    # EPT point cloud output
+    if args.pc_ept:
+        log.ODM_INFO("Creating geo-referenced Entwine Point Tile output")
+        entwine.build([tree.odm_georeferencing_model_laz], tree.entwine_pointcloud, max_concurrency=args.max_concurrency, rerun=False)
@@ -501,8 +501,11 @@ class ToolchainTask(Task):
                         "opensfm/exif/empty"],
                 outputs=["odm_orthophoto/odm_orthophoto.tif",
+                        "odm_orthophoto/cutline.gpkg",
+                        "odm_orthophoto/odm_orthophoto_cut.tif",
+                        "odm_orthophoto/odm_orthophoto_feathered.tif",
                         "odm_dem",
-                        "odm_georeferencing"])
+                        "odm_georeferencing",
+                        "odm_georeferencing_25d"])
         else:
             log.ODM_INFO("Already processed toolchain for %s" % submodel_name)
             handle_result()
115
opendm/types.py
@@ -3,11 +3,12 @@ import exifread
 import re
 import os
 from fractions import Fraction
 from opensfm.exif import sensor_string
 from opendm import get_image_size
 from opendm import location
 from opendm.gcp import GCPFile
 from pyproj import CRS
+import xmltodict as x2d
+from six import string_types

 import log
 import io
@@ -28,10 +29,12 @@ class ODM_Photo:
         # other attributes
         self.camera_make = ''
         self.camera_model = ''
         self.make_model = ''
         self.latitude = None
         self.longitude = None
         self.altitude = None
+        self.band_name = 'RGB'
+        self.band_index = 0

         # parse values from metadata
         self.parse_exif_values(path_file)
@@ -40,8 +43,9 @@ class ODM_Photo:

     def __str__(self):
-        return '{} | camera: {} | dimensions: {} x {} | lat: {} | lon: {} | alt: {}'.format(
-            self.filename, self.make_model, self.width, self.height, self.latitude, self.longitude, self.altitude)
+        return '{} | camera: {} {} | dimensions: {} x {} | lat: {} | lon: {} | alt: {} | band: {} ({})'.format(
+            self.filename, self.camera_make, self.camera_model, self.width, self.height,
+            self.latitude, self.longitude, self.altitude, self.band_name, self.band_index)

     def parse_exif_values(self, _path_file):
         # Disable exifread log
@@ -66,10 +70,61 @@ class ODM_Photo:
         except IndexError as e:
             log.ODM_WARNING("Cannot read EXIF tags for %s: %s" % (_path_file, e.message))

         if self.camera_make and self.camera_model:
             self.make_model = sensor_string(self.camera_make, self.camera_model)
+
+        # Extract XMP tags
+        f.seek(0)
+        xmp = self.get_xmp(f)
+
+        # Find band name and camera index (if available)
+        camera_index_tags = [
+            'DLS:SensorId', # Micasense RedEdge
+            '@Camera:RigCameraIndex', # Parrot Sequoia
+            'Camera:RigCameraIndex', # MicaSense Altum
+        ]
+
+        for tags in xmp:
+            if 'Camera:BandName' in tags:
+                cbt = tags['Camera:BandName']
+                band_name = None
+
+                if isinstance(cbt, string_types):
+                    band_name = str(tags['Camera:BandName'])
+                elif isinstance(cbt, dict):
+                    items = cbt.get('rdf:Seq', {}).get('rdf:li', {})
+                    if items:
+                        band_name = " ".join(items)
+
+                if band_name is not None:
+                    self.band_name = band_name.replace(" ", "")
+                else:
+                    log.ODM_WARNING("Camera:BandName tag found in XMP, but we couldn't parse it. Multispectral bands might be improperly classified.")
+
+            for cit in camera_index_tags:
+                if cit in tags:
+                    self.band_index = int(tags[cit])

         self.width, self.height = get_image_size.get_image_size(_path_file)

+        # Sanitize band name since we use it in folder paths
+        self.band_name = re.sub('[^A-Za-z0-9]+', '', self.band_name)
+
+    # From https://github.com/mapillary/OpenSfM/blob/master/opensfm/exif.py
+    def get_xmp(self, file):
+        img_str = str(file.read())
+        xmp_start = img_str.find('<x:xmpmeta')
+        xmp_end = img_str.find('</x:xmpmeta')
+
+        if xmp_start < xmp_end:
+            xmp_str = img_str[xmp_start:xmp_end + 12]
+            xdict = x2d.parse(xmp_str)
+            xdict = xdict.get('x:xmpmeta', {})
+            xdict = xdict.get('rdf:RDF', {})
+            xdict = xdict.get('rdf:Description', {})
+            if isinstance(xdict, list):
+                return xdict
+            else:
+                return [xdict]
+        else:
+            return []
+
     def dms_to_decimal(self, dms, sign):
         """Converts dms coords to decimal degrees"""
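The `Camera:BandName` handling above accepts either a plain string or the `rdf:Seq`/`rdf:li` dict shape that xmltodict produces, then strips non-alphanumeric characters because the band name ends up in folder paths. A stdlib-only sketch of that normalization (the sample tag dicts and the `extract_band_name` helper are invented for illustration):

```python
import re

def extract_band_name(tags, default='RGB'):
    """Normalize Camera:BandName whether it arrives as a plain string
    or as an rdf:Seq dict, then sanitize it for use in folder paths."""
    band_name = None
    cbt = tags.get('Camera:BandName')
    if isinstance(cbt, str):
        band_name = cbt
    elif isinstance(cbt, dict):
        items = cbt.get('rdf:Seq', {}).get('rdf:li', {})
        if items:
            band_name = " ".join(items)
    if band_name is None:
        band_name = default  # fall back when the tag is absent/unparseable
    return re.sub('[^A-Za-z0-9]+', '', band_name.replace(" ", ""))

print(extract_band_name({'Camera:BandName': 'NIR'}))                             # NIR
print(extract_band_name({'Camera:BandName': {'rdf:Seq': {'rdf:li': ['Green']}}}))  # Green
print(extract_band_name({}))                                                     # RGB
```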
@@ -93,6 +148,44 @@ class ODM_Reconstruction(object):
         self.photos = photos
         self.georef = None
         self.gcp = None
+        self.multi_camera = self.detect_multi_camera()
+
+    def detect_multi_camera(self):
+        """
+        Looks at the reconstruction photos and determines if this
+        is a single or multi-camera setup.
+        """
+        band_photos = {}
+        band_indexes = {}
+
+        for p in self.photos:
+            if not p.band_name in band_photos:
+                band_photos[p.band_name] = []
+            if not p.band_name in band_indexes:
+                band_indexes[p.band_name] = p.band_index
+
+            band_photos[p.band_name].append(p)
+
+        bands_count = len(band_photos)
+        if bands_count >= 2 and bands_count <= 8:
+            # Validate that all bands have the same number of images,
+            # otherwise this is not a multi-camera setup
+            img_per_band = len(band_photos[p.band_name])
+            for band in band_photos:
+                if len(band_photos[band]) != img_per_band:
+                    log.ODM_ERROR("Multi-camera setup detected, but band \"%s\" (identified from \"%s\") has only %s images (instead of %s), perhaps images are missing or are corrupted. Please include all necessary files to process all bands and try again." % (band, band_photos[band][0].filename, len(band_photos[band]), img_per_band))
+                    raise RuntimeError("Invalid multi-camera images")
+
+            mc = []
+            for band_name in band_indexes:
+                mc.append({'name': band_name, 'photos': band_photos[band_name]})
+
+            # Sort by band index
+            mc.sort(key=lambda x: band_indexes[x['name']])
+
+            return mc
+
+        return None
+
     def is_georeferenced(self):
         return self.georef is not None
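The detection above boils down to: group photos by band name, accept 2-8 bands only when every band has the same image count, and order the bands by their rig index. A condensed sketch of the same logic, with `(band_name, band_index)` tuples standing in for `ODM_Photo` objects:

```python
def detect_multi_camera(photos):
    """photos: list of (band_name, band_index) tuples. Returns band
    names sorted by rig index for a multi-camera set, else None."""
    band_photos = {}
    band_indexes = {}
    for name, index in photos:
        band_photos.setdefault(name, []).append(index)
        band_indexes.setdefault(name, index)

    if 2 <= len(band_photos) <= 8:
        # every band must contribute the same number of images
        counts = {len(v) for v in band_photos.values()}
        if len(counts) != 1:
            raise RuntimeError("Invalid multi-camera images")
        return sorted(band_photos, key=lambda n: band_indexes[n])
    return None

photos = [('RGB', 0), ('NIR', 1), ('RGB', 0), ('NIR', 1)]
print(detect_multi_camera(photos))                     # ['RGB', 'NIR']
print(detect_multi_camera([('RGB', 0), ('RGB', 0)]))   # None (single band)
```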
@@ -237,7 +330,7 @@ class ODM_Tree(object):
         self.odm_texturing = io.join_paths(self.root_path, 'odm_texturing')
         self.odm_25dtexturing = io.join_paths(self.root_path, 'odm_texturing_25d')
         self.odm_georeferencing = io.join_paths(self.root_path, 'odm_georeferencing')
-        self.odm_25dgeoreferencing = io.join_paths(self.root_path, 'odm_25dgeoreferencing')
+        self.odm_25dgeoreferencing = io.join_paths(self.root_path, 'odm_georeferencing_25d')
         self.odm_filterpoints = io.join_paths(self.root_path, 'odm_filterpoints')
         self.odm_orthophoto = io.join_paths(self.root_path, 'odm_orthophoto')
@@ -253,8 +346,8 @@ class ODM_Tree(object):
         self.opensfm_bundle_list = io.join_paths(self.opensfm, 'list_r000.out')
         self.opensfm_image_list = io.join_paths(self.opensfm, 'image_list.txt')
         self.opensfm_reconstruction = io.join_paths(self.opensfm, 'reconstruction.json')
-        self.opensfm_reconstruction_nvm = io.join_paths(self.opensfm, 'reconstruction.nvm')
-        self.opensfm_model = io.join_paths(self.opensfm, 'depthmaps/merged.ply')
+        self.opensfm_reconstruction_nvm = io.join_paths(self.opensfm, 'undistorted/reconstruction.nvm')
+        self.opensfm_model = io.join_paths(self.opensfm, 'undistorted/depthmaps/merged.ply')
         self.opensfm_transformation = io.join_paths(self.opensfm, 'geocoords_transformation.txt')

         # mve
@@ -279,8 +372,6 @@ class ODM_Tree(object):
         self.odm_texuring_log = 'odm_texturing_log.txt'

         # odm_georeferencing
-        self.odm_georeferencing_latlon = io.join_paths(
-            self.odm_georeferencing, 'latlon.txt')
         self.odm_georeferencing_coords = io.join_paths(
             self.odm_georeferencing, 'coords.txt')
         self.odm_georeferencing_gcp = gcp_file or io.find('gcp_list.txt', self.root_path)
@@ -304,7 +395,7 @@ class ODM_Tree(object):
             self.odm_georeferencing, 'odm_georeferencing_model_dem.tif')

         # odm_orthophoto
-        self.odm_orthophoto_file = io.join_paths(self.odm_orthophoto, 'odm_orthophoto.png')
+        self.odm_orthophoto_render = io.join_paths(self.odm_orthophoto, 'odm_orthophoto_render.tif')
         self.odm_orthophoto_tif = io.join_paths(self.odm_orthophoto, 'odm_orthophoto.tif')
         self.odm_orthophoto_corners = io.join_paths(self.odm_orthophoto, 'odm_orthophoto_corners.txt')
         self.odm_orthophoto_log = io.join_paths(self.odm_orthophoto, 'odm_orthophoto_log.txt')
@@ -15,3 +15,4 @@ numpy==1.15.4
 pyproj==2.2.2
 psutil==5.6.3
 joblib==0.13.2
+Fiona==1.8.9.post2
4
run.py
@@ -41,13 +41,13 @@ if __name__ == '__main__':
         os.system("rm -rf " +
             " ".join([
                 quote(os.path.join(args.project_path, "odm_georeferencing")),
+                quote(os.path.join(args.project_path, "odm_georeferencing_25d")),
                 quote(os.path.join(args.project_path, "odm_meshing")),
                 quote(os.path.join(args.project_path, "odm_orthophoto")),
                 quote(os.path.join(args.project_path, "odm_texturing")),
                 quote(os.path.join(args.project_path, "opensfm")),
                 quote(os.path.join(args.project_path, "odm_filterpoints")),
-                quote(os.path.join(args.project_path, "odm_25dmeshing")),
-                quote(os.path.join(args.project_path, "odm_25dtexturing")),
+                quote(os.path.join(args.project_path, "odm_texturing_25d")),
                 quote(os.path.join(args.project_path, "mve")),
                 quote(os.path.join(args.project_path, "entwine_pointcloud")),
                 quote(os.path.join(args.project_path, "submodels")),
@@ -29,7 +29,13 @@ class ODMMveStage(types.ODM_Stage):

         # run mve makescene
         if not io.dir_exists(tree.mve_views):
-            system.run('%s "%s" "%s"' % (context.makescene_path, tree.opensfm_reconstruction_nvm, tree.mve), env_vars={'OMP_NUM_THREADS': args.max_concurrency})
+            nvm_file = tree.opensfm_reconstruction_nvm
+            if reconstruction.multi_camera:
+                # Reconstruct only the primary band
+                primary = reconstruction.multi_camera[0]
+                nvm_file = os.path.join(tree.opensfm, "undistorted", "reconstruction_%s.nvm" % primary['name'].lower())
+
+            system.run('%s "%s" "%s"' % (context.makescene_path, nvm_file, tree.mve), env_vars={'OMP_NUM_THREADS': args.max_concurrency})

         self.update_progress(10)
@@ -43,54 +49,13 @@ class ODMMveStage(types.ODM_Stage):

         dmrecon_config = [
             "-s%s" % mve_output_scale,
-            "--progress=silent",
+            "--progress=fancy",
             "--local-neighbors=2",
             # "--filter-width=3",
         ]

         # Run MVE's dmrecon
-        log.ODM_INFO(' ')
-        log.ODM_INFO(' ,*/** ')
-        log.ODM_INFO(' ,*@%*/@%* ')
-        log.ODM_INFO(' ,/@%******@&*. ')
-        log.ODM_INFO(' ,*@&*********/@&* ')
-        log.ODM_INFO(' ,*@&**************@&* ')
-        log.ODM_INFO(' ,/@&******************@&*. ')
-        log.ODM_INFO(' ,*@&*********************/@&* ')
-        log.ODM_INFO(' ,*@&**************************@&*. ')
-        log.ODM_INFO(' ,/@&******************************&&*, ')
-        log.ODM_INFO(' ,*&&**********************************@&*. ')
-        log.ODM_INFO(' ,*@&**************************************@&*. ')
-        log.ODM_INFO(' ,*@&***************#@@@@@@@@@%****************&&*, ')
-        log.ODM_INFO(' .*&&***************&@@@@@@@@@@@@@@****************@@*. ')
-        log.ODM_INFO(' .*@&***************&@@@@@@@@@@@@@@@@@%****(@@%********@@*. ')
-        log.ODM_INFO(' .*@@***************%@@@@@@@@@@@@@@@@@@@@@#****&@@@@%******&@*, ')
-        log.ODM_INFO(' .*&@****************@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@/*****@@*. ')
-        log.ODM_INFO(' .*@@****************@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@%*************@@*. ')
-        log.ODM_INFO(' .*@@****/***********@@@@@&**(@@@@@@@@@@@@@@@@@@@@@@@#*****************%@*, ')
-        log.ODM_INFO(' */@*******@*******#@@@@%*******/@@@@@@@@@@@@@@@@@@@@********************/@(, ')
-        log.ODM_INFO(' ,*@(********&@@@@@@#**************/@@@@@@@#**(@@&/**********************@&* ')
-        log.ODM_INFO(' *#@/*******************************@@@@@***&@&**********************&@*, ')
-        log.ODM_INFO(' *#@#******************************&@@@***@#*********************&@*, ')
-        log.ODM_INFO(' */@#*****************************@@@************************@@*. ')
-        log.ODM_INFO(' *#@/***************************/@@/*********************%@*, ')
-        log.ODM_INFO(' *#@#**************************#@@%******************%@*, ')
-        log.ODM_INFO(' */@#*************************(@@@@@@@&%/********&@*. ')
-        log.ODM_INFO(' *(@(*********************************/%@@%**%@*, ')
-        log.ODM_INFO(' *(@%************************************%@** ')
-        log.ODM_INFO(' **@%********************************&@*, ')
-        log.ODM_INFO(' *(@(****************************%@/* ')
-        log.ODM_INFO(' ,(@%************************#@/* ')
-        log.ODM_INFO(' ,*@%********************&@/, ')
-        log.ODM_INFO(' */@#****************#@/* ')
-        log.ODM_INFO(' ,/@&************#@/* ')
-        log.ODM_INFO(' ,*@&********%@/, ')
-        log.ODM_INFO(' */@#****(@/* ')
-        log.ODM_INFO(' ,/@@@@(* ')
-        log.ODM_INFO(' .**, ')
-        log.ODM_INFO('')
-        log.ODM_INFO("Running dense reconstruction. This might take a while. Please be patient, the process is not dead or hung.")
+        log.ODM_INFO(" Process is running")
+        log.ODM_INFO("Running dense reconstruction. This might take a while.")

         # TODO: find out why MVE is crashing at random
         # MVE *seems* to have a race condition, triggered randomly, regardless of dataset
@@ -11,27 +11,45 @@ class ODMMvsTexStage(types.ODM_Stage):
         tree = outputs['tree']
         reconstruction = outputs['reconstruction']

         # define paths and create working directories
         system.mkdir_p(tree.odm_texturing)
         if not args.use_3dmesh: system.mkdir_p(tree.odm_25dtexturing)

-        runs = [{
-            'out_dir': tree.odm_texturing,
-            'model': tree.odm_mesh,
-            'nadir': False
-        }]
-
-        if args.skip_3dmodel:
-            runs = []
-
-        if not args.use_3dmesh:
-            runs += [{
-                'out_dir': tree.odm_25dtexturing,
-                'model': tree.odm_25dmesh,
-                'nadir': True
-            }]
-
-        for r in runs:
+        class nonloc:
+            runs = []
+
+        def add_run(nvm_file, primary=True, band=None):
+            subdir = ""
+            if not primary and band is not None:
+                subdir = band
+
+            if not args.skip_3dmodel and (primary or args.use_3dmesh):
+                nonloc.runs += [{
+                    'out_dir': os.path.join(tree.odm_texturing, subdir),
+                    'model': tree.odm_mesh,
+                    'nadir': False,
+                    'nvm_file': nvm_file
+                }]
+
+            if not args.use_3dmesh:
+                nonloc.runs += [{
+                    'out_dir': os.path.join(tree.odm_25dtexturing, subdir),
+                    'model': tree.odm_25dmesh,
+                    'nadir': True,
+                    'nvm_file': nvm_file
+                }]
+
+        if reconstruction.multi_camera:
+            for band in reconstruction.multi_camera:
+                primary = band == reconstruction.multi_camera[0]
+                nvm_file = os.path.join(tree.opensfm, "undistorted", "reconstruction_%s.nvm" % band['name'].lower())
+                add_run(nvm_file, primary, band['name'].lower())
+        else:
+            add_run(tree.opensfm_reconstruction_nvm)
+
+        progress_per_run = 100.0 / len(nonloc.runs)
+        progress = 0.0
+
+        for r in nonloc.runs:
             if not io.dir_exists(r['out_dir']):
                 system.mkdir_p(r['out_dir'])

             odm_textured_model_obj = os.path.join(r['out_dir'], tree.odm_textured_model_obj)

             if not io.file_exists(odm_textured_model_obj) or self.rerun():
@@ -74,7 +92,7 @@ class ODMMvsTexStage(types.ODM_Stage):
                     'toneMapping': self.params.get('tone_mapping'),
                     'nadirMode': nadir,
                     'nadirWeight': 2 ** args.texturing_nadir_weight - 1,
-                    'nvm_file': io.join_paths(tree.opensfm, "reconstruction.nvm")
+                    'nvm_file': r['nvm_file']
                 }

                 mvs_tmp_dir = os.path.join(r['out_dir'], 'tmp')
@@ -96,7 +114,8 @@ class ODMMvsTexStage(types.ODM_Stage):
                     '{nadirMode} '
                     '-n {nadirWeight}'.format(**kwargs))

-                self.update_progress(50)
+                progress += progress_per_run
+                self.update_progress(progress)
             else:
                 log.ODM_WARNING('Found a valid ODM Texture file in: %s'
                     % odm_textured_model_obj)
@@ -9,7 +9,6 @@ from opendm import system
 from opendm import context
 from opendm.cropper import Cropper
 from opendm import point_cloud
-from opendm import entwine

 class ODMGeoreferencingStage(types.ODM_Stage):
     def process(self, args, outputs):
@@ -20,28 +19,46 @@ class ODMGeoreferencingStage(types.ODM_Stage):
         transformPointCloud = True
         verbose = '-verbose' if self.params.get('verbose') else ''

-        runs = [{
-            'georeferencing_dir': tree.odm_georeferencing,
-            'texturing_dir': tree.odm_texturing,
-            'model': os.path.join(tree.odm_texturing, tree.odm_textured_model_obj)
-        }]
-
-        if args.skip_3dmodel:
-            runs = []
-
-        if not args.use_3dmesh:
-            runs.insert(0, {
-                'georeferencing_dir': tree.odm_25dgeoreferencing,
-                'texturing_dir': tree.odm_25dtexturing,
-                'model': os.path.join(tree.odm_25dtexturing, tree.odm_textured_model_obj)
-            })
-
-        for r in runs:
+        class nonloc:
+            runs = []
+
+        def add_run(primary=True, band=None):
+            subdir = ""
+            if not primary and band is not None:
+                subdir = band
+
+            # Make sure 2.5D mesh is georeferenced before the 3D mesh
+            # Because it will be used to calculate a transform
+            # for the point cloud. If we use the 3D model transform,
+            # DEMs and orthophoto might not align!
+            if not args.use_3dmesh:
+                nonloc.runs += [{
+                    'georeferencing_dir': os.path.join(tree.odm_25dgeoreferencing, subdir),
+                    'texturing_dir': os.path.join(tree.odm_25dtexturing, subdir),
+                }]
+
+            if not args.skip_3dmodel and (primary or args.use_3dmesh):
+                nonloc.runs += [{
+                    'georeferencing_dir': tree.odm_georeferencing,
+                    'texturing_dir': os.path.join(tree.odm_texturing, subdir),
+                }]
+
+        if reconstruction.multi_camera:
+            for band in reconstruction.multi_camera:
+                primary = band == reconstruction.multi_camera[0]
+                add_run(primary, band['name'].lower())
+        else:
+            add_run()
+
+        progress_per_run = 100.0 / len(nonloc.runs)
+        progress = 0.0
+
+        for r in nonloc.runs:
+            if not io.dir_exists(r['georeferencing_dir']):
+                system.mkdir_p(r['georeferencing_dir'])
+
             odm_georeferencing_model_obj_geo = os.path.join(r['texturing_dir'], tree.odm_georeferencing_model_obj_geo)
+            odm_georeferencing_model_obj = os.path.join(r['texturing_dir'], tree.odm_textured_model_obj)
             odm_georeferencing_log = os.path.join(r['georeferencing_dir'], tree.odm_georeferencing_log)
             odm_georeferencing_transform_file = os.path.join(r['georeferencing_dir'], tree.odm_georeferencing_transform_file)
             odm_georeferencing_model_txt_geo_file = os.path.join(r['georeferencing_dir'], tree.odm_georeferencing_model_txt_geo)
@@ -56,7 +73,7 @@ class ODMGeoreferencingStage(types.ODM_Stage):
                     'bundle': tree.opensfm_bundle,
                     'imgs': tree.dataset_raw,
                     'imgs_list': tree.opensfm_bundle_list,
-                    'model': r['model'],
+                    'model': odm_georeferencing_model_obj,
                     'log': odm_georeferencing_log,
                     'input_trans_file': tree.opensfm_transformation,
                     'transform_file': odm_georeferencing_transform_file,
@@ -97,32 +114,7 @@ class ODMGeoreferencingStage(types.ODM_Stage):

             if doPointCloudGeo:
                 reconstruction.georef.extract_offsets(odm_georeferencing_model_txt_geo_file)

-                # XYZ point cloud output
-                if args.pc_csv:
-                    log.ODM_INFO("Creating geo-referenced CSV file (XYZ format)")
-
-                    system.run("pdal translate -i \"{}\" "
-                        "-o \"{}\" "
-                        "--writers.text.format=csv "
-                        "--writers.text.order=\"X,Y,Z\" "
-                        "--writers.text.keep_unspecified=false ".format(
-                            tree.odm_georeferencing_model_laz,
-                            tree.odm_georeferencing_xyz_file))
-
-                # LAS point cloud output
-                if args.pc_las:
-                    log.ODM_INFO("Creating geo-referenced LAS file")
-
-                    system.run("pdal translate -i \"{}\" "
-                        "-o \"{}\" ".format(
-                            tree.odm_georeferencing_model_laz,
-                            tree.odm_georeferencing_model_las))
-
-                # EPT point cloud output
-                if args.pc_ept:
-                    log.ODM_INFO("Creating geo-referenced Entwine Point Tile output")
-                    entwine.build([tree.odm_georeferencing_model_laz], tree.entwine_pointcloud, max_concurrency=args.max_concurrency, rerun=self.rerun())
+                point_cloud.post_point_cloud_steps(args, tree)

                 if args.crop > 0:
                     log.ODM_INFO("Calculating cropping area and generating bounds shapefile from point cloud")
@@ -145,3 +137,6 @@ class ODMGeoreferencingStage(types.ODM_Stage):
             else:
                 log.ODM_WARNING('Found a valid georeferenced model in: %s'
                     % tree.odm_georeferencing_model_laz)
+
+            progress += progress_per_run
+            self.update_progress(progress)
@@ -8,9 +8,8 @@ from opendm import types
 from opendm import gsd
 from opendm import orthophoto
 from opendm.concurrency import get_max_memory
 from opendm.cropper import Cropper
 from opendm.cutline import compute_cutline
+from pipes import quote

 class ODMOrthoPhotoStage(types.ODM_Stage):
     def process(self, args, outputs):
@@ -21,15 +20,16 @@ class ODMOrthoPhotoStage(types.ODM_Stage):
         # define paths and create working directories
         system.mkdir_p(tree.odm_orthophoto)

-        if not io.file_exists(tree.odm_orthophoto_file) or self.rerun():
+        if not io.file_exists(tree.odm_orthophoto_tif) or self.rerun():

             # odm_orthophoto definitions
             kwargs = {
                 'bin': context.odm_modules_path,
                 'log': tree.odm_orthophoto_log,
-                'ortho': tree.odm_orthophoto_file,
+                'ortho': tree.odm_orthophoto_render,
                 'corners': tree.odm_orthophoto_corners,
                 'res': 1.0 / (gsd.cap_resolution(args.orthophoto_resolution, tree.opensfm_reconstruction, ignore_gsd=args.ignore_gsd) / 100.0),
+                'bands': '',
                 'verbose': verbose
             }
@ -45,21 +45,35 @@ class ODMOrthoPhotoStage(types.ODM_Stage):
|
|||
else:
|
||||
log.ODM_WARNING('Cannot read UTM offset from {}. An orthophoto will not be generated.'.format(odm_georeferencing_model_txt_geo_file))
|
||||
|
||||
if reconstruction.is_georeferenced():
|
||||
if args.use_3dmesh:
|
||||
kwargs['model_geo'] = os.path.join(tree.odm_texturing, tree.odm_georeferencing_model_obj_geo)
|
||||
else:
|
||||
kwargs['model_geo'] = os.path.join(tree.odm_25dtexturing, tree.odm_georeferencing_model_obj_geo)
|
||||
models = []
|
||||
|
||||
if args.use_3dmesh:
|
||||
base_dir = tree.odm_texturing
|
||||
else:
|
||||
if args.use_3dmesh:
|
||||
kwargs['model_geo'] = os.path.join(tree.odm_texturing, tree.odm_textured_model_obj)
|
||||
else:
|
||||
kwargs['model_geo'] = os.path.join(tree.odm_25dtexturing, tree.odm_textured_model_obj)
|
||||
base_dir = tree.odm_25dtexturing
|
||||
|
||||
if reconstruction.is_georeferenced():
|
||||
model_file = tree.odm_georeferencing_model_obj_geo
|
||||
else:
|
||||
model_file = tree.odm_textured_model_obj
|
||||
|
||||
if reconstruction.multi_camera:
|
||||
for band in reconstruction.multi_camera:
|
||||
primary = band == reconstruction.multi_camera[0]
|
||||
subdir = ""
|
||||
if not primary:
|
||||
subdir = band['name'].lower()
|
||||
models.append(os.path.join(base_dir, subdir, model_file))
|
||||
kwargs['bands'] = '-bands %s' % (','.join([quote(b['name'].lower()) for b in reconstruction.multi_camera]))
|
||||
else:
|
||||
models.append(os.path.join(base_dir, model_file))
|
||||
|
||||
kwargs['models'] = ','.join(map(quote, models))
|
||||
|
||||
# run odm_orthophoto
|
||||
system.run('{bin}/odm_orthophoto -inputFile {model_geo} '
|
||||
system.run('{bin}/odm_orthophoto -inputFiles {models} '
|
||||
'-logFile {log} -outputFile {ortho} -resolution {res} {verbose} '
|
||||
'-outputCornerFile {corners}'.format(**kwargs))
|
||||
'-outputCornerFile {corners} {bands}'.format(**kwargs))
|
||||
|
||||
# Create georeferenced GeoTiff
|
||||
geotiffcreated = False
|
||||
|
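The band loop above produces one textured-model path per band (the primary band keeps the base texturing directory; secondary bands live in lowercased subdirectories) plus a `-bands` argument for `odm_orthophoto`. A standalone sketch of that path-assembly logic, with hypothetical `base_dir`/`model_file` values standing in for ODM's `tree` object (`shlex.quote` used in place of the Python 2 `pipes.quote` the source imports):

```python
import os
from shlex import quote  # the ODM source uses pipes.quote (Python 2)

def band_models(base_dir, model_file, bands):
    """Mirror of the diff's loop: the primary (first) band keeps base_dir,
    every other band gets a lowercased subdirectory."""
    models = []
    for band in bands:
        primary = band == bands[0]
        subdir = "" if primary else band['name'].lower()
        models.append(os.path.join(base_dir, subdir, model_file))
    bands_arg = '-bands %s' % ','.join(quote(b['name'].lower()) for b in bands)
    return models, bands_arg

# Hypothetical multispectral setup (band names follow the tests below)
bands = [{'name': 'Red'}, {'name': 'Green'}, {'name': 'NIR'}]
models, bands_arg = band_models('odm_texturing', 'odm_textured_model_geo.obj', bands)
```

The first entry has no subdirectory component because `os.path.join` skips the empty string for the primary band.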
@ -90,8 +104,8 @@ class ODMOrthoPhotoStage(types.ODM_Stage):
                 'lry': lry,
                 'vars': ' '.join(['-co %s=%s' % (k, orthophoto_vars[k]) for k in orthophoto_vars]),
                 'proj': reconstruction.georef.proj4(),
-                'png': tree.odm_orthophoto_file,
-                'tiff': tree.odm_orthophoto_tif,
+                'input': tree.odm_orthophoto_render,
+                'output': tree.odm_orthophoto_tif,
                 'log': tree.odm_orthophoto_tif_log,
                 'max_memory': get_max_memory(),
             }
@ -100,25 +114,35 @@ class ODMOrthoPhotoStage(types.ODM_Stage):
                        '{vars} '
                        '-a_srs \"{proj}\" '
                        '--config GDAL_CACHEMAX {max_memory}% '
-                       '{png} {tiff} > {log}'.format(**kwargs))
+                       '--config GDAL_TIFF_INTERNAL_MASK YES '
+                       '{input} {output} > {log}'.format(**kwargs))

             bounds_file_path = os.path.join(tree.odm_georeferencing, 'odm_georeferenced_model.bounds.gpkg')

             # Cutline computation, before cropping
             # We want to use the full orthophoto, not the cropped one.
             if args.orthophoto_cutline:
+                cutline_file = os.path.join(tree.odm_orthophoto, "cutline.gpkg")
+
                 compute_cutline(tree.odm_orthophoto_tif,
                                 bounds_file_path,
-                                os.path.join(tree.odm_orthophoto, "cutline.gpkg"),
+                                cutline_file,
                                 args.max_concurrency,
                                 tmpdir=os.path.join(tree.odm_orthophoto, "grass_cutline_tmpdir"),
                                 scale=0.25)

-            if args.crop > 0:
-                Cropper.crop(bounds_file_path, tree.odm_orthophoto_tif, orthophoto_vars)
+                orthophoto.compute_mask_raster(tree.odm_orthophoto_tif, cutline_file,
+                                               os.path.join(tree.odm_orthophoto, "odm_orthophoto_cut.tif"),
+                                               blend_distance=20, only_max_coords_feature=True)

-            if args.build_overviews:
-                orthophoto.build_overviews(tree.odm_orthophoto_tif)
+            orthophoto.post_orthophoto_steps(args, bounds_file_path, tree.odm_orthophoto_tif)

+            # Generate feathered orthophoto also
+            if args.orthophoto_cutline:
+                orthophoto.feather_raster(tree.odm_orthophoto_tif,
+                                          os.path.join(tree.odm_orthophoto, "odm_orthophoto_feathered.tif"),
+                                          blend_distance=20
+                                          )

             geotiffcreated = True
         if not geotiffcreated:

@ -126,4 +150,4 @@ class ODMOrthoPhotoStage(types.ODM_Stage):
                             'to missing geo-referencing or corner coordinates.')

         else:
-            log.ODM_WARNING('Found a valid orthophoto in: %s' % tree.odm_orthophoto_file)
+            log.ODM_WARNING('Found a valid orthophoto in: %s' % tree.odm_orthophoto_tif)
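Both `compute_mask_raster` and `feather_raster` above take a `blend_distance` of 20 pixels. The underlying idea is a distance-based alpha ramp: pixels closer to the cut edge than `blend_distance` get proportionally reduced opacity, so adjacent orthophotos blend across the seam instead of butting hard edges. A toy 1-D version of that ramp (an illustration of the concept, not ODM's rasterio-based implementation):

```python
def feather_weights(width, blend_distance):
    """Alpha per pixel of a 1-D strip: 1.0 in the interior, ramping
    linearly down toward 0 at both edges over blend_distance pixels."""
    weights = []
    for i in range(width):
        dist_to_edge = min(i, width - 1 - i)
        weights.append(min(1.0, dist_to_edge / float(blend_distance)))
    return weights

# A 10-pixel strip with a 4-pixel feather on each side
w = feather_weights(10, 4)
```

In 2-D the same ramp is computed from a distance transform of the cutline mask; the shape of the falloff is symmetric, which is what makes overlapping feathered rasters sum to roughly full opacity.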
@ -21,7 +21,7 @@ class ODMOpenSfMStage(types.ODM_Stage):
             exit(1)

         octx = OSFMContext(tree.opensfm)
-        octx.setup(args, tree.dataset_raw, photos, gcp_path=reconstruction.gcp.gcp_path, rerun=self.rerun())
+        octx.setup(args, tree.dataset_raw, photos, reconstruction=reconstruction, rerun=self.rerun())
         octx.extract_metadata(self.rerun())
         self.update_progress(20)
         octx.feature_matching(self.rerun())

@ -57,7 +57,7 @@ class ODMOpenSfMStage(types.ODM_Stage):
             octx.touch(updated_config_flag_file)

         # These will be used for texturing / MVS
-        undistorted_images_path = octx.path("undistorted")
+        undistorted_images_path = octx.path("undistorted", "images")

         if not io.dir_exists(undistorted_images_path) or self.rerun():
             octx.run('undistort')
@ -66,8 +66,29 @@ class ODMOpenSfMStage(types.ODM_Stage):

         self.update_progress(80)

+        if reconstruction.multi_camera:
+            # Dump band image lists
+            log.ODM_INFO("Multiple bands found")
+            for band in reconstruction.multi_camera:
+                log.ODM_INFO("Exporting %s band" % band['name'])
+                image_list_file = octx.path("image_list_%s.txt" % band['name'].lower())
+
+                if not io.file_exists(image_list_file) or self.rerun():
+                    with open(image_list_file, "w") as f:
+                        f.write("\n".join([p.filename for p in band['photos']]))
+                    log.ODM_INFO("Wrote %s" % image_list_file)
+                else:
+                    log.ODM_WARNING("Found a valid image list in %s for %s band" % (image_list_file, band['name']))
+
+                nvm_file = octx.path("undistorted", "reconstruction_%s.nvm" % band['name'].lower())
+                if not io.file_exists(nvm_file) or self.rerun():
+                    octx.run('export_visualsfm --points --image_list "%s"' % image_list_file)
+                    os.rename(tree.opensfm_reconstruction_nvm, nvm_file)
+                else:
+                    log.ODM_WARNING("Found a valid NVM file in %s for %s band" % (nvm_file, band['name']))

         if not io.file_exists(tree.opensfm_reconstruction_nvm) or self.rerun():
-            octx.run('export_visualsfm --undistorted --points')
+            octx.run('export_visualsfm --points')
         else:
             log.ODM_WARNING('Found a valid OpenSfM NVM reconstruction file in: %s' %
                             tree.opensfm_reconstruction_nvm)
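Each band gets its own `image_list_<band>.txt` so that `export_visualsfm` can emit one NVM reconstruction per band. A minimal sketch of the grouping and the newline-joined file contents (hypothetical photo objects; this is not the `OSFMContext` API, just the bookkeeping the loop above performs):

```python
class Photo:
    """Stand-in for an ODM photo with a band assignment."""
    def __init__(self, filename, band_name):
        self.filename = filename
        self.band_name = band_name

def band_image_lists(photos):
    """Map each lowercased band name to the text that would be written
    to its image_list_<band>.txt file."""
    grouped = {}
    for p in photos:
        grouped.setdefault(p.band_name, []).append(p.filename)
    return {band.lower(): "\n".join(names) for band, names in grouped.items()}

photos = [Photo('IMG_0298_1.tif', 'Red'), Photo('IMG_0298_4.tif', 'NIR'),
          Photo('IMG_0299_1.tif', 'Red'), Photo('IMG_0299_4.tif', 'NIR')]
lists = band_image_lists(photos)
```

Insertion order is preserved per band, so the per-band NVM files reference shots in the same capture order.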
@ -13,7 +13,7 @@ from opensfm.large import metadataset
 from opendm.cropper import Cropper
 from opendm.concurrency import get_max_memory
 from opendm.remote import LocalRemoteExecutor
-from opendm import entwine
+from opendm import point_cloud
 from pipes import quote

 class ODMSplitStage(types.ODM_Stage):

@ -46,7 +46,7 @@ class ODMSplitStage(types.ODM_Stage):
             "submodel_overlap: %s" % args.split_overlap,
         ]

-        octx.setup(args, tree.dataset_raw, photos, gcp_path=reconstruction.gcp.gcp_path, append_config=config, rerun=self.rerun())
+        octx.setup(args, tree.dataset_raw, photos, reconstruction=reconstruction, append_config=config, rerun=self.rerun())
         octx.extract_metadata(self.rerun())

         self.update_progress(5)
@ -175,26 +175,17 @@ class ODMMergeStage(types.ODM_Stage):

             # Merge point clouds
             if args.merge in ['all', 'pointcloud']:
-                if not io.dir_exists(tree.entwine_pointcloud) or self.rerun():
+                if not io.file_exists(tree.odm_georeferencing_model_laz) or self.rerun():
                     all_point_clouds = get_submodel_paths(tree.submodels_path, "odm_georeferencing", "odm_georeferenced_model.laz")

                     try:
-                        entwine.build(all_point_clouds, tree.entwine_pointcloud, max_concurrency=args.max_concurrency, rerun=self.rerun())
+                        point_cloud.merge(all_point_clouds, tree.odm_georeferencing_model_laz, rerun=self.rerun())
+                        point_cloud.post_point_cloud_steps(args, tree)
                     except Exception as e:
-                        log.ODM_WARNING("Could not merge EPT point cloud: %s (skipping)" % str(e))
-                else:
-                    log.ODM_WARNING("Found merged EPT point cloud in %s" % tree.entwine_pointcloud)
-
-                if not io.file_exists(tree.odm_georeferencing_model_laz) or self.rerun():
-                    if io.dir_exists(tree.entwine_pointcloud):
-                        try:
-                            system.run('pdal translate "ept://{}" "{}"'.format(tree.entwine_pointcloud, tree.odm_georeferencing_model_laz))
-                        except Exception as e:
-                            log.ODM_WARNING("Cannot export EPT dataset to LAZ: %s" % str(e))
-                    else:
-                        log.ODM_WARNING("No EPT point cloud found (%s), skipping LAZ conversion)" % tree.entwine_pointcloud)
+                        log.ODM_WARNING("Could not merge point cloud: %s (skipping)" % str(e))
                 else:
                     log.ODM_WARNING("Found merged point cloud in %s" % tree.odm_georeferencing_model_laz)

             self.update_progress(25)
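The merge stage above leans on the idempotency guard used throughout ODM's stages: do the work only when the output is missing or a rerun was requested, warn and skip otherwise, and never let one failed artifact abort the pipeline. Sketched generically (a hypothetical helper, not ODM's `io`/stage API):

```python
import os
import tempfile

def run_if_needed(output_path, rerun, producer, log=print):
    """Call producer() only when output_path is missing or rerun is set,
    mirroring the `if not io.file_exists(...) or self.rerun():` guard.
    Failures are logged and swallowed, as in the merge stage."""
    if not os.path.isfile(output_path) or rerun:
        try:
            producer()
        except Exception as e:
            log("Could not produce %s: %s (skipping)" % (output_path, e))
    else:
        log("Found existing %s" % output_path)

# Demonstrate both branches against a file that actually exists
with tempfile.NamedTemporaryFile(delete=False) as tf:
    existing = tf.name
calls = []
run_if_needed(existing, rerun=False, producer=lambda: calls.append('ran'),
              log=lambda m: calls.append(m))
run_if_needed(existing, rerun=True, producer=lambda: calls.append('ran'),
              log=lambda m: calls.append(m))
os.remove(existing)
```

The first call skips (the output exists), the second runs because `rerun` forces regeneration.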
@ -217,84 +208,27 @@ class ODMMergeStage(types.ODM_Stage):
                 system.mkdir_p(tree.odm_orthophoto)

                 if not io.file_exists(tree.odm_orthophoto_tif) or self.rerun():
-                    all_orthos_and_cutlines = get_all_submodel_paths(tree.submodels_path,
-                                                    os.path.join("odm_orthophoto", "odm_orthophoto.tif"),
-                                                    os.path.join("odm_orthophoto", "cutline.gpkg"),
+                    all_orthos_and_ortho_cuts = get_all_submodel_paths(tree.submodels_path,
+                                                    os.path.join("odm_orthophoto", "odm_orthophoto_feathered.tif"),
+                                                    os.path.join("odm_orthophoto", "odm_orthophoto_cut.tif"),
                                                     )

-                    if len(all_orthos_and_cutlines) > 1:
-                        log.ODM_INFO("Found %s submodels with valid orthophotos and cutlines" % len(all_orthos_and_cutlines))
+                    if len(all_orthos_and_ortho_cuts) > 1:
+                        log.ODM_INFO("Found %s submodels with valid orthophotos and cutlines" % len(all_orthos_and_ortho_cuts))

                         # TODO: histogram matching via rasterio
                         # currently parts have different color tones

-                        merged_geotiff = os.path.join(tree.odm_orthophoto, "odm_orthophoto.merged.tif")
-
-                        kwargs = {
-                            'orthophoto_merged': merged_geotiff,
-                            'input_files': ' '.join(map(lambda i: quote(i[0]), all_orthos_and_cutlines)),
-                            'max_memory': get_max_memory(),
-                            'threads': args.max_concurrency,
-                        }
-
-                        # use bounds as cutlines (blending)
-                        if io.file_exists(merged_geotiff):
-                            os.remove(merged_geotiff)
-
-                        system.run('gdal_merge.py -o {orthophoto_merged} '
-                                   #'-createonly '
-                                   '-co "BIGTIFF=YES" '
-                                   '-co "BLOCKXSIZE=512" '
-                                   '-co "BLOCKYSIZE=512" '
-                                   '--config GDAL_CACHEMAX {max_memory}% '
-                                   '{input_files} '.format(**kwargs)
-                                   )
-
-                        for ortho_cutline in all_orthos_and_cutlines:
-                            kwargs['input_file'], kwargs['cutline'] = ortho_cutline
-
-                            # Note: cblend has a high performance penalty
-                            system.run('gdalwarp -cutline {cutline} '
-                                       '-cblend 20 '
-                                       '-r bilinear -multi '
-                                       '-wo NUM_THREADS={threads} '
-                                       '--config GDAL_CACHEMAX {max_memory}% '
-                                       '{input_file} {orthophoto_merged}'.format(**kwargs)
-                                       )
-
-                        # Apply orthophoto settings (compression, tiling, etc.)
-                        orthophoto_vars = orthophoto.get_orthophoto_vars(args)
-
                         if io.file_exists(tree.odm_orthophoto_tif):
                             os.remove(tree.odm_orthophoto_tif)

-                        kwargs = {
-                            'vars': ' '.join(['-co %s=%s' % (k, orthophoto_vars[k]) for k in orthophoto_vars]),
-                            'max_memory': get_max_memory(),
-                            'merged': merged_geotiff,
-                            'log': tree.odm_orthophoto_tif_log,
-                            'orthophoto': tree.odm_orthophoto_tif,
-                        }
-
-                        system.run('gdal_translate '
-                                   '{vars} '
-                                   '--config GDAL_CACHEMAX {max_memory}% '
-                                   '{merged} {orthophoto} > {log}'.format(**kwargs))
-
-                        os.remove(merged_geotiff)
-
-                        # Crop
-                        if args.crop > 0:
-                            Cropper.crop(merged_bounds_file, tree.odm_orthophoto_tif, orthophoto_vars)
-
-                        # Overviews
-                        if args.build_overviews:
-                            orthophoto.build_overviews(tree.odm_orthophoto_tif)
-
-                    elif len(all_orthos_and_cutlines) == 1:
+                        orthophoto_vars = orthophoto.get_orthophoto_vars(args)
+                        orthophoto.merge(all_orthos_and_ortho_cuts, tree.odm_orthophoto_tif, orthophoto_vars)
+                        orthophoto.post_orthophoto_steps(args, merged_bounds_file, tree.odm_orthophoto_tif)
+                    elif len(all_orthos_and_ortho_cuts) == 1:
                         # Simply copy
                         log.ODM_WARNING("A single orthophoto/cutline pair was found between all submodels.")
-                        shutil.copyfile(all_orthos_and_cutlines[0][0], tree.odm_orthophoto_tif)
+                        shutil.copyfile(all_orthos_and_ortho_cuts[0][0], tree.odm_orthophoto_tif)
                     else:
                         log.ODM_WARNING("No orthophoto/cutline pairs were found in any of the submodels. No orthophoto will be generated.")
                 else:
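The hunk above replaces the `gdal_merge.py` + `gdalwarp -cblend` pipeline with `orthophoto.merge`, which consumes per-submodel pairs of a feathered orthophoto and a hard-cut one and blends overlaps by weight. A toy 1-D illustration of weighted mosaicking two overlapping strips (pure Python, with made-up values; not the rasterio implementation this PR introduces):

```python
def mosaic(strips):
    """Each strip is (offset, values, weights). The output value at x is
    the weight-averaged value of all strips covering x."""
    width = max(off + len(vals) for off, vals, _ in strips)
    acc = [0.0] * width
    wsum = [0.0] * width
    for off, vals, weights in strips:
        for i, (v, w) in enumerate(zip(vals, weights)):
            acc[off + i] += v * w
            wsum[off + i] += w
    return [a / w if w > 0 else 0.0 for a, w in zip(acc, wsum)]

# Two strips overlapping at indices 2-3; feather weights taper in the overlap
merged = mosaic([
    (0, [10, 10, 10, 10], [1.0, 1.0, 0.75, 0.25]),
    (2, [20, 20, 20, 20], [0.25, 0.75, 1.0, 1.0]),
])
```

Because each strip's feather ramps down exactly where its neighbor's ramps up, the transition across the seam is gradual instead of a hard edge.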
@ -0,0 +1,104 @@
#!/bin/bash
set -eo pipefail
__dirname=$(cd $(dirname "$0"); pwd -P)
cd "${__dirname}"

if [ "$1" = "--setup" ]; then
    export HOME=/home/$2

    if [ ! -f .setupdevenv ]; then
        echo "Recompiling environment... this might take a while."
        #bash configure.sh reinstall

        touch .setupdevenv
        apt install -y vim
        chown -R $3:$4 /code /var/www
    fi

    echo "Adding $2 to /etc/passwd"
    echo "$2:x:$3:$4::/home/$2:/bin/bash" >> /etc/passwd
    echo "Adding $2 to /etc/group"
    echo "$2:x:$4:" >> /etc/group

    echo "echo '' && echo '' && echo '' && echo '###################################' && echo 'ODM Dev Environment Ready. Hack on!' && echo '###################################' && echo '' && cd /code" > $HOME/.bashrc

    # Install qt creator
    if hash qtcreator 2>/dev/null; then
        has_qtcreator="YES"
    fi

    if [ "$has_qtcreator" != "YES" ] && [ "$5" == "YES" ]; then
        apt install -y libxrender1 gdb qtcreator
    fi

    # Install liquidprompt
    if [ ! -e "$HOME/liquidprompt" ]; then
        git clone https://github.com/nojhan/liquidprompt.git --depth 1 $HOME/liquidprompt
    fi

    if [ -e "$HOME/liquidprompt" ]; then
        echo "source $HOME/liquidprompt/liquidprompt" >> $HOME/.bashrc
        echo "export LP_PS1_PREFIX='(odmdev)'" >> $HOME/.bashrc
    fi

    # Colors
    echo "alias ls='ls --color=auto'" >> $HOME/.bashrc

    su -c bash $2
    exit 0
fi

platform="Linux" # Assumed
uname=$(uname)
case $uname in
    "Darwin")
        platform="MacOS / OSX"
        ;;
    MINGW*)
        platform="Windows"
        ;;
esac

if [[ $platform != "Linux" ]]; then
    echo "This script only works on Linux."
    exit 1
fi

if hash docker 2>/dev/null; then
    has_docker="YES"
fi

if [ "$has_docker" != "YES" ]; then
    echo "You need to install docker before running this script."
    exit 1
fi

export PORT="${PORT:=3000}"
export QTC="${QTC:=NO}"

if [ -z "$DATA" ]; then
    echo "Usage: DATA=/path/to/datasets [VARS] $0"
    echo
    echo "VARS:"
    echo "  DATA    Path to directory that contains datasets for testing. The directory will be mounted in /datasets. If you don't have any, simply set it to a folder outside the ODM repository."
    echo "  PORT    Port to expose for NodeODM (default: $PORT)"
    echo "  QTC     When set to YES, installs QT Creator for C++ development (default: $QTC)"
    exit 1
fi

echo "Starting development environment..."
echo "Datasets path: $DATA"
echo "NodeODM port: $PORT"
echo "QT Creator: $QTC"

if [ ! -e "$HOME"/.odm-dev-home ]; then
    mkdir -p "$HOME"/.odm-dev-home
fi

USER_ID=$(id -u)
GROUP_ID=$(id -g)
USER=$(id -un)
xhost +
docker run -ti --entrypoint bash --name odmdev -v $(pwd):/code -v "$DATA":/datasets -p $PORT:3000 --privileged -e DISPLAY -e LANG=C.UTF-8 -e LC_ALL=C.UTF-8 -v="/tmp/.X11-unix:/tmp/.X11-unix:rw" -v="$HOME/.odm-dev-home:/home/$USER" opendronemap/nodeodm -c "/code/start-dev-env.sh --setup $USER $USER_ID $GROUP_ID $QTC"
exit 0
@ -35,10 +35,11 @@ class TestRemote(unittest.TestCase):
                 self.queue_num = queue_num
                 self.uuid = 'xxxxx-xxxxx-xxxxx-xxxxx-xxxx' + str(queue_num)

-            def info(self):
+            def info(self, with_output=None):
                 class StatusMock:
                     status = TaskStatus.RUNNING if self.running else TaskStatus.QUEUED
                     processing_time = 1
+                    output = "test output"
                 return StatusMock()

             def remove(self):
@ -0,0 +1,50 @@
import unittest
from opendm import types

class ODMPhotoMock:
    def __init__(self, filename, band_name, band_index):
        self.filename = filename
        self.band_name = band_name
        self.band_index = band_index

    def __str__(self):
        return "%s (%s)" % (self.filename, self.band_name)

    def __repr__(self):
        return self.__str__()

class TestTypes(unittest.TestCase):
    def setUp(self):
        pass

    def test_reconstruction(self):
        # Multi camera setup
        micasa_redsense_files = [('IMG_0298_1.tif', 'Red', 1), ('IMG_0298_2.tif', 'Green', 2), ('IMG_0298_3.tif', 'Blue', 3), ('IMG_0298_4.tif', 'NIR', 4), ('IMG_0298_5.tif', 'Rededge', 5),
                                 ('IMG_0299_1.tif', 'Red', 1), ('IMG_0299_2.tif', 'Green', 2), ('IMG_0299_3.tif', 'Blue', 3), ('IMG_0299_4.tif', 'NIR', 4), ('IMG_0299_5.tif', 'Rededge', 5),
                                 ('IMG_0300_1.tif', 'Red', 1), ('IMG_0300_2.tif', 'Green', 2), ('IMG_0300_3.tif', 'Blue', 3), ('IMG_0300_4.tif', 'NIR', 4), ('IMG_0300_5.tif', 'Rededge', 5)]
        photos = [ODMPhotoMock(f, b, i) for f, b, i in micasa_redsense_files]
        recon = types.ODM_Reconstruction(photos)

        self.assertTrue(recon.multi_camera is not None)

        # Found all 5 bands
        bands = ["Red", "Green", "Blue", "NIR", "Rededge"]
        for i in range(len(bands)):
            self.assertEqual(bands[i], recon.multi_camera[i]['name'])
        self.assertTrue([p.filename for p in recon.multi_camera[0]['photos']] == ['IMG_0298_1.tif', 'IMG_0299_1.tif', 'IMG_0300_1.tif'])

        # Missing a file
        micasa_redsense_files = [('IMG_0298_1.tif', 'Red', 1), ('IMG_0298_2.tif', 'Green', 2), ('IMG_0298_3.tif', 'Blue', 3), ('IMG_0298_4.tif', 'NIR', 4), ('IMG_0298_5.tif', 'Rededge', 5),
                                 ('IMG_0299_2.tif', 'Green', 2), ('IMG_0299_3.tif', 'Blue', 3), ('IMG_0299_4.tif', 'NIR', 4), ('IMG_0299_5.tif', 'Rededge', 5),
                                 ('IMG_0300_1.tif', 'Red', 1), ('IMG_0300_2.tif', 'Green', 2), ('IMG_0300_3.tif', 'Blue', 3), ('IMG_0300_4.tif', 'NIR', 4), ('IMG_0300_5.tif', 'Rededge', 5)]
        photos = [ODMPhotoMock(f, b, i) for f, b, i in micasa_redsense_files]
        self.assertRaises(RuntimeError, types.ODM_Reconstruction, photos)

        # Single camera
        dji_files = ['DJI_0018.JPG', 'DJI_0019.JPG', 'DJI_0020.JPG', 'DJI_0021.JPG', 'DJI_0022.JPG', 'DJI_0023.JPG']
        photos = [ODMPhotoMock(f, 'RGB', 0) for f in dji_files]
        recon = types.ODM_Reconstruction(photos)
        self.assertTrue(recon.multi_camera is None)

if __name__ == '__main__':
    unittest.main()
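The new test expects `ODM_Reconstruction` to raise a `RuntimeError` when one shot is missing a band (`IMG_0299_1.tif` in the second fixture above). A plausible sketch of that validation — group photos by band, require equal counts across bands, and order bands by index — as a hypothetical simplification of the real detection logic, not the actual `types.ODM_Reconstruction` code:

```python
class PhotoMock:
    """Same shape as the ODMPhotoMock used in the test file."""
    def __init__(self, filename, band_name, band_index):
        self.filename = filename
        self.band_name = band_name
        self.band_index = band_index

def detect_multi_camera(photos):
    """Group photos by band; return bands ordered by band index, or None
    for single-band datasets. Raise when the bands are uneven."""
    grouped = {}
    index_of = {}
    for p in photos:
        grouped.setdefault(p.band_name, []).append(p)
        index_of[p.band_name] = p.band_index
    if len(grouped) <= 1:
        return None
    counts = set(len(v) for v in grouped.values())
    if len(counts) != 1:
        raise RuntimeError("Some bands are missing images")
    return [{'name': name, 'photos': grouped[name]}
            for name in sorted(grouped, key=lambda n: index_of[n])]

ok = [PhotoMock('A_1.tif', 'Red', 1), PhotoMock('A_2.tif', 'NIR', 2),
      PhotoMock('B_1.tif', 'Red', 1), PhotoMock('B_2.tif', 'NIR', 2)]
multi = detect_multi_camera(ok)

# Drop B_2 (NIR) -> Red has 2 photos, NIR has 1 -> uneven bands
try:
    detect_multi_camera(ok[:3])
    raised = False
except RuntimeError:
    raised = True
```

Failing fast here is deliberate: an incomplete band set would otherwise surface much later as a mismatched per-band reconstruction.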