diff --git a/.gitignore b/.gitignore
index 956fc7850..8a96a071b 100644
--- a/.gitignore
+++ b/.gitignore
@@ -2,6 +2,8 @@ _build
 venv
 venv_prod
 .vscode
+.venv
+.venv_prod
 
 # Base ignores:
 # =============
diff --git a/README.md b/README.md
index 112952e8d..86397abbb 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # OpenDroneMap Docs
 
-Contribute to [OpenDroneMap](https://docs.opendronemap.org)'s documentation! Anyone is welcome to share their knowledge and improve our documentation. 🎉 And it's pretty simple too!
+Contribute to [OpenDroneMap](https://docs.opendronemap.org)'s documentation!!! Anyone is welcome to share their knowledge and improve our documentation. 🎉 And it's pretty simple too!
 
 # "But I don't know if I can contribute"
 
@@ -10,7 +10,7 @@
 Tips, tricks, hacks, datasets, lessons learned, best practices, every bit helps.
 
 # How To Make Your First Contribution
 
-If you don't have a GitHub account, [register](https://github.com/join?source=header-home) first. It's free and GitHub is awesome.
+If you don't have a GitHub account, [register](https://github.com/join?source=header-home) first. It's free and GitHub is awesome !!!
 
 Once you have an account there are two ways to contribute. One is quick for small changes, the second takes a bit longer to setup but makes writing long parts of documentation much quicker.
@@ -87,19 +87,19 @@
 From the same Terminal (or command prompt) run the following:
 
 ```
 cd docs/
-pip install virtualenv
-virtualenv -p python3 venv
+python3 -m venv .venv
+source .venv/bin/activate
 
 # Linux/Mac
-source venv/bin/activate
+source .venv/bin/activate
 
 # Windows
-venv\scripts\activate
+.venv\scripts\activate
 
 pip install -r requirements.txt
 ```
-After running `source venv/bin/activate` there should be some indication that the Python virtual environment is active (see the `(venv)` that appears at the start of terminal prompt in the screengrab below). **Note:** The next time you can `cd` into the docs folder and just run `source venv/bin/activate`. There should be no need to rerun the `pip install` and `virtualenv` commands.
+After running `source .venv/bin/activate` there should be some indication that the Python virtual environment is active (see the `(.venv)` that appears at the start of terminal prompt in the screengrab below). **Note:** The next time you can `cd` into the docs folder and just run `source .venv/bin/activate`. There should be no need to rerun the `pip install` and `python3 -m venv .venv` commands.
 
 Note: If you've installed `sphinx` on your system, you may run into issues with commands using that version instead of the version inside your virtualenv.
diff --git a/requirements.txt b/requirements.txt
index a5264d119..ec45ba82c 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -2,4 +2,5 @@ sphinx==5.3.0
 sphinx-autobuild==2021.3.14
 sphinx-intl==2.0.1
 sphinx-rtd-theme==1.1.1
-transifex-client==0.14.4
+transifex-client==0.12.5
+sphinxcontrib-mermaid==0.9.2
diff --git a/requirements_prod.txt b/requirements_prod.txt
index 2c469b7d6..b7ccc05e9 100644
--- a/requirements_prod.txt
+++ b/requirements_prod.txt
@@ -2,4 +2,5 @@ setuptools
 sphinx==5.3.0
 sphinx-intl==2.0.1
 sphinx-rtd-theme==1.1.1
+sphinxcontrib-mermaid==0.9.2
 wheel
diff --git a/source/align.rst b/source/align.rst
index 08656a3cc..6ef285cbc 100644
--- a/source/align.rst
+++ b/source/align.rst
@@ -8,6 +8,6 @@ Starting from ODM ``3.0.2`` people can supply a reference alignment file to geor
 If you supply a file called ``align.laz``, ``align.las`` or ``align.tif`` (single band GeoTIFF DEM) then ODM will automatically detect it and attempt to align outputs to this reference model. If it has another name you can specify using ``--align ``.
 
-The alignment file must be created in the base of your project folder.
+The alignment file must be created in the base of your project folder. The base folder is usually where you have stored your images.
 `Learn to edit `_ and help improve `this page `_!
diff --git a/source/arguments.rst b/source/arguments.rst
index f55d630ed..7c9f78583 100644
--- a/source/arguments.rst
+++ b/source/arguments.rst
@@ -72,7 +72,7 @@ Options and Flags
     Set feature extraction quality. Higher quality generates better features, but requires more memory and takes longer. . Default: ``high``
 
 :ref:`feature-type` akaze | dspsift | hahog | orb | sift
-    Choose the algorithm for extracting keypoints and computing descriptors. . Default: ``sift``
+    Choose the algorithm for extracting keypoints and computing descriptors. . Default: ``dspsift``
 
 :ref:`force-gps`
     Use images' GPS exif data for reconstruction, even if there are GCPs present.This flag is useful if you have high precision GPS measurements. If there are no GCPs, this flag does nothing. Default: ``False``
@@ -159,7 +159,7 @@ Options and Flags
     Export the georeferenced point cloud in Entwine Point Tile (EPT) format. Default: ``False``
 
 :ref:`pc-filter`
-    Filters the point cloud by removing points that deviate more than N standard deviations from the local mean. Set to 0 to disable filtering. Default: ``2.5``
+    Filters the point cloud by removing points that deviate more than N standard deviations from the local mean. Set to 0 to disable filtering. Default: ``5``
 
 :ref:`pc-las`
     Export the georeferenced point cloud in LAS format. Default: ``False``
diff --git a/source/arguments/feature-type.rst b/source/arguments/feature-type.rst
index 92ece7076..8dab97c42 100644
--- a/source/arguments/feature-type.rst
+++ b/source/arguments/feature-type.rst
@@ -10,7 +10,7 @@ feature-type
 
 **Options:** *akaze | dspsift | hahog | orb | sift*
 
-Choose the algorithm for extracting keypoints and computing descriptors. . Default: ``sift``
+Choose the algorithm for extracting keypoints and computing descriptors. . Default: ``dspsift``
diff --git a/source/arguments/pc-filter.rst b/source/arguments/pc-filter.rst
index 1164cc0b1..db6e9f577 100644
--- a/source/arguments/pc-filter.rst
+++ b/source/arguments/pc-filter.rst
@@ -10,7 +10,7 @@ pc-filter
 
 **Options:** **
 
-Filters the point cloud by removing points that deviate more than N standard deviations from the local mean. Set to 0 to disable filtering. Default: ``2.5``
+Filters the point cloud by removing points that deviate more than N standard deviations from the local mean. Set to 0 to disable filtering. Default: ``5``
diff --git a/source/conf.py b/source/conf.py
index 108a3f1ba..38ae4f22f 100644
--- a/source/conf.py
+++ b/source/conf.py
@@ -33,8 +33,15 @@ release = version
 extensions = [
     'sphinx.ext.todo',
     'sphinx_rtd_theme',
-    'sphinx.ext.githubpages'
+    'sphinx.ext.githubpages',
+    'sphinxcontrib.mermaid'
+    ]
+
+# mermaid version
+mermaid_version = "10.9.1"
+
+
 #For internationalization:
 locale_dirs = ['locale/']
 gettext_compact = False
diff --git a/source/flowchart.rst b/source/flowchart.rst
new file mode 100644
index 000000000..34d13c5c4
--- /dev/null
+++ b/source/flowchart.rst
@@ -0,0 +1,268 @@
+.. Flowchart with options
+
+Flowchart with options
+=======================
+
+.. mermaid::
+   :zoom:
+
+   flowchart TB
+
+
+   %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+   %% Subgraph Stages
+   %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+   subgraph Dataset-stage["`**Dataset-stage**`"]
+      bg-removal:::options
+      camera-lens:::options
+      cameras:::options
+      gcp:::options
+      geo:::options
+      gps-accuracy:::options
+      primary-band:::options
+      sky-removal:::options
+      use-exif:::options
+      video-limit:::options
+      video-resolution:::options
+   end
+   bg-removal ~~~ camera-lens ~~~ cameras ~~~ gcp ~~~ geo
+   gps-accuracy ~~~ primary-band ~~~ sky-removal ~~~ use-exif ~~~ video-limit
+
+   click bg-removal "../arguments/bg-removal/"
+   click camera-lens "../arguments/camera-lens/"
+   click cameras "../arguments/cameras/"
+   click gcp "../arguments/gcp/"
+   click geo "../arguments/geo/"
+   click gps-accuracy "../arguments/gps-accuracy/"
+   click primary-band "../arguments/primary-band/"
+   click sky-removal "../arguments/sky-removal/"
+   click use-exif "../arguments/use-exif/"
+   click video-limit "../arguments/video-limit/"
+   click video-resolution "../arguments/video-resolution/"
+
+
+   subgraph Split["`**Split**`"]
+      direction TB
+      sm-cluster:::options
+      sm-no-align:::options
+      split:::options
+      split-image-groups:::options
+      split-overlap:::options
+   end
+   click sm-cluster "../arguments/sm-cluster/"
+   click sm-no-align "../arguments/sm-no-align/"
+   click split "../arguments/split/"
+   click split-image-groups "../arguments/split-image-groups/"
+   click split-overlap "../arguments/split-overlap/"
+
+   Splitting["`**Splitting**`"]
+
+   subgraph OpenSFM["`**OpenSFM**`"]
+      feature-quality:::options
+      feature-type:::options
+      force-gps:::options
+      ignore-gsd:::options
+      matcher-neighbors:::options
+      matcher-order:::options
+      matcher-type:::options
+      min-num-features:::options
+      pc-quality:::options
+      radiometric-calibration:::options
+      rolling-shutter:::options
+      rolling-shutter-readout:::options
+      sfm-algorithm:::options
+      sfm-no-partial:::options
+      skip-band-alignment:::options
+      use-fixed-camera-params:::options
+      use-hybrid-bundle-adjustment:::options
+   end
+   feature-quality ~~~ feature-type ~~~ force-gps ~~~ ignore-gsd ~~~ matcher-neighbors
+   matcher-order ~~~ matcher-type ~~~ min-num-features ~~~ pc-quality ~~~ radiometric-calibration
+   rolling-shutter ~~~ rolling-shutter-readout ~~~ sfm-algorithm ~~~ sfm-no-partial ~~~ skip-band-alignment
+   use-fixed-camera-params ~~~ use-hybrid-bundle-adjustment
+
+   click feature-quality "../arguments/feature-quality/"
+   click feature-type "../arguments/feature-type/"
+   click force-gps "../arguments/force-gps/"
+   click ignore-gsd "../arguments/ignore-gsd/"
+   click matcher-neighbors "../arguments/matcher-neighbors/"
+   click matcher-order "../arguments/matcher-order/"
+   click matcher-type "../arguments/matcher-type/"
+   click min-num-features "../arguments/min-num-features/"
+   click pc-quality "../arguments/pc-quality/"
+   click radiometric-calibration "../arguments/radiometric-calibration/"
+   click rolling-shutter "../arguments/rolling-shutter/"
+   click rolling-shutter-readout "../arguments/rolling-shutter-readout/"
+   click sfm-algorithm "../arguments/sfm-algorithm/"
+   click sfm-no-partial "../arguments/sfm-no-partial/"
+   click skip-band-alignment "../arguments/skip-band-alignment/"
+   click use-fixed-camera-params "../arguments/use-fixed-camera-params/"
+   click use-hybrid-bundle-adjustment "../arguments/use-hybrid-bundle-adjustment/"
+
+
+   subgraph Openmvs["`**Openmvs**`"]
+      pc-filter:::options
+      pc-skip-geometric:::options
+   end
+   pc-filter ~~~ pc-skip-geometric
+
+   click pc-filter "../arguments/pc-filter/"
+   click pc-skip-geometric "../arguments/pc-skip-geometric/"
+
+   subgraph Odm-filterpoints["`**Odm-filterpoints**`"]
+      auto-boundary:::options
+      auto-boundary-distance:::options
+      boundary:::options
+      fast-orthophoto:::options
+      pc-sample:::options
+   end
+   auto-boundary ~~~ auto-boundary-distance ~~~ boundary ~~~ fast-orthophoto ~~~ pc-sample
+
+   click auto-boundary "../arguments/auto-boundary/"
+   click auto-boundary-distance "../arguments/auto-boundary-distance/"
+   click boundary "../arguments/boundary/"
+   click fast-orthophoto "../arguments/fast-orthophoto/"
+   click pc-sample "../arguments/pc-sample/"
+
+   subgraph Odm-meshing["`**Odm-meshing**`"]
+      mesh-octree-depth:::options
+      mesh-size:::options
+      skip-3dmodel:::options
+   end
+   mesh-octree-depth ~~~ mesh-size ~~~ skip-3dmodel
+
+   click mesh-octree-depth "../arguments/mesh-octree-depth/"
+   click mesh-size "../arguments/mesh-size/"
+   click skip-3dmodel "../arguments/skip-3dmodel/"
+
+   subgraph Mvs-texturing["`**Mvs-texturing**`"]
+      gltf:::options
+      texturing-keep-unseen-faces:::options
+      texturing-single-material:::options
+      texturing-skip-global-seam-leveling:::options
+      use-3dmesh:::options
+   end
+   gltf ~~~ texturing-keep-unseen-faces ~~~ texturing-single-material ~~~ texturing-skip-global-seam-leveling ~~~ use-3dmesh
+
+   click gltf "../arguments/gltf/"
+   click texturing-keep-unseen-faces "../arguments/texturing-keep-unseen-faces/"
+   click texturing-single-material "../arguments/texturing-single-material/"
+   click texturing-skip-global-seam-leveling "../arguments/texturing-skip-global-seam-leveling/"
+   click use-3dmesh "../arguments/use-3dmesh/"
+
+   subgraph Odm-georeferencing["`**Odm-georeferencing**`"]
+      align:::options
+      crop:::options
+      pc-classify:::options
+      pc-copc:::options
+      pc-csv:::options
+      pc-ept:::options
+      pc-las:::options
+      pc-rectify:::options
+   end
+   align ~~~ crop ~~~ pc-classify ~~~ pc-copc ~~~ pc-csv
+   pc-ept ~~~ pc-las ~~~ pc-rectify
+
+   click align "../arguments/align/"
+   click crop "../arguments/crop/"
+   click pc-classify "../arguments/pc-classify/"
+   click pc-copc "../arguments/pc-copc/"
+   click pc-csv "../arguments/pc-csv/"
+   click pc-ept "../arguments/pc-ept/"
+   click pc-las "../arguments/pc-las/"
+   click pc-rectify "../arguments/pc-rectify/"
+
+   subgraph Odm-dem["`**Odm-dem**`"]
+      cog:::options
+      dem-decimation:::options
+      dem-euclidean-map:::options
+      dem-gapfill-steps:::options
+      dem-resolution:::options
+      dsm:::options
+      dtm:::options
+      smrf-scalar:::options
+      smrf-slope:::options
+      smrf-threshold:::options
+      smrf-window:::options
+      tiles:::options
+   end
+   cog ~~~ dem-decimation ~~~ dem-euclidean-map ~~~ dem-gapfill-steps ~~~ dem-resolution
+   dsm ~~~ dtm ~~~ smrf-scalar ~~~ smrf-slope ~~~ smrf-threshold ~~~ smrf-window
+
+   click cog "../arguments/cog/"
+   click dem-decimation "../arguments/dem-decimation/"
+   click dem-euclidean-map "../arguments/dem-euclidean-map/"
+   click dem-gapfill-steps "../arguments/dem-gapfill-steps/"
+   click dem-resolution "../arguments/dem-resolution/"
+   click dsm "../arguments/dsm/"
+   click dtm "../arguments/dtm/"
+   click smrf-scalar "../arguments/smrf-scalar/"
+   click smrf-slope "../arguments/smrf-slope/"
+   click smrf-threshold "../arguments/smrf-threshold/"
+   click smrf-window "../arguments/smrf-window/"
+
+   subgraph Odm-orthophoto["`**Odm-orthophoto**`"]
+      build-overviews:::options
+      orthophoto-compression:::options
+      orthophoto-cutline:::options
+      orthophoto-kmz:::options
+      orthophoto-no-tiled:::options
+      orthophoto-png:::options
+      orthophoto-resolution:::options
+      skip-orthophoto:::options
+   end
+   build-overviews ~~~ orthophoto-compression ~~~ orthophoto-cutline ~~~ orthophoto-kmz
+   orthophoto-no-tiled ~~~ orthophoto-png ~~~ orthophoto-resolution ~~~ skip-orthophoto
+
+   click build-overviews "../arguments/build-overviews/"
+   click orthophoto-compression "../arguments/orthophoto-compression/"
+   click orthophoto-cutline "../arguments/orthophoto-cutline/"
+   click orthophoto-kmz "../arguments/orthophoto-kmz/"
+   click orthophoto-no-tiled "../arguments/orthophoto-no-tiled/"
+   click orthophoto-png "../arguments/orthophoto-png/"
+   click orthophoto-resolution "../arguments/orthophoto-resolution/"
+   click skip-orthophoto "../arguments/skip-orthophoto/"
+
+   subgraph Odm-report["`**Odm-report**`"]
+      skip-report:::options
+   end
+
+   click skip-report "../arguments/skip-report/"
+
+   subgraph Odm-postprocess["`**Odm-postprocess**`"]
+      3d-tiles:::options
+      copy-to:::options
+   end
+   3d-tiles ~~~ copy-to
+
+   click 3d-tiles "../arguments/3d-tiles/"
+   click copy-to "../arguments/copy-to/"
+
+   %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+   %% Links
+   %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+
+   images{"Images"} ==> Dataset-stage ==> Split == No ==> OpenSFM ==> Openmvs
+   Openmvs ==> Odm-filterpoints ==> Odm-meshing ==> Mvs-texturing ==> Odm-georeferencing
+   Odm-georeferencing ==> Odm-dem ==> Odm-orthophoto ==> Odm-report ==> Odm-postprocess
+
+   %% Split yes
+   %%Split == Yes ==> Splitting == Merge ==> OpenSFM-detect-features
+   Split == Yes ==> Splitting ==> OpenSFM
+
+   %% Styles
+
+   %% Style for options
+   classDef options fill:#ffffde,stroke-width:4px,stroke-dasharray:5,stroke:#f66
+
+   %% Style for stages
+   classDef stages fill:#3699db,rx:10,ry:10,stroke:#333,stroke-width:2px,font-size:15pt;
+   class Dataset-stage,Split,OpenSFM,Openmvs,Odm-filterpoints stages
+   class Odm-meshing,Mvs-texturing,Odm-georeferencing,Odm-dem stages
+   class Odm-orthophoto,Odm-report,Odm-postprocess,Splitting stages
+
+   classDef imagesstyle fill:#64ff0c,rx:10,ry:10,stroke:#333,stroke-width:2px;
+   class images imagesstyle
diff --git a/source/gcp.rst b/source/gcp.rst
index 0a30610d7..eb4ca7af5 100644
--- a/source/gcp.rst
+++ b/source/gcp.rst
@@ -28,7 +28,8 @@ The format of the GCP file is simple.
 
 * The first line should contain the name of the projection used for the geo coordinates. This can be specified either as a PROJ string (e.g. ``+proj=utm +zone=10 +ellps=WGS84 +datum=WGS84 +units=m +no_defs``), EPSG code (e.g. ``EPSG:4326``) or as a ``WGS84 UTM [N|S]`` value (eg. ``WGS84 UTM 16N``)
 * Subsequent lines are the X, Y & Z coordinates, your associated pixels, the image filename and optional extra fields, separated by tabs or spaces:
-  * Elevation values can be set to "NaN" to indicate no value
+  * Avoid setting elevation values to "NaN" to indicate no value, as this can cause processing failures. Instead, use 0.0.
+  * Similarly, reducing the number of digits after the decimal point in `geo_x` and `geo_y` can also help avoid processing failures.
* The 7th column (optional) typically contains the label of the GCP.
 
 GCP file format::
diff --git a/source/index.rst b/source/index.rst
index c2157c5f1..4665b5f3c 100644
--- a/source/index.rst
+++ b/source/index.rst
@@ -50,6 +50,7 @@ The documentation is available in several languages. Some translations are incom
 .. toctree::
    tutorials
    arguments
+   flowchart
    outputs
    gcp
    map-accuracy
diff --git a/source/large.rst b/source/large.rst
index b4fb13d9f..c228523b0 100644
--- a/source/large.rst
+++ b/source/large.rst
@@ -12,7 +12,7 @@ Split-merge works in WebODM out of the box as long as the processing nodes suppo
 Calibrate images
 ----------------
 
-Image calibration is recommended (but not required) for large datasets because error propagation due to image distortion could cause a bowl effect on the models. Calibration instructions can be found at `Calibrate Images `_.
+Image calibration is recommended (but not required) for large datasets because error propagation due to image distortion could cause a bowl effect on the models. Calibration instructions can be found at `Camera Calibration `_.
 
 .. figure:: images/msimbasi_bowling.png
    :alt: image of lens distortion effect on bowling of data
diff --git a/source/multispectral.rst b/source/multispectral.rst
index 69a85a153..6df6e77a2 100644
--- a/source/multispectral.rst
+++ b/source/multispectral.rst
@@ -3,30 +3,35 @@ Multispectral Support
 
 Since version 0.9.9 ODM has basic support for radiometric normalization, which is able to generate reflectance orthophotos from multispectral cameras. Multispectral cameras capture multiple shots of the scene using different band sensors.
-Hardware
---------
+
+Supported Sensors
+-----------------
 
 While we aim to support as many cameras as possible, multispectral support has been developed using the following cameras, so they will work better:
 
- * `MicaSense RedEdge-MX and Altum `_
- * `Sentera 6X `_
- * `DJI Phantom 4 Multispectral `_
+ * `MicaSense RedEdge-MX and Altum `_
+ * `Sentera 6X `_ (as of ODM version 1.0.1)
+ * `DJI Phantom 4 Multispectral `_ (as of ODM version 2.8.8)
+ * `DJI Mavic 3 Multispectral `_ (as of ODM version 3.5.3)
 
 Other cameras might also work. You can help us expand this list by `sharing datasets `_ captured with other cameras.
 
-Usage
------
+Creating Orthophotos from Multispectral Data
+--------------------------------------------
 
-Process all the images from all bands at once (do not separate the bands into multiple folders) and pass the `--radiometric-calibration` parameter to enable radiometric normalization. If the images are part of a multi-camera setup, the resulting orthophoto will have N bands, one for each camera (+ alpha).
+For the supported sensors listed above (and likely other sensors), users can process multispectral data in the same manner as visible light images. Images from all sensor bands should be processed at once (do not separate the bands into multiple folders). Users have the option to pass the ``--radiometric-calibration`` parameter with options ``camera`` or ``camera+sun`` to enable radiometric normalization. If the images are part of a multi-camera setup, the resulting orthophoto will have N bands, one for each camera (+ alpha).
 
+NDVI and other vegetation indices can be calculated from these stitched orthophotos using software such as `QGIS `_
 
 `Learn to edit `_ and help improve `this page `_!
 
+Workflows for Non-supported Sensors
+-----------------------------------
 
-Sentera AGX710
---------------
+**Sentera AGX710:**
+
+While the Sentera AGX710 is not officially supported by ODM, the following workflow gives some good results.
 
 * all JPGs from the NDRE directory should be renamed with the exact following pattern 0000X_NIR.jpg. No extra '_' should be present in the file names ie 10_51_14_IMG_00008.jpg => 00008_NIR.jpg
 * all JPGs from the nRGB directory should be renamed with the exact following pattern 0000X_RGB.jpg. No extra '_' should be present in the file names ie 10_51_14_IMG_00023.jpg => 00023_RGB.jpg
diff --git a/source/requesting-features.rst b/source/requesting-features.rst
index 6acde16ee..1e380295a 100644
--- a/source/requesting-features.rst
+++ b/source/requesting-features.rst
@@ -1,31 +1,31 @@
 How To Request Features
 =======================
 
-All software needs user feedback and feature requests, to grow and maintain
-alignment with the needs of its users.
+All software needs user feedback and feature requests, to grow and maintain
+alignment with the needs of its users.
 
-OpenDroneMap is FOSS software. Free and open source (FOSS) projects are interesting
-from the inside and outside: from the outside, successful ones feel like they should be able
-to do anything, and it’s hard to know what a reasonable request is. From the inside of a
-project, they can feel very resource constrained: largely by time, money, and opportunity
+OpenDroneMap is FOSS software. Free and open source (FOSS) projects are interesting
+from the inside and outside: from the outside, successful ones feel like they should be able
+to do anything, and it’s hard to know what a reasonable request is. From the inside of a
+project, they can feel very resource constrained: largely by time, money, and opportunity
 overload.
 
-**Demanding that a feature be implemented is probably not going to convince the development team to do so**. Imagine
+**Demanding that a feature be implemented is probably not going to convince the development team to do so**. Imagine
 if somebody knocked on your door and asked you to "stop reading this page right now and come to my house to cook me dinner!". Your first response might very reasonably be "who on earth is this person and why should I spend my time and energy fulfilling his agenda instead of my own?".
 
-**Suggesting** that a feature be implemented is a more effective (and cordial) way to ask for new features, especially if you're prepared to offer some of your own resources (time, funds or both) to help get the feature implemented. Explaining why
+**Suggesting** that a feature be implemented is a more effective (and cordial) way to ask for new features, especially if you're prepared to offer some of your own resources (time, funds or both) to help get the feature implemented. Explaining why
 *your* suggestion can benefit others can also help. If the feature benefits you exclusively, it might be harder to convince others to do the work for you.
 
-A feature request can be submitted as issues on the applicable Github repository (e.g.,
-`WebODM `_ or `ODM `_
-or similar) or more simply as a discussion topic on `the community forum `_.
-Try to start by searching these sources to see if someone else has already brought it up. Sometimes a feature is already in
+A feature request can be submitted as issues on the applicable Github repository (e.g.,
+`WebODM `_ or `ODM `_
+or similar) or more simply as a discussion topic on `the community forum `_.
+Try to start by searching these sources to see if someone else has already brought it up. Sometimes a feature is already in
 the works, or has at least been discussed.
 
 To request the addition of support for new drone cameras: please share a set of test images on the `datasets channel on the forum `_. Without test images there's not much the developers can do.
-And importantly, the trick is to listen: if someone within the project says: "This is a big lift,
-we need MONEY or TIME or SOMEONE TO HELP CODE IT" (or possibly a combination of the three)
+And importantly, the trick is to listen: if someone within the project says: "This is a big lift,
+we need MONEY or TIME or SOMEONE TO HELP CODE IT" (or possibly a combination of the three)
 then there are two answers that work really well in response:
 
 *Ok. I didn’t know it was a big feature request! I hope someone comes along with the necessary resources. As a community member, I would be happy to be an early user and tester!*
 
@@ -34,9 +34,9 @@
 or
 
 *Let’s figure out if we can put together the resources to get this done! Here’s what I can contribute toward it: …*
 
-We are glad you are excited to see new features added to the project. Some new features need support,
-and some are easier to implement. We'll do our best to help you understand where your request falls, and
+We are glad you are excited to see new features added to the project. Some new features need support,
+and some are easier to implement. We'll do our best to help you understand where your request falls, and
 we appreciate any support you can provide.
 
-`Learn to edit `_ and help improve `this page `_!
\ No newline at end of file
+`Learn to edit `_ and help improve `this page `_!
diff --git a/source/tutorials.rst b/source/tutorials.rst
index 09554278f..84bb69243 100644
--- a/source/tutorials.rst
+++ b/source/tutorials.rst
@@ -64,7 +64,7 @@ From James and Robson (2014), `CC BY 4.0