Change the feature_id of objects in the tiles from an arbitrary number to the osm_id.
Follows the discussion at https://github.com/openmaptiles/openmaptiles/pull/725#issuecomment-619059931
Note that multipolygon buildings are loaded with a negative relation osm_id, and using it as the feature_id causes an unsigned overflow (e.g. -7980888 -> 18446744073701570000). We can keep it as is or not.
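For reference, a minimal sketch of the wraparound, assuming the MVT feature id is encoded as an unsigned 64-bit integer so a negative relation osm_id wraps to 2^64 + osm_id:

```sql
-- 2^64 + (-7980888); the numeric literal keeps full precision
SELECT 18446744073709551616::numeric + (-7980888) AS wrapped_feature_id;
-- => 18446744073701570728 (shown rounded as 18446744073701570000 above)
```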
No stats have been run yet; trying to let the CI do it.
`nullif(as_numeric(height), -1)` does a double conversion: `as_numeric` turns NULL into -1, and `nullif` turns -1 back into NULL. Remove that, and delete the now unused `as_numeric`.
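A quick illustration of the round trip being removed, relying only on the fact noted above that `as_numeric` maps unparseable input to -1:

```sql
SELECT nullif(as_numeric('12.5'), -1);  -- 12.5, unchanged by nullif
SELECT nullif(as_numeric('tall'), -1);  -- 'tall' -> -1 -> back to NULL
-- Without as_numeric, the value is parsed once and simply left NULL when invalid.
```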
Add a `getmvt` SQL function as part of the `build-sql` make step. The function will be imported into PG during the `import-sql` step, after all other SQL is done. This is a no-op for the `generate-tiles` approach.
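A hypothetical usage sketch, assuming the generated function takes zoom/x/y and returns the binary tile (the exact signature comes from the tools-generated SQL):

```sql
SELECT getmvt(14, 8618, 5750);
```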
layer_building uses osm_all_buildings, which relies on the non-parallel-safe as_numeric. Mark it as parallel unsafe for now, until we switch to CleanNumeric later.
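A simplified sketch of the marking, with a placeholder body; only the `PARALLEL UNSAFE` clause at the end is the point here:

```sql
CREATE OR REPLACE FUNCTION layer_building(bbox geometry, zoom_level int)
RETURNS TABLE (osm_id bigint, geometry geometry) AS
$$
    -- placeholder body; the real function selects from osm_all_buildings
    SELECT osm_id, geometry
    FROM osm_building_polygon
    WHERE zoom_level >= 13 AND geometry && bbox;
$$ LANGUAGE sql STABLE
-- as_numeric is not parallel safe, so this caller cannot be PARALLEL SAFE either
PARALLEL UNSAFE;
```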
Replace the materialized views with tables that are updated by triggers, and only when the content changes.
Start with the simplest cases.
Just replicate the changes from:
* `osm_water_polygon` to `osm_water_lakeline`,
* `osm_water_polygon` to `osm_water_point`.
Use a view to factor out the `osm_water_lakeline` and `osm_water_point_view` definitions and reuse them in the triggers, as sketched below.
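A simplified sketch of the pattern for the lakeline case, assuming the shared view already contains the full derivation (columns reduced for brevity; names follow the tables listed above):

```sql
CREATE TABLE IF NOT EXISTS osm_water_lakeline AS
SELECT * FROM osm_water_lakeline_view;

CREATE OR REPLACE FUNCTION update_osm_water_lakeline() RETURNS trigger AS
$$
BEGIN
    -- re-derive only the rows belonging to the changed polygon
    IF TG_OP IN ('UPDATE', 'DELETE') THEN
        DELETE FROM osm_water_lakeline WHERE osm_id = OLD.osm_id;
    END IF;
    IF TG_OP IN ('INSERT', 'UPDATE') THEN
        INSERT INTO osm_water_lakeline
        SELECT * FROM osm_water_lakeline_view WHERE osm_id = NEW.osm_id;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trigger_update_osm_water_lakeline
AFTER INSERT OR UPDATE OR DELETE ON osm_water_polygon
FOR EACH ROW EXECUTE PROCEDURE update_osm_water_lakeline();
```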
The update of `osm_important_waterway_linestring` is more complex, as it is a merge of `osm_waterway_linestring`, so it is not done the same way: at the end of the transaction, we remove the impacted rows and recompute them.
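A rough sketch of that deferred variant; the helper table and function names are illustrative, not the exact ones used in the PR:

```sql
-- record which source lines changed during the transaction
CREATE TABLE IF NOT EXISTS waterway_linestring_changes (osm_id bigint);

CREATE OR REPLACE FUNCTION flag_waterway_linestring_change() RETURNS trigger AS
$$
BEGIN
    INSERT INTO waterway_linestring_changes
    VALUES (CASE WHEN TG_OP = 'DELETE' THEN OLD.osm_id ELSE NEW.osm_id END);
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;
-- At the end of the transaction, the rows of osm_important_waterway_linestring
-- affected by the flagged ids are deleted and recomputed, and the helper table
-- is truncated.
```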
The goal is to update the content of the derived tables more quickly by updating only the content that changed. It replaces refreshing materialized views, which requires a full recompute (with locking issues).
Note that a more advanced version of differential updates over materialized views was already implemented in the building cluster PR #725.
It addresses #814 and part of #809.
* make sure all required dirs are initialized before running any of the docker commands. Otherwise docker will create them as root.
* minor docs cleanup
Generate multiple SQL files to be imported in parallel. The files respect the cross-layer dependencies, so they can all be ingested at the same time.
* quickstart support for osmfr and bbbike areas
- Use the `--empty` flag to start with an empty database
- Geofabrik as the default server
- osmfr is used for hierarchical area names such as `europe/austria`
- bbbike is used for Capitalized area names such as `Adelaide`
* Trims SSD drives and flushes caches before each performance test. Unfortunately these are still incomplete -- real hardware machines are needed for all of these to take effect.
* A bit more output in PR updater
* Adds a script to download multiple areas and compute their test parameters
* added a large test that uses a combined 76 MB file with equatorial-guinea, liechtenstein, district-of-columbia, and greater-london
* cache wikidata downloads
`master-tools` branch is the same as the `master` branch, except that it uses `latest` from the tools repo. This allows us to quickly track whether master compiles correctly.
Update to tools v5. See https://github.com/openmaptiles/openmaptiles-tools/releases/tag/v5.0.0 for the list of all changes. Other OMT-repo specific changes:
* removes `import-osm` docker usage, replacing it with `openmaptiles-tools`
* quickstart builds faster because it uses postgres with preloaded water, natural earth, and lake centerlines tables.
### Makefile targets
* `tools-dev` will open a shell in a docker to experiment and debug (instead of `import-sql-dev` and `import-osm-dev`)
* separated `start-maputnik` from `start-postserve`
* renamed `clean-docker` to `db-destroy` to make it more explicit
* cleaner `db-start`, `db-stop`, `db-destroy` targets
* `db-start-preloaded` is the same as `db-start`, except that it uses `postgis-preloaded` -- an image with preloaded water, natural-earth, and lake centerline data
* `db-start` will not recreate the container if it already exists -- this way if it was started as preloaded, it will not be rebuilt.
* better output messages
### Quickstart
* uses `postgis-preloaded` image by default to make quickstart quicker. To start with a clean db, pass 2 parameters to quickstart, e.g. `./quickstart.sh albania empty`
Include closed PRs in the update cycle, because a PR could get closed before the job has a chance to finish, and we should still update it.
Results now show a table of how long each step took, as well as the PG database size change.
* use `time` to profile each step
* call postgres to get the database size (see the sketch below)
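For example, the size can be read with the built-in PostgreSQL functions (the exact query used by the profiler may differ):

```sql
SELECT pg_size_pretty(pg_database_size(current_database())) AS db_size;
```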
Builds on top of PR #755, to be merged first.
Since we want everything from osm_building_polygon (both osm_id >= 0 and osm_id < 0), we can merge the two queries.
Note: the obp.osm_id >= 0 condition on the LEFT JOIN only applies to the joined side, not to the rows of osm_building_polygon itself.
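A small illustration of that note; the joined table and its columns are only placeholders, the point is where the condition lives:

```sql
SELECT obp.osm_id, obr.role
FROM osm_building_polygon obp
LEFT JOIN osm_building_relation obr
    ON obp.osm_id >= 0            -- decides only whether an obr row is attached
   AND obr.member = obp.osm_id;
-- Every osm_building_polygon row is returned, including those with osm_id < 0,
-- just with NULL obr columns; moving the condition into WHERE would filter the
-- polygons themselves.
```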
It turned out that some update jobs failed with:
```
{
  "message": "Bad credentials",
  "documentation_url": "https://developer.github.com/v3"
}
```
This is probably due to credentials expiring (long workflow startup?), or to some internal GitHub issue.
For now, removing authentication from the `curl` calls, because most of them can be done anonymously, and keeping it only where it is actually needed.
* delete output escaping (forgot to remove it -- was used for the older system)
* stop early if there are no pull requests (e.g. in case this is a fork)
A cron-based approach to find pull requests, possibly from forks, that have finished profiling, and to post their results as comments.
See the in-depth explanation of how this works at
https://github.com/nyurik/auto_pr_comments_from_forks