Mirror of https://github.com/openmaptiles/openmaptiles
Merge 3323782d62 into f70ae783b2
commit adb2d34a2e
Binary file not shown.
.env (12 changed lines)

@@ -20,7 +20,7 @@ BBOX=-180.0,-85.0511,180.0,85.0511

# Which zooms to generate with make generate-tiles-pg
MIN_ZOOM=0
MAX_ZOOM=7
MAX_ZOOM=8

# `MID_ZOOM` setting only works with `make generate-tiles-pg` command. Make sure MID_ZOOM < MAX_ZOOM.
# See https://github.com/openmaptiles/openmaptiles-tools/pull/383

@@ -30,17 +30,21 @@ MAX_ZOOM=7
DIFF_MODE=false

# The current setup assumes this file is placed inside the data/ dir
MBTILES_FILE=tiles.mbtiles
MBTILES_FILE=waterway_13.mbtiles

# This is the current repl_config.json location, pre-configured in the tools Dockerfile
# Makefile and quickstart replace it with the dynamically generated one, but we keep it here in case some other method is used to run.
IMPOSM_CONFIG_FILE=/usr/src/app/config/repl_config.json

# Number of parallel processes to use when importing sql files
MAX_PARALLEL_PSQL=5
MAX_PARALLEL_PSQL=10

# Number of parallel threads to use when generating vector map tiles
COPY_CONCURRENCY=10

COPY_CONCURRENCY=60
# Variables for generate tiles using tilelive-pgquery

UV_THREADPOOL_SIZE=1024 # acts as a limit rather than a fixed amount, so it is set to the maximum without any significant negative impact

PGHOSTS_LIST=
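As a hedged illustration of how these values are used (the variable names come from the `.env` comments above, and the docker-compose file further below notes that DIFF_MODE, MIN_ZOOM, and MAX_ZOOM may be overwritten from the shell), a single run can override the zoom range without editing `.env`:

```bash
# Sketch: override the zoom range for one tile-generation run.
# Assumes the standard OpenMapTiles make target reads MIN_ZOOM/MAX_ZOOM
# from the environment, as the compose comments below state.
MIN_ZOOM=0 MAX_ZOOM=8 make generate-tiles-pg
```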
QUICKSTART.md (123 changed lines)

@@ -1,38 +1,39 @@
## Quickstart - for small extracts

### Req:
* CPU: AMD64 ( = Intel 64 bit)
* The base docker debian images are x86_64 based, so ARM and MIPS are currently not supported!
* Operating system
* Linux is suggested
* The development and the testing platform is Linux.
* If you are using FreeBSD, Solaris, Windows, ...
* Please give feedback, share your experience, write a tutorial
* bash
* git
* make
* bc
* md5sum
* docker >=1.12.3
* https://www.docker.com/products/overview
* docker-compose >=1.7.1
* https://docs.docker.com/compose/install/
* disk space ( >= ~15Gb )
* for small extracts >= ~15Gb
* for big extracts ( continents, planet) 250 Gb
* And depends on
* OpenStreetMap data size
* Zoom level
* Best on SSD for postserve but completely usable on HDD
* Takes 24hrs to import on a reasonable machine, and is immediately available with postserve
* memory ( >= 3Gb )
* for small extracts 3Gb-8Gb RAM
* for big extracts ( Europe, Planet) > 8-32 Gb
* internet connection
* for downloading docker images
* for downloading OpenStreetMap data from Geofabrik

Important: The ./quickstart.sh script is for small extracts - it is not optimal for a Planet rendering!
- CPU: AMD64 ( = Intel 64 bit)
- The base docker debian images are x86_64 based, so ARM and MIPS are currently not supported!
- Operating system
- Linux is suggested
- The development and the testing platform is Linux.
- If you are using FreeBSD, Solaris, Windows, ...
- Please give feedback, share your experience, write a tutorial
- bash
- git
- make
- bc
- md5sum
- docker >=1.12.3
- https://www.docker.com/products/overview
- docker-compose >=1.7.1
- https://docs.docker.com/compose/install/
- disk space ( >= ~15Gb )
- for small extracts >= ~15Gb
- for big extracts ( continents, planet) 250 Gb
- And depends on
- OpenStreetMap data size
- Zoom level
- Best on SSD for postserve but completely usable on HDD
- Takes 24hrs to import on a reasonable machine, and is immediately available with postserve
- memory ( >= 3Gb )
- for small extracts 3Gb-8Gb RAM
- for big extracts ( Europe, Planet) > 8-32 Gb
- internet connection
- for downloading docker images
- for downloading OpenStreetMap data from Geofabrik

Important: The ./quickstart.sh script is for small extracts - it is not optimal for a Planet rendering!
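Before starting, the disk and memory requirements listed above can be sanity-checked with standard tools (a minimal sketch; plain Linux utilities, nothing specific to this repository):

```bash
# Sketch: confirm there is enough free disk space and RAM for a small extract.
df -h .     # roughly 15 GB free is needed for small extracts, ~250 GB for continents/planet
free -h     # 3-8 GB RAM for small extracts, more for Europe/Planet
```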

### First experiment - with `albania` ( small extracts! )

@@ -43,25 +44,27 @@ cd openmaptiles
```

If you have problems with the quickstart
* check the ./quickstart.log!
* double-check the system requirements!
* check the current issues: https://github.com/openmaptiles/openmaptiles/issues
* create new issues:
* create a new gist: https://gist.github.com/ from your ./quickstart.log
* double-check: don't reveal any sensitive information about your system
* create a new issue: https://github.com/openmaptiles/openmaptiles/issues
* describe the problems
* add any pertinent information about your environment
* link your (quickstart.log) gist!

- check the ./quickstart.log!
- double-check the system requirements!
- check the current issues: https://github.com/openmaptiles/openmaptiles/issues
- create new issues:
- create a new gist: https://gist.github.com/ from your ./quickstart.log
- double-check: don't reveal any sensitive information about your system
- create a new issue: https://github.com/openmaptiles/openmaptiles/issues
- describe the problems
- add any pertinent information about your environment
- link your (quickstart.log) gist!

### Check other extracts

If the previous step is working,
then you can test the other available quickstart extracts ( based on [Geofabrik extracts](http://download.geofabrik.de/index.html) )!
* We are using the https://github.com/julien-noblet/download-geofabrik tool
* The current extract list and more information: `make list-geofabrik` or `make list-bbbike`

This generates `.mbtiles` for your area : [ MIN_ZOOM: "0" - MAX_ZOOM: "7" ]
- We are using the https://github.com/julien-noblet/download-geofabrik tool
- The current extract list and more information: `make list-geofabrik` or `make list-bbbike`

This generates `.mbtiles` for your area : [ MIN_ZOOM: "0" - MAX_ZOOM: "7" ]

```bash
./quickstart.sh africa # Africa

@@ -362,8 +365,10 @@ This is generating `.mbtiles` for your area : [ MIN_ZOOM: "0" - MAX_ZOOM: "7"
./quickstart.sh wyoming # Wyoming, US
./quickstart.sh yukon # Yukon, Canada
```

### Using your own OSM data
Mbtiles can be generated from an arbitrary osm.pbf (e.g. for a region that is not covered by an existing extract) by making the `data/` directory and placing an *.osm.pbf (e.g. `mydata.osm.pbf`) inside.

Mbtiles can be generated from an arbitrary osm.pbf (e.g. for a region that is not covered by an existing extract) by making the `data/` directory and placing an \*.osm.pbf (e.g. `mydata.osm.pbf`) inside.

```
mkdir -p data

@@ -373,44 +378,48 @@ make generate-bbox-file area=mydata
```

### Check postserve
* `docker-compose up -d postserve`
and the generated maps will be available in the browser at [localhost:8090/tiles/0/0/0.pbf](http://localhost:8090/tiles/0/0/0.pbf).

- `docker-compose up -d postserve`
and the generated maps will be available in the browser at [localhost:8090/tiles/0/0/0.pbf](http://localhost:8090/tiles/0/0/0.pbf).

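A quick way to confirm that postserve is answering on the endpoint mentioned above (a minimal sketch; it only assumes `curl` is installed):

```bash
# Sketch: fetch the top-level tile from postserve and print only the HTTP status code.
# A 200 means the service is up and the endpoint above is serving tiles.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8090/tiles/0/0/0.pbf
```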
### Check tileserver

Start:
* `make start-tileserver`
and the generated maps will be available in the web browser at [localhost:8080](http://localhost:8080/).

- `make start-tileserver`
and the generated maps will be available in the web browser at [localhost:8080](http://localhost:8080/).

This is only a quick preview, because your mbtiles are only generated up to zoom level 7!

### Set which zooms to generate

Modify the settings in the `.env` file; the defaults are:
* `MIN_ZOOM=0`
* `MAX_ZOOM=7`

- `MIN_ZOOM=0`
- `MAX_ZOOM=7`

Hints:
* Small increments! Never start with `MAX_ZOOM = 14`
* The suggested `MAX_ZOOM = 14` - use it only with small extracts

- Small increments! Never start with `MAX_ZOOM = 14`
- The suggested `MAX_ZOOM = 14` - use it only with small extracts

### Set the bounding box to generate

By default, tile generation is done for the full extent of the area.
If you want to generate tiles for a smaller extent, modify the settings in the `.env` file; the default is:
* `BBOX=-180.0,-85.0511,180.0,85.0511`

- `BBOX=-180.0,-85.0511,180.0,85.0511`

Delete the `./data/<area>.bbox` file, and re-start `./quickstart.sh <area>`

Hint:
* The [boundingbox.klokantech.com](https://boundingbox.klokantech.com/) site can be used to find a bounding box (CSV format) using a map.

- The [boundingbox.klokantech.com](https://boundingbox.klokantech.com/) site can be used to find a bounding box (CSV format) using a map.

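Putting the steps above together, a hedged sketch of restricting generation to a custom bounding box (the coordinates are an illustrative placeholder only, roughly around the albania extract from the first experiment; adjust them and the area name to your extract):

```bash
# Sketch: set a smaller BBOX in .env, drop the cached bbox file, and re-run the quickstart.
# BBOX order is lon_min,lat_min,lon_max,lat_max; the value below is only a placeholder.
sed -i 's/^BBOX=.*/BBOX=19.0,39.6,21.1,42.7/' .env
rm -f ./data/albania.bbox
./quickstart.sh albania
```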
### Check other commands

`make help`

the current output:

```

@@ -5,7 +5,6 @@ volumes:
cache:

services:

openmaptiles-tools:
volumes:
- cache:/cache

@@ -1,5 +1,5 @@
# This version must match the MAKE_DC_VERSION value below
version: "3"
#version: "3.7"

volumes:
pgdata:

@@ -9,13 +9,16 @@ networks:
driver: bridge

services:

postgres:
image: "${POSTGIS_IMAGE:-openmaptiles/postgis}:${TOOLS_VERSION}"
#command: postgres -c "config_file=./etc/postgresql/postgresql.conf"
command: postgres -c "shared_buffers=16GB" -c "autovacuum=on" -c "work_mem=1500MB" -c "maintenance_work_mem=4096kB" -c "effective_cache_size=20GB" -c "random_page_cost=1.1" -c "effective_cache_size=40GB" -c "dynamic_shared_memory_type=posix" -c "max_wal_size=1GB" -c "min_wal_size=80MB" -c "wal_buffers=16MB" -c "effective_io_concurrency=30" -c "max_connections=300"
# Use "command: postgres -c jit=off" for PostgreSQL 11+ because of slow large MVT query processing
# Use "shm_size: 512m" if you want to prevent a possible 'No space left on device' during 'make generate-tiles-pg'
shm_size: 3g
volumes:
- pgdata:/var/lib/postgresql/data
#- ./etc/postgresql/postgresql.conf:/var/lib/postgresql/data/postgresql.conf
networks:
- postgres
ports:
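To verify that the tuned flags on the `command:` line above are actually in effect inside the running container, a hedged check (the service and user names are the defaults from this compose file):

```bash
# Sketch: show a few effective settings inside the postgres service.
docker-compose exec postgres psql -U openmaptiles -c "SHOW shared_buffers;" -c "SHOW max_connections;"
```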

@@ -27,6 +30,7 @@ services:
POSTGRES_USER: ${PGUSER:-openmaptiles}
POSTGRES_PASSWORD: ${PGPASSWORD:-openmaptiles}
PGPORT: ${PGPORT:-5432}
#TILE_TIMEOUT: 3600000

import-data:
image: "openmaptiles/import-data:${TOOLS_VERSION}"

@@ -40,7 +44,7 @@ services:
environment:
# Must match the version of this file (first line)
# download-osm will use it when generating a composer file
MAKE_DC_VERSION: "3"
MAKE_DC_VERSION: "3.7"
# Allow DIFF_MODE, MIN_ZOOM, and MAX_ZOOM to be overwritten from shell
DIFF_MODE: ${DIFF_MODE}
MIN_ZOOM: ${MIN_ZOOM}

@@ -56,6 +60,7 @@ services:
PGPASSWORD: ${PGPASSWORD:-openmaptiles}
PGPORT: ${PGPORT:-5432}
MBTILES_FILE: ${MBTILES_FILE}
#UV_THREADPOOL_SIZE:
networks:
- postgres
volumes:

@@ -94,7 +99,10 @@ services:
volumes:
- ./data:/export
- ./build/openmaptiles.tm2source:/tm2source
networks:
- type: tmpfs
target: /dev/shm
tmpfs:
size: 16000000000 # ~16gb
- postgres
env_file: .env
environment:

@@ -109,6 +117,7 @@ services:
PGUSER: ${PGUSER:-openmaptiles}
PGPASSWORD: ${PGPASSWORD:-openmaptiles}
PGPORT: ${PGPORT:-5432}
#UV_THREADPOOL_SIZE: ${UV_THREADPOOL_SIZE}

postserve:
image: "openmaptiles/openmaptiles-tools:${TOOLS_VERSION}"

@@ -122,6 +131,7 @@ services:
- "${PPORT:-8090}:${PPORT:-8090}"
volumes:
- .:/tileset
#- ./etc/postgresql/postgresql.conf:/var/lib/postgresql/data/postgresql.conf

maputnik_editor:
image: "maputnik/editor"
Binary file not shown.
@ -0,0 +1,814 @@
|
|||
# -----------------------------
|
||||
# PostgreSQL configuration file
|
||||
# -----------------------------
|
||||
#
|
||||
# This file consists of lines of the form:
|
||||
#
|
||||
# name = value
|
||||
#
|
||||
# (The "=" is optional.) Whitespace may be used. Comments are introduced with
|
||||
# "#" anywhere on a line. The complete list of parameter names and allowed
|
||||
# values can be found in the PostgreSQL documentation.
|
||||
#
|
||||
# The commented-out settings shown in this file represent the default values.
|
||||
# Re-commenting a setting is NOT sufficient to revert it to the default value;
|
||||
# you need to reload the server.
|
||||
#
|
||||
# This file is read on server startup and when the server receives a SIGHUP
|
||||
# signal. If you edit the file on a running system, you have to SIGHUP the
|
||||
# server for the changes to take effect, run "pg_ctl reload", or execute
|
||||
# "SELECT pg_reload_conf()". Some parameters, which are marked below,
|
||||
# require a server shutdown and restart to take effect.
|
||||
#
|
||||
# Any parameter can also be given as a command-line option to the server, e.g.,
|
||||
# "postgres -c log_connections=on". Some parameters can be changed at run time
|
||||
# with the "SET" SQL command.
|
||||
#
|
||||
# Memory units: B = bytes Time units: us = microseconds
|
||||
# kB = kilobytes ms = milliseconds
|
||||
# MB = megabytes s = seconds
|
||||
# GB = gigabytes min = minutes
|
||||
# TB = terabytes h = hours
|
||||
# d = days
|
||||
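As the header above notes, most of the settings in this file can be applied to a running server with a reload rather than a restart; a minimal sketch of reloading and seeing which values currently come from this file (standard PostgreSQL statements, nothing specific to this setup):

```sql
-- Sketch: re-read postgresql.conf and list the values it currently provides.
SELECT pg_reload_conf();
SELECT name, setting, source
FROM pg_settings
WHERE source = 'configuration file';
```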
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# FILE LOCATIONS
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
# The default values of these variables are driven from the -D command-line
|
||||
# option or PGDATA environment variable, represented here as ConfigDir.
|
||||
|
||||
#data_directory = 'ConfigDir' # use data in another directory
|
||||
# (change requires restart)
|
||||
#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file
|
||||
# (change requires restart)
|
||||
#ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file
|
||||
# (change requires restart)
|
||||
|
||||
# If external_pid_file is not explicitly set, no extra PID file is written.
|
||||
#external_pid_file = '' # write an extra PID file
|
||||
# (change requires restart)
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# CONNECTIONS AND AUTHENTICATION
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
# - Connection Settings -
|
||||
|
||||
|
||||
#listen_addresses = 'localhost' # what IP address(es) to listen on;
|
||||
# comma-separated list of addresses;
|
||||
# defaults to 'localhost'; use '*' for all
|
||||
# (change requires restart)
|
||||
#port = 5432 # (change requires restart)
|
||||
max_connections = 300 #100 # (change requires restart)
|
||||
#superuser_reserved_connections = 3 # (change requires restart)
|
||||
#unix_socket_directories = '/tmp' # comma-separated list of directories
|
||||
# (change requires restart)
|
||||
#unix_socket_group = '' # (change requires restart)
|
||||
#unix_socket_permissions = 0777 # begin with 0 to use octal notation
|
||||
# (change requires restart)
|
||||
#bonjour = off # advertise server via Bonjour
|
||||
# (change requires restart)
|
||||
#bonjour_name = '' # defaults to the computer name
|
||||
# (change requires restart)
|
||||
|
||||
# - TCP settings -
|
||||
# see "man tcp" for details
|
||||
|
||||
#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;
|
||||
# 0 selects the system default
|
||||
#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;
|
||||
# 0 selects the system default
|
||||
#tcp_keepalives_count = 0 # TCP_KEEPCNT;
|
||||
# 0 selects the system default
|
||||
#tcp_user_timeout = 0 # TCP_USER_TIMEOUT, in milliseconds;
|
||||
# 0 selects the system default
|
||||
|
||||
#client_connection_check_interval = 0 # time between checks for client
|
||||
# disconnection while running queries;
|
||||
# 0 for never
|
||||
|
||||
# - Authentication -
|
||||
|
||||
#authentication_timeout = 1min # 1s-600s
|
||||
#password_encryption = scram-sha-256 # scram-sha-256 or md5
|
||||
#db_user_namespace = off
|
||||
|
||||
# GSSAPI using Kerberos
|
||||
#krb_server_keyfile = 'FILE:${sysconfdir}/krb5.keytab'
|
||||
#krb_caseins_users = off
|
||||
|
||||
# - SSL -
|
||||
|
||||
#ssl = off
|
||||
#ssl_ca_file = ''
|
||||
#ssl_cert_file = 'server.crt'
|
||||
#ssl_crl_file = ''
|
||||
#ssl_crl_dir = ''
|
||||
#ssl_key_file = 'server.key'
|
||||
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
|
||||
#ssl_prefer_server_ciphers = on
|
||||
#ssl_ecdh_curve = 'prime256v1'
|
||||
#ssl_min_protocol_version = 'TLSv1.2'
|
||||
#ssl_max_protocol_version = ''
|
||||
#ssl_dh_params_file = ''
|
||||
#ssl_passphrase_command = ''
|
||||
#ssl_passphrase_command_supports_reload = off
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# RESOURCE USAGE (except WAL)
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
# - Memory -
|
||||
|
||||
shared_buffers = 16GB #128MB # min 128kB
|
||||
# (change requires restart)
|
||||
#huge_pages = try # on, off, or try
|
||||
# (change requires restart)
|
||||
#huge_page_size = 0 # zero for system default
|
||||
# (change requires restart)
|
||||
#temp_buffers = 8MB # min 800kB
|
||||
#max_prepared_transactions = 0 # zero disables the feature
|
||||
# (change requires restart)
|
||||
# Caution: it is not advisable to set max_prepared_transactions nonzero unless
|
||||
# you actively intend to use prepared transactions.
|
||||
work_mem = 1GB # min 64kB
|
||||
#hash_mem_multiplier = 2.0 # 1-1000.0 multiplier on hash table work_mem
|
||||
maintenance_work_mem = 8GB # min 1MB
|
||||
#autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem
|
||||
#logical_decoding_work_mem = 64MB # min 64kB
|
||||
#max_stack_depth = 2MB # min 100kB
|
||||
#shared_memory_type = mmap # the default is the first option
|
||||
# supported by the operating system:
|
||||
# mmap
|
||||
# sysv
|
||||
# windows
|
||||
# (change requires restart)
|
||||
dynamic_shared_memory_type = posix # the default is usually the first option
|
||||
# supported by the operating system:
|
||||
# posix
|
||||
# sysv
|
||||
# windows
|
||||
# mmap
|
||||
# (change requires restart)
|
||||
#min_dynamic_shared_memory = 0MB # (change requires restart)
|
||||
|
||||
# - Disk -
|
||||
|
||||
#temp_file_limit = -1 # limits per-process temp file space
|
||||
# in kilobytes, or -1 for no limit
|
||||
|
||||
# - Kernel Resources -
|
||||
|
||||
#max_files_per_process = 1000 # min 64
|
||||
# (change requires restart)
|
||||
|
||||
# - Cost-Based Vacuum Delay -
|
||||
|
||||
#vacuum_cost_delay = 0 # 0-100 milliseconds (0 disables)
|
||||
#vacuum_cost_page_hit = 1 # 0-10000 credits
|
||||
#vacuum_cost_page_miss = 2 # 0-10000 credits
|
||||
#vacuum_cost_page_dirty = 20 # 0-10000 credits
|
||||
#vacuum_cost_limit = 200 # 1-10000 credits
|
||||
|
||||
# - Background Writer -
|
||||
|
||||
#bgwriter_delay = 200ms # 10-10000ms between rounds
|
||||
#bgwriter_lru_maxpages = 100 # max buffers written/round, 0 disables
|
||||
#bgwriter_lru_multiplier = 2.0 # 0-10.0 multiplier on buffers scanned/round
|
||||
#bgwriter_flush_after = 0 # measured in pages, 0 disables
|
||||
|
||||
# - Asynchronous Behavior -
|
||||
|
||||
#backend_flush_after = 0 # measured in pages, 0 disables
|
||||
effective_io_concurrency = 30 #0 # 1-1000; 0 disables prefetching
|
||||
#maintenance_io_concurrency = 10 # 1-1000; 0 disables prefetching
|
||||
#max_worker_processes = 8 # (change requires restart)
|
||||
#max_parallel_workers_per_gather = 2 # limited by max_parallel_workers
|
||||
#max_parallel_maintenance_workers = 2 # limited by max_parallel_workers
|
||||
#max_parallel_workers = 8 # number of max_worker_processes that
|
||||
# can be used in parallel operations
|
||||
#parallel_leader_participation = on
|
||||
#old_snapshot_threshold = -1 # 1min-60d; -1 disables; 0 is immediate
|
||||
# (change requires restart)
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# WRITE-AHEAD LOG
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
# - Settings -
|
||||
|
||||
#wal_level = replica # minimal, replica, or logical
|
||||
# (change requires restart)
|
||||
fsync = off # flush data to disk for crash safety
|
||||
# (turning this off can cause
|
||||
# unrecoverable data corruption)
|
||||
synchronous_commit = off # synchronization level;
|
||||
# off, local, remote_write, remote_apply, or on
|
||||
#wal_sync_method = fsync # the default is the first option
|
||||
# supported by the operating system:
|
||||
# open_datasync
|
||||
# fdatasync (default on Linux and FreeBSD)
|
||||
# fsync
|
||||
# fsync_writethrough
|
||||
# open_sync
|
||||
#full_page_writes = on # recover from partial page writes
|
||||
#wal_log_hints = off # also do full page writes of non-critical updates
|
||||
# (change requires restart)
|
||||
#wal_compression = off # enables compression of full-page writes;
|
||||
# off, pglz, lz4, zstd, or on
|
||||
#wal_init_zero = on # zero-fill new WAL files
|
||||
#wal_recycle = on # recycle WAL files
|
||||
wal_buffers = 16MB #-1 # min 32kB, -1 sets based on shared_buffers
|
||||
# (change requires restart)
|
||||
#wal_writer_delay = 200ms # 1-10000 milliseconds
|
||||
#wal_writer_flush_after = 1MB # measured in pages, 0 disables
|
||||
#wal_skip_threshold = 2MB
|
||||
|
||||
#commit_delay = 0 # range 0-100000, in microseconds
|
||||
#commit_siblings = 5 # range 1-1000
|
||||
|
||||
# - Checkpoints -
|
||||
|
||||
#checkpoint_timeout = 5min # range 30s-1d
|
||||
#checkpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0
|
||||
#checkpoint_flush_after = 0 # measured in pages, 0 disables
|
||||
#checkpoint_warning = 30s # 0 disables
|
||||
max_wal_size = 1GB
|
||||
min_wal_size = 80MB
|
||||
|
||||
# - Prefetching during recovery -
|
||||
|
||||
#recovery_prefetch = try # prefetch pages referenced in the WAL?
|
||||
#wal_decode_buffer_size = 512kB # lookahead window used for prefetching
|
||||
# (change requires restart)
|
||||
|
||||
# - Archiving -
|
||||
|
||||
#archive_mode = off # enables archiving; off, on, or always
|
||||
# (change requires restart)
|
||||
#archive_library = '' # library to use to archive a logfile segment
|
||||
# (empty string indicates archive_command should
|
||||
# be used)
|
||||
#archive_command = '' # command to use to archive a logfile segment
|
||||
# placeholders: %p = path of file to archive
|
||||
# %f = file name only
|
||||
# e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
|
||||
#archive_timeout = 0 # force a logfile segment switch after this
|
||||
# number of seconds; 0 disables
|
||||
|
||||
# - Archive Recovery -
|
||||
|
||||
# These are only used in recovery mode.
|
||||
|
||||
#restore_command = '' # command to use to restore an archived logfile segment
|
||||
# placeholders: %p = path of file to restore
|
||||
# %f = file name only
|
||||
# e.g. 'cp /mnt/server/archivedir/%f %p'
|
||||
#archive_cleanup_command = '' # command to execute at every restartpoint
|
||||
#recovery_end_command = '' # command to execute at completion of recovery
|
||||
|
||||
# - Recovery Target -
|
||||
|
||||
# Set these only when performing a targeted recovery.
|
||||
|
||||
#recovery_target = '' # 'immediate' to end recovery as soon as a
|
||||
# consistent state is reached
|
||||
# (change requires restart)
|
||||
#recovery_target_name = '' # the named restore point to which recovery will proceed
|
||||
# (change requires restart)
|
||||
#recovery_target_time = '' # the time stamp up to which recovery will proceed
|
||||
# (change requires restart)
|
||||
#recovery_target_xid = '' # the transaction ID up to which recovery will proceed
|
||||
# (change requires restart)
|
||||
#recovery_target_lsn = '' # the WAL LSN up to which recovery will proceed
|
||||
# (change requires restart)
|
||||
#recovery_target_inclusive = on # Specifies whether to stop:
|
||||
# just after the specified recovery target (on)
|
||||
# just before the recovery target (off)
|
||||
# (change requires restart)
|
||||
#recovery_target_timeline = 'latest' # 'current', 'latest', or timeline ID
|
||||
# (change requires restart)
|
||||
#recovery_target_action = 'pause' # 'pause', 'promote', 'shutdown'
|
||||
# (change requires restart)
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# REPLICATION
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
# - Sending Servers -
|
||||
|
||||
# Set these on the primary and on any standby that will send replication data.
|
||||
|
||||
#max_wal_senders = 10 # max number of walsender processes
|
||||
# (change requires restart)
|
||||
#max_replication_slots = 10 # max number of replication slots
|
||||
# (change requires restart)
|
||||
#wal_keep_size = 0 # in megabytes; 0 disables
|
||||
#max_slot_wal_keep_size = -1 # in megabytes; -1 disables
|
||||
#wal_sender_timeout = 60s # in milliseconds; 0 disables
|
||||
#track_commit_timestamp = off # collect timestamp of transaction commit
|
||||
# (change requires restart)
|
||||
|
||||
# - Primary Server -
|
||||
|
||||
# These settings are ignored on a standby server.
|
||||
|
||||
#synchronous_standby_names = '' # standby servers that provide sync rep
|
||||
# method to choose sync standbys, number of sync standbys,
|
||||
# and comma-separated list of application_name
|
||||
# from standby(s); '*' = all
|
||||
#vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is delayed
|
||||
|
||||
# - Standby Servers -
|
||||
|
||||
# These settings are ignored on a primary server.
|
||||
|
||||
#primary_conninfo = '' # connection string to sending server
|
||||
#primary_slot_name = '' # replication slot on sending server
|
||||
#promote_trigger_file = '' # file name whose presence ends recovery
|
||||
#hot_standby = on # "off" disallows queries during recovery
|
||||
# (change requires restart)
|
||||
#max_standby_archive_delay = 30s # max delay before canceling queries
|
||||
# when reading WAL from archive;
|
||||
# -1 allows indefinite delay
|
||||
#max_standby_streaming_delay = 30s # max delay before canceling queries
|
||||
# when reading streaming WAL;
|
||||
# -1 allows indefinite delay
|
||||
#wal_receiver_create_temp_slot = off # create temp slot if primary_slot_name
|
||||
# is not set
|
||||
#wal_receiver_status_interval = 10s # send replies at least this often
|
||||
# 0 disables
|
||||
#hot_standby_feedback = off # send info from standby to prevent
|
||||
# query conflicts
|
||||
#wal_receiver_timeout = 60s # time that receiver waits for
|
||||
# communication from primary
|
||||
# in milliseconds; 0 disables
|
||||
#wal_retrieve_retry_interval = 5s # time to wait before retrying to
|
||||
# retrieve WAL after a failed attempt
|
||||
#recovery_min_apply_delay = 0 # minimum delay for applying changes during recovery
|
||||
|
||||
# - Subscribers -
|
||||
|
||||
# These settings are ignored on a publisher.
|
||||
|
||||
#max_logical_replication_workers = 4 # taken from max_worker_processes
|
||||
# (change requires restart)
|
||||
#max_sync_workers_per_subscription = 2 # taken from max_logical_replication_workers
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# QUERY TUNING
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
# - Planner Method Configuration -
|
||||
|
||||
#enable_async_append = on
|
||||
#enable_bitmapscan = on
|
||||
#enable_gathermerge = on
|
||||
#enable_hashagg = on
|
||||
#enable_hashjoin = on
|
||||
#enable_incremental_sort = on
|
||||
#enable_indexscan = on
|
||||
#enable_indexonlyscan = on
|
||||
#enable_material = on
|
||||
#enable_memoize = on
|
||||
#enable_mergejoin = on
|
||||
#enable_nestloop = on
|
||||
#enable_parallel_append = on
|
||||
#enable_parallel_hash = on
|
||||
#enable_partition_pruning = on
|
||||
#enable_partitionwise_join = off
|
||||
#enable_partitionwise_aggregate = off
|
||||
#enable_seqscan = on
|
||||
#enable_sort = on
|
||||
#enable_tidscan = on
|
||||
|
||||
# - Planner Cost Constants -
|
||||
|
||||
#seq_page_cost = 1.0 # measured on an arbitrary scale
|
||||
random_page_cost = 1.1 #4.0 # same scale as above
|
||||
#cpu_tuple_cost = 0.01 # same scale as above
|
||||
#cpu_index_tuple_cost = 0.005 # same scale as above
|
||||
#cpu_operator_cost = 0.0025 # same scale as above
|
||||
#parallel_setup_cost = 1000.0 # same scale as above
|
||||
#parallel_tuple_cost = 0.1 # same scale as above
|
||||
#min_parallel_table_scan_size = 8MB
|
||||
#min_parallel_index_scan_size = 512kB
|
||||
effective_cache_size = 20GB
|
||||
|
||||
#jit_above_cost = 100000 # perform JIT compilation if available
|
||||
# and query more expensive than this;
|
||||
# -1 disables
|
||||
#jit_inline_above_cost = 500000 # inline small functions if query is
|
||||
# more expensive than this; -1 disables
|
||||
#jit_optimize_above_cost = 500000 # use expensive JIT optimizations if
|
||||
# query is more expensive than this;
|
||||
# -1 disables
|
||||
|
||||
# - Genetic Query Optimizer -
|
||||
|
||||
#geqo = on
|
||||
#geqo_threshold = 12
|
||||
#geqo_effort = 5 # range 1-10
|
||||
#geqo_pool_size = 0 # selects default based on effort
|
||||
#geqo_generations = 0 # selects default based on effort
|
||||
#geqo_selection_bias = 2.0 # range 1.5-2.0
|
||||
#geqo_seed = 0.0 # range 0.0-1.0
|
||||
|
||||
# - Other Planner Options -
|
||||
|
||||
#default_statistics_target = 100 # range 1-10000
|
||||
#constraint_exclusion = partition # on, off, or partition
|
||||
#cursor_tuple_fraction = 0.1 # range 0.0-1.0
|
||||
#from_collapse_limit = 8
|
||||
#jit = on # allow JIT compilation
|
||||
#join_collapse_limit = 8 # 1 disables collapsing of explicit
|
||||
# JOIN clauses
|
||||
#plan_cache_mode = auto # auto, force_generic_plan or
|
||||
# force_custom_plan
|
||||
#recursive_worktable_factor = 10.0 # range 0.001-1000000
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# REPORTING AND LOGGING
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
# - Where to Log -
|
||||
|
||||
#log_destination = 'stderr' # Valid values are combinations of
|
||||
# stderr, csvlog, jsonlog, syslog, and
|
||||
# eventlog, depending on platform.
|
||||
# csvlog and jsonlog require
|
||||
# logging_collector to be on.
|
||||
|
||||
# This is used when logging to stderr:
|
||||
#logging_collector = off # Enable capturing of stderr, jsonlog,
|
||||
# and csvlog into log files. Required
|
||||
# to be on for csvlogs and jsonlogs.
|
||||
# (change requires restart)
|
||||
|
||||
# These are only used if logging_collector is on:
|
||||
#log_directory = 'log' # directory where log files are written,
|
||||
# can be absolute or relative to PGDATA
|
||||
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
|
||||
# can include strftime() escapes
|
||||
#log_file_mode = 0600 # creation mode for log files,
|
||||
# begin with 0 to use octal notation
|
||||
#log_rotation_age = 1d # Automatic rotation of logfiles will
|
||||
# happen after that time. 0 disables.
|
||||
#log_rotation_size = 10MB # Automatic rotation of logfiles will
|
||||
# happen after that much log output.
|
||||
# 0 disables.
|
||||
#log_truncate_on_rotation = off # If on, an existing log file with the
|
||||
# same name as the new log file will be
|
||||
# truncated rather than appended to.
|
||||
# But such truncation only occurs on
|
||||
# time-driven rotation, not on restarts
|
||||
# or size-driven rotation. Default is
|
||||
# off, meaning append to existing files
|
||||
# in all cases.
|
||||
|
||||
# These are relevant when logging to syslog:
|
||||
#syslog_facility = 'LOCAL0'
|
||||
#syslog_ident = 'postgres'
|
||||
#syslog_sequence_numbers = on
|
||||
#syslog_split_messages = on
|
||||
|
||||
# This is only relevant when logging to eventlog (Windows):
|
||||
# (change requires restart)
|
||||
#event_source = 'PostgreSQL'
|
||||
|
||||
# - When to Log -
|
||||
|
||||
#log_min_messages = warning # values in order of decreasing detail:
|
||||
# debug5
|
||||
# debug4
|
||||
# debug3
|
||||
# debug2
|
||||
# debug1
|
||||
# info
|
||||
# notice
|
||||
# warning
|
||||
# error
|
||||
# log
|
||||
# fatal
|
||||
# panic
|
||||
|
||||
#log_min_error_statement = error # values in order of decreasing detail:
|
||||
# debug5
|
||||
# debug4
|
||||
# debug3
|
||||
# debug2
|
||||
# debug1
|
||||
# info
|
||||
# notice
|
||||
# warning
|
||||
# error
|
||||
# log
|
||||
# fatal
|
||||
# panic (effectively off)
|
||||
|
||||
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
|
||||
# and their durations, > 0 logs only
|
||||
# statements running at least this number
|
||||
# of milliseconds
|
||||
|
||||
#log_min_duration_sample = -1 # -1 is disabled, 0 logs a sample of statements
|
||||
# and their durations, > 0 logs only a sample of
|
||||
# statements running at least this number
|
||||
# of milliseconds;
|
||||
# sample fraction is determined by log_statement_sample_rate
|
||||
|
||||
#log_statement_sample_rate = 1.0 # fraction of logged statements exceeding
|
||||
# log_min_duration_sample to be logged;
|
||||
# 1.0 logs all such statements, 0.0 never logs
|
||||
|
||||
|
||||
#log_transaction_sample_rate = 0.0 # fraction of transactions whose statements
|
||||
# are logged regardless of their duration; 1.0 logs all
|
||||
# statements from all transactions, 0.0 never logs
|
||||
|
||||
#log_startup_progress_interval = 10s # Time between progress updates for
|
||||
# long-running startup operations.
|
||||
# 0 disables the feature, > 0 indicates
|
||||
# the interval in milliseconds.
|
||||
|
||||
# - What to Log -
|
||||
|
||||
#debug_print_parse = off
|
||||
#debug_print_rewritten = off
|
||||
#debug_print_plan = off
|
||||
#debug_pretty_print = on
|
||||
#log_autovacuum_min_duration = 10min # log autovacuum activity;
|
||||
# -1 disables, 0 logs all actions and
|
||||
# their durations, > 0 logs only
|
||||
# actions running at least this number
|
||||
# of milliseconds.
|
||||
#log_checkpoints = on
|
||||
#log_connections = off
|
||||
#log_disconnections = off
|
||||
#log_duration = off
|
||||
#log_error_verbosity = default # terse, default, or verbose messages
|
||||
#log_hostname = off
|
||||
#log_line_prefix = '%m [%p] ' # special values:
|
||||
# %a = application name
|
||||
# %u = user name
|
||||
# %d = database name
|
||||
# %r = remote host and port
|
||||
# %h = remote host
|
||||
# %b = backend type
|
||||
# %p = process ID
|
||||
# %P = process ID of parallel group leader
|
||||
# %t = timestamp without milliseconds
|
||||
# %m = timestamp with milliseconds
|
||||
# %n = timestamp with milliseconds (as a Unix epoch)
|
||||
# %Q = query ID (0 if none or not computed)
|
||||
# %i = command tag
|
||||
# %e = SQL state
|
||||
# %c = session ID
|
||||
# %l = session line number
|
||||
# %s = session start timestamp
|
||||
# %v = virtual transaction ID
|
||||
# %x = transaction ID (0 if none)
|
||||
# %q = stop here in non-session
|
||||
# processes
|
||||
# %% = '%'
|
||||
# e.g. '<%u%%%d> '
|
||||
#log_lock_waits = off # log lock waits >= deadlock_timeout
|
||||
#log_recovery_conflict_waits = off # log standby recovery conflict waits
|
||||
# >= deadlock_timeout
|
||||
#log_parameter_max_length = -1 # when logging statements, limit logged
|
||||
# bind-parameter values to N bytes;
|
||||
# -1 means print in full, 0 disables
|
||||
#log_parameter_max_length_on_error = 0 # when logging an error, limit logged
|
||||
# bind-parameter values to N bytes;
|
||||
# -1 means print in full, 0 disables
|
||||
#log_statement = 'none' # none, ddl, mod, all
|
||||
#log_replication_commands = off
|
||||
#log_temp_files = -1 # log temporary files equal or larger
|
||||
# than the specified size in kilobytes;
|
||||
# -1 disables, 0 logs all temp files
|
||||
log_timezone = 'Europe/Oslo'
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# PROCESS TITLE
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
#cluster_name = '' # added to process titles if nonempty
|
||||
# (change requires restart)
|
||||
#update_process_title = on
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# STATISTICS
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
# - Cumulative Query and Index Statistics -
|
||||
|
||||
#track_activities = on
|
||||
#track_activity_query_size = 1024 # (change requires restart)
|
||||
track_counts = on
|
||||
#track_io_timing = off
|
||||
#track_wal_io_timing = off
|
||||
#track_functions = none # none, pl, all
|
||||
#stats_fetch_consistency = cache
|
||||
|
||||
|
||||
# - Monitoring -
|
||||
|
||||
#compute_query_id = auto
|
||||
#log_statement_stats = off
|
||||
#log_parser_stats = off
|
||||
#log_planner_stats = off
|
||||
#log_executor_stats = off
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# AUTOVACUUM
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
autovacuum = on # Enable autovacuum subprocess? 'on'
|
||||
# requires track_counts to also be on.
|
||||
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
|
||||
# (change requires restart)
|
||||
#autovacuum_naptime = 1min # time between autovacuum runs
|
||||
#autovacuum_vacuum_threshold = 50 # min number of row updates before
|
||||
# vacuum
|
||||
#autovacuum_vacuum_insert_threshold = 1000 # min number of row inserts
|
||||
# before vacuum; -1 disables insert
|
||||
# vacuums
|
||||
#autovacuum_analyze_threshold = 50 # min number of row updates before
|
||||
# analyze
|
||||
#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum
|
||||
#autovacuum_vacuum_insert_scale_factor = 0.2 # fraction of inserts over table
|
||||
# size before insert vacuum
|
||||
#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze
|
||||
#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
|
||||
# (change requires restart)
|
||||
#autovacuum_multixact_freeze_max_age = 400000000 # maximum multixact age
|
||||
# before forced vacuum
|
||||
# (change requires restart)
|
||||
#autovacuum_vacuum_cost_delay = 2ms # default vacuum cost delay for
|
||||
# autovacuum, in milliseconds;
|
||||
# -1 means use vacuum_cost_delay
|
||||
#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for
|
||||
# autovacuum, -1 means use
|
||||
# vacuum_cost_limit
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# CLIENT CONNECTION DEFAULTS
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
# - Statement Behavior -
|
||||
|
||||
#client_min_messages = notice # values in order of decreasing detail:
|
||||
# debug5
|
||||
# debug4
|
||||
# debug3
|
||||
# debug2
|
||||
# debug1
|
||||
# log
|
||||
# notice
|
||||
# warning
|
||||
# error
|
||||
#search_path = '"$user", public' # schema names
|
||||
#row_security = on
|
||||
#default_table_access_method = 'heap'
|
||||
#default_tablespace = '' # a tablespace name, '' uses the default
|
||||
#default_toast_compression = 'pglz' # 'pglz' or 'lz4'
|
||||
#temp_tablespaces = '' # a list of tablespace names, '' uses
|
||||
# only default tablespace
|
||||
#check_function_bodies = on
|
||||
#default_transaction_isolation = 'read committed'
|
||||
#default_transaction_read_only = off
|
||||
#default_transaction_deferrable = off
|
||||
#session_replication_role = 'origin'
|
||||
#statement_timeout = 0 # in milliseconds, 0 is disabled
|
||||
#lock_timeout = 0 # in milliseconds, 0 is disabled
|
||||
#idle_in_transaction_session_timeout = 0 # in milliseconds, 0 is disabled
|
||||
#idle_session_timeout = 0 # in milliseconds, 0 is disabled
|
||||
#vacuum_freeze_table_age = 150000000
|
||||
#vacuum_freeze_min_age = 50000000
|
||||
#vacuum_failsafe_age = 1600000000
|
||||
#vacuum_multixact_freeze_table_age = 150000000
|
||||
#vacuum_multixact_freeze_min_age = 5000000
|
||||
#vacuum_multixact_failsafe_age = 1600000000
|
||||
#bytea_output = 'hex' # hex, escape
|
||||
#xmlbinary = 'base64'
|
||||
#xmloption = 'content'
|
||||
#gin_pending_list_limit = 4MB
|
||||
|
||||
# - Locale and Formatting -
|
||||
|
||||
datestyle = 'iso, mdy'
|
||||
#intervalstyle = 'postgres'
|
||||
timezone = 'Europe/Oslo'
|
||||
#timezone_abbreviations = 'Default' # Select the set of available time zone
|
||||
# abbreviations. Currently, there are
|
||||
# Default
|
||||
# Australia (historical usage)
|
||||
# India
|
||||
# You can create your own file in
|
||||
# share/timezonesets/.
|
||||
#extra_float_digits = 1 # min -15, max 3; any value >0 actually
|
||||
# selects precise output mode
|
||||
#client_encoding = sql_ascii # actually, defaults to database
|
||||
# encoding
|
||||
|
||||
# These settings are initialized by initdb, but they can be changed.
|
||||
lc_messages = 'C' # locale for system error message
|
||||
# strings
|
||||
lc_monetary = 'C' # locale for monetary formatting
|
||||
lc_numeric = 'C' # locale for number formatting
|
||||
lc_time = 'C' # locale for time formatting
|
||||
|
||||
# default configuration for text search
|
||||
default_text_search_config = 'pg_catalog.english'
|
||||
|
||||
# - Shared Library Preloading -
|
||||
|
||||
#local_preload_libraries = ''
|
||||
#session_preload_libraries = ''
|
||||
#shared_preload_libraries = '' # (change requires restart)
|
||||
#jit_provider = 'llvmjit' # JIT library to use
|
||||
|
||||
# - Other Defaults -
|
||||
|
||||
#dynamic_library_path = '$libdir'
|
||||
#gin_fuzzy_search_limit = 0
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# LOCK MANAGEMENT
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
#deadlock_timeout = 1s
|
||||
#max_locks_per_transaction = 64 # min 10
|
||||
# (change requires restart)
|
||||
#max_pred_locks_per_transaction = 64 # min 10
|
||||
# (change requires restart)
|
||||
#max_pred_locks_per_relation = -2 # negative values mean
|
||||
# (max_pred_locks_per_transaction
|
||||
# / -max_pred_locks_per_relation) - 1
|
||||
#max_pred_locks_per_page = 2 # min 0
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# VERSION AND PLATFORM COMPATIBILITY
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
# - Previous PostgreSQL Versions -
|
||||
|
||||
#array_nulls = on
|
||||
#backslash_quote = safe_encoding # on, off, or safe_encoding
|
||||
#escape_string_warning = on
|
||||
#lo_compat_privileges = off
|
||||
#quote_all_identifiers = off
|
||||
#standard_conforming_strings = on
|
||||
#synchronize_seqscans = on
|
||||
|
||||
# - Other Platforms and Clients -
|
||||
|
||||
#transform_null_equals = off
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# ERROR HANDLING
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
#exit_on_error = off # terminate session on any error?
|
||||
#restart_after_crash = on # reinitialize after backend crash?
|
||||
#data_sync_retry = off # retry or panic on failure to fsync
|
||||
# data?
|
||||
# (change requires restart)
|
||||
#recovery_init_sync_method = fsync # fsync, syncfs (Linux 5.8+)
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# CONFIG FILE INCLUDES
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
# These options allow settings to be loaded from files other than the
|
||||
# default postgresql.conf. Note that these are directives, not variable
|
||||
# assignments, so they can usefully be given more than once.
|
||||
|
||||
#include_dir = '...' # include files ending in '.conf' from
|
||||
# a directory, e.g., 'conf.d'
|
||||
#include_if_exists = '...' # include file only if it exists
|
||||
#include = '...' # include file
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# CUSTOMIZED OPTIONS
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
# Add settings for extensions here
|
|
@ -1,4 +1,40 @@
|
|||
generalized_tables:
|
||||
# etldoc: osm_water_polygon_gen_z1 -> osm_water_polygon_gen_z0
|
||||
water_polygon_gen_z0:
|
||||
source: water_polygon_gen_z1
|
||||
sql_filter: area>power(ZRES0,2)
|
||||
tolerance: ZRES1
|
||||
|
||||
# etldoc: osm_water_polygon_gen_z2 -> osm_water_polygon_gen_z1
|
||||
water_polygon_gen_z1:
|
||||
source: water_polygon_gen_z2
|
||||
sql_filter: area>power(ZRES0,2)
|
||||
tolerance: ZRES2
|
||||
|
||||
# etldoc: osm_water_polygon_gen_z3 -> osm_water_polygon_gen_z2
|
||||
water_polygon_gen_z2:
|
||||
source: water_polygon_gen_z3
|
||||
sql_filter: area>power(ZRES1,2)
|
||||
tolerance: ZRES3
|
||||
|
||||
# etldoc: osm_water_polygon_gen_z4 -> osm_water_polygon_gen_z3
|
||||
water_polygon_gen_z3:
|
||||
source: water_polygon_gen_z4
|
||||
sql_filter: area>power(ZRES2,2)
|
||||
tolerance: ZRES4
|
||||
|
||||
# etldoc: osm_water_polygon_gen_z5 -> osm_water_polygon_gen_z4
|
||||
water_polygon_gen_z4:
|
||||
source: water_polygon_gen_z5
|
||||
sql_filter: area>power(ZRES3,2)
|
||||
tolerance: ZRES5
|
||||
|
||||
# etldoc: osm_water_polygon_gen_z6 -> osm_water_polygon_gen_z5
|
||||
water_polygon_gen_z5:
|
||||
source: water_polygon_gen_z6
|
||||
sql_filter: area>power(ZRES4,2)
|
||||
tolerance: ZRES6
|
||||
|
||||
# etldoc: osm_water_polygon_gen_z7 -> osm_water_polygon_gen_z6
|
||||
water_polygon_gen_z6:
|
||||
source: water_polygon_gen_z7
|
||||
|
@ -45,73 +81,72 @@ bridge_field: &bridge
|
|||
type: bool
|
||||
|
||||
tables:
|
||||
|
||||
# etldoc: imposm3 -> osm_water_polygon
|
||||
water_polygon:
|
||||
columns:
|
||||
- name: osm_id
|
||||
type: id
|
||||
- name: geometry
|
||||
type: validated_geometry
|
||||
- name: area
|
||||
type: area
|
||||
- key: name
|
||||
name: name
|
||||
type: string
|
||||
- name: name_en
|
||||
key: name:en
|
||||
type: string
|
||||
- name: name_de
|
||||
key: name:de
|
||||
type: string
|
||||
- name: tags
|
||||
type: hstore_tags
|
||||
- name: place
|
||||
key: place
|
||||
type: string
|
||||
- name: natural
|
||||
key: natural
|
||||
type: string
|
||||
- name: landuse
|
||||
key: landuse
|
||||
type: string
|
||||
- name: waterway
|
||||
key: waterway
|
||||
type: string
|
||||
- name: leisure
|
||||
key: leisure
|
||||
type: string
|
||||
- name: water
|
||||
key: water
|
||||
type: string
|
||||
- name: is_intermittent
|
||||
key: intermittent
|
||||
type: bool
|
||||
- *tunnel
|
||||
- *bridge
|
||||
- name: osm_id
|
||||
type: id
|
||||
- name: geometry
|
||||
type: validated_geometry
|
||||
- name: area
|
||||
type: area
|
||||
- key: name
|
||||
name: name
|
||||
type: string
|
||||
- name: name_en
|
||||
key: name:en
|
||||
type: string
|
||||
- name: name_de
|
||||
key: name:de
|
||||
type: string
|
||||
- name: tags
|
||||
type: hstore_tags
|
||||
- name: place
|
||||
key: place
|
||||
type: string
|
||||
- name: natural
|
||||
key: natural
|
||||
type: string
|
||||
- name: landuse
|
||||
key: landuse
|
||||
type: string
|
||||
- name: waterway
|
||||
key: waterway
|
||||
type: string
|
||||
- name: leisure
|
||||
key: leisure
|
||||
type: string
|
||||
- name: water
|
||||
key: water
|
||||
type: string
|
||||
- name: is_intermittent
|
||||
key: intermittent
|
||||
type: bool
|
||||
- *tunnel
|
||||
- *bridge
|
||||
filters:
|
||||
reject:
|
||||
covered: ["yes"]
|
||||
mapping:
|
||||
landuse:
|
||||
- reservoir
|
||||
- basin
|
||||
- salt_pond
|
||||
- reservoir
|
||||
- basin
|
||||
- salt_pond
|
||||
leisure:
|
||||
- swimming_pool
|
||||
- swimming_pool
|
||||
natural:
|
||||
- water
|
||||
- bay
|
||||
- spring
|
||||
- water
|
||||
- bay
|
||||
- spring
|
||||
waterway:
|
||||
- dock
|
||||
- dock
|
||||
water:
|
||||
- river
|
||||
- stream
|
||||
- canal
|
||||
- ditch
|
||||
- drain
|
||||
- pond
|
||||
- basin
|
||||
- wastewater
|
||||
- river
|
||||
- stream
|
||||
- canal
|
||||
- ditch
|
||||
- drain
|
||||
- pond
|
||||
- basin
|
||||
- wastewater
|
||||
type: polygon
|

@@ -21,7 +21,7 @@
1
]
],
"order": 17
"order": 1
},
{
"id": "water",

@@ -48,7 +48,7 @@
"tunnel"
]
],
"order": 18
"order": 2
}
]
}
@ -140,3 +140,124 @@ SELECT ST_Simplify(geometry, ZRes(8)) AS geometry
|
|||
FROM osm_ocean_polygon_gen_z7
|
||||
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
|
||||
CREATE INDEX IF NOT EXISTS osm_ocean_polygon_gen_z6_idx ON osm_ocean_polygon_gen_z6 USING gist (geometry);
|
||||
|
||||
|
||||
-- This statement can be deleted after the water importer image stops creating this object as a table
|
||||
DO
|
||||
$$
|
||||
BEGIN
|
||||
DROP TABLE IF EXISTS osm_ocean_polygon_gen_z5 CASCADE;
|
||||
EXCEPTION
|
||||
WHEN wrong_object_type THEN
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
|
||||
-- etldoc: osm_ocean_polygon_gen_z6 -> osm_ocean_polygon_gen_z5
|
||||
DROP MATERIALIZED VIEW IF EXISTS osm_ocean_polygon_gen_z5 CASCADE;
|
||||
CREATE MATERIALIZED VIEW osm_ocean_polygon_gen_z5 AS
|
||||
(
|
||||
SELECT ST_Simplify(geometry, ZRes(7)) AS geometry
|
||||
FROM osm_ocean_polygon_gen_z6
|
||||
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
|
||||
CREATE INDEX IF NOT EXISTS osm_ocean_polygon_gen_z5_idx ON osm_ocean_polygon_gen_z5 USING gist (geometry);
|
||||
|
||||
|
||||
-- This statement can be deleted after the water importer image stops creating this object as a table
|
||||
DO
|
||||
$$
|
||||
BEGIN
|
||||
DROP TABLE IF EXISTS osm_ocean_polygon_gen_z4 CASCADE;
|
||||
EXCEPTION
|
||||
WHEN wrong_object_type THEN
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
|
||||
-- etldoc: osm_ocean_polygon_gen_z5 -> osm_ocean_polygon_gen_z4
|
||||
DROP MATERIALIZED VIEW IF EXISTS osm_ocean_polygon_gen_z4 CASCADE;
|
||||
CREATE MATERIALIZED VIEW osm_ocean_polygon_gen_z4 AS
|
||||
(
|
||||
SELECT ST_Simplify(geometry, ZRes(6)) AS geometry
|
||||
FROM osm_ocean_polygon_gen_z5
|
||||
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
|
||||
CREATE INDEX IF NOT EXISTS osm_ocean_polygon_gen_z4_idx ON osm_ocean_polygon_gen_z4 USING gist (geometry);
|
||||
|
||||
|
||||
-- This statement can be deleted after the water importer image stops creating this object as a table
|
||||
DO
|
||||
$$
|
||||
BEGIN
|
||||
DROP TABLE IF EXISTS osm_ocean_polygon_gen_z3 CASCADE;
|
||||
EXCEPTION
|
||||
WHEN wrong_object_type THEN
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
|
||||
-- etldoc: osm_ocean_polygon_gen_z4 -> osm_ocean_polygon_gen_z3
|
||||
DROP MATERIALIZED VIEW IF EXISTS osm_ocean_polygon_gen_z3 CASCADE;
|
||||
CREATE MATERIALIZED VIEW osm_ocean_polygon_gen_z3 AS
|
||||
(
|
||||
SELECT ST_Simplify(geometry, ZRes(5)) AS geometry
|
||||
FROM osm_ocean_polygon_gen_z4
|
||||
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
|
||||
CREATE INDEX IF NOT EXISTS osm_ocean_polygon_gen_z3_idx ON osm_ocean_polygon_gen_z3 USING gist (geometry);
|
||||
|
||||
|
||||
-- This statement can be deleted after the water importer image stops creating this object as a table
|
||||
DO
|
||||
$$
|
||||
BEGIN
|
||||
DROP TABLE IF EXISTS osm_ocean_polygon_gen_z2 CASCADE;
|
||||
EXCEPTION
|
||||
WHEN wrong_object_type THEN
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
|
||||
-- etldoc: osm_ocean_polygon_gen_z3 -> osm_ocean_polygon_gen_z2
|
||||
DROP MATERIALIZED VIEW IF EXISTS osm_ocean_polygon_gen_z2 CASCADE;
|
||||
CREATE MATERIALIZED VIEW osm_ocean_polygon_gen_z2 AS
|
||||
(
|
||||
SELECT ST_Simplify(geometry, ZRes(4)) AS geometry
|
||||
FROM osm_ocean_polygon_gen_z3
|
||||
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
|
||||
CREATE INDEX IF NOT EXISTS osm_ocean_polygon_gen_z2_idx ON osm_ocean_polygon_gen_z2 USING gist (geometry);
|
||||
|
||||
|
||||
-- This statement can be deleted after the water importer image stops creating this object as a table
|
||||
DO
|
||||
$$
|
||||
BEGIN
|
||||
DROP TABLE IF EXISTS osm_ocean_polygon_gen_z1 CASCADE;
|
||||
EXCEPTION
|
||||
WHEN wrong_object_type THEN
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
|
||||
-- etldoc: osm_ocean_polygon_gen_z2 -> osm_ocean_polygon_gen_z1
|
||||
DROP MATERIALIZED VIEW IF EXISTS osm_ocean_polygon_gen_z1 CASCADE;
|
||||
CREATE MATERIALIZED VIEW osm_ocean_polygon_gen_z1 AS
|
||||
(
|
||||
SELECT ST_Simplify(geometry, ZRes(3)) AS geometry
|
||||
FROM osm_ocean_polygon_gen_z2
|
||||
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
|
||||
CREATE INDEX IF NOT EXISTS osm_ocean_polygon_gen_z1_idx ON osm_ocean_polygon_gen_z1 USING gist (geometry);
|
||||
|
||||
|
||||
-- This statement can be deleted after the water importer image stops creating this object as a table
|
||||
DO
|
||||
$$
|
||||
BEGIN
|
||||
DROP TABLE IF EXISTS osm_ocean_polygon_gen_z0 CASCADE;
|
||||
EXCEPTION
|
||||
WHEN wrong_object_type THEN
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
|
||||
-- etldoc: osm_ocean_polygon_gen_z1 -> osm_ocean_polygon_gen_z0
|
||||
DROP MATERIALIZED VIEW IF EXISTS osm_ocean_polygon_gen_z0 CASCADE;
|
||||
CREATE MATERIALIZED VIEW osm_ocean_polygon_gen_z0 AS
|
||||
(
|
||||
SELECT ST_Simplify(geometry, ZRes(2)) AS geometry
|
||||
FROM osm_ocean_polygon_gen_z1
|
||||
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
|
||||
CREATE INDEX IF NOT EXISTS osm_ocean_polygon_gen_z0_idx ON osm_ocean_polygon_gen_z0 USING gist (geometry);
|
||||
|
||||
|
|
|
@ -9,6 +9,8 @@ $$ LANGUAGE SQL IMMUTABLE
|
|||
PARALLEL SAFE;
|
||||
|
||||
|
||||
|
||||
|
||||
CREATE OR REPLACE FUNCTION waterway_brunnel(is_bridge bool, is_tunnel bool) RETURNS text AS
|
||||
$$
|
||||
SELECT CASE
|
||||
|
@ -20,388 +22,26 @@ $$ LANGUAGE SQL IMMUTABLE
|
|||
PARALLEL SAFE;
|
||||
|
||||
|
||||
-- Get matching osm id for natural earth id.
|
||||
DROP MATERIALIZED VIEW IF EXISTS match_osm_ne_id CASCADE;
|
||||
CREATE MATERIALIZED VIEW match_osm_ne_id AS
|
||||
(
|
||||
WITH name_match AS
|
||||
(
|
||||
-- Distinct on keeps just the first occurence -> order by 'area_ratio DESC'.
|
||||
SELECT DISTINCT ON (ne.ne_id)
|
||||
ne.ne_id,
|
||||
osm.osm_id,
|
||||
(ST_Area(ST_Intersection(ne.geometry, osm.geometry))/ST_Area(ne.geometry)) AS area_ratio
|
||||
FROM ne_10m_lakes ne, osm_water_polygon_gen_z6 osm
|
||||
WHERE ne.name = osm.name
|
||||
AND ST_Intersects(ne.geometry, osm.geometry)
|
||||
ORDER BY ne_id,
|
||||
area_ratio DESC
|
||||
),
|
||||
-- Add lakes which are not match by name, but intersects.
|
||||
-- Duplicity solves 'DISTICT ON' with 'area_ratio'.
|
||||
geom_match AS
|
||||
(SELECT DISTINCT ON (ne.ne_id)
|
||||
ne.ne_id,
|
||||
osm.osm_id,
|
||||
(ST_Area(ST_Intersection(ne.geometry, osm.geometry))/ST_Area(ne.geometry)) AS area_ratio
|
||||
FROM ne_10m_lakes ne, osm_water_polygon_gen_z6 osm
|
||||
WHERE ST_Intersects(ne.geometry, osm.geometry)
|
||||
AND ne.ne_id NOT IN
|
||||
( SELECT ne_id
|
||||
FROM name_match
|
||||
)
|
||||
ORDER BY ne_id,
|
||||
area_ratio DESC
|
||||
)
|
||||
|
||||
SELECT ne_id,
|
||||
osm_id
|
||||
FROM name_match
|
||||
|
||||
UNION
|
||||
|
||||
SELECT ne_id,
|
||||
osm_id
|
||||
FROM geom_match
|
||||
);
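
-- A quick, illustrative check of the matching above (not part of the layer itself): list the
-- Natural Earth lakes that found no OSM counterpart and will therefore keep ne_id as their id.
-- SELECT ne.ne_id, ne.name
-- FROM ne_10m_lakes ne
-- LEFT JOIN match_osm_ne_id m USING (ne_id)
-- WHERE m.osm_id IS NULL
-- ORDER BY ne.ne_id;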

-- ne_10m_ocean
-- etldoc: ne_10m_ocean -> ne_10m_ocean_gen_z5
DROP MATERIALIZED VIEW IF EXISTS ne_10m_ocean_gen_z5 CASCADE;
CREATE MATERIALIZED VIEW ne_10m_ocean_gen_z5 AS
(
SELECT NULL::integer AS id,
       (ST_Dump(ST_Simplify(geometry, ZRes(7)))).geom AS geometry,
       'ocean'::text AS class,
       NULL::boolean AS is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM ne_10m_ocean
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
CREATE INDEX IF NOT EXISTS ne_10m_ocean_gen_z5_idx ON ne_10m_ocean_gen_z5 USING gist (geometry);

-- ne_10m_lakes
-- etldoc: ne_10m_lakes -> ne_10m_lakes_gen_z5
DROP MATERIALIZED VIEW IF EXISTS ne_10m_lakes_gen_z5 CASCADE;
CREATE MATERIALIZED VIEW ne_10m_lakes_gen_z5 AS
(
SELECT COALESCE(osm.osm_id, ne_id) AS id,
       -- The union fixes duplicates such as Lake Huron and Georgian Bay.
       (ST_Dump(ST_MakeValid(ST_Simplify(ST_Union(geometry), ZRes(7))))).geom AS geometry,
       'lake'::text AS class,
       NULL::boolean AS is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM ne_10m_lakes
LEFT JOIN match_osm_ne_id osm USING (ne_id)
GROUP BY COALESCE(osm.osm_id, ne_id), is_intermittent, is_bridge, is_tunnel
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
CREATE INDEX IF NOT EXISTS ne_10m_lakes_gen_z5_idx ON ne_10m_lakes_gen_z5 USING gist (geometry);
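
-- Illustrative check only (not part of the layer): which matched osm_id values cover more than
-- one Natural Earth lake and therefore get collapsed by the COALESCE/GROUP BY above.
-- SELECT osm_id, count(*) AS ne_parts
-- FROM match_osm_ne_id
-- GROUP BY osm_id
-- HAVING count(*) > 1;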

-- etldoc: ne_10m_lakes_gen_z5 -> ne_10m_lakes_gen_z4
DROP MATERIALIZED VIEW IF EXISTS ne_10m_lakes_gen_z4 CASCADE;
CREATE MATERIALIZED VIEW ne_10m_lakes_gen_z4 AS
(
SELECT id,
       (ST_Dump(ST_MakeValid(ST_Simplify(geometry, ZRes(6))))).geom AS geometry,
       class,
       is_intermittent,
       is_bridge,
       is_tunnel
FROM ne_10m_lakes_gen_z5
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
CREATE INDEX IF NOT EXISTS ne_10m_lakes_gen_z4_idx ON ne_10m_lakes_gen_z4 USING gist (geometry);

-- ne_50m_ocean
-- etldoc: ne_50m_ocean -> ne_50m_ocean_gen_z4
DROP MATERIALIZED VIEW IF EXISTS ne_50m_ocean_gen_z4 CASCADE;
CREATE MATERIALIZED VIEW ne_50m_ocean_gen_z4 AS
(
SELECT NULL::integer AS id,
       (ST_Dump(ST_Simplify(geometry, ZRes(6)))).geom AS geometry,
       'ocean'::text AS class,
       NULL::boolean AS is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM ne_50m_ocean
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
CREATE INDEX IF NOT EXISTS ne_50m_ocean_gen_z4_idx ON ne_50m_ocean_gen_z4 USING gist (geometry);

-- etldoc: ne_50m_ocean_gen_z4 -> ne_50m_ocean_gen_z3
DROP MATERIALIZED VIEW IF EXISTS ne_50m_ocean_gen_z3 CASCADE;
CREATE MATERIALIZED VIEW ne_50m_ocean_gen_z3 AS
(
SELECT id,
       ST_Simplify(geometry, ZRes(5)) AS geometry,
       class,
       is_intermittent,
       is_bridge,
       is_tunnel
FROM ne_50m_ocean_gen_z4
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
CREATE INDEX IF NOT EXISTS ne_50m_ocean_gen_z3_idx ON ne_50m_ocean_gen_z3 USING gist (geometry);

-- etldoc: ne_50m_ocean_gen_z3 -> ne_50m_ocean_gen_z2
DROP MATERIALIZED VIEW IF EXISTS ne_50m_ocean_gen_z2 CASCADE;
CREATE MATERIALIZED VIEW ne_50m_ocean_gen_z2 AS
(
SELECT id,
       ST_Simplify(geometry, ZRes(4)) AS geometry,
       class,
       is_intermittent,
       is_bridge,
       is_tunnel
FROM ne_50m_ocean_gen_z3
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
CREATE INDEX IF NOT EXISTS ne_50m_ocean_gen_z2_idx ON ne_50m_ocean_gen_z2 USING gist (geometry);

-- ne_50m_lakes
-- etldoc: ne_50m_lakes -> ne_50m_lakes_gen_z3
DROP MATERIALIZED VIEW IF EXISTS ne_50m_lakes_gen_z3 CASCADE;
CREATE MATERIALIZED VIEW ne_50m_lakes_gen_z3 AS
(
SELECT COALESCE(osm.osm_id, ne_id) AS id,
       (ST_Dump(ST_MakeValid(ST_Simplify(geometry, ZRes(5))))).geom AS geometry,
       'lake'::text AS class,
       NULL::boolean AS is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM ne_50m_lakes
LEFT JOIN match_osm_ne_id osm USING (ne_id)
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
CREATE INDEX IF NOT EXISTS ne_50m_lakes_gen_z3_idx ON ne_50m_lakes_gen_z3 USING gist (geometry);

-- etldoc: ne_50m_lakes_gen_z3 -> ne_50m_lakes_gen_z2
DROP MATERIALIZED VIEW IF EXISTS ne_50m_lakes_gen_z2 CASCADE;
CREATE MATERIALIZED VIEW ne_50m_lakes_gen_z2 AS
(
SELECT id,
       (ST_Dump(ST_MakeValid(ST_Simplify(geometry, ZRes(4))))).geom AS geometry,
       class,
       is_intermittent,
       is_bridge,
       is_tunnel
FROM ne_50m_lakes_gen_z3
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
CREATE INDEX IF NOT EXISTS ne_50m_lakes_gen_z2_idx ON ne_50m_lakes_gen_z2 USING gist (geometry);

-- ne_110m_ocean
-- etldoc: ne_110m_ocean -> ne_110m_ocean_gen_z1
DROP MATERIALIZED VIEW IF EXISTS ne_110m_ocean_gen_z1 CASCADE;
CREATE MATERIALIZED VIEW ne_110m_ocean_gen_z1 AS
(
SELECT NULL::integer AS id,
       ST_Simplify(geometry, ZRes(3)) AS geometry,
       'ocean'::text AS class,
       NULL::boolean AS is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM ne_110m_ocean
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
CREATE INDEX IF NOT EXISTS ne_110m_ocean_gen_z1_idx ON ne_110m_ocean_gen_z1 USING gist (geometry);

-- etldoc: ne_110m_ocean_gen_z1 -> ne_110m_ocean_gen_z0
DROP MATERIALIZED VIEW IF EXISTS ne_110m_ocean_gen_z0 CASCADE;
CREATE MATERIALIZED VIEW ne_110m_ocean_gen_z0 AS
(
SELECT id,
       ST_Simplify(geometry, ZRes(2)) AS geometry,
       class,
       is_intermittent,
       is_bridge,
       is_tunnel
FROM ne_110m_ocean_gen_z1
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
CREATE INDEX IF NOT EXISTS ne_110m_ocean_gen_z0_idx ON ne_110m_ocean_gen_z0 USING gist (geometry);


-- ne_110m_lakes
-- etldoc: ne_110m_lakes -> ne_110m_lakes_gen_z1
DROP MATERIALIZED VIEW IF EXISTS ne_110m_lakes_gen_z1 CASCADE;
CREATE MATERIALIZED VIEW ne_110m_lakes_gen_z1 AS
(
SELECT COALESCE(osm.osm_id, ne_id) AS id,
       (ST_Dump(ST_Simplify(geometry, ZRes(3)))).geom AS geometry,
       'lake'::text AS class,
       NULL::boolean AS is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM ne_110m_lakes
LEFT JOIN match_osm_ne_id osm USING (ne_id)
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
CREATE INDEX IF NOT EXISTS ne_110m_lakes_gen_z1_idx ON ne_110m_lakes_gen_z1 USING gist (geometry);

-- etldoc: ne_110m_lakes_gen_z1 -> ne_110m_lakes_gen_z0
DROP MATERIALIZED VIEW IF EXISTS ne_110m_lakes_gen_z0 CASCADE;
CREATE MATERIALIZED VIEW ne_110m_lakes_gen_z0 AS
(
SELECT id,
       (ST_Dump(ST_Simplify(geometry, ZRes(2)))).geom AS geometry,
       class,
       is_intermittent,
       is_bridge,
       is_tunnel
FROM ne_110m_lakes_gen_z1
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
CREATE INDEX IF NOT EXISTS ne_110m_lakes_gen_z0_idx ON ne_110m_lakes_gen_z0 USING gist (geometry);

DROP MATERIALIZED VIEW IF EXISTS water_z6;
DROP MATERIALIZED VIEW IF EXISTS water_z7;
DROP MATERIALIZED VIEW IF EXISTS water_z8;
DROP MATERIALIZED VIEW IF EXISTS water_z9;
DROP MATERIALIZED VIEW IF EXISTS water_z10;
DROP MATERIALIZED VIEW IF EXISTS water_z11;
DROP MATERIALIZED VIEW IF EXISTS water_z12;

CREATE OR REPLACE VIEW water_z0 AS
(
-- etldoc: ne_110m_ocean_gen_z0 -> water_z0
SELECT id,
       geometry,
       class,
       is_intermittent,
       is_bridge,
       is_tunnel
FROM ne_110m_ocean_gen_z0
UNION ALL
-- etldoc: ne_110m_lakes_gen_z0 -> water_z0
SELECT id,
       geometry,
       class,
       is_intermittent,
       is_bridge,
       is_tunnel
FROM ne_110m_lakes_gen_z0
);

CREATE OR REPLACE VIEW water_z1 AS
(
-- etldoc: ne_110m_ocean_gen_z1 -> water_z1
SELECT id,
       geometry,
       class,
       is_intermittent,
       is_bridge,
       is_tunnel
FROM ne_110m_ocean_gen_z1
UNION ALL
-- etldoc: ne_110m_lakes_gen_z1 -> water_z1
SELECT id,
       geometry,
       class,
       is_intermittent,
       is_bridge,
       is_tunnel
FROM ne_110m_lakes_gen_z1
);

CREATE OR REPLACE VIEW water_z2 AS
(
-- etldoc: ne_50m_ocean_gen_z2 -> water_z2
SELECT id,
       geometry,
       class,
       is_intermittent,
       is_bridge,
       is_tunnel
FROM ne_50m_ocean_gen_z2
UNION ALL
-- etldoc: ne_50m_lakes_gen_z2 -> water_z2
SELECT id,
       geometry,
       class,
       is_intermittent,
       is_bridge,
       is_tunnel
FROM ne_50m_lakes_gen_z2
);

CREATE OR REPLACE VIEW water_z3 AS
(
-- etldoc: ne_50m_ocean_gen_z3 -> water_z3
SELECT id,
       geometry,
       class,
       is_intermittent,
       is_bridge,
       is_tunnel
FROM ne_50m_ocean_gen_z3
UNION ALL
-- etldoc: ne_50m_lakes_gen_z3 -> water_z3
SELECT id,
       geometry,
       class,
       is_intermittent,
       is_bridge,
       is_tunnel
FROM ne_50m_lakes_gen_z3
);

CREATE OR REPLACE VIEW water_z4 AS
(
-- etldoc: ne_50m_ocean_gen_z4 -> water_z4
SELECT id,
       geometry,
       class,
       is_intermittent,
       is_bridge,
       is_tunnel
FROM ne_50m_ocean_gen_z4
UNION ALL
-- etldoc: ne_10m_lakes_gen_z4 -> water_z4
SELECT id,
       geometry,
       class,
       is_intermittent,
       is_bridge,
       is_tunnel
FROM ne_10m_lakes_gen_z4
);


CREATE OR REPLACE VIEW water_z5 AS
(
-- etldoc: ne_10m_ocean_gen_z5 -> water_z5
SELECT id,
       geometry,
       class,
       is_intermittent,
       is_bridge,
       is_tunnel
FROM ne_10m_ocean_gen_z5
UNION ALL
-- etldoc: ne_10m_lakes_gen_z5 -> water_z5
SELECT id,
       geometry,
       class,
       is_intermittent,
       is_bridge,
       is_tunnel
FROM ne_10m_lakes_gen_z5
);

CREATE MATERIALIZED VIEW water_z6 AS
(
-- etldoc: osm_ocean_polygon_gen_z6 -> water_z6
SELECT NULL::integer AS id,
       (ST_Dump(geometry)).geom AS geometry,
       'ocean'::text AS class,
       NULL::boolean AS is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM osm_ocean_polygon_gen_z6
UNION ALL
-- etldoc: osm_water_polygon_gen_z6 -> water_z6
SELECT osm_id AS id,
       (ST_Dump(geometry)).geom AS geometry,
       water_class(waterway, water, leisure) AS class,
       is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM osm_water_polygon_gen_z6
WHERE "natural" != 'bay'
);
CREATE INDEX ON water_z6 USING gist(geometry);
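
-- Source datasets per zoom in the view cascade above, read off the etldoc comments and FROM
-- clauses (summary only, nothing new is defined here):
--   z0-z1: ne_110m_ocean / ne_110m_lakes
--   z2-z3: ne_50m_ocean  / ne_50m_lakes
--   z4   : ne_50m_ocean  / ne_10m_lakes
--   z5   : ne_10m_ocean  / ne_10m_lakes
--   z6   : osm_ocean_polygon_gen_z6 / osm_water_polygon_gen_z6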

DROP MATERIALIZED VIEW IF EXISTS water_z0 CASCADE;
DROP MATERIALIZED VIEW IF EXISTS water_z1 CASCADE;
DROP MATERIALIZED VIEW IF EXISTS water_z2 CASCADE;
DROP MATERIALIZED VIEW IF EXISTS water_z3 CASCADE;
DROP MATERIALIZED VIEW IF EXISTS water_z4 CASCADE;
DROP MATERIALIZED VIEW IF EXISTS water_z5 CASCADE;
DROP MATERIALIZED VIEW IF EXISTS water_z6 CASCADE;
DROP MATERIALIZED VIEW IF EXISTS water_z7 CASCADE;
DROP MATERIALIZED VIEW IF EXISTS water_z8 CASCADE;
DROP MATERIALIZED VIEW IF EXISTS water_z9 CASCADE;
DROP MATERIALIZED VIEW IF EXISTS water_z10 CASCADE;
DROP MATERIALIZED VIEW IF EXISTS water_z11 CASCADE;
DROP MATERIALIZED VIEW IF EXISTS water_z12 CASCADE;



CREATE MATERIALIZED VIEW water_z7 AS
(

@@ -541,6 +181,280 @@ WHERE "natural" != 'bay'
);
CREATE INDEX ON water_z12 USING gist(geometry);

CREATE MATERIALIZED VIEW water_z6 AS
(
-- etldoc: osm_ocean_polygon_gen_z6 -> water_z6
SELECT NULL::integer AS id,
       ST_MakeValid(
           ST_SnapToGrid(
               ST_SimplifyVW(
                   ST_Buffer(
                       ST_SnapToGrid(
                           ST_Buffer(geometry, -0.004, 1),
                           0.004),
                       0.004,
                       1
                   ),
                   power(zres(6),2)
               ),
               0.004
           )
       ) AS geometry,
       'ocean'::text AS class,
       NULL::boolean AS is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM osm_ocean_polygon_gen_z6
UNION ALL
-- etldoc: osm_water_polygon_gen_z6 -> water_z6
SELECT osm_id AS id,
       (ST_Dump(geometry)).geom AS geometry,
       water_class(waterway, water, leisure) AS class,
       is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM osm_water_polygon_gen_z6
WHERE "natural" != 'bay'
);
CREATE INDEX ON water_z6 USING gist(geometry);
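
-- Reading the nested geometry expression above from the inside out (an informal description of
-- this revision's code, not upstream documentation): ST_Buffer(geometry, -0.004, 1) shrinks the
-- polygons so that slivers narrower than roughly twice the buffer distance disappear;
-- ST_SnapToGrid(..., 0.004) coarsens the vertices; ST_Buffer(..., 0.004, 1) grows the shapes back
-- to approximately their original outline; ST_SimplifyVW(..., power(zres(6),2)) removes detail
-- whose effective area is below roughly one pixel at z6 (in the units zres() uses); the final
-- ST_SnapToGrid(..., 0.004) and ST_MakeValid snap once more and repair any invalid geometry
-- produced along the way. The z5-z0 views below repeat the same pattern with the tolerance
-- doubled at each step (0.008, 0.016, ... 0.256).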


CREATE MATERIALIZED VIEW water_z5 AS
(
-- etldoc: osm_ocean_polygon_gen_z5 -> water_z5
SELECT NULL::integer AS id,
       ST_MakeValid(
           ST_SnapToGrid(
               ST_SimplifyVW(
                   ST_Buffer(
                       ST_SnapToGrid(
                           ST_Buffer(geometry, -0.008, 1),
                           0.008),
                       0.008,
                       1
                   ),
                   power(zres(5),2)
               ),
               0.008
           )
       ) AS geometry,
       'ocean'::text AS class,
       NULL::boolean AS is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM osm_ocean_polygon_gen_z6
WHERE ST_Area(geometry) > power(zres(4),2)
UNION ALL
-- etldoc: osm_water_polygon_gen_z5 -> water_z5
SELECT osm_id AS id,
       (ST_Dump(geometry)).geom AS geometry,
       water_class(waterway, water, leisure) AS class,
       is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM osm_water_polygon_gen_z5
WHERE "natural" != 'bay'
);
CREATE INDEX ON water_z5 USING gist(geometry);



CREATE MATERIALIZED VIEW water_z4 AS
(
-- etldoc: osm_ocean_polygon_gen_z4 -> water_z4
SELECT NULL::integer AS id,
       ST_MakeValid(
           ST_SnapToGrid(
               ST_SimplifyVW(
                   ST_Buffer(
                       ST_SnapToGrid(
                           ST_Buffer(geometry, -0.016, 1),
                           0.016),
                       0.016,
                       1
                   ),
                   power(zres(4),2)
               ),
               0.016
           )
       ) AS geometry,
       'ocean'::text AS class,
       NULL::boolean AS is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM water_z5
WHERE ST_Area(geometry) > power(zres(3),2)
UNION ALL
-- etldoc: osm_water_polygon_gen_z4 -> water_z4
SELECT osm_id AS id,
       (ST_Dump(geometry)).geom AS geometry,
       water_class(waterway, water, leisure) AS class,
       is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM osm_water_polygon_gen_z4
WHERE "natural" != 'bay'
);
CREATE INDEX ON water_z4 USING gist(geometry);
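
-- Note on the cascade above (description of this revision's code): water_z5 simplifies the ocean
-- straight from osm_ocean_polygon_gen_z6, while water_z4 and below each re-simplify the previous
-- water_z* view; the WHERE ST_Area(geometry) > power(zres(n),2) filter drops ocean pieces whose
-- area falls below the squared pixel resolution of the next-coarser zoom, so ever smaller patches
-- of water disappear as the zoom decreases.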


CREATE MATERIALIZED VIEW water_z3 AS
(
-- etldoc: osm_ocean_polygon_gen_z3 -> water_z3
SELECT NULL::integer AS id,
       ST_MakeValid(
           ST_SnapToGrid(
               ST_SimplifyVW(
                   ST_Buffer(
                       ST_SnapToGrid(
                           ST_Buffer(geometry, -0.032, 1),
                           0.032),
                       0.032,
                       1
                   ),
                   power(zres(3),2)
               ),
               0.032
           )
       ) AS geometry,
       'ocean'::text AS class,
       NULL::boolean AS is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM water_z4
WHERE ST_Area(geometry) > power(zres(2),2)
UNION ALL
-- etldoc: osm_water_polygon_gen_z3 -> water_z3
SELECT osm_id AS id,
       (ST_Dump(geometry)).geom AS geometry,
       water_class(waterway, water, leisure) AS class,
       is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM osm_water_polygon_gen_z3
WHERE "natural" != 'bay'
);
CREATE INDEX ON water_z3 USING gist(geometry);


CREATE MATERIALIZED VIEW water_z2 AS
(
-- etldoc: osm_ocean_polygon_gen_z2 -> water_z2
SELECT NULL::integer AS id,
       ST_MakeValid(
           ST_SnapToGrid(
               ST_SimplifyVW(
                   ST_Buffer(
                       ST_SnapToGrid(
                           ST_Buffer(geometry, -0.064, 1),
                           0.064),
                       0.064,
                       1
                   ),
                   power(zres(2),2)
               ),
               0.064
           )
       ) AS geometry,
       'ocean'::text AS class,
       NULL::boolean AS is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM water_z3
WHERE ST_Area(geometry) > power(zres(1),2)
UNION ALL
-- etldoc: osm_water_polygon_gen_z2 -> water_z2
SELECT osm_id AS id,
       (ST_Dump(geometry)).geom AS geometry,
       water_class(waterway, water, leisure) AS class,
       is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM osm_water_polygon_gen_z2
WHERE "natural" != 'bay'
);
CREATE INDEX ON water_z2 USING gist(geometry);


CREATE MATERIALIZED VIEW water_z1 AS
(
-- etldoc: osm_ocean_polygon_gen_z1 -> water_z1
SELECT NULL::integer AS id,
       ST_MakeValid(
           ST_SnapToGrid(
               ST_SimplifyVW(
                   ST_Buffer(
                       ST_SnapToGrid(
                           ST_Buffer(geometry, -0.128, 1),
                           0.128),
                       0.128,
                       1
                   ),
                   power(zres(1),2)
               ),
               0.128
           )
       ) AS geometry,
       'ocean'::text AS class,
       NULL::boolean AS is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM water_z2
WHERE ST_Area(geometry) > power(zres(0),2)
UNION ALL
-- etldoc: osm_water_polygon_gen_z1 -> water_z1
SELECT osm_id AS id,
       (ST_Dump(geometry)).geom AS geometry,
       water_class(waterway, water, leisure) AS class,
       is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM osm_water_polygon_gen_z1
WHERE "natural" != 'bay'
);
CREATE INDEX ON water_z1 USING gist(geometry);



CREATE MATERIALIZED VIEW water_z0 AS
(
-- etldoc: osm_ocean_polygon_gen_z0 -> water_z0
SELECT NULL::integer AS id,
       ST_MakeValid(
           ST_SnapToGrid(
               ST_SimplifyVW(
                   ST_Buffer(
                       ST_SnapToGrid(
                           ST_Buffer(geometry, -0.256, 1),
                           0.256),
                       0.256,
                       1
                   ),
                   power(zres(0),2)
               ),
               0.256
           )
       ) AS geometry,
       'ocean'::text AS class,
       NULL::boolean AS is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM water_z1
UNION ALL
-- etldoc: osm_water_polygon_gen_z1 -> water_z0
SELECT osm_id AS id,
       (ST_Dump(geometry)).geom AS geometry,
       water_class(waterway, water, leisure) AS class,
       is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM osm_water_polygon_gen_z1
WHERE "natural" != 'bay'
);
CREATE INDEX ON water_z0 USING gist(geometry);
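
-- Illustrative query only (not part of the layer): a quick look at what the lowest-zoom view
-- contains once it has been populated.
-- SELECT class, count(*) AS features
-- FROM water_z0
-- GROUP BY class;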


-- etldoc: layer_water [shape=record fillcolor=lightpink, style="rounded,filled",
-- etldoc: label="layer_water |<z0> z0|<z1>z1|<z2>z2|<z3>z3 |<z4> z4|<z5>z5|<z6>z6|<z7>z7| <z8> z8 |<z9> z9 |<z10> z10 |<z11> z11 |<z12> z12+" ] ;


@@ -38,7 +38,7 @@
"tunnel"
]
],
"order": 12
"order": 1
},
{
"id": "waterway_river",
@@ -84,7 +84,7 @@
1
]
],
"order": 13
"order": 2
},
{
"id": "waterway_river_intermittent",
@@ -133,7 +133,7 @@
1
]
],
"order": 14
"order": 3
},
{
"id": "waterway_other",
@@ -179,7 +179,7 @@
1
]
],
"order": 15
"order": 4
},
{
"id": "waterway_other_intermittent",
@@ -229,7 +229,7 @@
1
]
],
"order": 16
"order": 5
},
{
"id": "waterway-bridge-case",
@@ -282,7 +282,7 @@
"bridge"
]
],
"order": 111
"order": 8
},
{
"id": "waterway-bridge",
@@ -322,7 +322,7 @@
"bridge"
]
],
"order": 112
"order": 9
},
{
"id": "water_way_name",
@@ -367,7 +367,7 @@
"LineString"
]
],
"order": 151
"order": 10
}
]
}

@@ -1,28 +1,28 @@
tileset:
  layers:
    - layers/water/water.yaml
    - layers/waterway/waterway.yaml
    - layers/landcover/landcover.yaml
    - layers/landuse/landuse.yaml
    - layers/mountain_peak/mountain_peak.yaml
    - layers/park/park.yaml
    - layers/boundary/boundary.yaml
    - layers/aeroway/aeroway.yaml
    - layers/transportation/transportation.yaml
    - layers/building/building.yaml
    - layers/water_name/water_name.yaml
    - layers/transportation_name/transportation_name.yaml
    - layers/place/place.yaml
    - layers/housenumber/housenumber.yaml
    - layers/poi/poi.yaml
    - layers/aerodrome_label/aerodrome_label.yaml
    #- layers/waterway/waterway.yaml
    #- layers/landcover/landcover.yaml
    #- layers/landuse/landuse.yaml
    #- layers/mountain_peak/mountain_peak.yaml
    #- layers/park/park.yaml
    #- layers/boundary/boundary.yaml
    #- layers/aeroway/aeroway.yaml
    #- layers/transportation/transportation.yaml
    #- layers/building/building.yaml
    #- layers/water_name/water_name.yaml
    #- layers/transportation_name/transportation_name.yaml
    #- layers/place/place.yaml
    #- layers/housenumber/housenumber.yaml
    #- layers/poi/poi.yaml
    #- layers/aerodrome_label/aerodrome_label.yaml
  name: OpenMapTiles
  version: 3.15.0
  id: openmaptiles
  description: "A tileset showcasing all layers in OpenMapTiles. https://openmaptiles.org"
  attribution: '<a href="https://www.openmaptiles.org/" target="_blank">© OpenMapTiles</a> <a href="https://www.openstreetmap.org/copyright" target="_blank">© OpenStreetMap contributors</a>'
  center: [0, 0, 1]
  bounds: [-180.0,-85.0511,180.0,85.0511]
  bounds: [-180.0, -85.0511, 180.0, 85.0511]
  maxzoom: 14
  minzoom: 0
  pixel_scale: 256


@@ -115,7 +115,7 @@ make refresh-docker-images


##### backup log from here ...
exec &> >(tee -a "$log_file")
#exec &> >(tee -a "$log_file")

echo " "
echo "====================================================================================="