working water test

pull/1705/head
Bjørnar Dervo 2025-02-05 12:31:37 +01:00
parent 1e2c0ea976
commit 0817560b56
14 changed files with 1591 additions and 449 deletions

BIN .DS_Store vendored 100644

Binary file not shown.

.env (12 changed lines)

View file

@@ -20,7 +20,7 @@ BBOX=-180.0,-85.0511,180.0,85.0511
 # Which zooms to generate with make generate-tiles-pg
 MIN_ZOOM=0
-MAX_ZOOM=7
+MAX_ZOOM=8
 # `MID_ZOOM` setting only works with `make generate-tiles-pg` command. Make sure MID_ZOOM < MAX_ZOOM.
 # See https://github.com/openmaptiles/openmaptiles-tools/pull/383
@@ -30,17 +30,21 @@ MAX_ZOOM=7
 DIFF_MODE=false
 # The current setup assumes this file is placed inside the data/ dir
-MBTILES_FILE=tiles.mbtiles
+MBTILES_FILE=waterway_13.mbtiles
 # This is the current repl_config.json location, pre-configured in the tools Dockerfile
 # Makefile and quickstart replace it with the dynamically generated one, but we keep it here in case some other method is used to run.
 IMPOSM_CONFIG_FILE=/usr/src/app/config/repl_config.json
 # Number of parallel processes to use when importing sql files
-MAX_PARALLEL_PSQL=5
+MAX_PARALLEL_PSQL=10
 # Number of parallel threads to use when generating vector map tiles
-COPY_CONCURRENCY=10
+COPY_CONCURRENCY=60
 # Variables for generate tiles using tilelive-pgquery
+UV_THREADPOOL_SIZE=1024 # acts as limit rather than definite amount, is therefore set to max without any significant negative impact.
 PGHOSTS_LIST=
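These values can also be overridden per run instead of editing `.env`; a minimal sketch, assuming the Makefile forwards shell variables the same way docker-compose.yml below explicitly does for MIN_ZOOM/MAX_ZOOM:

```bash
# One-off override of the .env zoom defaults for a single generation run.
MIN_ZOOM=0 MAX_ZOOM=8 make generate-tiles-pg
```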

View file

@@ -1,38 +1,39 @@
 ## Quickstart - for small extracts

 ### Req:
-* CPU: AMD64 ( = Intel 64 bit)
-  * The base docker debian images are x86_64 based, so the ARM, MIPS currently not supported!
-* Operating system
-  * Linux is suggested
-    * The development and the testing platform is Linux.
-  * If you are using FreeBSD, Solaris, Windows, ...
-    * Please give a feedback, share your experience, write a tutorial
-* bash
-* git
-* make
-* bc
-* md5sum
-* docker >=1.12.3
-  * https://www.docker.com/products/overview
-* docker-compose >=1.7.1
-  * https://docs.docker.com/compose/install/
-* disk space ( >= ~15Gb )
-  * for small extracts >= ~15Gb
-  * for big extracts ( continents, planet) 250 Gb
-  * And depends on
-    * OpenStreetMap data size
-    * Zoom level
-  * Best on SSD for postserve but completely usable on HDD
-    * Takes 24hrs to import on a reasonable machine, and is immediately available with postserve
-* memory ( >= 3Gb )
-  * for small extracts 3Gb-8Gb RAM
-  * for big extracts ( Europe, Planet) > 8-32 Gb
-* internet connections
-  * for downloading docker images
-  * for downloading OpenStreetMap data from Geofabrik
+- CPU: AMD64 ( = Intel 64 bit)
+  - The base docker debian images are x86_64 based, so the ARM, MIPS currently not supported!
+- Operating system
+  - Linux is suggested
+    - The development and the testing platform is Linux.
+  - If you are using FreeBSD, Solaris, Windows, ...
+    - Please give a feedback, share your experience, write a tutorial
+- bash
+- git
+- make
+- bc
+- md5sum
+- docker >=1.12.3
+  - https://www.docker.com/products/overview
+- docker-compose >=1.7.1
+  - https://docs.docker.com/compose/install/
+- disk space ( >= ~15Gb )
+  - for small extracts >= ~15Gb
+  - for big extracts ( continents, planet) 250 Gb
+  - And depends on
+    - OpenStreetMap data size
+    - Zoom level
+  - Best on SSD for postserve but completely usable on HDD
+    - Takes 24hrs to import on a reasonable machine, and is immediately available with postserve
+- memory ( >= 3Gb )
+  - for small extracts 3Gb-8Gb RAM
+  - for big extracts ( Europe, Planet) > 8-32 Gb
+- internet connections
+  - for downloading docker images
+  - for downloading OpenStreetMap data from Geofabrik

 Important: The ./quickstart.sh is for small extracts - not optimal for a Planet rendering !!

 ### First experiment - with `albania` ( small extracts! )

@@ -43,25 +44,27 @@ cd openmaptiles
 ```
 If you have problems with the quickstart
-* check the ./quickstart.log!
-* doublecheck the system requirements!
-* check the current issues: https://github.com/openmaptiles/openmaptiles/issues
-* create new issues:
-  * create a new gist: https://gist.github.com/ from your ./quickstart.log
-    * doublecheck: don't reveal any sensitive information about your system
-  * create a new issue: https://github.com/openmaptiles/openmaptiles/issues
-    * describe the problems
-    * add any pertinent information about your environment
-    * link your (quickstart.log) gist!
+- check the ./quickstart.log!
+- doublecheck the system requirements!
+- check the current issues: https://github.com/openmaptiles/openmaptiles/issues
+- create new issues:
+  - create a new gist: https://gist.github.com/ from your ./quickstart.log
+    - doublecheck: don't reveal any sensitive information about your system
+  - create a new issue: https://github.com/openmaptiles/openmaptiles/issues
+    - describe the problems
+    - add any pertinent information about your environment
+    - link your (quickstart.log) gist!

 ### Check other extracts
 IF the previous step is working,
 THEN you can test other available quickstart extracts ( based on [Geofabrik extracts](http://download.geofabrik.de/index.html) ) !
-* We are using https://github.com/julien-noblet/download-geofabrik tool
-* The current extract list, and more information -> `make list-geofabrik` or `make list-bbbike`
+- We are using https://github.com/julien-noblet/download-geofabrik tool
+- The current extract list, and more information -> `make list-geofabrik` or `make list-bbbike`

 This is generating `.mbtiles` for your area : [ MIN_ZOOM: "0" - MAX_ZOOM: "7" ]

 ```bash
 ./quickstart.sh africa      # Africa
@@ -362,8 +365,10 @@ This is generating `.mbtiles` for your area : [ MIN_ZOOM: "0" - MAX_ZOOM: "7"
 ./quickstart.sh wyoming     # Wyoming, US
 ./quickstart.sh yukon       # Yukon, Canada
 ```

 ### Using your own OSM data
-Mbtiles can be generated from an arbitrary osm.pbf (e.g. for a region that is not covered by an existing extract) by making the `data/` directory and placing an *.osm.pbf (e.g. `mydata.osm.pbf`) inside.
+Mbtiles can be generated from an arbitrary osm.pbf (e.g. for a region that is not covered by an existing extract) by making the `data/` directory and placing an \*.osm.pbf (e.g. `mydata.osm.pbf`) inside.

 ```
 mkdir -p data
@@ -373,44 +378,48 @@ make generate-bbox-file area=mydata
 ```

 ### Check postserve
-* ` docker-compose up -d postserve`
+- ` docker-compose up -d postserve`

 and the generated maps are going to be available in browser on [localhost:8090/tiles/0/0/0.pbf](http://localhost:8090/tiles/0/0/0.pbf).

 ### Check tileserver
 start:
-* ` make start-tileserver`
+- ` make start-tileserver`

 and the generated maps are going to be available in webbrowser on [localhost:8080](http://localhost:8080/).

 This is only a quick preview, because your mbtiles only generated to zoom level 7 !

 ### Set which zooms to generate
 modify the settings in the `.env` file, the defaults:
-* `MIN_ZOOM=0`
-* `MAX_ZOOM=7`
+- `MIN_ZOOM=0`
+- `MAX_ZOOM=7`

 Hints:
-* Small increments! Never starts with the `MAX_ZOOM = 14`
-* The suggested `MAX_ZOOM = 14` - use only with small extracts
+- Small increments! Never starts with the `MAX_ZOOM = 14`
+- The suggested `MAX_ZOOM = 14` - use only with small extracts

 ### Set the bounding box to generate
 By default, tile generation is done for the full extent of the area.
 If you want to generate a tiles for a smaller extent, modify the settings in the `.env` file, the default:
-* `BBOX=-180.0,-85.0511,180.0,85.0511`
+- `BBOX=-180.0,-85.0511,180.0,85.0511`

 Delete the `./data/<area>.bbox` file, and re-start `./quickstart.sh <area>`

 Hint:
-* The [boundingbox.klokantech.com](https://boundingbox.klokantech.com/) site can be used to find a bounding box (CSV format) using a map.
+- The [boundingbox.klokantech.com](https://boundingbox.klokantech.com/) site can be used to find a bounding box (CSV format) using a map.
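Taken together, the steps this section describes amount to the following flow. A sketch only: the clone URL is assumed from the project name; every other command is quoted from the section above.

```bash
# End-to-end small-extract flow as documented in this quickstart.
git clone https://github.com/openmaptiles/openmaptiles.git
cd openmaptiles
./quickstart.sh albania            # import a small extract and generate tiles
docker-compose up -d postserve     # serve tiles straight from Postgres on :8090
make start-tileserver              # or preview the generated mbtiles on :8080
```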
 ### Check other commands
 `make help`

 the current output:
 ```

View file

@@ -5,7 +5,6 @@ volumes:
   cache:
 services:
   openmaptiles-tools:
     volumes:
     - cache:/cache

View file

@@ -1,5 +1,5 @@
 # This version must match the MAKE_DC_VERSION value below
-version: "3"
+#version: "3.7"
 volumes:
   pgdata:
@@ -9,13 +9,16 @@ networks:
     driver: bridge
 services:
   postgres:
     image: "${POSTGIS_IMAGE:-openmaptiles/postgis}:${TOOLS_VERSION}"
+    #command: postgres -c "config_file=./etc/postgresql/postgresql.conf"
+    command: postgres -c "shared_buffers=16GB" -c "autovacuum=on" -c "work_mem=1500MB" -c "maintenance_work_mem=4096kB" -c "effective_cache_size=20GB" -c "random_page_cost=1.1" -c "effective_cache_size=40GB" -c "dynamic_shared_memory_type=posix" -c "max_wal_size=1GB" -c "min_wal_size=80MB" -c "wal_buffers=16MB" -c "effective_io_concurrency=30" -c "max_connections=300"
     # Use "command: postgres -c jit=off" for PostgreSQL 11+ because of slow large MVT query processing
     # Use "shm_size: 512m" if you want to prevent a possible 'No space left on device' during 'make generate-tiles-pg'
+    shm_size: 3g
     volumes:
       - pgdata:/var/lib/postgresql/data
+      #- ./etc/postgresql/postgresql.conf:/var/lib/postgresql/data/postgresql.conf
     networks:
       - postgres
     ports:
@@ -27,6 +30,7 @@ services:
       POSTGRES_USER: ${PGUSER:-openmaptiles}
       POSTGRES_PASSWORD: ${PGPASSWORD:-openmaptiles}
       PGPORT: ${PGPORT:-5432}
+      #TILE_TIMEOUT: 3600000
   import-data:
     image: "openmaptiles/import-data:${TOOLS_VERSION}"
@@ -40,7 +44,7 @@ services:
     environment:
       # Must match the version of this file (first line)
       # download-osm will use it when generating a composer file
-      MAKE_DC_VERSION: "3"
+      MAKE_DC_VERSION: "3.7"
       # Allow DIFF_MODE, MIN_ZOOM, and MAX_ZOOM to be overwritten from shell
       DIFF_MODE: ${DIFF_MODE}
       MIN_ZOOM: ${MIN_ZOOM}
@@ -56,6 +60,7 @@ services:
       PGPASSWORD: ${PGPASSWORD:-openmaptiles}
       PGPORT: ${PGPORT:-5432}
       MBTILES_FILE: ${MBTILES_FILE}
+      #UV_THREADPOOL_SIZE:
     networks:
       - postgres
     volumes:
@@ -94,7 +99,10 @@ services:
     volumes:
       - ./data:/export
       - ./build/openmaptiles.tm2source:/tm2source
+      - type: tmpfs
+        target: /dev/shm
+        tmpfs:
+          size: 16000000000 # ~16gb
     networks:
       - postgres
     env_file: .env
     environment:
@@ -109,6 +117,7 @@ services:
       PGUSER: ${PGUSER:-openmaptiles}
       PGPASSWORD: ${PGPASSWORD:-openmaptiles}
       PGPORT: ${PGPORT:-5432}
+      #UV_THREADPOOL_SIZE: ${UV_THREADPOOL_SIZE}
   postserve:
     image: "openmaptiles/openmaptiles-tools:${TOOLS_VERSION}"
@@ -122,6 +131,7 @@ services:
       - "${PPORT:-8090}:${PPORT:-8090}"
     volumes:
       - .:/tileset
+      #- ./etc/postgresql/postgresql.conf:/var/lib/postgresql/data/postgresql.conf
   maputnik_editor:
     image: "maputnik/editor"

BIN etc/.DS_Store vendored 100644

Binary file not shown.

View file

@@ -0,0 +1,814 @@
# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
# name = value
#
# (The "=" is optional.) Whitespace may be used. Comments are introduced with
# "#" anywhere on a line. The complete list of parameter names and allowed
# values can be found in the PostgreSQL documentation.
#
# The commented-out settings shown in this file represent the default values.
# Re-commenting a setting is NOT sufficient to revert it to the default value;
# you need to reload the server.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal. If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, run "pg_ctl reload", or execute
# "SELECT pg_reload_conf()". Some parameters, which are marked below,
# require a server shutdown and restart to take effect.
#
# Any parameter can also be given as a command-line option to the server, e.g.,
# "postgres -c log_connections=on". Some parameters can be changed at run time
# with the "SET" SQL command.
#
# Memory units:  B  = bytes            Time units:  us  = microseconds
#                kB = kilobytes                     ms  = milliseconds
#                MB = megabytes                     s   = seconds
#                GB = gigabytes                     min = minutes
#                TB = terabytes                     h   = hours
#                                                   d   = days
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.
#data_directory = 'ConfigDir' # use data in another directory
# (change requires restart)
#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file
# (change requires restart)
#ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file
# (change requires restart)
# If external_pid_file is not explicitly set, no extra PID file is written.
#external_pid_file = '' # write an extra PID file
# (change requires restart)
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
#listen_addresses = 'localhost' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
#port = 5432 # (change requires restart)
max_connections = 300 #100 # (change requires restart)
#superuser_reserved_connections = 3 # (change requires restart)
#unix_socket_directories = '/tmp' # comma-separated list of directories
# (change requires restart)
#unix_socket_group = '' # (change requires restart)
#unix_socket_permissions = 0777 # begin with 0 to use octal notation
# (change requires restart)
#bonjour = off # advertise server via Bonjour
# (change requires restart)
#bonjour_name = '' # defaults to the computer name
# (change requires restart)
# - TCP settings -
# see "man tcp" for details
#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
#tcp_keepalives_count = 0 # TCP_KEEPCNT;
# 0 selects the system default
#tcp_user_timeout = 0 # TCP_USER_TIMEOUT, in milliseconds;
# 0 selects the system default
#client_connection_check_interval = 0 # time between checks for client
# disconnection while running queries;
# 0 for never
# - Authentication -
#authentication_timeout = 1min # 1s-600s
#password_encryption = scram-sha-256 # scram-sha-256 or md5
#db_user_namespace = off
# GSSAPI using Kerberos
#krb_server_keyfile = 'FILE:${sysconfdir}/krb5.keytab'
#krb_caseins_users = off
# - SSL -
#ssl = off
#ssl_ca_file = ''
#ssl_cert_file = 'server.crt'
#ssl_crl_file = ''
#ssl_crl_dir = ''
#ssl_key_file = 'server.key'
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
#ssl_prefer_server_ciphers = on
#ssl_ecdh_curve = 'prime256v1'
#ssl_min_protocol_version = 'TLSv1.2'
#ssl_max_protocol_version = ''
#ssl_dh_params_file = ''
#ssl_passphrase_command = ''
#ssl_passphrase_command_supports_reload = off
#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------
# - Memory -
shared_buffers = 16GB #128MB # min 128kB
# (change requires restart)
#huge_pages = try # on, off, or try
# (change requires restart)
#huge_page_size = 0 # zero for system default
# (change requires restart)
#temp_buffers = 8MB # min 800kB
#max_prepared_transactions = 0 # zero disables the feature
# (change requires restart)
# Caution: it is not advisable to set max_prepared_transactions nonzero unless
# you actively intend to use prepared transactions.
work_mem = 1GB # min 64kB
#hash_mem_multiplier = 2.0 # 1-1000.0 multiplier on hash table work_mem
maintenance_work_mem = 8GB # min 1MB
#autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem
#logical_decoding_work_mem = 64MB # min 64kB
#max_stack_depth = 2MB # min 100kB
#shared_memory_type = mmap # the default is the first option
# supported by the operating system:
# mmap
# sysv
# windows
# (change requires restart)
dynamic_shared_memory_type = posix # the default is usually the first option
# supported by the operating system:
# posix
# sysv
# windows
# mmap
# (change requires restart)
#min_dynamic_shared_memory = 0MB # (change requires restart)
# - Disk -
#temp_file_limit = -1 # limits per-process temp file space
# in kilobytes, or -1 for no limit
# - Kernel Resources -
#max_files_per_process = 1000 # min 64
# (change requires restart)
# - Cost-Based Vacuum Delay -
#vacuum_cost_delay = 0 # 0-100 milliseconds (0 disables)
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 2 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
#vacuum_cost_limit = 200 # 1-10000 credits
# - Background Writer -
#bgwriter_delay = 200ms # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100 # max buffers written/round, 0 disables
#bgwriter_lru_multiplier = 2.0 # 0-10.0 multiplier on buffers scanned/round
#bgwriter_flush_after = 0 # measured in pages, 0 disables
# - Asynchronous Behavior -
#backend_flush_after = 0 # measured in pages, 0 disables
effective_io_concurrency = 30 #0 # 1-1000; 0 disables prefetching
#maintenance_io_concurrency = 10 # 1-1000; 0 disables prefetching
#max_worker_processes = 8 # (change requires restart)
#max_parallel_workers_per_gather = 2 # limited by max_parallel_workers
#max_parallel_maintenance_workers = 2 # limited by max_parallel_workers
#max_parallel_workers = 8 # number of max_worker_processes that
# can be used in parallel operations
#parallel_leader_participation = on
#old_snapshot_threshold = -1 # 1min-60d; -1 disables; 0 is immediate
# (change requires restart)
#------------------------------------------------------------------------------
# WRITE-AHEAD LOG
#------------------------------------------------------------------------------
# - Settings -
#wal_level = replica # minimal, replica, or logical
# (change requires restart)
fsync = off # flush data to disk for crash safety
# (turning this off can cause
# unrecoverable data corruption)
synchronous_commit = off # synchronization level;
# off, local, remote_write, remote_apply, or on
#wal_sync_method = fsync # the default is the first option
# supported by the operating system:
# open_datasync
# fdatasync (default on Linux and FreeBSD)
# fsync
# fsync_writethrough
# open_sync
#full_page_writes = on # recover from partial page writes
#wal_log_hints = off # also do full page writes of non-critical updates
# (change requires restart)
#wal_compression = off # enables compression of full-page writes;
# off, pglz, lz4, zstd, or on
#wal_init_zero = on # zero-fill new WAL files
#wal_recycle = on # recycle WAL files
wal_buffers = 16MB #-1 # min 32kB, -1 sets based on shared_buffers
# (change requires restart)
#wal_writer_delay = 200ms # 1-10000 milliseconds
#wal_writer_flush_after = 1MB # measured in pages, 0 disables
#wal_skip_threshold = 2MB
#commit_delay = 0 # range 0-100000, in microseconds
#commit_siblings = 5 # range 1-1000
# - Checkpoints -
#checkpoint_timeout = 5min # range 30s-1d
#checkpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0
#checkpoint_flush_after = 0 # measured in pages, 0 disables
#checkpoint_warning = 30s # 0 disables
max_wal_size = 1GB
min_wal_size = 80MB
# - Prefetching during recovery -
#recovery_prefetch = try # prefetch pages referenced in the WAL?
#wal_decode_buffer_size = 512kB # lookahead window used for prefetching
# (change requires restart)
# - Archiving -
#archive_mode = off # enables archiving; off, on, or always
# (change requires restart)
#archive_library = '' # library to use to archive a logfile segment
# (empty string indicates archive_command should
# be used)
#archive_command = '' # command to use to archive a logfile segment
# placeholders: %p = path of file to archive
# %f = file name only
# e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
#archive_timeout = 0 # force a logfile segment switch after this
# number of seconds; 0 disables
# - Archive Recovery -
# These are only used in recovery mode.
#restore_command = '' # command to use to restore an archived logfile segment
# placeholders: %p = path of file to restore
# %f = file name only
# e.g. 'cp /mnt/server/archivedir/%f %p'
#archive_cleanup_command = '' # command to execute at every restartpoint
#recovery_end_command = '' # command to execute at completion of recovery
# - Recovery Target -
# Set these only when performing a targeted recovery.
#recovery_target = '' # 'immediate' to end recovery as soon as a
# consistent state is reached
# (change requires restart)
#recovery_target_name = '' # the named restore point to which recovery will proceed
# (change requires restart)
#recovery_target_time = '' # the time stamp up to which recovery will proceed
# (change requires restart)
#recovery_target_xid = '' # the transaction ID up to which recovery will proceed
# (change requires restart)
#recovery_target_lsn = '' # the WAL LSN up to which recovery will proceed
# (change requires restart)
#recovery_target_inclusive = on # Specifies whether to stop:
# just after the specified recovery target (on)
# just before the recovery target (off)
# (change requires restart)
#recovery_target_timeline = 'latest' # 'current', 'latest', or timeline ID
# (change requires restart)
#recovery_target_action = 'pause' # 'pause', 'promote', 'shutdown'
# (change requires restart)
#------------------------------------------------------------------------------
# REPLICATION
#------------------------------------------------------------------------------
# - Sending Servers -
# Set these on the primary and on any standby that will send replication data.
#max_wal_senders = 10 # max number of walsender processes
# (change requires restart)
#max_replication_slots = 10 # max number of replication slots
# (change requires restart)
#wal_keep_size = 0 # in megabytes; 0 disables
#max_slot_wal_keep_size = -1 # in megabytes; -1 disables
#wal_sender_timeout = 60s # in milliseconds; 0 disables
#track_commit_timestamp = off # collect timestamp of transaction commit
# (change requires restart)
# - Primary Server -
# These settings are ignored on a standby server.
#synchronous_standby_names = '' # standby servers that provide sync rep
# method to choose sync standbys, number of sync standbys,
# and comma-separated list of application_name
# from standby(s); '*' = all
#vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is delayed
# - Standby Servers -
# These settings are ignored on a primary server.
#primary_conninfo = '' # connection string to sending server
#primary_slot_name = '' # replication slot on sending server
#promote_trigger_file = '' # file name whose presence ends recovery
#hot_standby = on # "off" disallows queries during recovery
# (change requires restart)
#max_standby_archive_delay = 30s # max delay before canceling queries
# when reading WAL from archive;
# -1 allows indefinite delay
#max_standby_streaming_delay = 30s # max delay before canceling queries
# when reading streaming WAL;
# -1 allows indefinite delay
#wal_receiver_create_temp_slot = off # create temp slot if primary_slot_name
# is not set
#wal_receiver_status_interval = 10s # send replies at least this often
# 0 disables
#hot_standby_feedback = off # send info from standby to prevent
# query conflicts
#wal_receiver_timeout = 60s # time that receiver waits for
# communication from primary
# in milliseconds; 0 disables
#wal_retrieve_retry_interval = 5s # time to wait before retrying to
# retrieve WAL after a failed attempt
#recovery_min_apply_delay = 0 # minimum delay for applying changes during recovery
# - Subscribers -
# These settings are ignored on a publisher.
#max_logical_replication_workers = 4 # taken from max_worker_processes
# (change requires restart)
#max_sync_workers_per_subscription = 2 # taken from max_logical_replication_workers
#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------
# - Planner Method Configuration -
#enable_async_append = on
#enable_bitmapscan = on
#enable_gathermerge = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_incremental_sort = on
#enable_indexscan = on
#enable_indexonlyscan = on
#enable_material = on
#enable_memoize = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_parallel_append = on
#enable_parallel_hash = on
#enable_partition_pruning = on
#enable_partitionwise_join = off
#enable_partitionwise_aggregate = off
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
# - Planner Cost Constants -
#seq_page_cost = 1.0 # measured on an arbitrary scale
random_page_cost = 1.1 #4.0 # same scale as above
#cpu_tuple_cost = 0.01 # same scale as above
#cpu_index_tuple_cost = 0.005 # same scale as above
#cpu_operator_cost = 0.0025 # same scale as above
#parallel_setup_cost = 1000.0 # same scale as above
#parallel_tuple_cost = 0.1 # same scale as above
#min_parallel_table_scan_size = 8MB
#min_parallel_index_scan_size = 512kB
effective_cache_size = 20GB
#jit_above_cost = 100000 # perform JIT compilation if available
# and query more expensive than this;
# -1 disables
#jit_inline_above_cost = 500000 # inline small functions if query is
# more expensive than this; -1 disables
#jit_optimize_above_cost = 500000 # use expensive JIT optimizations if
# query is more expensive than this;
# -1 disables
# - Genetic Query Optimizer -
#geqo = on
#geqo_threshold = 12
#geqo_effort = 5 # range 1-10
#geqo_pool_size = 0 # selects default based on effort
#geqo_generations = 0 # selects default based on effort
#geqo_selection_bias = 2.0 # range 1.5-2.0
#geqo_seed = 0.0 # range 0.0-1.0
# - Other Planner Options -
#default_statistics_target = 100 # range 1-10000
#constraint_exclusion = partition # on, off, or partition
#cursor_tuple_fraction = 0.1 # range 0.0-1.0
#from_collapse_limit = 8
#jit = on # allow JIT compilation
#join_collapse_limit = 8 # 1 disables collapsing of explicit
# JOIN clauses
#plan_cache_mode = auto # auto, force_generic_plan or
# force_custom_plan
#recursive_worktable_factor = 10.0 # range 0.001-1000000
#------------------------------------------------------------------------------
# REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
#log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, jsonlog, syslog, and
# eventlog, depending on platform.
# csvlog and jsonlog require
# logging_collector to be on.
# This is used when logging to stderr:
#logging_collector = off # Enable capturing of stderr, jsonlog,
# and csvlog into log files. Required
# to be on for csvlogs and jsonlogs.
# (change requires restart)
# These are only used if logging_collector is on:
#log_directory = 'log' # directory where log files are written,
# can be absolute or relative to PGDATA
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600 # creation mode for log files,
# begin with 0 to use octal notation
#log_rotation_age = 1d # Automatic rotation of logfiles will
# happen after that time. 0 disables.
#log_rotation_size = 10MB # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
#log_truncate_on_rotation = off # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#syslog_sequence_numbers = on
#syslog_split_messages = on
# This is only relevant when logging to eventlog (Windows):
# (change requires restart)
#event_source = 'PostgreSQL'
# - When to Log -
#log_min_messages = warning # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
#log_min_error_statement = error # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic (effectively off)
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
#log_min_duration_sample = -1 # -1 is disabled, 0 logs a sample of statements
# and their durations, > 0 logs only a sample of
# statements running at least this number
# of milliseconds;
# sample fraction is determined by log_statement_sample_rate
#log_statement_sample_rate = 1.0 # fraction of logged statements exceeding
# log_min_duration_sample to be logged;
# 1.0 logs all such statements, 0.0 never logs
#log_transaction_sample_rate = 0.0 # fraction of transactions whose statements
# are logged regardless of their duration; 1.0 logs all
# statements from all transactions, 0.0 never logs
#log_startup_progress_interval = 10s # Time between progress updates for
# long-running startup operations.
# 0 disables the feature, > 0 indicates
# the interval in milliseconds.
# - What to Log -
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_autovacuum_min_duration = 10min # log autovacuum activity;
# -1 disables, 0 logs all actions and
# their durations, > 0 logs only
# actions running at least this number
# of milliseconds.
#log_checkpoints = on
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default # terse, default, or verbose messages
#log_hostname = off
#log_line_prefix = '%m [%p] ' # special values:
# %a = application name
# %u = user name
# %d = database name
# %r = remote host and port
# %h = remote host
# %b = backend type
# %p = process ID
# %P = process ID of parallel group leader
# %t = timestamp without milliseconds
# %m = timestamp with milliseconds
# %n = timestamp with milliseconds (as a Unix epoch)
# %Q = query ID (0 if none or not computed)
# %i = command tag
# %e = SQL state
# %c = session ID
# %l = session line number
# %s = session start timestamp
# %v = virtual transaction ID
# %x = transaction ID (0 if none)
# %q = stop here in non-session
# processes
# %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off # log lock waits >= deadlock_timeout
#log_recovery_conflict_waits = off # log standby recovery conflict waits
# >= deadlock_timeout
#log_parameter_max_length = -1 # when logging statements, limit logged
# bind-parameter values to N bytes;
# -1 means print in full, 0 disables
#log_parameter_max_length_on_error = 0 # when logging an error, limit logged
# bind-parameter values to N bytes;
# -1 means print in full, 0 disables
#log_statement = 'none' # none, ddl, mod, all
#log_replication_commands = off
#log_temp_files = -1 # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
log_timezone = 'Europe/Oslo'
#------------------------------------------------------------------------------
# PROCESS TITLE
#------------------------------------------------------------------------------
#cluster_name = '' # added to process titles if nonempty
# (change requires restart)
#update_process_title = on
#------------------------------------------------------------------------------
# STATISTICS
#------------------------------------------------------------------------------
# - Cumulative Query and Index Statistics -
#track_activities = on
#track_activity_query_size = 1024 # (change requires restart)
track_counts = on
#track_io_timing = off
#track_wal_io_timing = off
#track_functions = none # none, pl, all
#stats_fetch_consistency = cache
# - Monitoring -
#compute_query_id = auto
#log_statement_stats = off
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#------------------------------------------------------------------------------
# AUTOVACUUM
#------------------------------------------------------------------------------
autovacuum = on # Enable autovacuum subprocess? 'on'
# requires track_counts to also be on.
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
# (change requires restart)
#autovacuum_naptime = 1min # time between autovacuum runs
#autovacuum_vacuum_threshold = 50 # min number of row updates before
# vacuum
#autovacuum_vacuum_insert_threshold = 1000 # min number of row inserts
# before vacuum; -1 disables insert
# vacuums
#autovacuum_analyze_threshold = 50 # min number of row updates before
# analyze
#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum
#autovacuum_vacuum_insert_scale_factor = 0.2 # fraction of inserts over table
# size before insert vacuum
#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
# (change requires restart)
#autovacuum_multixact_freeze_max_age = 400000000 # maximum multixact age
# before forced vacuum
# (change requires restart)
#autovacuum_vacuum_cost_delay = 2ms # default vacuum cost delay for
# autovacuum, in milliseconds;
# -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for
# autovacuum, -1 means use
# vacuum_cost_limit
#------------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#------------------------------------------------------------------------------
# - Statement Behavior -
#client_min_messages = notice # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# log
# notice
# warning
# error
#search_path = '"$user", public' # schema names
#row_security = on
#default_table_access_method = 'heap'
#default_tablespace = '' # a tablespace name, '' uses the default
#default_toast_compression = 'pglz' # 'pglz' or 'lz4'
#temp_tablespaces = '' # a list of tablespace names, '' uses
# only default tablespace
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0 # in milliseconds, 0 is disabled
#lock_timeout = 0 # in milliseconds, 0 is disabled
#idle_in_transaction_session_timeout = 0 # in milliseconds, 0 is disabled
#idle_session_timeout = 0 # in milliseconds, 0 is disabled
#vacuum_freeze_table_age = 150000000
#vacuum_freeze_min_age = 50000000
#vacuum_failsafe_age = 1600000000
#vacuum_multixact_freeze_table_age = 150000000
#vacuum_multixact_freeze_min_age = 5000000
#vacuum_multixact_failsafe_age = 1600000000
#bytea_output = 'hex' # hex, escape
#xmlbinary = 'base64'
#xmloption = 'content'
#gin_pending_list_limit = 4MB
# - Locale and Formatting -
datestyle = 'iso, mdy'
#intervalstyle = 'postgres'
timezone = 'Europe/Oslo'
#timezone_abbreviations = 'Default' # Select the set of available time zone
# abbreviations. Currently, there are
# Default
# Australia (historical usage)
# India
# You can create your own file in
# share/timezonesets/.
#extra_float_digits = 1 # min -15, max 3; any value >0 actually
# selects precise output mode
#client_encoding = sql_ascii # actually, defaults to database
# encoding
# These settings are initialized by initdb, but they can be changed.
lc_messages = 'C' # locale for system error message
# strings
lc_monetary = 'C' # locale for monetary formatting
lc_numeric = 'C' # locale for number formatting
lc_time = 'C' # locale for time formatting
# default configuration for text search
default_text_search_config = 'pg_catalog.english'
# - Shared Library Preloading -
#local_preload_libraries = ''
#session_preload_libraries = ''
#shared_preload_libraries = '' # (change requires restart)
#jit_provider = 'llvmjit' # JIT library to use
# - Other Defaults -
#dynamic_library_path = '$libdir'
#gin_fuzzy_search_limit = 0
#------------------------------------------------------------------------------
# LOCK MANAGEMENT
#------------------------------------------------------------------------------
#deadlock_timeout = 1s
#max_locks_per_transaction = 64 # min 10
# (change requires restart)
#max_pred_locks_per_transaction = 64 # min 10
# (change requires restart)
#max_pred_locks_per_relation = -2 # negative values mean
# (max_pred_locks_per_transaction
# / -max_pred_locks_per_relation) - 1
#max_pred_locks_per_page = 2 # min 0
#------------------------------------------------------------------------------
# VERSION AND PLATFORM COMPATIBILITY
#------------------------------------------------------------------------------
# - Previous PostgreSQL Versions -
#array_nulls = on
#backslash_quote = safe_encoding # on, off, or safe_encoding
#escape_string_warning = on
#lo_compat_privileges = off
#quote_all_identifiers = off
#standard_conforming_strings = on
#synchronize_seqscans = on
# - Other Platforms and Clients -
#transform_null_equals = off
#------------------------------------------------------------------------------
# ERROR HANDLING
#------------------------------------------------------------------------------
#exit_on_error = off # terminate session on any error?
#restart_after_crash = on # reinitialize after backend crash?
#data_sync_retry = off # retry or panic on failure to fsync
# data?
# (change requires restart)
#recovery_init_sync_method = fsync # fsync, syncfs (Linux 5.8+)
#------------------------------------------------------------------------------
# CONFIG FILE INCLUDES
#------------------------------------------------------------------------------
# These options allow settings to be loaded from files other than the
# default postgresql.conf. Note that these are directives, not variable
# assignments, so they can usefully be given more than once.
#include_dir = '...' # include files ending in '.conf' from
# a directory, e.g., 'conf.d'
#include_if_exists = '...' # include file only if it exists
#include = '...' # include file
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------
# Add settings for extensions here
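This file only takes effect through the commented-out `command:` and volume-mount lines in docker-compose.yml above. One way to check which config the server actually loaded, and to apply later edits without a restart, using the reload mechanism described in this file's own header (a sketch; user and service names are the compose defaults):

```bash
# Where is the active config file, and reload it after edits (SIGHUP equivalent).
docker-compose exec postgres psql -U openmaptiles -c 'SHOW config_file;'
docker-compose exec postgres psql -U openmaptiles -c 'SELECT pg_reload_conf();'
```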

View file

@@ -1,4 +1,40 @@
 generalized_tables:
+  # etldoc: osm_water_polygon_gen_z1 -> osm_water_polygon_gen_z0
+  water_polygon_gen_z0:
+    source: water_polygon_gen_z1
+    sql_filter: area>power(ZRES0,2)
+    tolerance: ZRES1
+
+  # etldoc: osm_water_polygon_gen_z2 -> osm_water_polygon_gen_z1
+  water_polygon_gen_z1:
+    source: water_polygon_gen_z2
+    sql_filter: area>power(ZRES0,2)
+    tolerance: ZRES2
+
+  # etldoc: osm_water_polygon_gen_z3 -> osm_water_polygon_gen_z2
+  water_polygon_gen_z2:
+    source: water_polygon_gen_z3
+    sql_filter: area>power(ZRES1,2)
+    tolerance: ZRES3
+
+  # etldoc: osm_water_polygon_gen_z4 -> osm_water_polygon_gen_z3
+  water_polygon_gen_z3:
+    source: water_polygon_gen_z4
+    sql_filter: area>power(ZRES2,2)
+    tolerance: ZRES4
+
+  # etldoc: osm_water_polygon_gen_z5 -> osm_water_polygon_gen_z4
+  water_polygon_gen_z4:
+    source: water_polygon_gen_z5
+    sql_filter: area>power(ZRES3,2)
+    tolerance: ZRES5
+
+  # etldoc: osm_water_polygon_gen_z6 -> osm_water_polygon_gen_z5
+  water_polygon_gen_z5:
+    source: water_polygon_gen_z6
+    sql_filter: area>power(ZRES4,2)
+    tolerance: ZRES6
+
   # etldoc: osm_water_polygon_gen_z7 -> osm_water_polygon_gen_z6
   water_polygon_gen_z6:
     source: water_polygon_gen_z7
@@ -45,73 +81,72 @@ bridge_field: &bridge
   type: bool
 tables:
   # etldoc: imposm3 -> osm_water_polygon
   water_polygon:
     columns:
     - name: osm_id
       type: id
     - name: geometry
      type: validated_geometry
     - name: area
       type: area
     - key: name
       name: name
       type: string
     - name: name_en
       key: name:en
       type: string
     - name: name_de
       key: name:de
       type: string
     - name: tags
       type: hstore_tags
     - name: place
       key: place
       type: string
     - name: natural
       key: natural
       type: string
     - name: landuse
       key: landuse
       type: string
     - name: waterway
       key: waterway
       type: string
     - name: leisure
       key: leisure
       type: string
     - name: water
       key: water
       type: string
     - name: is_intermittent
       key: intermittent
       type: bool
     - *tunnel
     - *bridge
     filters:
       reject:
         covered: ["yes"]
     mapping:
       landuse:
       - reservoir
       - basin
       - salt_pond
       leisure:
       - swimming_pool
       natural:
       - water
       - bay
       - spring
       waterway:
       - dock
       water:
       - river
       - stream
       - canal
       - ditch
       - drain
       - pond
       - basin
       - wastewater
     type: polygon
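Once imposm has imported with this mapping, the new generalized tables can be sanity-checked directly. A sketch: the `osm_water_polygon_gen_z*` names come from the etldoc comments above, and the compose defaults are assumed for the connection.

```bash
# Row counts per generalized zoom table; empty z0-z5 tables would mean
# the newly added generalization steps did not run.
for z in 0 1 2 3 4 5 6; do
  docker-compose exec postgres psql -U openmaptiles \
    -c "SELECT 'z${z}' AS level, count(*) FROM osm_water_polygon_gen_z${z};"
done
```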

View file

@@ -21,7 +21,7 @@
                     1
                 ]
             ],
-            "order": 17
+            "order": 1
         },
         {
             "id": "water",
@@ -48,7 +48,7 @@
                 "tunnel"
             ]
         ],
-        "order": 18
+        "order": 2
         }
     ]
 }

View file

@@ -140,3 +140,124 @@ SELECT ST_Simplify(geometry, ZRes(8)) AS geometry
FROM osm_ocean_polygon_gen_z7
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
CREATE INDEX IF NOT EXISTS osm_ocean_polygon_gen_z6_idx ON osm_ocean_polygon_gen_z6 USING gist (geometry);
-- This statement can be deleted after the water importer image stops creating this object as a table
DO
$$
BEGIN
DROP TABLE IF EXISTS osm_ocean_polygon_gen_z5 CASCADE;
EXCEPTION
WHEN wrong_object_type THEN
END;
$$ LANGUAGE plpgsql;
-- etldoc: osm_ocean_polygon_gen_z6 -> osm_ocean_polygon_gen_z5
DROP MATERIALIZED VIEW IF EXISTS osm_ocean_polygon_gen_z5 CASCADE;
CREATE MATERIALIZED VIEW osm_ocean_polygon_gen_z5 AS
(
SELECT ST_Simplify(geometry, ZRes(7)) AS geometry
FROM osm_ocean_polygon_gen_z6
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
CREATE INDEX IF NOT EXISTS osm_ocean_polygon_gen_z5_idx ON osm_ocean_polygon_gen_z5 USING gist (geometry);
-- This statement can be deleted after the water importer image stops creating this object as a table
DO
$$
BEGIN
DROP TABLE IF EXISTS osm_ocean_polygon_gen_z4 CASCADE;
EXCEPTION
WHEN wrong_object_type THEN
END;
$$ LANGUAGE plpgsql;
-- etldoc: osm_ocean_polygon_gen_z5 -> osm_ocean_polygon_gen_z4
DROP MATERIALIZED VIEW IF EXISTS osm_ocean_polygon_gen_z4 CASCADE;
CREATE MATERIALIZED VIEW osm_ocean_polygon_gen_z4 AS
(
SELECT ST_Simplify(geometry, ZRes(6)) AS geometry
FROM osm_ocean_polygon_gen_z5
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
CREATE INDEX IF NOT EXISTS osm_ocean_polygon_gen_z4_idx ON osm_ocean_polygon_gen_z4 USING gist (geometry);
-- This statement can be deleted after the water importer image stops creating this object as a table
DO
$$
BEGIN
DROP TABLE IF EXISTS osm_ocean_polygon_gen_z3 CASCADE;
EXCEPTION
WHEN wrong_object_type THEN
END;
$$ LANGUAGE plpgsql;
-- etldoc: osm_ocean_polygon_gen_z4 -> osm_ocean_polygon_gen_z3
DROP MATERIALIZED VIEW IF EXISTS osm_ocean_polygon_gen_z3 CASCADE;
CREATE MATERIALIZED VIEW osm_ocean_polygon_gen_z3 AS
(
SELECT ST_Simplify(geometry, ZRes(5)) AS geometry
FROM osm_ocean_polygon_gen_z4
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
CREATE INDEX IF NOT EXISTS osm_ocean_polygon_gen_z3_idx ON osm_ocean_polygon_gen_z3 USING gist (geometry);
-- This statement can be deleted after the water importer image stops creating this object as a table
DO
$$
BEGIN
DROP TABLE IF EXISTS osm_ocean_polygon_gen_z2 CASCADE;
EXCEPTION
WHEN wrong_object_type THEN
END;
$$ LANGUAGE plpgsql;
-- etldoc: osm_ocean_polygon_gen_z3 -> osm_ocean_polygon_gen_z2
DROP MATERIALIZED VIEW IF EXISTS osm_ocean_polygon_gen_z2 CASCADE;
CREATE MATERIALIZED VIEW osm_ocean_polygon_gen_z2 AS
(
SELECT ST_Simplify(geometry, ZRes(4)) AS geometry
FROM osm_ocean_polygon_gen_z3
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
CREATE INDEX IF NOT EXISTS osm_ocean_polygon_gen_z2_idx ON osm_ocean_polygon_gen_z2 USING gist (geometry);
-- This statement can be deleted after the water importer image stops creating this object as a table
DO
$$
BEGIN
DROP TABLE IF EXISTS osm_ocean_polygon_gen_z1 CASCADE;
EXCEPTION
WHEN wrong_object_type THEN
END;
$$ LANGUAGE plpgsql;
-- etldoc: osm_ocean_polygon_gen_z2 -> osm_ocean_polygon_gen_z1
DROP MATERIALIZED VIEW IF EXISTS osm_ocean_polygon_gen_z1 CASCADE;
CREATE MATERIALIZED VIEW osm_ocean_polygon_gen_z1 AS
(
SELECT ST_Simplify(geometry, ZRes(3)) AS geometry
FROM osm_ocean_polygon_gen_z2
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
CREATE INDEX IF NOT EXISTS osm_ocean_polygon_gen_z1_idx ON osm_ocean_polygon_gen_z1 USING gist (geometry);
-- This statement can be deleted after the water importer image stops creating this object as a table
DO
$$
BEGIN
DROP TABLE IF EXISTS osm_ocean_polygon_gen_z0 CASCADE;
EXCEPTION
WHEN wrong_object_type THEN
END;
$$ LANGUAGE plpgsql;
-- etldoc: osm_ocean_polygon_gen_z1 -> osm_ocean_polygon_gen_z0
DROP MATERIALIZED VIEW IF EXISTS osm_ocean_polygon_gen_z0 CASCADE;
CREATE MATERIALIZED VIEW osm_ocean_polygon_gen_z0 AS
(
SELECT ST_Simplify(geometry, ZRes(2)) AS geometry
FROM osm_ocean_polygon_gen_z1
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
CREATE INDEX IF NOT EXISTS osm_ocean_polygon_gen_z0_idx ON osm_ocean_polygon_gen_z0 USING gist (geometry);
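Since these ocean layers are now materialized views, they go stale after a re-import. A hedged sketch of a manual refresh pass (view names from the statements above; each level reads the one above it, so z5 must be refreshed first):

```bash
# Refresh the chain top-down: z5 is built from z6, z4 from z5, and so on.
for z in 5 4 3 2 1 0; do
  docker-compose exec postgres psql -U openmaptiles \
    -c "REFRESH MATERIALIZED VIEW osm_ocean_polygon_gen_z${z};"
done
```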

View file

@@ -65,186 +65,192 @@ SELECT ne_id,
 FROM geom_match
 );
--- ne_10m_ocean
+-- -- ne_10m_ocean
--- etldoc: ne_10m_ocean -> ne_10m_ocean_gen_z5
+-- -- etldoc: ne_10m_ocean -> ne_10m_ocean_gen_z5
-DROP MATERIALIZED VIEW IF EXISTS ne_10m_ocean_gen_z5 CASCADE;
+-- DROP MATERIALIZED VIEW IF EXISTS ne_10m_ocean_gen_z5 CASCADE;
-CREATE MATERIALIZED VIEW ne_10m_ocean_gen_z5 AS
+-- CREATE MATERIALIZED VIEW ne_10m_ocean_gen_z5 AS
-(
+-- (
-SELECT NULL::integer AS id,
+-- SELECT NULL::integer AS id,
-(ST_Dump(ST_Simplify(geometry, ZRes(7)))).geom AS geometry,
+-- (ST_Dump(ST_Simplify(geometry, ZRes(7)))).geom AS geometry,
-'ocean'::text AS class,
+-- 'ocean'::text AS class,
-NULL::boolean AS is_intermittent,
+-- NULL::boolean AS is_intermittent,
-NULL::boolean AS is_bridge,
+-- NULL::boolean AS is_bridge,
-NULL::boolean AS is_tunnel
+-- NULL::boolean AS is_tunnel
-FROM ne_10m_ocean
+-- FROM ne_10m_ocean
-) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
+-- ) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
-CREATE INDEX IF NOT EXISTS ne_10m_ocean_gen_z5_idx ON ne_10m_ocean_gen_z5 USING gist (geometry);
+-- CREATE INDEX IF NOT EXISTS ne_10m_ocean_gen_z5_idx ON ne_10m_ocean_gen_z5 USING gist (geometry);
+--
--- ne_10m_lakes
+-- -- ne_10m_lakes
--- etldoc: ne_10m_lakes -> ne_10m_lakes_gen_z5
+-- -- etldoc: ne_10m_lakes -> ne_10m_lakes_gen_z5
-DROP MATERIALIZED VIEW IF EXISTS ne_10m_lakes_gen_z5 CASCADE;
+-- DROP MATERIALIZED VIEW IF EXISTS ne_10m_lakes_gen_z5 CASCADE;
-CREATE MATERIALIZED VIEW ne_10m_lakes_gen_z5 AS
+-- CREATE MATERIALIZED VIEW ne_10m_lakes_gen_z5 AS
-(
+-- (
-SELECT COALESCE(osm.osm_id, ne_id) AS id,
+-- SELECT COALESCE(osm.osm_id, ne_id) AS id,
--- Union fixing e.g. Lake Huron and Georgian Bay duplicity
+-- -- Union fixing e.g. Lake Huron and Georgian Bay duplicity
-(ST_Dump(ST_MakeValid(ST_Simplify(ST_Union(geometry), ZRes(7))))).geom AS geometry,
+-- (ST_Dump(ST_MakeValid(ST_Simplify(ST_Union(geometry), ZRes(7))))).geom AS geometry,
-'lake'::text AS class,
+-- 'lake'::text AS class,
-NULL::boolean AS is_intermittent,
+-- NULL::boolean AS is_intermittent,
-NULL::boolean AS is_bridge,
+-- NULL::boolean AS is_bridge,
-NULL::boolean AS is_tunnel
+-- NULL::boolean AS is_tunnel
-FROM ne_10m_lakes
+-- FROM ne_10m_lakes
-LEFT JOIN match_osm_ne_id osm USING (ne_id)
+-- LEFT JOIN match_osm_ne_id osm USING (ne_id)
-GROUP BY COALESCE(osm.osm_id, ne_id), is_intermittent, is_bridge, is_tunnel
+-- GROUP BY COALESCE(osm.osm_id, ne_id), is_intermittent, is_bridge, is_tunnel
-) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
+-- ) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
-CREATE INDEX IF NOT EXISTS ne_10m_lakes_gen_z5_idx ON ne_10m_lakes_gen_z5 USING gist (geometry);
+-- CREATE INDEX IF NOT EXISTS ne_10m_lakes_gen_z5_idx ON ne_10m_lakes_gen_z5 USING gist (geometry);
+--
--- etldoc: ne_10m_lakes_gen_z5 -> ne_10m_lakes_gen_z4
+-- -- etldoc: ne_10m_lakes_gen_z5 -> ne_10m_lakes_gen_z4
-DROP MATERIALIZED VIEW IF EXISTS ne_10m_lakes_gen_z4 CASCADE;
+-- DROP MATERIALIZED VIEW IF EXISTS ne_10m_lakes_gen_z4 CASCADE;
-CREATE MATERIALIZED VIEW ne_10m_lakes_gen_z4 AS
+-- CREATE MATERIALIZED VIEW ne_10m_lakes_gen_z4 AS
-(
+-- (
-SELECT id,
+-- SELECT id,
-(ST_Dump(ST_MakeValid(ST_Simplify(geometry, ZRes(6))))).geom AS geometry,
+-- (ST_Dump(ST_MakeValid(ST_Simplify(geometry, ZRes(6))))).geom AS geometry,
-class,
+-- class,
-is_intermittent,
+-- is_intermittent,
-is_bridge,
+-- is_bridge,
-is_tunnel
+-- is_tunnel
-FROM ne_10m_lakes_gen_z5
+-- FROM ne_10m_lakes_gen_z5
-) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
+-- ) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
-CREATE INDEX IF NOT EXISTS ne_10m_lakes_gen_z4_idx ON ne_10m_lakes_gen_z4 USING gist (geometry);
+-- CREATE INDEX IF NOT EXISTS ne_10m_lakes_gen_z4_idx ON ne_10m_lakes_gen_z4 USING gist (geometry);
+--
--- ne_50m_ocean
+-- -- ne_50m_ocean
--- etldoc: ne_50m_ocean -> ne_50m_ocean_gen_z4
+-- -- etldoc: ne_50m_ocean -> ne_50m_ocean_gen_z4
-DROP MATERIALIZED VIEW IF EXISTS ne_50m_ocean_gen_z4 CASCADE;
+-- DROP MATERIALIZED VIEW IF EXISTS ne_50m_ocean_gen_z4 CASCADE;
-CREATE MATERIALIZED VIEW ne_50m_ocean_gen_z4 AS
+-- CREATE MATERIALIZED VIEW ne_50m_ocean_gen_z4 AS
-(
+-- (
-SELECT NULL::integer AS id,
+-- SELECT NULL::integer AS id,
-(ST_Dump(ST_Simplify(geometry, ZRes(6)))).geom AS geometry,
+-- (ST_Dump(ST_Simplify(geometry, ZRes(6)))).geom AS geometry,
-'ocean'::text AS class,
+-- 'ocean'::text AS class,
-NULL::boolean AS is_intermittent,
+-- NULL::boolean AS is_intermittent,
-NULL::boolean AS is_bridge,
+-- NULL::boolean AS is_bridge,
-NULL::boolean AS is_tunnel
+-- NULL::boolean AS is_tunnel
-FROM ne_50m_ocean
+-- FROM ne_50m_ocean
-) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
+-- ) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
-CREATE INDEX IF NOT EXISTS ne_50m_ocean_gen_z4_idx ON ne_50m_ocean_gen_z4 USING gist (geometry);
+-- CREATE INDEX IF NOT EXISTS ne_50m_ocean_gen_z4_idx ON ne_50m_ocean_gen_z4 USING gist (geometry);
+--
--- etldoc: ne_50m_ocean_gen_z4 -> ne_50m_ocean_gen_z3
+-- -- etldoc: ne_50m_ocean_gen_z4 -> ne_50m_ocean_gen_z3
-DROP MATERIALIZED VIEW IF EXISTS ne_50m_ocean_gen_z3 CASCADE;
+-- DROP MATERIALIZED VIEW IF EXISTS ne_50m_ocean_gen_z3 CASCADE;
-CREATE MATERIALIZED VIEW ne_50m_ocean_gen_z3 AS
+-- CREATE MATERIALIZED VIEW ne_50m_ocean_gen_z3 AS
-(
+-- (
-SELECT id,
+-- SELECT id,
-ST_Simplify(geometry, ZRes(5)) AS geometry,
+-- ST_Simplify(geometry, ZRes(5)) AS geometry,
-class,
+-- class,
-is_intermittent,
+-- is_intermittent,
-is_bridge,
+-- is_bridge,
-is_tunnel
+-- is_tunnel
-FROM ne_50m_ocean_gen_z4
+-- FROM ne_50m_ocean_gen_z4
-) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
+-- ) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
-CREATE INDEX IF NOT EXISTS ne_50m_ocean_gen_z3_idx ON ne_50m_ocean_gen_z3 USING gist (geometry);
+-- CREATE INDEX IF NOT EXISTS ne_50m_ocean_gen_z3_idx ON ne_50m_ocean_gen_z3 USING gist (geometry);
+--
--- etldoc: ne_50m_ocean_gen_z3 -> ne_50m_ocean_gen_z2
+-- -- etldoc: ne_50m_ocean_gen_z3 -> ne_50m_ocean_gen_z2
-DROP MATERIALIZED VIEW IF EXISTS ne_50m_ocean_gen_z2 CASCADE;
+-- DROP MATERIALIZED VIEW IF EXISTS ne_50m_ocean_gen_z2 CASCADE;
-CREATE MATERIALIZED VIEW ne_50m_ocean_gen_z2 AS
+-- CREATE MATERIALIZED VIEW ne_50m_ocean_gen_z2 AS
-(
+-- (
-SELECT id,
+-- SELECT id,
-ST_Simplify(geometry, ZRes(4)) AS geometry,
+-- ST_Simplify(geometry, ZRes(4)) AS geometry,
-class,
+-- class,
-is_intermittent,
+-- is_intermittent,
-is_bridge,
+-- is_bridge,
-is_tunnel
+-- is_tunnel
-FROM ne_50m_ocean_gen_z3
+-- FROM ne_50m_ocean_gen_z3
-) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
+-- ) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
-CREATE INDEX IF NOT EXISTS ne_50m_ocean_gen_z2_idx ON ne_50m_ocean_gen_z2 USING gist (geometry);
+-- CREATE INDEX IF NOT EXISTS ne_50m_ocean_gen_z2_idx ON ne_50m_ocean_gen_z2 USING gist (geometry);
+--
--- ne_50m_lakes
+-- -- ne_50m_lakes
--- etldoc: ne_50m_lakes -> ne_50m_lakes_gen_z3
+-- -- etldoc: ne_50m_lakes -> ne_50m_lakes_gen_z3
-DROP MATERIALIZED VIEW IF EXISTS ne_50m_lakes_gen_z3 CASCADE;
+-- DROP MATERIALIZED VIEW IF EXISTS ne_50m_lakes_gen_z3 CASCADE;
-CREATE MATERIALIZED VIEW ne_50m_lakes_gen_z3 AS
+-- CREATE MATERIALIZED VIEW ne_50m_lakes_gen_z3 AS
-(
+-- (
-SELECT COALESCE(osm.osm_id, ne_id) AS id,
+-- SELECT COALESCE(osm.osm_id, ne_id) AS id,
-(ST_Dump(ST_MakeValid(ST_Simplify(geometry, ZRes(5))))).geom AS geometry,
+-- (ST_Dump(ST_MakeValid(ST_Simplify(geometry, ZRes(5))))).geom AS geometry,
-'lake'::text AS class,
+-- 'lake'::text AS class,
-NULL::boolean AS is_intermittent,
+-- NULL::boolean AS is_intermittent,
-NULL::boolean AS is_bridge,
+-- NULL::boolean AS is_bridge,
-NULL::boolean AS is_tunnel
+-- NULL::boolean AS is_tunnel
-FROM ne_50m_lakes
+-- FROM ne_50m_lakes
-LEFT JOIN match_osm_ne_id osm USING (ne_id)
+-- LEFT JOIN match_osm_ne_id osm USING (ne_id)
-) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
+-- ) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
-CREATE INDEX IF NOT EXISTS ne_50m_lakes_gen_z3_idx ON ne_50m_lakes_gen_z3 USING gist (geometry);
+-- CREATE INDEX IF NOT EXISTS ne_50m_lakes_gen_z3_idx ON ne_50m_lakes_gen_z3 USING gist (geometry);
+--
--- etldoc: ne_50m_lakes_gen_z3 -> ne_50m_lakes_gen_z2
+-- -- etldoc: ne_50m_lakes_gen_z3 -> ne_50m_lakes_gen_z2
-DROP MATERIALIZED VIEW IF EXISTS ne_50m_lakes_gen_z2 CASCADE;
+-- DROP MATERIALIZED VIEW IF EXISTS ne_50m_lakes_gen_z2 CASCADE;
-CREATE MATERIALIZED VIEW ne_50m_lakes_gen_z2 AS
+-- CREATE MATERIALIZED VIEW ne_50m_lakes_gen_z2 AS
-(
+-- (
-SELECT id,
+-- SELECT id,
-(ST_Dump(ST_MakeValid(ST_Simplify(geometry, ZRes(4))))).geom AS geometry,
+-- (ST_Dump(ST_MakeValid(ST_Simplify(geometry, ZRes(4))))).geom AS geometry,
-class,
+-- class,
-is_intermittent,
+-- is_intermittent,
-is_bridge,
+-- is_bridge,
-is_tunnel
+-- is_tunnel
-FROM ne_50m_lakes_gen_z3
+-- FROM ne_50m_lakes_gen_z3
-) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
+-- ) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
-CREATE INDEX IF NOT EXISTS ne_50m_lakes_gen_z2_idx ON ne_50m_lakes_gen_z2 USING gist (geometry);
+-- CREATE INDEX IF NOT EXISTS ne_50m_lakes_gen_z2_idx ON ne_50m_lakes_gen_z2 USING gist (geometry);
+--
---ne_110m_ocean
+-- --ne_110m_ocean
--- etldoc: ne_110m_ocean -> ne_110m_ocean_gen_z1
+-- -- etldoc: ne_110m_ocean -> ne_110m_ocean_gen_z1
-DROP MATERIALIZED VIEW IF EXISTS ne_110m_ocean_gen_z1 CASCADE;
+-- DROP MATERIALIZED VIEW IF EXISTS ne_110m_ocean_gen_z1 CASCADE;
-CREATE MATERIALIZED VIEW ne_110m_ocean_gen_z1 AS
+-- CREATE MATERIALIZED VIEW ne_110m_ocean_gen_z1 AS
-(
+-- (
-SELECT NULL::integer AS id,
+-- SELECT NULL::integer AS id,
-ST_Simplify(geometry, ZRes(3)) AS geometry,
+-- ST_Simplify(geometry, ZRes(3)) AS geometry,
-'ocean'::text AS class,
+-- 'ocean'::text AS class,
-NULL::boolean AS is_intermittent,
+-- NULL::boolean AS is_intermittent,
-NULL::boolean AS is_bridge,
+-- NULL::boolean AS is_bridge,
-NULL::boolean AS is_tunnel
+-- NULL::boolean AS is_tunnel
-FROM ne_110m_ocean
+-- FROM ne_110m_ocean
-) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
+-- ) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
-CREATE INDEX IF NOT EXISTS ne_110m_ocean_gen_z1_idx ON ne_110m_ocean_gen_z1 USING gist (geometry);
+-- CREATE INDEX IF NOT EXISTS ne_110m_ocean_gen_z1_idx ON ne_110m_ocean_gen_z1 USING gist (geometry);
+--
--- etldoc: ne_110m_ocean_gen_z1 -> ne_110m_ocean_gen_z0
+-- -- etldoc: ne_110m_ocean_gen_z1 -> ne_110m_ocean_gen_z0
-DROP MATERIALIZED VIEW IF EXISTS ne_110m_ocean_gen_z0 CASCADE;
+-- DROP MATERIALIZED VIEW IF EXISTS ne_110m_ocean_gen_z0 CASCADE;
-CREATE MATERIALIZED VIEW ne_110m_ocean_gen_z0 AS
+-- CREATE MATERIALIZED VIEW ne_110m_ocean_gen_z0 AS
-(
+-- (
-SELECT id,
+-- SELECT id,
-ST_Simplify(geometry, ZRes(2)) AS geometry,
+-- ST_Simplify(geometry, ZRes(2)) AS geometry,
-class,
+-- class,
-is_intermittent,
+-- is_intermittent,
-is_bridge,
+-- is_bridge,
-is_tunnel
+-- is_tunnel
-FROM ne_110m_ocean_gen_z1
+-- FROM ne_110m_ocean_gen_z1
-) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
+-- ) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
-CREATE INDEX IF NOT EXISTS ne_110m_ocean_gen_z0_idx ON ne_110m_ocean_gen_z0 USING gist (geometry);
+-- CREATE INDEX IF NOT EXISTS ne_110m_ocean_gen_z0_idx ON ne_110m_ocean_gen_z0 USING gist (geometry);
+--
+--
--- ne_110m_lakes
+-- -- ne_110m_lakes
--- etldoc: ne_110m_lakes -> ne_110m_lakes_gen_z1
+-- -- etldoc: ne_110m_lakes -> ne_110m_lakes_gen_z1
-DROP MATERIALIZED VIEW IF EXISTS ne_110m_lakes_gen_z1 CASCADE;
+-- DROP MATERIALIZED VIEW IF EXISTS ne_110m_lakes_gen_z1 CASCADE;
-CREATE MATERIALIZED VIEW ne_110m_lakes_gen_z1 AS
+-- CREATE MATERIALIZED VIEW ne_110m_lakes_gen_z1 AS
-(
+-- (
SELECT COALESCE(osm.osm_id, ne_id) AS id, -- SELECT COALESCE(osm.osm_id, ne_id) AS id,
(ST_Dump(ST_Simplify(geometry, ZRes(3)))).geom AS geometry, -- (ST_Dump(ST_Simplify(geometry, ZRes(3)))).geom AS geometry,
'lake'::text AS class, -- 'lake'::text AS class,
NULL::boolean AS is_intermittent, -- NULL::boolean AS is_intermittent,
NULL::boolean AS is_bridge, -- NULL::boolean AS is_bridge,
NULL::boolean AS is_tunnel -- NULL::boolean AS is_tunnel
FROM ne_110m_lakes -- FROM ne_110m_lakes
LEFT JOIN match_osm_ne_id osm USING (ne_id) -- LEFT JOIN match_osm_ne_id osm USING (ne_id)
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ; -- ) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
CREATE INDEX IF NOT EXISTS ne_110m_lakes_gen_z1_idx ON ne_110m_lakes_gen_z1 USING gist (geometry); -- CREATE INDEX IF NOT EXISTS ne_110m_lakes_gen_z1_idx ON ne_110m_lakes_gen_z1 USING gist (geometry);
--
-- etldoc: ne_110m_lakes_gen_z1 -> ne_110m_lakes_gen_z0 -- -- etldoc: ne_110m_lakes_gen_z1 -> ne_110m_lakes_gen_z0
DROP MATERIALIZED VIEW IF EXISTS ne_110m_lakes_gen_z0 CASCADE; -- DROP MATERIALIZED VIEW IF EXISTS ne_110m_lakes_gen_z0 CASCADE;
CREATE MATERIALIZED VIEW ne_110m_lakes_gen_z0 AS -- CREATE MATERIALIZED VIEW ne_110m_lakes_gen_z0 AS
( -- (
SELECT id, -- SELECT id,
(ST_Dump(ST_Simplify(geometry, ZRes(2)))).geom AS geometry, -- (ST_Dump(ST_Simplify(geometry, ZRes(2)))).geom AS geometry,
class, -- class,
is_intermittent, -- is_intermittent,
is_bridge, -- is_bridge,
is_tunnel -- is_tunnel
FROM ne_110m_lakes_gen_z1 -- FROM ne_110m_lakes_gen_z1
) /* DELAY_MATERIALIZED_VIEW_CREATION */ ; -- ) /* DELAY_MATERIALIZED_VIEW_CREATION */ ;
CREATE INDEX IF NOT EXISTS ne_110m_lakes_gen_z0_idx ON ne_110m_lakes_gen_z0 USING gist (geometry); -- CREATE INDEX IF NOT EXISTS ne_110m_lakes_gen_z0_idx ON ne_110m_lakes_gen_z0 USING gist (geometry);
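All of the Natural Earth generalization views above (commented out in this revision) follow one pattern: each ne_*_gen_z<N> view simplifies its source with ZRes(N + 2), i.e. a quarter-pixel tolerance at the target zoom N. A quick way to inspect those tolerances, assuming the standard postgis-vt-util definition of ZRes (meters per pixel at a given web-mercator zoom, 256 px tiles):

-- 40075016.6855785 m is the equatorial circumference of the mercator world;
-- dividing by 256 * 2^z gives meters per pixel at zoom z (what ZRes(z) returns).
SELECT z AS zoom,
       40075016.6855785 / (256 * 2 ^ z) AS zres_m_per_px,
       40075016.6855785 / (256 * 2 ^ (z + 2)) AS simplify_tolerance_m
FROM generate_series(0, 5) AS z;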
DROP MATERIALIZED VIEW IF EXISTS water_z0;
DROP MATERIALIZED VIEW IF EXISTS water_z1;
DROP MATERIALIZED VIEW IF EXISTS water_z2;
DROP MATERIALIZED VIEW IF EXISTS water_z3;
DROP MATERIALIZED VIEW IF EXISTS water_z4;
DROP MATERIALIZED VIEW IF EXISTS water_z5;
DROP MATERIALIZED VIEW IF EXISTS water_z6;
DROP MATERIALIZED VIEW IF EXISTS water_z7;
DROP MATERIALIZED VIEW IF EXISTS water_z8;
@@ -253,132 +259,276 @@ DROP MATERIALIZED VIEW IF EXISTS water_z10;
DROP MATERIALIZED VIEW IF EXISTS water_z11;
DROP MATERIALIZED VIEW IF EXISTS water_z12;
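Unlike the plain views they replace, the water_z* relations defined further below are materialized views: PostgreSQL has no CREATE OR REPLACE form for them, hence the explicit DROPs above, and their contents are frozen at creation time. After re-importing OSM data they would therefore need an explicit refresh; a minimal sketch, using the view names defined below:

REFRESH MATERIALIZED VIEW water_z0;
REFRESH MATERIALIZED VIEW water_z1;
-- repeat for the remaining zoom levels; REFRESH ... CONCURRENTLY would avoid
-- blocking readers, but it requires a UNIQUE index, and these views only get
-- a GiST index on geometry.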
-- CREATE OR REPLACE VIEW water_z0 AS
-- (
-- -- etldoc: ne_110m_ocean_gen_z0 -> water_z0
-- SELECT id,
--        geometry,
--        class,
--        is_intermittent,
--        is_bridge,
--        is_tunnel
-- FROM ne_110m_ocean_gen_z0
-- UNION ALL
-- -- etldoc: ne_110m_lakes_gen_z0 -> water_z0
-- SELECT id,
--        geometry,
--        class,
--        is_intermittent,
--        is_bridge,
--        is_tunnel
-- FROM ne_110m_lakes_gen_z0
-- );
--
-- CREATE OR REPLACE VIEW water_z1 AS
-- (
-- -- etldoc: ne_110m_ocean_gen_z1 -> water_z1
-- SELECT id,
--        geometry,
--        class,
--        is_intermittent,
--        is_bridge,
--        is_tunnel
-- FROM ne_110m_ocean_gen_z1
-- UNION ALL
-- -- etldoc: ne_110m_lakes_gen_z1 -> water_z1
-- SELECT id,
--        geometry,
--        class,
--        is_intermittent,
--        is_bridge,
--        is_tunnel
-- FROM ne_110m_lakes_gen_z1
-- );
--
-- CREATE OR REPLACE VIEW water_z2 AS
-- (
-- -- etldoc: ne_50m_ocean_gen_z2 -> water_z2
-- SELECT id,
--        geometry,
--        class,
--        is_intermittent,
--        is_bridge,
--        is_tunnel
-- FROM ne_50m_ocean_gen_z2
-- UNION ALL
-- -- etldoc: ne_50m_lakes_gen_z2 -> water_z2
-- SELECT id,
--        geometry,
--        class,
--        is_intermittent,
--        is_bridge,
--        is_tunnel
-- FROM ne_50m_lakes_gen_z2
-- );
--
-- CREATE OR REPLACE VIEW water_z3 AS
-- (
-- -- etldoc: ne_50m_ocean_gen_z3 -> water_z3
-- SELECT id,
--        geometry,
--        class,
--        is_intermittent,
--        is_bridge,
--        is_tunnel
-- FROM ne_50m_ocean_gen_z3
-- UNION ALL
-- -- etldoc: ne_50m_lakes_gen_z3 -> water_z3
-- SELECT id,
--        geometry,
--        class,
--        is_intermittent,
--        is_bridge,
--        is_tunnel
-- FROM ne_50m_lakes_gen_z3
-- );
--
-- CREATE OR REPLACE VIEW water_z4 AS
-- (
-- -- etldoc: ne_50m_ocean_gen_z4 -> water_z4
-- SELECT id,
--        geometry,
--        class,
--        is_intermittent,
--        is_bridge,
--        is_tunnel
-- FROM ne_50m_ocean_gen_z4
-- UNION ALL
-- -- etldoc: ne_10m_lakes_gen_z4 -> water_z4
-- SELECT id,
--        geometry,
--        class,
--        is_intermittent,
--        is_bridge,
--        is_tunnel
-- FROM ne_10m_lakes_gen_z4
-- );
--
--
-- CREATE OR REPLACE VIEW water_z5 AS
-- (
-- -- etldoc: ne_10m_ocean_gen_z5 -> water_z5
-- SELECT id,
-- geometry,
-- class,
-- is_intermittent,
-- is_bridge,
-- is_tunnel
-- FROM ne_10m_ocean_gen_z5
-- UNION ALL
-- -- etldoc: ne_10m_lakes_gen_z5 -> water_z5
-- SELECT id,
-- geometry,
-- class,
-- is_intermittent,
-- is_bridge,
-- is_tunnel
-- FROM ne_10m_lakes_gen_z5
-- );
CREATE MATERIALIZED VIEW water_z0 AS
(
-- etldoc: osm_ocean_polygon_gen_z0 -> water_z0
SELECT NULL::integer AS id,
       (ST_Dump(geometry)).geom AS geometry,
       'ocean'::text AS class,
       NULL::boolean AS is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM osm_ocean_polygon_gen_z0
UNION ALL
-- etldoc: osm_water_polygon_gen_z1 -> water_z0
SELECT osm_id AS id,
       (ST_Dump(geometry)).geom AS geometry,
       water_class(waterway, water, leisure) AS class,
       is_intermittent,
       NULL::boolean AS is_bridge,
       NULL::boolean AS is_tunnel
FROM osm_water_polygon_gen_z1
WHERE "natural" != 'bay'
);
CREATE INDEX ON water_z0 USING gist(geometry);
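water_class(waterway, water, leisure) used in these views is the water layer's tag-mapping helper, defined elsewhere in the layer's SQL; it folds OSM tags into tile classes such as lake, river, pond and swimming_pool. A rough, hypothetical re-implementation, purely to illustrate the shape of the mapping (the real function covers more tag values):

-- Hypothetical sketch only; not the actual water_class() definition.
CREATE FUNCTION water_class_sketch(waterway text, water text, leisure text)
    RETURNS text
    LANGUAGE sql IMMUTABLE
AS
$$
SELECT CASE
           WHEN leisure = 'swimming_pool' THEN 'swimming_pool'
           WHEN water IN ('river', 'stream', 'canal', 'ditch', 'drain')
               OR waterway IN ('riverbank', 'canal') THEN 'river'
           WHEN water = 'pond' THEN 'pond'
           ELSE 'lake'
       END;
$$;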
CREATE MATERIALIZED VIEW water_z1 AS
(
-- etldoc: osm_ocean_polygon_gen_z1 -> water_z1
SELECT NULL::integer AS id,
(ST_Dump(geometry)).geom AS geometry,
'ocean'::text AS class,
NULL::boolean AS is_intermittent,
NULL::boolean AS is_bridge,
NULL::boolean AS is_tunnel
FROM osm_ocean_polygon_gen_z1
UNION ALL
-- etldoc: osm_water_polygon_gen_z1 -> water_z1
SELECT osm_id AS id,
(ST_Dump(geometry)).geom AS geometry,
water_class(waterway, water, leisure) AS class,
is_intermittent,
NULL::boolean AS is_bridge,
NULL::boolean AS is_tunnel
FROM osm_water_polygon_gen_z1
WHERE "natural" != 'bay'
);
CREATE INDEX ON water_z1 USING gist(geometry);
CREATE MATERIALIZED VIEW water_z2 AS
(
-- etldoc: osm_ocean_polygon_gen_z2 -> water_z2
SELECT NULL::integer AS id,
(ST_Dump(geometry)).geom AS geometry,
'ocean'::text AS class,
NULL::boolean AS is_intermittent,
NULL::boolean AS is_bridge,
NULL::boolean AS is_tunnel
FROM osm_ocean_polygon_gen_z2
UNION ALL
-- etldoc: osm_water_polygon_gen_z2 -> water_z2
SELECT osm_id AS id,
(ST_Dump(geometry)).geom AS geometry,
water_class(waterway, water, leisure) AS class,
is_intermittent,
NULL::boolean AS is_bridge,
NULL::boolean AS is_tunnel
FROM osm_water_polygon_gen_z2
WHERE "natural" != 'bay'
);
CREATE INDEX ON water_z2 USING gist(geometry);
CREATE MATERIALIZED VIEW water_z3 AS
(
-- etldoc: osm_ocean_polygon_gen_z3 -> water_z3
SELECT NULL::integer AS id,
(ST_Dump(geometry)).geom AS geometry,
'ocean'::text AS class,
NULL::boolean AS is_intermittent,
NULL::boolean AS is_bridge,
NULL::boolean AS is_tunnel
FROM osm_ocean_polygon_gen_z3
UNION ALL
-- etldoc: osm_water_polygon_gen_z3 -> water_z3
SELECT osm_id AS id,
(ST_Dump(geometry)).geom AS geometry,
water_class(waterway, water, leisure) AS class,
is_intermittent,
NULL::boolean AS is_bridge,
NULL::boolean AS is_tunnel
FROM osm_water_polygon_gen_z3
WHERE "natural" != 'bay'
);
CREATE INDEX ON water_z3 USING gist(geometry);
CREATE MATERIALIZED VIEW water_z4 AS
(
-- etldoc: osm_ocean_polygon_gen_z4 -> water_z4
SELECT NULL::integer AS id,
(ST_Dump(geometry)).geom AS geometry,
'ocean'::text AS class,
NULL::boolean AS is_intermittent,
NULL::boolean AS is_bridge,
NULL::boolean AS is_tunnel
FROM osm_ocean_polygon_gen_z4
UNION ALL
-- etldoc: osm_water_polygon_gen_z4 -> water_z4
SELECT osm_id AS id,
(ST_Dump(geometry)).geom AS geometry,
water_class(waterway, water, leisure) AS class,
is_intermittent,
NULL::boolean AS is_bridge,
NULL::boolean AS is_tunnel
FROM osm_water_polygon_gen_z4
WHERE "natural" != 'bay'
);
CREATE INDEX ON water_z4 USING gist(geometry);
CREATE MATERIALIZED VIEW water_z5 AS
(
-- etldoc: osm_ocean_polygon_gen_z5 -> water_z5
SELECT NULL::integer AS id,
(ST_Dump(geometry)).geom AS geometry,
'ocean'::text AS class,
NULL::boolean AS is_intermittent,
NULL::boolean AS is_bridge,
NULL::boolean AS is_tunnel
FROM osm_ocean_polygon_gen_z5
UNION ALL
-- etldoc: osm_water_polygon_gen_z5 -> water_z5
SELECT osm_id AS id,
(ST_Dump(geometry)).geom AS geometry,
water_class(waterway, water, leisure) AS class,
is_intermittent,
NULL::boolean AS is_bridge,
NULL::boolean AS is_tunnel
FROM osm_water_polygon_gen_z5
WHERE "natural" != 'bay'
);
CREATE INDEX ON water_z5 USING gist(geometry);
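water_z1 through water_z5 above differ only in the _gen_z<N> suffix of their two source tables, which makes the per-zoom etldoc comments easy to get wrong. A hedged alternative that stamps the five definitions out of one template (same column list and bay filter as above), should the copy-paste ever become a maintenance burden:

-- Sketch: generate water_z1..water_z5 from a single template.
DO $outer$
DECLARE
    z int;
BEGIN
    FOR z IN 1..5 LOOP
        EXECUTE format($sql$
            CREATE MATERIALIZED VIEW water_z%1$s AS
            (
            SELECT NULL::integer AS id,
                   (ST_Dump(geometry)).geom AS geometry,
                   'ocean'::text AS class,
                   NULL::boolean AS is_intermittent,
                   NULL::boolean AS is_bridge,
                   NULL::boolean AS is_tunnel
            FROM osm_ocean_polygon_gen_z%1$s
            UNION ALL
            SELECT osm_id AS id,
                   (ST_Dump(geometry)).geom AS geometry,
                   water_class(waterway, water, leisure) AS class,
                   is_intermittent,
                   NULL::boolean AS is_bridge,
                   NULL::boolean AS is_tunnel
            FROM osm_water_polygon_gen_z%1$s
            WHERE "natural" != 'bay'
            )$sql$, z);
        EXECUTE format('CREATE INDEX ON water_z%s USING gist (geometry)', z);
    END LOOP;
END;
$outer$;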
CREATE MATERIALIZED VIEW water_z6 AS
(
View file
@@ -38,7 +38,7 @@
         "tunnel"
       ]
     ],
-    "order": 12
+    "order": 1
   },
   {
     "id": "waterway_river",
@@ -84,7 +84,7 @@
         1
       ]
     ],
-    "order": 13
+    "order": 2
   },
   {
     "id": "waterway_river_intermittent",
@@ -133,7 +133,7 @@
         1
       ]
     ],
-    "order": 14
+    "order": 3
   },
   {
     "id": "waterway_other",
@@ -179,7 +179,7 @@
         1
       ]
     ],
-    "order": 15
+    "order": 4
   },
   {
     "id": "waterway_other_intermittent",
@@ -229,7 +229,7 @@
         1
       ]
     ],
-    "order": 16
+    "order": 5
   },
   {
     "id": "waterway-bridge-case",
@@ -282,7 +282,7 @@
         "bridge"
       ]
     ],
-    "order": 111
+    "order": 8
   },
   {
     "id": "waterway-bridge",
@@ -322,7 +322,7 @@
         "bridge"
       ]
     ],
-    "order": 112
+    "order": 9
   },
   {
     "id": "water_way_name",
@@ -367,7 +367,7 @@
         "LineString"
       ]
     ],
-    "order": 151
+    "order": 10
   }
 ]
}
View file
@@ -1,28 +1,28 @@
tileset:
  layers:
    - layers/water/water.yaml
    #- layers/waterway/waterway.yaml
    #- layers/landcover/landcover.yaml
    #- layers/landuse/landuse.yaml
    #- layers/mountain_peak/mountain_peak.yaml
    #- layers/park/park.yaml
    #- layers/boundary/boundary.yaml
    #- layers/aeroway/aeroway.yaml
    #- layers/transportation/transportation.yaml
    #- layers/building/building.yaml
    #- layers/water_name/water_name.yaml
    #- layers/transportation_name/transportation_name.yaml
    #- layers/place/place.yaml
    #- layers/housenumber/housenumber.yaml
    #- layers/poi/poi.yaml
    #- layers/aerodrome_label/aerodrome_label.yaml
  name: OpenMapTiles
  version: 3.15.0
  id: openmaptiles
  description: "A tileset showcasing all layers in OpenMapTiles. https://openmaptiles.org"
  attribution: '<a href="https://www.openmaptiles.org/" target="_blank">&copy; OpenMapTiles</a> <a href="https://www.openstreetmap.org/copyright" target="_blank">&copy; OpenStreetMap contributors</a>'
  center: [0, 0, 1]
  bounds: [-180.0, -85.0511, 180.0, 85.0511]
  maxzoom: 14
  minzoom: 0
  pixel_scale: 256
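With every layer except water commented out above, the tileset built from this definition should expose only the water layer. A quick sanity check against the generated archive (MBTiles is just SQLite; the data/waterway_13.mbtiles path follows from the MBTILES_FILE setting in this commit's .env):

-- run via: sqlite3 data/waterway_13.mbtiles
SELECT name, value
FROM metadata
WHERE name IN ('minzoom', 'maxzoom', 'json');  -- 'json' holds the vector_layers list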
View file
@@ -115,7 +115,7 @@ make refresh-docker-images
##### backup log from here ...
#exec &> >(tee -a "$log_file")
echo " "
echo "====================================================================================="