Port 9.6 to develop (#89)

* Part one of porting work from 9.6 to 10

* Backported more scripts from 9.6 branch

* Added missing apt update in dockerfile

* Updates to entrypoint to reference image and update docker-compose to reference PostgreSQL 10

* Added sample and docs from 9.6 branch

* Removed my diagram as Rizky had already added one

* Fix env paths for pg 10

* Fixes for backporting work from 9.6 to 10 - db now spins up and accepts connections properly
pull/90/head
Tim Sutton 2018-03-21 22:53:39 +02:00 committed by GitHub
parent 1250909c4e
commit cd66fea41d
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
21 changed files with 1339 additions and 309 deletions

3
.gitignore vendored

@ -1,2 +1,5 @@
.idea
*.*~
*/replication/pg-*
*/replication/docker-compose.override.yml
.DS_Store

Dockerfile

@ -6,31 +6,40 @@ RUN export DEBIAN_FRONTEND=noninteractive
ENV DEBIAN_FRONTEND noninteractive
RUN dpkg-divert --local --rename --add /sbin/initctl
RUN apt-get -y update
RUN apt-get -y install gnupg2 wget ca-certificates rpl pwgen
RUN apt-get -y update; apt-get -y install gnupg2 wget ca-certificates rpl pwgen
RUN sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ stretch-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
RUN apt-get -y update
#RUN apt-get -y upgrade
#-------------Application Specific Stuff ----------------------------------------------------
# We add postgis as well to prevent build errors (that we don't see on local builds)
# on docker hub e.g.
# The following packages have unmet dependencies:
RUN apt-get install -y postgresql-client-10 postgresql-common postgresql-10 postgresql-10-postgis-2.4 postgresql-10-pgrouting netcat
RUN apt-get update; apt-get install -y postgresql-client-10 postgresql-common postgresql-10 postgresql-10-postgis-2.4 postgresql-10-pgrouting netcat
# Open port 5432 so linked containers can see them
EXPOSE 5432
# Run any additional tasks here that are too tedious to put in
# this dockerfile directly.
ADD env-data.sh /env-data.sh
ADD setup.sh /setup.sh
RUN chmod 0755 /setup.sh
RUN chmod +x /setup.sh
RUN /setup.sh
# We will run any commands in this when the container starts
ADD start-postgis.sh /start-postgis.sh
RUN chmod 0755 /start-postgis.sh
ADD docker-entrypoint.sh /docker-entrypoint.sh
ADD setup-conf.sh /
ADD setup-database.sh /
ADD setup-pg_hba.sh /
ADD setup-replication.sh /
ADD setup-ssl.sh /
ADD setup-user.sh /
ADD postgresql.conf /tmp/postgresql.conf
RUN chmod +x /docker-entrypoint.sh
CMD /start-postgis.sh
# Optimise postgresql
RUN echo "kernel.shmmax=543252480" >> /etc/sysctl.conf
RUN echo "kernel.shmall=2097152" >> /etc/sysctl.conf
ENTRYPOINT /docker-entrypoint.sh

143
README.md

@ -22,6 +22,10 @@ environment (though probably not for heavy load databases).
**Note:** We recommend using ``apt-cacher-ng`` to speed up package fetching -
you should configure the host for it in the provided 71-apt-cacher-ng file.
There is a nice 'from scratch' tutorial on using this docker image on Alex Urquhart's
blog [here](https://alexurquhart.com/post/set-up-postgis-with-docker/) - if you are
just getting started with Docker, PostGIS and QGIS, we highly recommend following it.
## Tagged versions
The following convention is used for tagging the images we build:
@ -30,7 +34,7 @@ kartoza/postgis:[postgres_version]-[postgis-version]
So for example:
``kartoza/postgis:9.5-2.2`` Provides PostgreSQL 9.5, PostGIS 2.2
``kartoza/postgis:9.6-2.4`` Provides PostgreSQL 9.6, PostGIS 2.4
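For instance, assuming the ``10.0-2.4`` tag produced by this branch's ``build.sh`` has
been published, pulling a pinned image would look like:
```
docker pull kartoza/postgis:10.0-2.4
```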
**Note:** We highly recommend that you use tagged versions because
successive minor versions of PostgreSQL write their database clusters
@ -101,30 +105,24 @@ the container will allow connections only from the docker private subnet.
* -e ALLOW_IP_RANGE=<0.0.0.0/0>
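As a rough sketch, the options above can be combined in a single ``docker run``; the
container name, host port and credentials here are only placeholders:
```
docker run -d --name postgis \
  -p 25432:5432 \
  -e POSTGRES_USER=docker \
  -e POSTGRES_PASS=docker \
  -e POSTGRES_DBNAME=gis \
  -e ALLOW_IP_RANGE=0.0.0.0/0 \
  kartoza/postgis:10.0-2.4
```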
## Convenience run script
## Convenience docker-compose.yml
For convenience we have provided a bash script for running this container
that lets you specify a volume mount point and a username / password
for the new instance superuser. It takes these options:
For convenience we have provided a ``docker-compose.yml`` that will run a
copy of the database image and also our related database backup image (see
[https://github.com/kartoza/docker-pg-backup](https://github.com/kartoza/docker-pg-backup)).
```
OPTIONS:
-h Show this message
-n Container name
-v Volume to mount the Postgres cluster into
-l local port (defaults to 25432)
-u Postgres user name (defaults to 'docker')
-p Postgres password (defaults to 'docker')
-d database name (defaults to 'gis')
```
The docker compose recipe will expose PostgreSQL on port 25432 (to prevent
potential conflicts with any local database instance you may have).
Example usage:
```
./run-postgis-docker.sh -p 6789 -v /tmp/foo/ -n postgis -u foo -p bar
docker-compose up -d
```
**Note:** The docker-compose recipe above will not persist your data on your local
disk, only in a docker volume.
## Connect via psql
Connect with psql (make sure you first install postgresql client tools on your
@ -159,7 +157,118 @@ docker run -d -v $HOME/postgres_data:/var/lib/postgresql kartoza/postgis`
You need to ensure the ``postgres_data`` directory has sufficient permissions
for the docker process to read / write it.
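A minimal sketch of preparing that directory, mirroring what the old convenience run
script did (a permissive ``chmod``; tighten this to suit your environment):
```
mkdir -p $HOME/postgres_data
chmod a+w $HOME/postgres_data
```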
## Postgres Replication Setup
This image provides replication capabilities. An instance of the container can be
categorized as `master` or `slave`. A `master` instance acts as the single point of
database writes, while a `slave` instance mirrors the database content of a designated
master. This replication scheme lets us keep databases in sync. Note that a `slave`
accepts only read-only transactions, so we can't write new data to it.
To experiment with the replication abilities, see the provided
[docker-compose.yml](sample/replication/docker-compose.yml) sample. There are several
environment variables that you can set, such as:
Master settings:
- ALLOW_IP_RANGE: An address range in ``pg_hba.conf`` format that is allowed to
  connect to the container. This is needed to allow the `slave` to connect to the
  `master`, so this setting should specifically include the `slave` address.
- POSTGRES_USER and POSTGRES_PASS are used as the credentials the slave connects
  with, so make sure you change them to something secure.
Slave settings:
- REPLICATE_FROM: The domain name or IP address of the `master` instance. This can
  be a Docker-resolved service name, as in the sample, or the IP address of the
  machine where the `master` is exposed, which is useful for cross-machine or
  cross-stack/server replication.
- REPLICATE_PORT: The port number of the `master` postgres instance. Defaults to
  5432 (the default postgres port) if not specified.
- DESTROY_DATABASE_ON_RESTART: Defaults to `True`. A slave always destroys its
  current database on restart so that it can resync from the `master` and avoid
  inconsistencies. Set this to anything else to prevent that behaviour.
- PROMOTE_MASTER: Default none. If set to any value, the current slave will be
  promoted to master.
In some cases when the `master` container has failed, we might want to use our `slave`
as `master` for a while. However, a promoted slave breaks consistency with the old
master and cannot revert to being a slave again unless it is destroyed and resynced
with the new master.
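As a rough sketch of how these variables fit together outside the compose sample, a
master and a slave could be started with plain ``docker run`` along these lines (the
network name, container names and credentials below are placeholders; the sample
``docker-compose.yml`` remains the reference setup):
```
docker network create pg-net
# master: must allow the slave's address range and provide credentials
docker run -d --name pg-master --network pg-net \
  -e ALLOW_IP_RANGE=0.0.0.0/0 \
  -e POSTGRES_USER=superadmin -e POSTGRES_PASS=superstrongpassword \
  kartoza/postgis:manual-build
# slave: points at the master by name and uses the same credentials
docker run -d --name pg-slave --network pg-net \
  -e REPLICATE_FROM=pg-master -e REPLICATE_PORT=5432 \
  -e POSTGRES_USER=superadmin -e POSTGRES_PASS=superstrongpassword \
  kartoza/postgis:manual-build
```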
To run the sample replication setup, follow these steps:
Build the image manually by executing the `build.sh` script:
```
./build.sh
```
Go into the `sample/replication` directory and use the following Make command to
run both the master and slave services:
```
make up
```
To shut down the services, execute:
```
make down
```
To view the logs for the master and slave respectively, use the following commands:
```
make master-log
make slave-log
```
You can experiment with several scenarios to see how replication works.
### Sync changes from master to slave
You can use any postgres database tool to create new tables in the master, connecting
with the POSTGRES_USER and POSTGRES_PASS credentials on the exposed port.
In the sample, the master database is exposed on port 7777.
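For example, assuming the sample's credentials, the default `gis` database, and a
local ``psql`` client on the host, a connection might look like:
```
psql -h localhost -p 7777 -U superadmin -d gis
```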
Alternatively, you can do it from the command line by entering the master's shell:
```
make master-shell
```
Then make any database changes using psql.
After that, you can see that the slave follows the changes by inspecting the slave
database. Again, you can use a database management tool with the slave's connection
credentials, hostname and port, or you can do it from the command line by entering
the slave's shell:
```
make slave-shell
```
Then view your changes using psql.
### Promoting slave to master
You will notice that you cannot make changes on the slave, because it is read-only.
If you want to promote it to master, set `PROMOTE_MASTER: 'True'` in the slave's
environment and set `DESTROY_DATABASE_ON_RESTART: 'False'`.
After this you can make changes to your slave, but the master and slave will no longer
be in sync. This is useful if the slave needs to take over from a failed master.
However, it is recommended to take additional action, such as creating a backup from
the slave, so that a dedicated master can be created again.
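A brief sketch of that workflow with the compose sample (assuming you have added the
two variables above to the `pg-slave` service environment in
``sample/replication/docker-compose.yml``):
```
# Recreate the slave so it picks up PROMOTE_MASTER and
# DESTROY_DATABASE_ON_RESTART from the edited compose file
docker-compose up -d pg-slave
```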
### Preventing slave database destroy on restart
After a successful sync you can optionally set `DESTROY_DATABASE_ON_RESTART: 'False'`
to prevent the database from being destroyed on restart. With this setting you can shut
down your slave and restart it later, and it will continue to sync using the existing
database (as long as there are no consistency conflicts).
However, note that this option has no effect if you don't persist your database volume:
if it is not persisted, the database will be lost on restart anyway, because docker will
recreate the container.
## Credits

build.sh

@ -1 +1,2 @@
docker build -t kartoza/postgis:manual-build .
docker build -t kartoza/postgis:10.0-2.4 .

38
docker-compose.yml 100644

@ -0,0 +1,38 @@
# docker-compose build
# docker-compose up -d web

version: '2'

volumes:
  dbbackups:
  postgis-data:

services:
  db:
    image: kartoza/postgis:10.0-2.4
    volumes:
      - 'postgis-data:/var/lib/postgresql'
      - 'dbbackups:/backups'
    environment:
      - POSTGRES_DB=gis
      - POSTGRES_USER=docker
      - POSTGRES_PASS=docker
      - ALLOW_IP_RANGE=0.0.0.0/0
    ports:
      - 25432:5432
    restart: unless-stopped

  dbbackups:
    image: kartoza/pg-backup:10.0
    hostname: pg-backups
    volumes:
      - dbbackups:/backups
    links:
      - db:db
    environment:
      - DUMPPREFIX=demo
      - PGUSER=docker
      - PGPASSWORD=docker
      - PGDATABASE=gis
      - PGPORT=5432
      - PGHOST=db
    restart: unless-stopped

docker-entrypoint.sh

@ -0,0 +1,54 @@
#!/usr/bin/env bash
# This script will run as the postgres user due to the Dockerfile USER directive
set -e
# Setup postgres CONF file
source /setup-conf.sh
# Setup ssl
source /setup-ssl.sh
# Setup pg_hba.conf
source /setup-pg_hba.sh
if [ -z "$REPLICATE_FROM" ]; then
# This means this is a master instance. We check that database exists
echo "Setup master database"
source /setup-database.sh
else
# This means this is a slave/replication instance.
echo "Setup slave database"
source /setup-replication.sh
fi
# Running extended script or sql if provided.
# Useful for people who extends the image.
echo
for f in /docker-entrypoint-initdb.d/*; do
case "$f" in
*.sh) echo "$0: running $f"; . "$f" ;;
*.sql) echo "$0: running $f"; "${psql[@]}" < "$f"; echo ;;
*.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${psql[@]}"; echo ;;
*) echo "$0: ignoring $f" ;;
esac
echo
done
# If no arguments passed to entrypoint, then run postgres by default
if [ $# -eq 0 ];
then
echo "Postgres initialisation process completed .... restarting in foreground"
cat /tmp/postgresql.conf > ${CONF}
su - postgres -c "$SETVARS $POSTGRES -D $DATADIR -c config_file=$CONF"
fi
# If arguments passed, run postgres with these arguments
# This will make sure entrypoint will always be executed
if [ "${1:0:1}" = '-' ]; then
# append postgres into the arguments
set -- postgres "$@"
fi
exec su - "$@"

Two binary image files changed (not shown). New image size: 115 KiB.

64
env-data.sh 100644

@ -0,0 +1,64 @@
#!/usr/bin/env bash
DATADIR="/var/lib/postgresql/10/main"
ROOT_CONF="/etc/postgresql/10/main"
CONF="$ROOT_CONF/postgresql.conf"
RECOVERY_CONF="$ROOT_CONF/recovery.conf"
POSTGRES="/usr/lib/postgresql/10/bin/postgres"
INITDB="/usr/lib/postgresql/10/bin/initdb"
SQLDIR="/usr/share/postgresql/10/contrib/postgis-2.4/"
SETVARS="POSTGIS_ENABLE_OUTDB_RASTERS=1 POSTGIS_GDAL_ENABLED_DRIVERS=ENABLE_ALL"
LOCALONLY="-c listen_addresses='127.0.0.1'"
PG_BASEBACKUP="/usr/bin/pg_basebackup"
PROMOTE_FILE="/tmp/pg_promote_master"
PGSTAT_TMP="/var/run/postgresql/"
PG_PID="/var/run/postgresql/10-main.pid"
# Make sure we have a user set up
if [ -z "${POSTGRES_USER}" ]; then
POSTGRES_USER=docker
fi
if [ -z "${POSTGRES_PASS}" ]; then
POSTGRES_PASS=docker
fi
if [ -z "${POSTGRES_DBNAME}" ]; then
POSTGRES_DBNAME=gis
fi
# SSL mode
if [ -z "${PGSSLMODE}" ]; then
PGSSLMODE=require
fi
# Enable hstore and topology by default
if [ -z "${HSTORE}" ]; then
HSTORE=true
fi
if [ -z "${TOPOLOGY}" ]; then
TOPOLOGY=true
fi
# Replication settings
if [ -z "${REPLICATE_PORT}" ]; then
REPLICATE_PORT=5432
fi
if [ -z "${DESTROY_DATABASE_ON_RESTART}" ]; then
DESTROY_DATABASE_ON_RESTART=true
fi
if [ -z "${PG_MAX_WAL_SENDERS}" ]; then
PG_MAX_WAL_SENDERS=8
fi
if [ -z "${PG_WAL_KEEP_SEGMENTS}" ]; then
PG_WAL_KEEP_SEGMENTS=100
fi
# Compatibility with official postgres variable
# Official postgres variable gets priority
if [ ! -z "${POSTGRES_PASSWORD}" ]; then
POSTGRES_PASS=${POSTGRES_PASSWORD}
fi
if [ ! -z "${PGDATA}" ]; then
DATADIR=${PGDATA}
fi
if [ ! -z "$POSTGRES_DB" ]; then
POSTGRES_DBNAME=${POSTGRES_DB}
fi

652
postgresql.conf 100644

@ -0,0 +1,652 @@
# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
# name = value
#
# (The "=" is optional.) Whitespace may be used. Comments are introduced with
# "#" anywhere on a line. The complete list of parameter names and allowed
# values can be found in the PostgreSQL documentation.
#
# The commented-out settings shown in this file represent the default values.
# Re-commenting a setting is NOT sufficient to revert it to the default value;
# you need to reload the server.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal. If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, or use "pg_ctl reload". Some
# parameters, which are marked below, require a server shutdown and restart to
# take effect.
#
# Any parameter can also be given as a command-line option to the server, e.g.,
# "postgres -c log_connections=on". Some parameters can be changed at run time
# with the "SET" SQL command.
#
# Memory units: kB = kilobytes Time units: ms = milliseconds
# MB = megabytes s = seconds
# GB = gigabytes min = minutes
# TB = terabytes h = hours
# d = days
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.
data_directory = '/var/lib/postgresql/10/main' # use data in another directory
# (change requires restart)
hba_file = '/etc/postgresql/10/main/pg_hba.conf' # host-based authentication file
# (change requires restart)
ident_file = '/etc/postgresql/10/main/pg_ident.conf' # ident configuration file
# (change requires restart)
# If external_pid_file is not explicitly set, no extra PID file is written.
external_pid_file = '/var/run/postgresql/10-main.pid' # write an extra PID file
# (change requires restart)
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
#listen_addresses = 'localhost' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
port = 5432 # (change requires restart)
max_connections = 100 # (change requires restart)
#superuser_reserved_connections = 3 # (change requires restart)
unix_socket_directories = '/var/run/postgresql' # comma-separated list of directories
# (change requires restart)
#unix_socket_group = '' # (change requires restart)
#unix_socket_permissions = 0777 # begin with 0 to use octal notation
# (change requires restart)
#bonjour = off # advertise server via Bonjour
# (change requires restart)
#bonjour_name = '' # defaults to the computer name
# (change requires restart)
# - Security and Authentication -
#authentication_timeout = 1min # 1s-600s
ssl = on # (change requires restart)
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
# (change requires restart)
#ssl_prefer_server_ciphers = on # (change requires restart)
#ssl_ecdh_curve = 'prime256v1' # (change requires restart)
ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem' # (change requires restart)
ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key' # (change requires restart)
#ssl_ca_file = '' # (change requires restart)
#ssl_crl_file = '' # (change requires restart)
#password_encryption = on
#db_user_namespace = off
#row_security = on
# GSSAPI using Kerberos
#krb_server_keyfile = ''
#krb_caseins_users = off
# - TCP Keepalives -
# see "man 7 tcp" for details
#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
#tcp_keepalives_count = 0 # TCP_KEEPCNT;
# 0 selects the system default
#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------
# - Memory -
# Recommended 75 % of the database memory
shared_buffers = 512MB # min 128kB
# (change requires restart)
#huge_pages = try # on, off, or try
# (change requires restart)
#temp_buffers = 8MB # min 800kB
#max_prepared_transactions = 0 # zero disables the feature
# (change requires restart)
# Caution: it is not advisable to set max_prepared_transactions nonzero unless
# you actively intend to use prepared transactions.
work_mem = 16MB # min 64kB
maintenance_work_mem = 128MB # min 1MB
#replacement_sort_tuples = 150000 # limits use of replacement selection sort
#autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem
#max_stack_depth = 2MB # min 100kB
dynamic_shared_memory_type = posix # the default is the first option
# supported by the operating system:
# posix
# sysv
# windows
# mmap
# use none to disable dynamic shared memory
# (change requires restart)
# - Disk -
#temp_file_limit = -1 # limits per-process temp file space
# in kB, or -1 for no limit
# - Kernel Resource Usage -
#max_files_per_process = 1000 # min 25
# (change requires restart)
#shared_preload_libraries = '' # (change requires restart)
# - Cost-Based Vacuum Delay -
#vacuum_cost_delay = 0 # 0-100 milliseconds
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 10 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
#vacuum_cost_limit = 200 # 1-10000 credits
# - Background Writer -
#bgwriter_delay = 200ms # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100 # 0-1000 max buffers written/round
#bgwriter_lru_multiplier = 2.0 # 0-10.0 multiplier on buffers scanned/round
#bgwriter_flush_after = 512kB # measured in pages, 0 disables
# - Asynchronous Behavior -
#effective_io_concurrency = 1 # 1-1000; 0 disables prefetching
max_worker_processes = 8 # (change requires restart)
max_parallel_workers_per_gather = 2 # taken from max_worker_processes
#old_snapshot_threshold = -1 # 1min-60d; -1 disables; 0 is immediate
# (change requires restart)
#backend_flush_after = 0 # measured in pages, 0 disables
#------------------------------------------------------------------------------
# WRITE AHEAD LOG
#------------------------------------------------------------------------------
# - Settings -
#wal_level = minimal # minimal, replica, or logical
# (change requires restart)
#fsync = on # flush data to disk for crash safety
# (turning this off can cause
# unrecoverable data corruption)
#synchronous_commit = on # synchronization level;
# off, local, remote_write, remote_apply, or on
#wal_sync_method = fsync # the default is the first option
# supported by the operating system:
# open_datasync
# fdatasync (default on Linux)
# fsync
# fsync_writethrough
# open_sync
#full_page_writes = on # recover from partial page writes
#wal_compression = off # enable compression of full-page writes
#wal_log_hints = off # also do full page writes of non-critical updates
# (change requires restart)
wal_buffers = 1 # min 32kB, -1 sets based on shared_buffers
# (change requires restart)
#wal_writer_delay = 200ms # 1-10000 milliseconds
#wal_writer_flush_after = 1MB # measured in pages, 0 disables
#commit_delay = 0 # range 0-100000, in microseconds
#commit_siblings = 5 # range 1-1000
# - Checkpoints -
#checkpoint_timeout = 5min # range 30s-1d
#max_wal_size = 1GB
#min_wal_size = 80MB
#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0
#checkpoint_flush_after = 256kB # measured in pages, 0 disables
#checkpoint_warning = 30s # 0 disables
# - Archiving -
#archive_mode = off # enables archiving; off, on, or always
# (change requires restart)
#archive_command = ''
# placeholders: %p = path of file to archive
# %f = file name only
# e.g. 'test ! -f /opt/archivedir/%f && cp %p /opt/archivedir/%f'
#archive_timeout = 0 # force a logfile segment switch after this
# number of seconds; 0 disables
#------------------------------------------------------------------------------
# REPLICATION
#------------------------------------------------------------------------------
# - Sending Server(s) -
# Set these on the master and on any standby that will send replication data.
max_wal_senders = 8 # max number of walsender processes
# (change requires restart)
wal_keep_segments = 100 # in logfile segments, 16MB each; 0 disables
#wal_sender_timeout = 60s # in milliseconds; 0 disables
#max_replication_slots = 0 # max number of replication slots
# (change requires restart)
#track_commit_timestamp = off # collect timestamp of transaction commit
# (change requires restart)
# - Master Server -
# These settings are ignored on a standby server.
#synchronous_standby_names = '' # standby servers that provide sync rep
# number of sync standbys and comma-separated list of application_name
# from standby(s); '*' = all
#vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is delayed
# - Standby Servers -
# These settings are ignored on a master server.
#hot_standby = off # "on" allows queries during recovery
# (change requires restart)
#max_standby_archive_delay = 30s # max delay before canceling queries
# when reading WAL from archive;
# -1 allows indefinite delay
#max_standby_streaming_delay = 30s # max delay before canceling queries
# when reading streaming WAL;
# -1 allows indefinite delay
#wal_receiver_status_interval = 10s # send replies at least this often
# 0 disables
#hot_standby_feedback = off # send info from standby to prevent
# query conflicts
#wal_receiver_timeout = 60s # time that receiver waits for
# communication from master
# in milliseconds; 0 disables
#wal_retrieve_retry_interval = 5s # time to wait before retrying to
# retrieve WAL after a failed attempt
#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------
# - Planner Method Configuration -
#enable_bitmapscan = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_indexscan = on
#enable_indexonlyscan = on
#enable_material = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
# - Planner Cost Constants -
seq_page_cost = 1.0 # measured on an arbitrary scale
random_page_cost = 2.0 # same scale as above
#cpu_tuple_cost = 0.01 # same scale as above
#cpu_index_tuple_cost = 0.005 # same scale as above
#cpu_operator_cost = 0.0025 # same scale as above
#parallel_tuple_cost = 0.1 # same scale as above
#parallel_setup_cost = 1000.0 # same scale as above
#min_parallel_relation_size = 8MB
effective_cache_size = 512MB
# - Genetic Query Optimizer -
#geqo = on
#geqo_threshold = 12
#geqo_effort = 5 # range 1-10
#geqo_pool_size = 0 # selects default based on effort
#geqo_generations = 0 # selects default based on effort
#geqo_selection_bias = 2.0 # range 1.5-2.0
#geqo_seed = 0.0 # range 0.0-1.0
# - Other Planner Options -
#default_statistics_target = 100 # range 1-10000
#constraint_exclusion = partition # on, off, or partition
#cursor_tuple_fraction = 0.1 # range 0.0-1.0
#from_collapse_limit = 8
#join_collapse_limit = 8 # 1 disables collapsing of explicit
# JOIN clauses
#force_parallel_mode = off
#------------------------------------------------------------------------------
# ERROR REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
#log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
#logging_collector = off # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
#log_directory = 'pg_log' # directory where log files are written,
# can be absolute or relative to PGDATA
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600 # creation mode for log files,
# begin with 0 to use octal notation
#log_truncate_on_rotation = off # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
#log_rotation_age = 1d # Automatic rotation of logfiles will
# happen after that time. 0 disables.
#log_rotation_size = 10MB # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#syslog_sequence_numbers = on
#syslog_split_messages = on
# This is only relevant when logging to eventlog (win32):
# (change requires restart)
#event_source = 'PostgreSQL'
# - When to Log -
#client_min_messages = notice # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# log
# notice
# warning
# error
#log_min_messages = warning # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
#log_min_error_statement = error # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic (effectively off)
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
# - What to Log -
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default # terse, default, or verbose messages
#log_hostname = off
log_line_prefix = '%m [%p] %q%u@%d ' # special values:
# %a = application name
# %u = user name
# %d = database name
# %r = remote host and port
# %h = remote host
# %p = process ID
# %t = timestamp without milliseconds
# %m = timestamp with milliseconds
# %n = timestamp with milliseconds (as a Unix epoch)
# %i = command tag
# %e = SQL state
# %c = session ID
# %l = session line number
# %s = session start timestamp
# %v = virtual transaction ID
# %x = transaction ID (0 if none)
# %q = stop here in non-session
# processes
# %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off # log lock waits >= deadlock_timeout
#log_statement = 'none' # none, ddl, mod, all
#log_replication_commands = off
#log_temp_files = -1 # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
log_timezone = 'UTC'
# - Process Title -
cluster_name = '10.0/main' # added to process titles if nonempty
# (change requires restart)
#update_process_title = on
#------------------------------------------------------------------------------
# RUNTIME STATISTICS
#------------------------------------------------------------------------------
# - Query/Index Statistics Collector -
#track_activities = on
#track_counts = on
#track_io_timing = off
#track_functions = none # none, pl, all
#track_activity_query_size = 1024 # (change requires restart)
stats_temp_directory = '/var/run/postgresql/'
# - Statistics Monitoring -
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off
#------------------------------------------------------------------------------
# AUTOVACUUM PARAMETERS
#------------------------------------------------------------------------------
#autovacuum = on # Enable autovacuum subprocess? 'on'
# requires track_counts to also be on.
#log_autovacuum_min_duration = -1 # -1 disables, 0 logs all actions and
# their durations, > 0 logs only
# actions running at least this number
# of milliseconds.
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
# (change requires restart)
#autovacuum_naptime = 1min # time between autovacuum runs
#autovacuum_vacuum_threshold = 50 # min number of row updates before
# vacuum
#autovacuum_analyze_threshold = 50 # min number of row updates before
# analyze
#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum
#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
# (change requires restart)
#autovacuum_multixact_freeze_max_age = 400000000 # maximum multixact age
# before forced vacuum
# (change requires restart)
#autovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for
# autovacuum, in milliseconds;
# -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for
# autovacuum, -1 means use
# vacuum_cost_limit
#------------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#------------------------------------------------------------------------------
# - Statement Behavior -
#search_path = '"$user", public' # schema names
#default_tablespace = '' # a tablespace name, '' uses the default
#temp_tablespaces = '' # a list of tablespace names, '' uses
# only default tablespace
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0 # in milliseconds, 0 is disabled
#lock_timeout = 0 # in milliseconds, 0 is disabled
#idle_in_transaction_session_timeout = 0 # in milliseconds, 0 is disabled
#vacuum_freeze_min_age = 50000000
#vacuum_freeze_table_age = 150000000
#vacuum_multixact_freeze_min_age = 5000000
#vacuum_multixact_freeze_table_age = 150000000
#bytea_output = 'hex' # hex, escape
#xmlbinary = 'base64'
xmloption = 'document'
#gin_fuzzy_search_limit = 0
#gin_pending_list_limit = 4MB
# - Locale and Formatting -
datestyle = 'iso, mdy'
#intervalstyle = 'postgres'
timezone = 'UTC'
#timezone_abbreviations = 'Default' # Select the set of available time zone
# abbreviations. Currently, there are
# Default
# Australia (historical usage)
# India
# You can create your own file in
# share/timezonesets/.
#extra_float_digits = 0 # min -15, max 3
#client_encoding = sql_ascii # actually, defaults to database
# encoding
# These settings are initialized by initdb, but they can be changed.
lc_messages = 'C.UTF-8' # locale for system error message
# strings
lc_monetary = 'C.UTF-8' # locale for monetary formatting
lc_numeric = 'C.UTF-8' # locale for number formatting
lc_time = 'C.UTF-8' # locale for time formatting
# default configuration for text search
default_text_search_config = 'pg_catalog.english'
# - Other Defaults -
#dynamic_library_path = '$libdir'
#local_preload_libraries = ''
#session_preload_libraries = ''
#------------------------------------------------------------------------------
# LOCK MANAGEMENT
#------------------------------------------------------------------------------
#deadlock_timeout = 1s
#max_locks_per_transaction = 64 # min 10
# (change requires restart)
#max_pred_locks_per_transaction = 64 # min 10
# (change requires restart)
#------------------------------------------------------------------------------
# VERSION/PLATFORM COMPATIBILITY
#------------------------------------------------------------------------------
# - Previous PostgreSQL Versions -
#array_nulls = on
#backslash_quote = safe_encoding # on, off, or safe_encoding
#default_with_oids = off
#escape_string_warning = on
#lo_compat_privileges = off
#operator_precedence_warning = off
#quote_all_identifiers = off
#sql_inheritance = on
#standard_conforming_strings = on
#synchronize_seqscans = on
# - Other Platforms and Clients -
#transform_null_equals = off
#------------------------------------------------------------------------------
# ERROR HANDLING
#------------------------------------------------------------------------------
#exit_on_error = off # terminate session on any error?
#restart_after_crash = on # reinitialize after backend crash?
#------------------------------------------------------------------------------
# CONFIG FILE INCLUDES
#------------------------------------------------------------------------------
# These options allow settings to be loaded from files other than the
# default postgresql.conf.
include_dir = 'conf.d' # include files ending in '.conf' from
# directory 'conf.d'
#include_if_exists = 'exists.conf' # include file only if it exists
#include = 'special.conf' # include file
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------
# Add settings for extensions here
listen_addresses = '*'
port = 5432
ssl = true
ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'
ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key'
wal_level = hot_standby
max_wal_senders = 8
wal_keep_segments = 100
hot_standby = on

run-postgis-docker.sh

@ -1,117 +0,0 @@
#!/bin/bash
# Commit and redeploy the user map container
set -x
usage()
{
cat << EOF
usage: $0 options
This script runs a new docker postgis instance for you.
To get the image run:
docker pull kartoza/postgis
OPTIONS:
-h Show this message
-n Container name
-v Volume to mount the Postgres cluster into
-l local port (defaults to 25432)
-u Postgres user name (defaults to 'docker')
-p Postgres password (defaults to 'docker')
-d database cluster path (defaults to debian package defaults)
-z database name (defaults to 'gis')
EOF
}
while getopts ":h:n:v:l:u:p:d:z:" OPTION
do
case $OPTION in
n)
CONTAINER_NAME=${OPTARG}
;;
v)
VOLUME=${OPTARG}
;;
l)
LOCALPORT=${OPTARG}
;;
u)
PGUSER=${OPTARG}
;;
p)
PGPASSWORD=${OPTARG}
;;
d)
DATADIR=${OPTARG}
;;
z)
DBNAME=${OPTARG}
;;
\?)
usage
exit 1
;;
esac
done
if [[ -z $VOLUME ]] || [[ -z $CONTAINER_NAME ]] || [[ -z $PGUSER ]] || [[ -z $PGPASSWORD ]] || [[ -z $DATADIR ]] || [[ -z $LOCALPORT ]]
then
usage
exit 1
fi
if [[ ! -z $LOCALPORT]]
then
LOCALPORT=${LOCALPORT}
else
LOCALPORT=25432
fi
if [[ ! -z $VOLUME ]]
then
VOLUME_OPTION="-v ${VOLUME}:/var/lib/postgresql"
else
VOLUME_OPTION=""
fi
if [ ! -d $VOLUME ]
then
mkdir $VOLUME
fi
chmod a+w $VOLUME
docker kill ${CONTAINER_NAME}
docker rm ${CONTAINER_NAME}
CMD="docker run --name="${CONTAINER_NAME}" \
--hostname="${CONTAINER_NAME}" \
--restart=always \
-e POSTGRES_USER=${PGUSER} \
-e POSTGRES_PASS=${PGPASSWORD} \
-e DATADIR=${DATADIR} \
-e POSTGRES_DBNAME=${DBNAME} \
-d -t \
${VOLUME_OPTION} \
-p "${LOCALPORT}:5432" \
kartoza/postgis /start-postgis.sh"
echo 'Running\n'
echo $CMD
eval $CMD
docker ps | grep ${CONTAINER_NAME}
IPADDRESS=`docker inspect ${CONTAINER_NAME} | grep IPAddress | grep -o '[0-9\.]*'`
echo "Connect using:"
echo "psql -l -p 5432 -h $IPADDRESS -U $PGUSER"
echo "and password $PGPASSWORD"
echo
echo "Alternatively link to this container from another to access it"
echo "e.g. docker run -link postgis:pg .....etc"
echo "Will make the connection details to the postgis server available"
echo "in your app container as $PG_PORT_5432_TCP_ADDR (for the ip address)"
echo "and $PG_PORT_5432_TCP_PORT (for the port number)."

sample/replication/Makefile

@ -0,0 +1,26 @@
up:
	docker-compose up -d

down:
	docker-compose down

scale:
	docker-compose up -d --scale pg-slave=3

unscale:
	docker-compose up -d --scale pg-slave=1

status:
	docker-compose ps

master-shell:
	docker-compose exec pg-master /bin/bash

slave-shell:
	docker-compose exec pg-slave /bin/bash

master-log:
	docker-compose logs -f --tail=30 pg-master

slave-log:
	docker-compose logs -f --tail=30 pg-slave

sample/replication/docker-compose.yml

@ -0,0 +1,77 @@
version: '2'

services:
  pg-master:
    image: 'kartoza/postgis:manual-build'
    restart: 'always'
    # You can optionally mount to volume, to play with the persistence and
    # observe how the slave will behave after restarts.
    volumes:
      - './pg-master:/var/lib/postgresql'
    environment:
      # ALLOW_IP_RANGE option is used to specify additional allowed domains
      # in pg_hba.
      # This range should allow slaves to connect to master
      ALLOW_IP_RANGE: '0.0.0.0/0'
      # We can specify optional credentials
      POSTGRES_USER: 'superadmin'
      POSTGRES_PASS: 'superstrongpassword'
    # You can expose the port to observe it in your local machine
    ports:
      - "7777:5432"

  pg-slave:
    image: 'kartoza/postgis:manual-build'
    restart: 'always'
    # You can optionally mount to volume, but we're not able to scale it
    # in that case.
    # The slave will always destroy its database and copy from master at
    # runtime
    volumes:
      - './pg-slave:/var/lib/postgresql'
    environment:
      # ALLOW_IP_RANGE option is used to specify additional allowed domains
      # in pg_hba.
      # Not really needed on slaves for the replication itself, but it can
      # optionally be set when a slave needs to act as a failover server
      # when the master is down. The IP range is generally needed if other
      # services want to connect to this slave
      ALLOW_IP_RANGE: '0.0.0.0/0'
      # REPLICATE_FROM accepts a domain name or IP address;
      # with this in mind, you can also put a docker service name here,
      # because it will be resolved as a host name.
      REPLICATE_FROM: 'pg-master'
      # REPLICATE_PORT will default to 5432 if not specified.
      REPLICATE_PORT: '5432'
      # In the case where you need to replicate from an outside service,
      # you can put the server address and port here, as long as the target
      # is configured as a master and is replicable.
      # REPLICATE_FROM: '192.168.1.8'
      # REPLICATE_PORT: '7777'
      # DESTROY_DATABASE_ON_RESTART will default to True if not specified.
      # If set to anything other than True, it will prevent the slave from
      # destroying its database on restart
      # DESTROY_DATABASE_ON_RESTART: 'False'
      # PROMOTE_MASTER Default empty.
      # If specified with any value, it will convert the current slave into
      # a writable state. Useful if the master is down and the current slave
      # needs to be promoted until manual recovery.
      # PROMOTE_MASTER: 'True'
      # For now we don't support different credentials for replication,
      # so we use the same credentials as the master's superuser, or anything
      # that has the replication role.
      POSTGRES_USER: 'superadmin'
      POSTGRES_PASS: 'superstrongpassword'
    links:
      - 'pg-master'
    # You can expose the port to observe it in your local machine.
    # For this sample, it is disabled by default to allow the scaling test.
    # ports:
    #   - "7776:5432"

15
setup-conf.sh 100644

@ -0,0 +1,15 @@
#!/usr/bin/env bash
source /env-data.sh
# This script will setup necessary configuration to enable replications
# Refresh configuration in case environment settings changed.
cat $CONF.template > $CONF
cat >> $CONF <<EOF
wal_level = hot_standby
max_wal_senders = $PG_MAX_WAL_SENDERS
wal_keep_segments = $PG_WAL_KEEP_SEGMENTS
hot_standby = on
EOF

111
setup-database.sh 100644

@ -0,0 +1,111 @@
#!/usr/bin/env bash
source /env-data.sh
# This script will setup the necessary folder for database
# test if DATADIR is existent
if [ ! -d ${DATADIR} ]; then
echo "Creating Postgres data at ${DATADIR}"
mkdir -p ${DATADIR}
fi
# Set proper permissions
# needs to be done as root:
chown -R postgres:postgres ${DATADIR}
# test if DATADIR has content
if [ ! "$(ls -A ${DATADIR})" ]; then
# No content yet - first time pg is being run!
# No Replicate From settings. Assume that this is a master database.
# Initialise db
echo "Initializing Postgres Database at ${DATADIR}"
#chown -R postgres $DATADIR
su - postgres -c "$INITDB ${DATADIR}"
fi
# test database existing
trap "echo \"Sending SIGTERM to postgres\"; killall -s SIGTERM postgres" SIGTERM
echo "Use modified postgresql.conf for greater speed (spatial and replication)"
cat /tmp/postgresql.conf > ${CONF}
su - postgres -c "${POSTGRES} -D ${DATADIR} -c config_file=${CONF} ${LOCALONLY} &"
# wait for postgres to come up
until su - postgres -c "psql -l"; do
sleep 1
done
echo "postgres ready"
RESULT=`su - postgres -c "psql -l | grep -w template_postgis | wc -l"`
if [[ ${RESULT} == '1' ]]
then
echo 'Postgis Already There'
if [[ ${HSTORE} == "true" ]]; then
echo 'HSTORE is only useful when you create the postgis database.'
fi
if [[ ${TOPOLOGY} == "true" ]]; then
echo 'TOPOLOGY is only useful when you create the postgis database.'
fi
else
echo "Postgis is missing, installing now"
# Note the dockerfile must have put the postgis.sql and spatialrefsys.sql scripts into /root/
# We use template0 since we want different encoding to template1
echo "Creating template postgis"
su - postgres -c "createdb template_postgis -E UTF8 -T template0"
echo "Enabling template_postgis as a template"
CMD="UPDATE pg_database SET datistemplate = TRUE WHERE datname = 'template_postgis';"
su - postgres -c "psql -c \"$CMD\""
echo "Loading postgis extension"
su - postgres -c "psql template_postgis -c 'CREATE EXTENSION postgis;'"
if [[ ${HSTORE} == "true" ]]
then
echo "Enabling hstore in the template"
su - postgres -c "psql template_postgis -c 'CREATE EXTENSION hstore;'"
fi
if [[ ${TOPOLOGY} == "true" ]]
then
echo "Enabling topology in the template"
su - postgres -c "psql template_postgis -c 'CREATE EXTENSION postgis_topology;'"
fi
# Needed when importing old dumps using e.g ndims for constraints
# Ignore error if it doesn't exists
echo "Loading legacy sql"
su - postgres -c "psql template_postgis -f ${SQLDIR}/legacy_minimal.sql" || true
su - postgres -c "psql template_postgis -f ${SQLDIR}/legacy_gist.sql" || true
fi
# Setup user
source /setup-user.sh
# Create a default db called 'gis' or $POSTGRES_DBNAME that you can use to get up and running quickly
# It will be owned by the docker db user
RESULT=`su - postgres -c "psql -l | grep -w ${POSTGRES_DBNAME} | wc -l"`
echo "Check default db exists"
if [[ ! ${RESULT} == '1' ]]; then
echo "Create default db ${POSTGRES_DBNAME}"
su - postgres -c "createdb -O ${POSTGRES_USER} -T template_postgis ${POSTGRES_DBNAME}"
else
echo "${POSTGRES_DBNAME} db already exists"
fi
# This should show up in docker logs afterwards
su - postgres -c "psql -l"
# Kill postgres
PID=`cat $PG_PID`
kill -TERM ${PID}
# Wait for background postgres main process to exit
while [ "$(ls -A ${PG_PID} 2>/dev/null)" ]; do
sleep 1
done

48
setup-pg_hba.sh 100644

@ -0,0 +1,48 @@
#!/usr/bin/env bash
source /env-data.sh
# This script will setup pg_hba.conf
# Reconfigure pg_hba if environment settings changed
cat $ROOT_CONF/pg_hba.conf.template > $ROOT_CONF/pg_hba.conf
# Custom IP range via docker run -e (https://docs.docker.com/engine/reference/run/#env-environment-variables)
# Usage is: docker run [...] -e ALLOW_IP_RANGE='192.168.0.0/16'
if [ "$ALLOW_IP_RANGE" ]
then
echo "Add rule to pg_hba: $ALLOW_IP_RANGE"
echo "host all all $ALLOW_IP_RANGE md5" >> $ROOT_CONF/pg_hba.conf
fi
# check password first so we can output the warning before postgres
# messes it up
if [ "$POSTGRES_PASS" ]; then
pass="PASSWORD '$POSTGRES_PASS'"
authMethod=md5
else
# The - option suppresses leading tabs but *not* spaces. :)
cat >&2 <<-'EOWARN'
****************************************************
WARNING: No password has been set for the database.
This will allow anyone with access to the
Postgres port to access your database. In
Docker's default configuration, this is
effectively any other container on the same
system.
Use "-e POSTGRES_PASS=password" to set
it in "docker run".
****************************************************
EOWARN
pass=
authMethod=trust
fi
if [ -z "$REPLICATE_FROM" ]; then
# if env not set, then assume this is master instance
# add rules to pg_hba.conf to allow replication from all
echo "Add rule to pg_hba: replication user"
echo "host replication all 0.0.0.0/0 $authMethod" >> $ROOT_CONF/pg_hba.conf
fi

setup-replication.sh

@ -0,0 +1,51 @@
#!/usr/bin/env bash
source /env-data.sh
# This script will setup slave instance to use standby replication
# Adapted from https://github.com/DanielDent/docker-postgres-replication
# To set up replication
if [[ "$DESTROY_DATABASE_ON_RESTART" =~ [Tt][Rr][Uu][Ee] ]]; then
echo "Destroy initial database, if any."
rm -rf $DATADIR
fi
mkdir -p $DATADIR
chown -R postgres:postgres $DATADIR
chmod -R 700 $DATADIR
# No content yet - but this is a slave database
until ping -c 1 -W 1 ${REPLICATE_FROM}
do
echo "Waiting for master to ping..."
sleep 1s
done
if [[ "$DESTROY_DATABASE_ON_RESTART" =~ [Tt][Rr][Uu][Ee] ]]; then
echo "Get initial database from master"
su - postgres -c "echo \"${REPLICATE_FROM}:${REPLICATE_PORT}:*:${POSTGRES_USER}:${POSTGRES_PASS}\" > ~/.pgpass"
su - postgres -c "chmod 0600 ~/.pgpass"
until su - postgres -c "${PG_BASEBACKUP} -X stream -h ${REPLICATE_FROM} -p ${REPLICATE_PORT} -D ${DATADIR} -U ${POSTGRES_USER} -vP -w"
do
echo "Waiting for master to connect..."
sleep 1s
done
fi
# Setup recovery.conf, a configuration file for slave
cat > ${DATADIR}/recovery.conf <<EOF
standby_mode = on
primary_conninfo = 'host=${REPLICATE_FROM} port=${REPLICATE_PORT} user=${POSTGRES_USER} password=${POSTGRES_PASS} sslmode=${PGSSLMODE}'
trigger_file = '${PROMOTE_FILE}'
EOF
# Setup permissions. Postgres won't start without this.
chown postgres ${DATADIR}/recovery.conf
chmod 600 ${DATADIR}/recovery.conf
# Promote to master if desired
if [ ! -z "$PROMOTE_MASTER" ]; then
touch $PROMOTE_FILE
fi

17
setup-ssl.sh 100644

@ -0,0 +1,17 @@
#!/usr/bin/env bash
source /env-data.sh
# This script will setup default SSL config
# /etc/ssl/private can't be accessed from within container for some reason
# (@andrewgodwin says it's something AUFS related) - taken from https://github.com/orchardup/docker-postgresql/blob/master/Dockerfile
cp -r /etc/ssl /tmp/ssl-copy/
chmod -R 0700 /etc/ssl
chown -R postgres /tmp/ssl-copy
rm -r /etc/ssl
mv /tmp/ssl-copy /etc/ssl
# Needed under debian, wasnt needed under ubuntu
mkdir -p ${PGSTAT_TMP}
chmod 0777 ${PGSTAT_TMP}

27
setup-user.sh 100644

@ -0,0 +1,27 @@
#!/usr/bin/env bash
source /env-data.sh
# This script will setup new configured user
# Note that $POSTGRES_USER and $POSTGRES_PASS below are optional parameters that can be passed
# via docker run e.g.
#docker run --name="postgis" -e POSTGRES_USER=qgis -e POSTGRES_PASS=qgis -d -v
#/var/docker-data/postgres-dat:/var/lib/postgresql -t qgis/postgis:6
# If you dont specify a user/password in docker run, we will generate one
# here and create a user called 'docker' to go with it.
# Only create credentials if this is a master database
# Slave database will just mirror from master users
echo "Setup postgres User:Password"
echo "postgresql user: $POSTGRES_USER" > /tmp/PGPASSWORD.txt
echo "postgresql password: $POSTGRES_PASS" >> /tmp/PGPASSWORD.txt
# Check user already exists
RESULT=`su - postgres -c "psql postgres -t -c \"SELECT 1 FROM pg_roles WHERE rolname = '$POSTGRES_USER'\""`
COMMAND="ALTER"
if [ -z "$RESULT" ]; then
COMMAND="CREATE"
fi
su - postgres -c "psql postgres -c \"$COMMAND USER $POSTGRES_USER WITH SUPERUSER ENCRYPTED PASSWORD '$POSTGRES_PASS';\""

setup.sh

@ -1,17 +1,18 @@
#!/usr/bin/env bash
# Add any additional setup tasks here
chmod 600 /etc/ssl/private/ssl-cert-snakeoil.key
# These tasks are run as root
CONF="/etc/postgresql/10/main/postgresql.conf"
source /env-data.sh
# Restrict subnet to docker private network
echo "host all all 172.17.0.0/16 md5" >> /etc/postgresql/10/main/pg_hba.conf
echo "host all all 172.18.0.0/16 md5" >> /etc/postgresql/10/main/pg_hba.conf
echo "host all all 172.0.0.0/8 md5" >> $ROOT_CONF/pg_hba.conf
# And allow access from DockerToolbox / Boottodocker on OSX
echo "host all all 192.168.0.0/16 md5" >> /etc/postgresql/10/main/pg_hba.conf
echo "host all all 192.168.0.0/16 md5" >> $ROOT_CONF/pg_hba.conf
# Listen on all ip addresses
echo "listen_addresses = '*'" >> /etc/postgresql/10/main/postgresql.conf
echo "port = 5432" >> /etc/postgresql/10/main/postgresql.conf
echo "listen_addresses = '*'" >> $CONF
echo "port = 5432" >> $CONF
# Enable ssl
@ -22,3 +23,7 @@ echo "ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'" >> $CONF
echo "ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key'" >> $CONF
#echo "ssl_ca_file = '' # (change requires restart)" >> $CONF
#echo "ssl_crl_file = ''" >> $CONF
# Create backup template for conf
cat $CONF > $CONF.template
cat $ROOT_CONF/pg_hba.conf > $ROOT_CONF/pg_hba.conf.template

start-postgis.sh

@ -1,160 +0,0 @@
#!/bin/bash
# This script will run as the postgres user due to the Dockerfile USER directive
DATADIR="/var/lib/postgresql/10/main"
CONF="/etc/postgresql/10/main/postgresql.conf"
POSTGRES="/usr/lib/postgresql/10/bin/postgres"
INITDB="/usr/lib/postgresql/10/bin/initdb"
SQLDIR="/usr/share/postgresql/10/contrib/postgis-2.2/"
LOCALONLY="-c listen_addresses='127.0.0.1, ::1'"
# /etc/ssl/private can't be accessed from within container for some reason
# (@andrewgodwin says it's something AUFS related) - taken from https://github.com/orchardup/docker-postgresql/blob/master/Dockerfile
cp -r /etc/ssl /tmp/ssl-copy/
chmod -R 0700 /etc/ssl
chown -R postgres /tmp/ssl-copy
rm -r /etc/ssl
mv /tmp/ssl-copy /etc/ssl
# Needed under debian, wasnt needed under ubuntu
mkdir -p /var/run/postgresql/10-main.pg_stat_tmp
chmod 0777 /var/run/postgresql/10-main.pg_stat_tmp
# test if DATADIR is existent
if [ ! -d $DATADIR ]; then
echo "Creating Postgres data directory at $DATADIR"
mkdir -p $DATADIR
fi
# needs to be done as root:
chown -R postgres:postgres $DATADIR
# Note that $POSTGRES_USER and $POSTGRES_PASS below are optional paramters that can be passed
# via docker run e.g.
#docker run --name="postgis" -e POSTGRES_USER=qgis -e POSTGRES_PASS=qgis -d -v
#/var/docker-data/postgres-dat:/var/lib/postgresql -t qgis/postgis:6
# If you dont specify a user/password in docker run, we will generate one
# here and create a user called 'docker' to go with it.
# test if DATADIR has content
if [ ! "$(ls -A $DATADIR)" ]; then
# No content yet - first time pg is being run!
# Initialise db
echo "Initializing Postgres Database at $DATADIR"
#chown -R postgres $DATADIR
su - postgres -c "$INITDB $DATADIR"
fi
# Make sure we have a user set up
if [ -z "$POSTGRES_USER" ]; then
POSTGRES_USER=docker
fi
if [ -z "$POSTGRES_PASS" ]; then
POSTGRES_PASS=docker
fi
# Set a default database name
if [ -z "$POSTGRES_DBNAME" ]; then
POSTGRES_DBNAME=gis
fi
# Enable hstore and topology by default
if [ -z "$HSTORE" ]; then
HSTORE=true
fi
if [ -z "$TOPOLOGY" ]; then
TOPOLOGY=true
fi
# Custom IP range via docker run -e (https://docs.docker.com/engine/reference/run/#env-environment-variables)
# Usage is: docker run [...] -e ALLOW_IP_RANGE='192.168.0.0/16'
if [ "$ALLOW_IP_RANGE" ]
then
echo "host all all $ALLOW_IP_RANGE md5" >> /etc/postgresql/10.0/main/pg_hba.conf
fi
# redirect user/pass into a file so we can echo it into
# docker logs when container starts
# so that we can tell user their password
echo "postgresql user: $POSTGRES_USER" > /tmp/PGPASSWORD.txt
echo "postgresql password: $POSTGRES_PASS" >> /tmp/PGPASSWORD.txt
su - postgres -c "$POSTGRES --single -D $DATADIR -c config_file=$CONF <<< \"CREATE USER $POSTGRES_USER WITH SUPERUSER ENCRYPTED PASSWORD '$POSTGRES_PASS';\""
trap "echo \"Sending SIGTERM to postgres\"; killall -s SIGTERM postgres" SIGTERM
su - postgres -c "$POSTGRES -D $DATADIR -c config_file=$CONF $LOCALONLY &"
# wait for postgres to come up
until `nc -z 127.0.0.1 5432`; do
echo "$(date) - waiting for postgres (localhost-only)..."
sleep 1
done
echo "postgres ready"
RESULT=`su - postgres -c "psql -l | grep postgis | wc -l"`
if [[ ${RESULT} == '1' ]]
then
echo 'Postgis Already There'
if [[ ${HSTORE} == "true" ]]; then
echo 'HSTORE is only useful when you create the postgis database.'
fi
if [[ ${TOPOLOGY} == "true" ]]; then
echo 'TOPOLOGY is only useful when you create the postgis database.'
fi
else
echo "Postgis is missing, installing now"
# Note the dockerfile must have put the postgis.sql and spatialrefsys.sql scripts into /root/
# We use template0 since we want t different encoding to template1
echo "Creating template postgis"
su - postgres -c "createdb template_postgis -E UTF8 -T template0"
echo "Enabling template_postgis as a template"
CMD="UPDATE pg_database SET datistemplate = TRUE WHERE datname = 'template_postgis';"
su - postgres -c "psql -c \"$CMD\""
echo "Loading postgis extension"
su - postgres -c "psql template_postgis -c 'CREATE EXTENSION postgis;'"
if [[ ${HSTORE} == "true" ]]
then
echo "Enabling hstore in the template"
su - postgres -c "psql template_postgis -c 'CREATE EXTENSION hstore;'"
fi
if [[ ${TOPOLOGY} == "true" ]]
then
echo "Enabling topology in the template"
su - postgres -c "psql template_postgis -c 'CREATE EXTENSION postgis_topology;'"
fi
# Needed when importing old dumps using e.g ndims for constraints
# commented out these lines since it seems these scripts are removed in Postgis 2.2
#echo "Loading legacy sql"
#su - postgres -c "psql template_postgis -f $SQLDIR/legacy_minimal.sql"
#su - postgres -c "psql template_postgis -f $SQLDIR/legacy_gist.sql"
# Create a default db called 'gis' that you can use to get up and running quickly
# It will be owned by the docker db user
su - postgres -c "createdb -O $POSTGRES_USER -T template_postgis $POSTGRES_DBNAME"
fi
# This should show up in docker logs afterwards
su - postgres -c "psql -l"
echo "Lock files"
echo "------------------------"
ls /var/run/postgresql/
echo "------------------------"
PID=`cat /var/run/postgresql/10-main.pid`
kill -TERM ${PID}
# Wait for background postgres main process to exit
while [ "$(ls -A /var/run/postgresql/10-main.pid 2>/dev/null)" ]; do
sleep 1
done
# Remove the lock file that may have been left behind in the above step
rm /var/run/postgresql/*.pid
echo "Postgres initialisation process completed .... restarting in foreground"
SETVARS="POSTGIS_ENABLE_OUTDB_RASTERS=1 POSTGIS_GDAL_ENABLED_DRIVERS=ENABLE_ALL"
su - postgres -c "$SETVARS $POSTGRES -D $DATADIR -c config_file=$CONF"