customize gunicorn worker count

pull/578/head
Karthik Balakrishnan 2023-05-14 14:56:02 +00:00
parent 79e1f0da14
commit cf1de42c3a
4 changed files with 10 additions and 5 deletions

View file

@@ -1,3 +1,3 @@
-web: gunicorn takahe.wsgi:application --workers 8
+web: gunicorn takahe.wsgi:application --workers ${TAKAHE_WORKERS:-8}
 worker: python manage.py runstator
 release: python manage.py migrate
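The `${TAKAHE_WORKERS:-8}` syntax above is standard POSIX parameter expansion: the variable's value is used when it is set and non-empty, otherwise the literal default applies. A minimal sketch:

```shell
# ${VAR:-default} falls back to the default when VAR is unset or empty.
unset TAKAHE_WORKERS
echo "--workers ${TAKAHE_WORKERS:-8}"    # -> --workers 8

TAKAHE_WORKERS=4
echo "--workers ${TAKAHE_WORKERS:-8}"    # -> --workers 4
```

This is why the Procfile change keeps the previous behavior exactly when the variable is not set.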

View file

@@ -50,7 +50,6 @@ RUN TAKAHE_DATABASE_SERVER="postgres://x@example.com/x" TAKAHE_SECRET_KEY="takah
 EXPOSE 8000
-# Set some sensible defaults
-ENV GUNICORN_CMD_ARGS="--workers 8"
+ENV GUNICORN_CMD_ARGS=""
 CMD ["bash", "docker/run.sh"]
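With the hard-coded `--workers 8` removed from `GUNICORN_CMD_ARGS`, a deployment can size the worker count itself. A common starting point from the Gunicorn documentation is `(2 x $num_cores) + 1`; a sketch of computing and exporting such a value (`nproc` is from GNU coreutils, and the formula is a rule of thumb, not part of this commit):

```shell
# Derive a worker count from the CPU count using Gunicorn's suggested
# (2 * cores) + 1 starting point, then export it for run.sh to pick up.
CORES=$(nproc)
export TAKAHE_WORKERS=$(( 2 * CORES + 1 ))
echo "TAKAHE_WORKERS=$TAKAHE_WORKERS"
```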

View file

@@ -12,7 +12,7 @@ sed "s/__CACHESIZE__/${CACHE_SIZE}/g" /etc/nginx/conf.d/default.conf.tpl | sed "
 # Run nginx and gunicorn
 nginx &
-gunicorn takahe.wsgi:application -b 0.0.0.0:8001 $GUNICORN_EXTRA_CMD_ARGS &
+gunicorn takahe.wsgi:application -b 0.0.0.0:8001 --workers ${TAKAHE_WORKERS:-8} $GUNICORN_EXTRA_CMD_ARGS &
 # Wait for any process to exit
 wait -n
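The `wait -n` at the end of run.sh (a bash 4.3+ builtin) returns as soon as *any* background job exits, so the container stops if either nginx or gunicorn dies and the orchestrator can restart it. A minimal sketch of the same supervision pattern, with `sleep` standing in for the two daemons:

```shell
# Two background jobs; wait -n returns when the first one exits (~1s here),
# not when both are done.
sleep 1 &
sleep 30 &
wait -n
echo "first job exited"
kill %% 2>/dev/null   # clean up the remaining background job
```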

View file

@@ -9,7 +9,6 @@ We recommend that all installations are run behind a CDN, and
 have caches configured. See below for more details on each.
 Scaling
 -------
@@ -30,6 +29,13 @@ using more resources if you give them to it), you can:
   a "celebrity" or other popular account will give Stator a lot of work as it
   has to send a copy of each of their posts to every follower, separately.
+* Takahe is run with Gunicorn, which spawns several
+  [workers](https://docs.gunicorn.org/en/stable/settings.html#workers) to
+  handle requests. Depending on what environment you are running Takahe on,
+  you might want to customize the worker count via the `TAKAHE_WORKERS`
+  environment variable. The default is `8`.
 As you scale up the number of containers, keep the PostgreSQL connection limit
 in mind; this is generally the first thing that will fail, as Stator workers in
 particular are quite connection-hungry (the parallel nature of their internal