Mirror of https://github.com/linuxserver/docker-documentation, commit 0caeb93df7

FAQ.md

Some x86_64 hosts running older versions of the Docker engine are not compatible with some images based on Ubuntu Jammy.

### Symptoms

If your host is affected you may see errors in your containers such as:

```shell
ERROR - Unable to determine java version; make sure Java is installed and callable
```

Or

```shell
Failed to create CoreCLR, HRESULT: 0x80070008
```

Or

```shell
WARNING :: MAIN : webStart.py:initialize:249 : can't start new thread
```

### Resolution

#### Option 1 (Long-Term Fix)

Upgrade your Docker engine install to at least version `20.10.10`. [Refer to the official Docker docs for installation/update details.](https://docs.docker.com/engine/install)
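
To confirm whether the host needs the upgrade, you can print the currently installed engine version first; a quick check (the output format can vary by install):

```shell
# Print the Docker engine (server) version; versions older than 20.10.10 are affected
docker version --format '{{.Server.Version}}'
```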

#### Option 2 (Short-Term Fix)

For Docker CLI, run your container with:

`--security-opt seccomp=unconfined`

For Docker Compose, run your container with:

```yaml
security_opt:
  - seccomp=unconfined
```

## My host is incompatible with images based on rdesktop {#rdesktop}

Some x86_64 hosts have issues running rdesktop based images even with the latest docker version due to syscalls that are unknown to docker.

### Symptoms

If your host is affected you may see errors in your containers such as:

```shell
Failed to close file descriptor for child process (Operation not permitted)
```

### Resolution

For Docker CLI, run your container with:

`--security-opt seccomp=unconfined`

For Docker Compose, run your container with:

```yaml
security_opt:
  - seccomp=unconfined
```

## My host is incompatible with images based on Ubuntu Focal and Alpine 3.13 and later {#libseccomp}

You have a few options as noted below. Option 1 is short-term, while option 2 is considered the best option if you don't plan to reinstall the device (option 3).

### Resolution

If you decide to do option 1 or 2, you should just need to restart the container after confirming you have libseccomp 2.4.4 installed.
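
On Debian-based hosts you can confirm the installed version before restarting the container; a minimal check, assuming the library is packaged as `libseccomp2`:

```shell
# Show the installed libseccomp2 package version; it should report 2.4.4 or newer
dpkg -s libseccomp2 | grep -i '^version'
```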

If 1 or 2 did not work, ensure your Docker install is at least version 20.10.0, [refer to the official Docker docs for installation.](https://docs.docker.com/engine/install/debian/)

#### Option 1

Manually install an updated version of the library with dpkg.

```shell
wget http://ftp.us.debian.org/debian/pool/main/libs/libseccomp/libseccomp2_2.4.4-1~bpo10+1_armhf.deb
sudo dpkg -i libseccomp2_2.4.4-1~bpo10+1_armhf.deb
```

{% hint style="info" %}
This url may have been updated. Find the latest by browsing [here](http://ftp.us.debian.org/debian/pool/main/libs/libseccomp/).
{% endhint %}

#### Option 2

Add the backports repo for Debian Buster. As seen [here](https://github.com/linuxserver/docker-jellyfin/issues/71#issuecomment-733621693).

```shell
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 04EE7237B7D453EC 648ACFD622F3D138
echo "deb http://deb.debian.org/debian buster-backports main" | sudo tee -a /etc/apt/sources.list.d/buster-backports.list
sudo apt update
sudo apt install -t buster-backports libseccomp2
```

#### Option 3

Reinstall/update your OS to a version that still gets updates.

- Any distro based on Debian Stretch does not seem to have this package available
- Debian Buster based distros can get the package through backports, as outlined in option 2.

{% hint style="info" %}
Raspberry Pi OS (formerly Raspbian) can be upgraded to run with a 64-bit kernel.
{% endhint %}

### Symptoms

- 502 errors in __Jellyfin__ as seen in [linuxserver/docker-jellyfin#71](https://github.com/linuxserver/docker-jellyfin/issues/71)
- `Error starting framework core` messages in the docker log for __Plex__. [linuxserver/docker-plex#247](https://github.com/linuxserver/docker-plex/issues/247)
- No WebUI for __Radarr__, even though the container is running. [linuxserver/docker-radarr#118](https://github.com/linuxserver/docker-radarr/issues/118)
- Images based on our Nginx base-image (Nextcloud, SWAG, Nginx, etc.) fail to generate a certificate, with a message similar to `error getting time:crypto/asn1/a_time.c:330`
- `docker exec <container-name> date` returns 1970

## I want to reverse proxy an application which defaults to https with a self-signed certificate {#strict-proxy}

| Forum | [https://discourse.linuxserver.io](https://discourse.linuxserver.io) |

For those interested in our CI environment via Jenkins: [https://ci.linuxserver.io/](https://ci.linuxserver.io/)

* Created 2021-08-18
* Updated 2021-08-18

## Charter

We will at all times attempt to keep a surplus of $6,000 in the bank account, or an amount which covers 3 years of expenses, whichever is higher. All other money will be disbursed by agreement of a general consensus of linuxserver.io staff members.

## Annual Expenses

* DigitalOcean yearly costs (currently paid for) **$1200**
* AWS **~$200**
* Contabo hosting **$287.76**
* Various licenses **~$150**

## Votes

In order for money to be approved for a project, the requesting member must go through every effort to bring to vote a fully formed idea that is ready to be actioned. This means that all the legwork is done before bringing an idea to vote, or at least as much as is reasonably possible. A vote will last for 3 days in order to give all team members the opportunity to participate without unnecessarily causing delays. A general consensus will need to be reached in order for it to proceed.

## Acceptable uses of money

* Hardware/Software needed to help the group reach a specific goal
* Stationery + Related items for possible Conventions
* Convention Fees (Both Attendance and Travel)
* Donations to upstream projects

## Links

* [https://opencollective.com/linuxserver#category-BUDGET](https://opencollective.com/linuxserver#category-BUDGET)

| [hedgedoc](https://github.com/linuxserver/docker-hedgedoc/) | [HedgeDoc](https://hedgedoc.org/) gives you access to all your files wherever you are. |
| [raneto](https://github.com/linuxserver/docker-raneto/) | [raneto](http://raneto.com/) is an open source Knowledgebase platform that uses static Markdown files to power your Knowledgebase. |
| [wikijs](https://github.com/linuxserver/docker-wikijs/) | [wikijs](https://github.com/Requarks/wiki) A modern, lightweight and powerful wiki app built on NodeJS. |

One example use case is our Piwigo container has a plugin that supports video, but requires ffmpeg to be installed. No problem. Add this bad boy into a script file (can be named anything) and you're good to go.

```shell
#!/bin/bash

echo "**** installing ffmpeg ****"
```

Running cron in our containers is now as simple as a single file. Drop this script in `/custom-services.d/cron` and it will run automatically in the container:

```shell
#!/usr/bin/with-contenv bash

/usr/sbin/crond -f -S -l 0 -c /etc/crontabs
```

An example of how this logic can be used to greatly expand the functionality of our base containers would be to add VPN support to a Transmission container:

```shell
docker create \
  --name=transmission \
  --cap-add=NET_ADMIN \
```

Particularly useful when debugging the application - to shell in to one of our containers, run the following:

```shell
docker exec -it <container_name> /bin/bash
```

The vast majority of our images are configured to output the application logs to the console, which in Docker's terms means you can access them using the `docker logs` command:

```shell
docker logs -f --tail=<number_of_lines_to_start_with> <container_name>
```

To make life simpler for yourself here's a handy bash alias to do some of the leg work for you:

```shell
# ~/.bash_aliases
alias dtail='docker logs -tf --tail="50" "$@"'
```

To obtain the build version for the container:

```shell
docker inspect -f '{{ index .Config.Labels "build_version" }}' <container_name>
```

Or the image:

```shell
docker inspect -f '{{ index .Config.Labels "build_version" }}' linuxserver/<image_name>
```

To get started, not much. You will need to know about some of the terminology or concepts when performing more advanced tasks or troubleshooting, but getting started couldn't be much simpler.

```shell
docker run hello-world
```

By default a running container has absolutely no context of the world around it. Out of the box you cannot connect from the outside world to the running webservers on ports 80 and 443 below. To allow entry to the sandbox from the outside world we must explicitly allow entry using the `-p` flag.

```shell
docker run -d --name=letsencrypt -p 80:80 -p 443:443 linuxserver/letsencrypt
```

Take this concept and multiply it across all aspects of a running application. Ports, volumes (i.e. the files you want to be available inside the container from outside the container), environment variables and so on. For us as developers this allows us to rule out your host system when troubleshooting, as the box the container is running in (the container) is identical to the next.
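
As a sketch of how those pieces combine in a single command (the container name, host path, port choice and image below are placeholders, not a recommendation for any specific container):

```shell
# -e passes an environment variable into the sandbox, -v maps a host path into
# the container, and -p forwards host port 8080 to port 80 inside the container
docker run -d \
  --name=example \
  -e PUID=1000 \
  -v /opt/appdata/example:/config \
  -p 8080:80 \
  lscr.io/linuxserver/nginx
```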

Containers are an amazing way to run applications in a secure, sandboxed way.

## Intro

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.

Note that when inputting data for variables, you must follow standard YAML rules. In the case of passwords with special characters this can mean escaping them properly ($ is the escape character) or properly quoting the variable. The best course of action if you do not know how to do this or are unwilling to research, is to stick to alphanumeric characters only.
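
As a hedged illustration of those quoting rules (the service name, image and variable names below are made up for the example):

```yaml
services:
  app:
    image: lscr.io/linuxserver/nginx   # placeholder image
    environment:
      # Quote the whole entry so YAML keeps special characters and spaces literal
      - "MY_PASSWORD=p@ss word?!"
      # $$ produces a single literal $ (the value passed here is pa$word)
      - "MY_OTHER_PASSWORD=pa$$word"
```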

## Installation

### Install Option 1 (recommended)

Starting with version 2, Docker started publishing `docker compose` as a Go-based plugin for docker (rather than a Python-based standalone binary), and they also publish this plugin for various arches, including x86_64, armhf and aarch64 (as opposed to the x86_64 only binaries for v1.X). Therefore we updated our recommended install option to utilize the plugin.

You can install `docker compose` via the following commands:

```shell
ARCH=$(uname -m) && [[ "${ARCH}" == "armv7l" ]] && ARCH="armv7"
sudo mkdir -p /usr/local/lib/docker/cli-plugins
sudo curl -SL "https://github.com/docker/compose/releases/latest/download/docker-compose-linux-${ARCH}" -o /usr/local/lib/docker/cli-plugins/docker-compose
sudo chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
```

Assuming you already have docker (or at the very least docker-cli) installed, preferably from the official docker repos, running `docker compose version` should display the compose version.

If you don't have docker installed yet, we recommend installing it via the following commands:

```shell
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
```

#### v1.X compatibility

As v2 runs as a plugin instead of a standalone binary, it is invoked by `docker compose args` instead of `docker-compose args`. There are also some slight differences in how the yaml is handled. To make migration easier, Docker released a replacement binary for `docker-compose` on x86_64 and aarch64 platforms. More info on that can be found at the [upstream repo](https://github.com/docker/compose-switch).

### Install Option 2

You can install docker-compose using our [docker-compose image](https://github.com/linuxserver/docker-docker-compose) via a run script. You can simply run the following commands on your system and you should have a functional install that you can call from anywhere as `docker-compose`:

```shell
sudo curl -L --fail https://raw.githubusercontent.com/linuxserver/docker-docker-compose/v2/run.sh -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```

In order to update the local image, you can run the following commands:

```shell
docker pull linuxserver/docker-compose:"${DOCKER_COMPOSE_IMAGE_TAG:-v2}"
docker image prune -f
```

The above commands will use the v2 images (although invoked by `docker-compose` instead of `docker compose`). If you'd like to use v1 images, you can set an env var `DOCKER_COMPOSE_IMAGE_TAG=alpine` or `DOCKER_COMPOSE_IMAGE_TAG=ubuntu` in your respective `.profile`. Alternatively you can set that var to a versioned image tag like `v2-2.4.1-r1` or `version-alpine-1.27.4` to pin it to a specific docker-compose version.

## Single service Usage

Create or open the file `~/.bash_aliases` and populate with the following content:

```shell
alias dcup='docker compose -f /opt/docker-compose.yml up -d' #brings up all containers if one is not defined after dcup
alias dcdown='docker compose -f /opt/docker-compose.yml stop' #brings down all containers if one is not defined after dcdown
alias dcpull='docker compose -f /opt/docker-compose.yml pull' #pulls all new images unless one is specified after dcpull
alias dclogs='docker compose -f /opt/docker-compose.yml logs -tf --tail="50" '
alias dtail='docker logs -tf --tail="50" "$@"'
```

If the `docker-compose.yml` file is in a home directory, the following can be put in the `~/.bash_aliases` file instead.

```shell
alias dcup='docker-compose -f ~/docker-compose.yml up -d' #brings up all containers if one is not defined after dcup
alias dcdown='docker-compose -f ~/docker-compose.yml stop' #brings down all containers if one is not defined after dcdown
alias dcpull='docker-compose -f ~/docker-compose.yml pull' #pulls all new images unless one is specified
alias dclogs='docker-compose -f ~/docker-compose.yml logs -tf --tail="50" '
alias dtail='docker logs -tf --tail="50" "$@"'
```

There are multiple ways to see the logs of your containers. In some instances, using `docker logs` is preferable to `docker compose logs`. By default `docker logs` will not run unless you define which service the logs are coming from. The `docker compose logs` will pull all of the logs for the services defined in the `docker-compose.yml` file.

When asking for help, you should post your logs or be ready to provide logs if someone requests it. If you are running multiple containers in your `docker-compose.yml` file, it is not helpful to submit **all** of the logs. If you are experiencing issues with a single service, say Heimdall, then you would want to get your logs using `docker logs heimdall` or `docker compose logs heimdall`. The bash_alias for `dclogs` can be used if you define your service after you've typed the alias. Likewise, the bash_alias `dtail` will not run without defining the service after it.
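
For example, with the aliases above loaded, fetching the last 50 log lines for a single service (using the hypothetical `heimdall` service from above) looks like this:

```shell
dclogs heimdall   # docker compose logs for just the heimdall service
dtail heimdall    # plain docker logs for the heimdall container
```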

Some distributions, like Ubuntu, already have the code snippet below in the `~/.bashrc` file. If it is not included, you'll need to add the following to your `~/.bashrc` file in order for the aliases file to be picked up:

```shell
if [ -f ~/.bash_aliases ]; then
    . ~/.bash_aliases
fi
```

Once configured, you can run `source ~/.bashrc` or log out and then log in again. Now you can type `dcpull` or `dcup` to manage your entire fleet of containers at once. It's like magic.

All primary configuration for Fleet at runtime is loaded in via a `fleet.properties` file. This can be located anywhere on the file system, and is loaded in via a Runtime argument:

```shell
# Runtime
fleet.app.port=8080
```

{% hint style="warning" %}
You should change the default password for this user as soon as possible! This can be done via the `Admin` -> `Users` menu options.
{% endhint %}

To create a container from one of our images, you must use either `docker create` or `docker run`. Each image follows the same pattern in the command when creating a container:

```shell
docker create \
  --name=<container_name> \
  -v <path_to_data>:/config \
  -p <host_port>:<app_port> \
  linuxserver/<image_name>
```

general/swag.md

# SWAG

The goal of this guide is to give you ideas on what can be accomplished with the [LinuxServer SWAG docker image](https://hub.docker.com/r/linuxserver/swag) and to get you started. We will explain some of the basic concepts and limitations, and then we'll provide you with common examples. If you have further questions, you can ask on [our forum](https://discourse.linuxserver.io/) or join our Discord for conversations: <https://discord.gg/YWrKVTn>

## Table of Contents

- [SWAG](#swag)
  - [Table of Contents](#table-of-contents)
  - [Introduction](#introduction)
    - [What are SSL certs?](#what-are-ssl-certs)
    - [What is Let's Encrypt (and/or ZeroSSL)?](#what-is-lets-encrypt-andor-zerossl)
  - [Creating a SWAG container](#creating-a-swag-container)
    - [docker cli](#docker-cli)
    - [docker-compose](#docker-compose)
    - [Authorization method](#authorization-method)
    - [Cert Provider (Let's Encrypt vs ZeroSSL)](#cert-provider-lets-encrypt-vs-zerossl)
    - [Port forwards](#port-forwards)
    - [Docker networking](#docker-networking)
  - [Container setup examples](#container-setup-examples)
    - [Create container via http validation](#create-container-via-http-validation)
    - [Create container via dns validation with a wildcard cert](#create-container-via-dns-validation-with-a-wildcard-cert)
    - [Create container via duckdns validation with a wildcard cert](#create-container-via-duckdns-validation-with-a-wildcard-cert)
  - [Web hosting examples](#web-hosting-examples)
    - [Simple html web page hosting](#simple-html-web-page-hosting)
    - [Hosting a Wordpress site](#hosting-a-wordpress-site)
  - [Reverse Proxy](#reverse-proxy)
    - [Preset proxy confs](#preset-proxy-confs)
    - [Understanding the proxy conf structure](#understanding-the-proxy-conf-structure)
      - [Subdomain proxy conf](#subdomain-proxy-conf)
      - [Subfolder proxy conf](#subfolder-proxy-conf)
    - [Ombi subdomain reverse proxy example](#ombi-subdomain-reverse-proxy-example)
    - [Nextcloud subdomain reverse proxy example](#nextcloud-subdomain-reverse-proxy-example)
    - [Plex subfolder reverse proxy example](#plex-subfolder-reverse-proxy-example)
    - [Using Heimdall as the home page at domain root](#using-heimdall-as-the-home-page-at-domain-root)
  - [Troubleshooting](#troubleshooting)
    - [Common errors](#common-errors)
      - [404](#404)
      - [502](#502)
  - [Final Thoughts](#final-thoughts)
  - [How to Request Support](#how-to-request-support)

## Introduction

### What are SSL certs?

SSL certs allow users of a service to communicate via encrypted data transmitted up and down. Third party trusted certs also allow users to make sure that the remote service they are connecting to is really who they say they are and not someone else in the middle. When we run a web server for reasons like hosting websites or reverse proxying services on our own domain, we need to set it up with third party trusted ssl certs so client browsers trust it and communicate with it securely. When you connect to a website with a trusted cert, most browsers show a padlock icon next to the address bar to indicate that. Without a trusted cert (ie. with self signed cert) most browsers show warning pages or may block access to the website as the website identity cannot be confirmed via a trusted third party.

### What is Let's Encrypt (and/or ZeroSSL)?

In the past, the common way to get a trusted ssl cert was to contact one of the providers, send them the relevant info to prove ownership of a domain and pay for the service. Nowadays, with [Let's Encrypt](https://letsencrypt.org/) and [ZeroSSL](https://zerossl.com/), one can get free certs via automated means.

The [SWAG docker image](https://hub.docker.com/r/linuxserver/swag), published and maintained by [LinuxServer.io](https://linuxserver.io), makes setting up a full-fledged web server with auto generated and renewed ssl certs very easy. It is essentially an nginx webserver with php7, fail2ban (intrusion prevention) and Let's Encrypt cert validation built-in. It is just MySQL short of a LEMP stack and therefore is best paired with our [MariaDB docker image](https://hub.docker.com/r/linuxserver/mariadb).

## Creating a SWAG container

Most of the initial settings for getting a webserver with ssl certs up are done through the docker run/create or compose yaml parameters. Here's a list of all the settings available including the optional ones. It is safe to remove unnecessary parameters for different scenarios.

### docker cli

```shell
docker create \
  --name=swag \
  --cap-add=NET_ADMIN \
  lscr.io/linuxserver/swag
```

### docker-compose

Compatible with docker-compose v2 schemas.

### Authorization method

Our image currently supports three different methods to validate domain ownership:

- **http:**
  - Let's Encrypt (acme) server connects to domain on port 80
  - Can be owned domain or a dynamic dns address
- **dns:**
  - Let's Encrypt (acme) server connects to dns provider
  - Api credentials and settings entered into `ini` files under `/config/dns-conf/`
  - Supports wildcard certs
  - Need to have own domain name (non-free)
- **duckdns:**
  - Let's Encrypt (acme) server connects to DuckDNS
  - Supports wildcard certs (only for the sub-subdomains)
  - No need for own domain (free)
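
Which of these methods is used is selected with the container's environment variables at create time. As a hedged reminder of the flags commonly shown for this image (check the SWAG image documentation for the authoritative list and any provider-specific variables):

```shell
-e VALIDATION=http                                  # http validation
-e VALIDATION=dns -e DNSPLUGIN=cloudflare           # dns validation via a provider plugin
-e VALIDATION=duckdns -e DUCKDNSTOKEN=<your token>  # duckdns validation
```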

The validation is performed when the container is started for the first time. Nginx won't be up until ssl certs are successfully generated.

The certs are valid for 90 days. The container will check the cert expiration status every night and if they are to expire within 30 days, it will attempt to auto-renew. If your certs are about to expire in less than 30 days, check the logs under `/config/log/letsencrypt` to see why the auto-renewals failed.

### Cert Provider (Let's Encrypt vs ZeroSSL)

As of January 2021, SWAG supports getting certs validated by either [Let's Encrypt](https://letsencrypt.org/) or [ZeroSSL](https://zerossl.com/). Both services use the [ACME protocol](https://en.wikipedia.org/wiki/Automated_Certificate_Management_Environment) as the underlying method to validate ownership. Our Certbot client in the SWAG image is ACME compliant and therefore supports both services.

Although very similar, ZeroSSL does (at the time of writing) have a couple of advantages over Let's Encrypt:

- ZeroSSL provides unlimited certs via ACME and has no rate limits or throttling (it's quite common for new users to get throttled by Let's Encrypt due to multiple unsuccessful attempts to validate)
- ZeroSSL provides a web interface that allows users to list and manage the certs they have received

SWAG currently defaults to Let's Encrypt as the cert provider so as not to break existing installs, however users can override that behavior by setting the environment variable `CERTPROVIDER=zerossl` to retrieve a cert from ZeroSSL instead. The only gotcha is that ZeroSSL requires the `EMAIL` env var to be set so the certs can be tied to a ZeroSSL account for management over their web interface.
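
In compose terms that override is just two extra environment entries on the swag service (the address below is only an example):

```yaml
    environment:
      - CERTPROVIDER=zerossl
      - EMAIL=you@example.com   # required by ZeroSSL so the cert is tied to an account
```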

### Port forwards

Port 443 mapping is required for access through `https://domain.com`. However, you don't necessarily need to have it listen on port 443 on the host server. All that is needed is to have port 443 on the router (wan) somehow forward to port 443 inside the container, while it can go through a different port on the host.

Port 80 forwarding is required for `http` validation only. Same rule as above applies, and it's OK to go from 80 on the router to 81 on the host, mapped to 80 in the container.
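
As a sketch of the container-side mappings for that remapped setup (router wan 443 forwarded to host 444, and wan 80 to host 81):

```shell
-p 444:443 -p 81:80
```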

### Docker networking

SWAG container happily runs with bridge networking. However, the default bridge network in docker does not allow containers to connect to each other via container names used as dns hostnames. Therefore, it is recommended to first create a [user defined bridge network](https://docs.docker.com/network/bridge/) and attach the containers to that network.
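
The examples below use a network named `lsio` (referenced throughout the rest of this guide); it only needs to be created once on the host:

```shell
docker network create lsio
```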

> Keep in mind that dns hostnames are meant to be case-insensitive, however container names are case-sensitive. For container names to be used as dns hostnames in nginx, they should be all lowercase as nginx will convert them to all lowercase before trying to resolve.

## Container setup examples

### Create container via http validation

Let's assume our domain name is `linuxserver-test.com` and we would like our cert to also cover `www.linuxserver-test.com` and `ombi.linuxserver-test.com`. On the router, forward ports `80` and `443` to your host server. On your dns provider (if using your own domain), create an `A` record for the main domain and point it to your server IP (wan). Also create CNAMES for `www` and `ombi` and point them to the `A` record for the domain.

With docker cli, we'll first create a user defined bridge network if we haven't already `docker network create lsio`, and then create the container:

```shell
docker create \
  --name=swag \
  --cap-add=NET_ADMIN \
```

Now we can browse to `https://www.linuxserver-test.com` and we'll see the default landing page displayed.

### Create container via dns validation with a wildcard cert

Let's assume our domain name is `linuxserver-test.com` and we would like our cert to also cover `www.linuxserver-test.com`, `ombi.linuxserver-test.com` and any other subdomain possible. On the router, we'll forward port `443` to our host server (Port 80 forwarding is optional).

With docker cli, we'll first create a user defined bridge network if we haven't already `docker network create lsio`, and then create the container:

```shell
docker create \
  --name=swag \
  --cap-add=NET_ADMIN \
```

Once we enter the credentials into the ini file, we'll restart the docker container via `docker restart swag` and again watch the logs. After successful validation, we should see the notice `Server ready` and our webserver should be up and accessible at `https://www.linuxserver-test.com`.

### Create container via duckdns validation with a wildcard cert

We will first need to get a subdomain from [DuckDNS](https://duckdns.org). Let's assume we get `linuxserver-test` so our url will be `linuxserver-test.duckdns.org`. Then we'll need to make sure that the subdomain points to our server IP (wan) on the DuckDNS website. We can always use our [DuckDNS docker image](https://hub.docker.com/r/linuxserver/duckdns) to keep the IP up to date. Don't forget to get the token for your account from DuckDNS. On the router, we'll forward port `443` to our host server (Port 80 forward is optional).

With docker cli, we'll first create a user defined bridge network if we haven't already `docker network create lsio`, and then create the container:

```shell
docker create \
  --name=swag \
  --cap-add=NET_ADMIN \
```

**NOTICE:** Due to a DuckDNS limitation, our cert only covers the wildcard subdomains, but it doesn't cover the main url. So if we try to access `https://linuxserver-test.duckdns.org`, we'll see a browser warning about an invalid ssl cert. But accessing it through the `www` (or `ombi` or any other) subdomain should work fine.

## Web hosting examples

### Simple html web page hosting

Once we have a working container, we can drop our web documents in and modify the nginx config files to set up our webserver.

After any changes to the config files, simply restart the container via `docker restart swag` to reload the nginx config.

### Hosting a Wordpress site

Wordpress requires a mysql database. For that, we'll use the [linuxserver MariaDB docker image](https://hub.docker.com/r/linuxserver/mariadb).

And here are the docker cli versions (make sure you already created the lsio network [as described above](#docker-networking)):

Mariadb:

```shell
docker create \
  --name=mariadb \
  --net=lsio \
```

SWAG:

```shell
docker create \
  --name=swag \
  --cap-add=NET_ADMIN \
```

Once the SWAG container is set up with ssl certs and the webserver is up, we'll download the latest Wordpress and untar it into our www folder:

```shell
wget https://wordpress.org/latest.tar.gz
tar xvf latest.tar.gz -C /home/aptalca/appdata/swag/www/
rm latest.tar.gz
```

Now that we have all the Wordpress files under the container's `/config/www/wordpress` folder, we'll need to make some adjustments to the nginx configurations.

- Find the line in `/config/nginx/site-confs/default` that reads `root /config/www;` and change it to `root /config/www/wordpress;`
- Find the line in `/config/nginx/site-confs/default` that reads `try_files $uri $uri/ /index.html /index.php$is_args$args =404;` and change it to `try_files $uri $uri/ /index.html /index.php$is_args$args;`

Alternatively, if you need to run multiple instances of Wordpress, you can leave `/config/nginx/site-confs/default` entirely unchanged and create new `site-confs` for each instance of Wordpress. The new `site-confs` will be slimmed down copies of `/config/nginx/site-confs/default`. This assumes you will run each instance on a separate subdomain. If you would prefer to have each Wordpress site on a different top level domain, be sure to add each domain to the `EXTRA_DOMAINS` environment variable.

Ex:

`/config/nginx/site-confs/myfirstsubdomain.linuxserver-test.com.conf`

```nginx
server {
    listen 443 ssl http2; # REMOVED default_server
    listen [::]:443 ssl http2; # REMOVED default_server

    server_name myfirstsubdomain.linuxserver-test.com; # PUT YOUR DOMAIN HERE

    root /config/sites/myfirstsubdomain.linuxserver-test.com/www; # CREATE THIS DIRECTORY STRUCTURE AND PUT WORDPRESS FILES HERE
    index index.html index.htm index.php;

    location / {
        try_files $uri $uri/ /index.html /index.php$is_args$args; # REMOVED =404
    }

    location ~ ^(.+\.php)(.*)$ {
        fastcgi_split_path_info ^(.+\.php)(.*)$;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
    }

    # deny access to .htaccess/.htpasswd files
    location ~ /\.ht {
        deny all;
    }
}
```

`/config/nginx/site-confs/mysecondsubdomain.linuxserver-test.com.conf`

```nginx
server {
    listen 443 ssl http2; # REMOVED default_server
    listen [::]:443 ssl http2; # REMOVED default_server

    server_name mysecondsubdomain.linuxserver-test.com; # PUT YOUR DOMAIN HERE

    root /config/sites/mysecondsubdomain.linuxserver-test.com/www; # CREATE THIS DIRECTORY STRUCTURE AND PUT WORDPRESS FILES HERE
    index index.html index.htm index.php;

    location / {
        try_files $uri $uri/ /index.html /index.php$is_args$args; # REMOVED =404
    }

    location ~ ^(.+\.php)(.*)$ {
        fastcgi_split_path_info ^(.+\.php)(.*)$;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
    }

    # deny access to .htaccess/.htpasswd files
    location ~ /\.ht {
        deny all;
    }
}
```

Now that you have completed changing your nginx configurations you need to restart the SWAG container.

Now we should be able to access our Wordpress config page at `https://linuxserver-test.com/wp-admin/install.php`. We'll go ahead and enter `mariadb` as the `Database Host` address (we are using the container name as the dns hostname since both containers are in the same user defined bridge network), and also enter the Database Name, user and password we used in the mariadb config above (`WP_database`, `WP_dbuser` and `WP_dbpassword`).

Once we go through the rest of the install steps, our Wordpress instance should be fully set up and available at `https://linuxserver-test.com`.

If you would like to have `http` requests on port 80 enabled and auto redirected to `https` on port 443, uncomment the relevant lines at the top of the default site config to read:

## Reverse Proxy

A reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers. These resources are then returned to the client as if they originated from the Web server itself (Shamelessly borrowed from [another post on our blog](https://blog.linuxserver.io/2017/11/28/how-to-setup-a-reverse-proxy-with-letsencrypt-ssl-for-all-your-docker-apps/#whatisareverseproxy)).

In this case, a user or a client browser can connect to our SWAG container via https on port 443, request a service such as Ombi, then our SWAG container connects to the ombi container, retrieves the data and passes it on to the client via https with our trusted cert. The connection to ombi is local and does not need to be encrypted, but all communication between our SWAG container and the client browser will be encrypted.

### Preset proxy confs

Our SWAG image comes with a list of preset reverse proxy confs for popular apps and services. They are [hosted on Github](https://github.com/linuxserver/reverse-proxy-confs) and are pulled into the `/config/nginx/proxy-confs` folder as inactive sample files. To activate, one must rename a conf file to remove `.sample` from the filename and restart the SWAG container. Any proxy conf file in that folder with a name that matches `*.subdomain.conf` or `*.subfolder.conf` will be loaded in nginx during container start.
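
As a concrete sketch (using the Ombi conf that is activated later in this guide, and with `/path/to/swag/config` standing in for wherever your SWAG `/config` volume lives on the host), enabling a preset conf is just a rename followed by a restart:

```shell
mv /path/to/swag/config/nginx/proxy-confs/ombi.subdomain.conf.sample \
   /path/to/swag/config/nginx/proxy-confs/ombi.subdomain.conf
docker restart swag
```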

The conf files also require that the SWAG container is in the same user defined bridge network as the other container so they can reach each other via container name as dns hostnames. Make sure you follow the instructions listed above in the [Docker networking section](#docker-networking).

### Understanding the proxy conf structure

#### Subdomain proxy conf

Here's the preset proxy conf for Heimdall as a subdomain (ie. `https://heimdall.linuxserver-test.com`):

If the proxied container is not in the same user defined bridge network as SWAG (could be on a remote host, could be using host networking or macvlan), we can change the value of `$upstream_app` to an IP address instead: `set $upstream_app 192.168.1.10;`

#### Subfolder proxy conf

Here's the preset proxy conf for mytinytodo via a subfolder:

> Nginx has an interesting behavior displayed here. Even though we define `http://$upstream_mytinytodo:80/` as the address nginx should proxy, nginx actually connects to `http://$upstream_mytinytodo:80/todo`. Whenever we use a variable as part of the proxy_pass url, nginx automatically appends the defined `location` (in this case `/todo`) to the end of the proxy_pass url before it connects. If we include the subfolder, nginx will try to connect to `http://$upstream_mytinytodo:80/todo/todo` and will fail.

### Ombi subdomain reverse proxy example

In this example, we will reverse proxy Ombi at the address `https://ombi.linuxserver-test.com`.

And here are the docker cli versions:

Ombi:

```shell
docker create \
  --name=ombi \
  --net=lsio \
```

SWAG:

```shell
docker create \
  --name=swag \
  --cap-add=NET_ADMIN \
```

Once our containers are up and running (and we confirm we can reach the placeholder page at `https://linuxserver-test.com`), we simply rename the file `ombi.subdomain.conf.sample` under `/config/nginx/proxy-confs/` to `ombi.subdomain.conf` and we restart the SWAG container. Now when we browse to `https://ombi.linuxserver-test.com` we should see the Ombi gui.
## Nextcloud subdomain reverse proxy example
|
||||
### Nextcloud subdomain reverse proxy example
|
||||
|
||||
Nextcloud is a bit trickier because the app has various security measures built-in, forcing us to configure certain options manually.
|
||||
|
||||
|
@ -878,7 +951,7 @@ services:
|
|||
And here are the docker cli versions:
|
||||
Nextcloud:
|
||||
|
||||
```bash
|
||||
```shell
|
||||
docker create \
|
||||
--name=nextcloud \
|
||||
--net=lsio
|
||||
|
@ -893,7 +966,7 @@ docker create \
|
|||
|
||||
Mariadb:
|
||||
|
||||
```bash
|
||||
```shell
|
||||
docker create \
|
||||
--name=mariadb \
|
||||
--net=lsio \
|
||||
|
@ -911,7 +984,7 @@ docker create \
|
|||
|
||||
SWAG:
|
||||
|
||||
```bash
|
||||
```shell
|
||||
docker create \
|
||||
--name=swag \
|
||||
--cap-add=NET_ADMIN \
|
||||
|
@ -961,7 +1034,7 @@ These settings will tell Nextcloud to respond to queries where the destination a

> If you followed the above directions to set it up for the first time, you only need to add the line `'trusted_proxies' => ['swag'],`, otherwise nextcloud 16+ shows a warning about incorrect reverse proxy settings.

> By default, HSTS is disabled in SWAG config, because it is a bit of a sledgehammer that prevents loading of any http assets on the entire domain. You can enable it in SWAG's `ssl.conf`.

## Plex subfolder reverse proxy example
### Plex subfolder reverse proxy example

In this example, we will set up Plex as a subfolder so it will be accessible at `https://linuxserver-test.com/plex`.

We will initially set up Plex with host networking through its local IP and will connect to it from the same subnet. If we are on a different subnet, or if using a bridge network, we can use the `PLEX_CLAIM` variable to automatically claim the server with our plex account.
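
Purely as a sketch (placeholder paths and claim token; the full example below is the one to follow), claiming the server via `PLEX_CLAIM` on the docker cli looks roughly like this, with the token obtained from <https://www.plex.tv/claim>:

```shell
docker create \
  --name=plex \
  --net=lsio \
  -e PUID=1000 -e PGID=1000 \
  -e PLEX_CLAIM="claim-XXXXXXXXXX" \
  -v /path/to/plex/config:/config \
  linuxserver/plex
```
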
@ -1011,7 +1084,7 @@ services:

Here are the docker cli versions:

Plex:

```bash
```shell
docker create \
  --name=plex \
  --net=host \

@ -1027,7 +1100,7 @@ docker create \

SWAG:

```bash
```shell
docker create \
  --name=swag \
  --cap-add=NET_ADMIN \

@ -1054,7 +1127,7 @@ If we are using host networking for our plex container, we will also have to mak

If we want Plex to always use our domain to connect (including in mobile apps), we can add our URL `https://linuxserver-test.com/plex` into the `Custom server access URLs` in Plex server settings. After that, it is OK to turn off remote access in Plex server settings and remove the port forwarding for port 32400. All connections to our Plex server will then go through the SWAG reverse proxy over port 443.

## Using Heimdall as the home page at domain root
### Using Heimdall as the home page at domain root

In this example, we will set Heimdall as our homepage at domain root so when we navigate to `https://linuxserver-test.com` we will reach Heimdall.

@ -1100,7 +1173,7 @@ services:

Here are the docker cli versions:

Heimdall:

```bash
```shell
docker create \
  --name=heimdall \
  --net=lsio \

@ -1114,7 +1187,7 @@ docker create \

SWAG:

```bash
```shell
docker create \
  --name=swag \
  --cap-add=NET_ADMIN \

@ -1153,47 +1226,47 @@ If we want to password protect our new homepage, we can run the following on the

auth_basic_user_file /config/nginx/.htpasswd;
```
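
For reference, a sketch of creating that `.htpasswd` file, assuming the container is named `swag` and includes the bundled `htpasswd` utility:

```shell
# -c creates (and overwrites) the file, -B uses bcrypt; replace <username> with your own
docker exec -it swag htpasswd -c -B /config/nginx/.htpasswd <username>
```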

# Troubleshooting
## Troubleshooting

We wrote a blog post for the deprecated letsencrypt image diving into troubleshooting issues regarding dns and port-forwards, which is still a very good resource: [blog.linuxserver.io](https://blog.linuxserver.io/2019/07/10/troubleshooting-letsencrypt-image-port-mapping-and-forwarding/)

## Common errors
### Common errors

### 404
#### 404

This error simply means that the resource was not found. This commonly happens when you try to access a subfolder that is not enabled.

### 502
#### 502

This error means that nginx can't talk to the application. There are a few common reasons for this:

* The application and SWAG are not on the same custom docker network
- The application and SWAG are not on the same custom docker network

Further up we talk about how to set up [Docker networking](#docker-networking); however, there are some other common traps (see the sketch after this list).

* The container name does not match the application name.
- The container name does not match the application name.

Covered in the section for [Understanding the proxy conf structure](#understanding-the-proxy-conf-structure).

* You manually changed the port.
- You manually changed the port.

Also covered in the section for [Understanding the proxy conf structure](#understanding-the-proxy-conf-structure).

* The container originally ran with host networking, or the default bridge.
- The container originally ran with host networking, or the default bridge.

In most cases the contents of `/config/nginx/resolver.conf` should be `resolver 127.0.0.11 valid=30s;`. If this is not the case, you can:

* Delete it, and restart the container to have it regenerate
* Manually set the content (we won't override it)
- Delete it, and restart the container to have it regenerate
- Manually set the content (we won't override it)
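
Purely as a troubleshooting sketch (the names `swag` and `ombi` and the `lsio` network come from the examples above and are assumptions; substitute your own), these commands can help narrow a 502 down:

```shell
# List which containers are attached to the user defined network
docker network inspect lsio --format '{{range .Containers}}{{.Name}} {{end}}'

# If the application container is missing from that list, attach it
docker network connect lsio ombi

# Check the generated resolver; it should normally read "resolver 127.0.0.11 valid=30s;"
docker exec swag cat /config/nginx/resolver.conf

# If it does not, delete it and restart SWAG so it regenerates
docker exec swag rm /config/nginx/resolver.conf
docker restart swag
```
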
# Final Thoughts
## Final Thoughts

This image can be used in many different scenarios as it is a full-fledged web server with some bells and whistles added. The above examples should be enough to get you started. For more information, please refer to the official documentation on either [GitHub](https://github.com/linuxserver/docker-swag/blob/master/README.md) or [Docker Hub](https://hub.docker.com/r/linuxserver/swag). If you have questions or issues, or want to discuss and share ideas, feel free to visit our Discord: https://discord.gg/YWrKVTn
This image can be used in many different scenarios as it is a full-fledged web server with some bells and whistles added. The above examples should be enough to get you started. For more information, please refer to the official documentation on either [GitHub](https://github.com/linuxserver/docker-swag/blob/master/README.md) or [Docker Hub](https://hub.docker.com/r/linuxserver/swag). If you have questions or issues, or want to discuss and share ideas, feel free to visit our Discord: <https://discord.gg/YWrKVTn>

## How to Request Support
### How to Request Support

As you can see in this article, there are many different configurations; therefore, we need to understand your exact setup before we can provide support. If you encounter a bug and confirm that it's a bug, please report it on [our GitHub thread](https://github.com/linuxserver/docker-swag). If you need help with setting it up, [join our Discord](https://discord.gg/YWrKVTn) and upload the following info to a service like pastebin and post the link:

* Docker run/create or compose yml you used
* Full docker log (`docker logs swag`)
* Any relevant conf files (default, nginx.conf or specific proxy conf)
- Docker run/create or compose yml you used
- Full docker log (`docker logs swag`)
- Any relevant conf files (default, nginx.conf or specific proxy conf)

@ -16,7 +16,7 @@ Using the `PUID` and `PGID` allows our containers to map the container's interna

When creating a container from one of our images, ensure you use the `-e PUID` and `-e PGID` options in your docker command:

```bash
```shell
docker create --name=beets -e PUID=1000 -e PGID=1000 linuxserver/beets
```

@ -30,7 +30,6 @@ environment:

It is most likely that you will use your own `id`, which can be obtained by running the command below. The two values you will be interested in are the `uid` and `gid`.

```bash
```shell
id $user
```
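
For example, the output might look roughly like this (hypothetical user `dockeruser`; your values will differ):

```shell
$ id dockeruser
uid=1000(dockeruser) gid=1000(dockeruser) groups=1000(dockeruser)
```
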
@ -10,7 +10,7 @@ Docker containers are, for the most part, immutable. This means that important c

Firstly, stop the container.

```bash
```shell
docker stop <container_name>
```

@ -20,7 +20,7 @@ Once the container has been stopped, remove it.

> **Important**: Did you remember to persist the `/config` volume when you originally created the container? Bear in mind, you'll lose any configuration inside the container if this volume was not persisted. [Read up on why this is important](volumes.md).

```bash
```shell
docker rm <container_name>
```

@ -28,7 +28,7 @@ docker rm <container_name>

Now you can pull the latest version of the application image from Docker Hub.

```bash
```shell
docker pull linuxserver/<image_name>
```

@ -36,7 +36,7 @@ docker pull linuxserver/<image_name>

Finally, you can recreate the container. This is often cited as the most arduous task, as it requires you to remember all of the mappings you set beforehand. You can help mitigate this step by using Docker Compose instead; this topic has been [outlined in our documentation](docker-compose.md).

```bash
```shell
docker create \
  --name=<container_name> \
  -v <path_to_data>:/config \

@ -50,14 +50,14 @@ docker create \

It is also possible to update a single container using Docker Compose:

```bash
```shell
docker-compose pull linuxserver/<image_name>
docker-compose up -d <container_name>
```

Or, to update all containers at once:

```bash
```shell
docker-compose pull
docker-compose up -d
```

@ -66,7 +66,6 @@ docker-compose up -d

Whenever a Docker image is updated, a fresh version of that image gets downloaded and stored on your host machine. Doing this, however, does not remove the _old_ version of the image. Eventually you will end up with a lot of disk space used up by stale images. You can `prune` old images from your system, which will free up space:

```bash
```shell
docker image prune
```

@ -23,4 +23,3 @@ The above example shows how the usage of `-v` has mapped the host machine's `/op

> **Remember**: When dealing with mapping overlays, it always reads `host:container`

You can do this for as many directories as required by either you or the container itself. Our rule-of-thumb is to _always_ map the `/config` directory as this contains pertinent runtime configuration for the underlying application. For applications that require further data, such as media, our documentation will clearly indicate which internal directories need mapping.
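
As a hedged sketch of mapping more than one directory (the host paths, the `/data` mount point, and the image name here are placeholders, not specific recommendations; the correct internal paths are listed in each image's documentation):

```shell
docker create \
  --name=<container_name> \
  -v /opt/appdata/<container_name>:/config \
  -v /mnt/storage/media:/data \
  linuxserver/<image_name>
```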