Merge pull request #158 from linuxserver/mkdocs-tuning

pull/159/head
j0nnymoe 2023-10-25 23:04:01 +01:00 committed by GitHub
commit be6bff0052
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
18 changed files with 364 additions and 340 deletions

15
.editorconfig 100644

@@ -0,0 +1,15 @@
root = true
# Unix-style newlines with a newline ending every file
[*]
end_of_line = lf
insert_final_newline = true
# trim_trailing_whitespace may cause unintended issues and should not be globally set true
trim_trailing_whitespace = false
indent_style = space
[**.yml]
indent_size = 2
[**.md]
indent_size = 4


@@ -0,0 +1,3 @@
MD007:
  indent: 4
line-length: false

219
FAQ.md

@@ -1,219 +0,0 @@
# FAQ
Here you will find some frequently asked questions.
## My host is incompatible with images based on Ubuntu Jammy {#jammy}
Some x86_64 hosts running older versions of the Docker engine are not compatible with some images based on Ubuntu Jammy.
- Symptoms
If your host is affected you may see errors in your containers such as:
```text
ERROR - Unable to determine java version; make sure Java is installed and callable
```
Or
```text
Failed to create CoreCLR, HRESULT: 0x80070008
```
Or
```text
WARNING :: MAIN : webStart.py:initialize:249 : can't start new thread
```
- Resolution
- Option 1 (Long-Term Fix)
Upgrade your Docker engine to at least version `20.10.10`. [Refer to the official Docker docs for installation/update details.](https://docs.docker.com/engine/install)
- Option 2 (Short-Term Fix)
For Docker CLI, run your container with:
`--security-opt seccomp=unconfined`
For Docker Compose, run your container with:
```yaml
security_opt:
- seccomp=unconfined
```
## My host is incompatible with images based on rdesktop {#rdesktop}
Some x86_64 hosts have issues running rdesktop based images even with the latest Docker version due to syscalls that are unknown to Docker.
- Symptoms
If your host is affected you may see errors in your containers such as:
```text
Failed to close file descriptor for child process (Operation not permitted)
```
- Resolution
For Docker CLI, run your container with:
`--security-opt seccomp=unconfined`
For Docker Compose, run your container with:
```yaml
security_opt:
- seccomp=unconfined
```
## My host is incompatible with images based on Ubuntu Focal and Alpine 3.13 and later {#libseccomp}
This only affects 32-bit installs of distros based on Debian Buster.
It is due to a bug in the libseccomp2 library (a dependency of Docker itself) which has been fixed, but the fix has not yet been pushed to all the repositories.
[A GitHub issue is tracking this](https://github.com/moby/moby/issues/40734).
You have a few options, as noted below. Option 1 is a short-term fix, while option 2 is considered the best option if you don't plan to reinstall the device (option 3).
- Resolution
If you decide on option 1 or 2, you should only need to restart the container after confirming you have libseccomp2 2.4.4 installed.
If options 1 or 2 did not work, ensure your Docker install is at least version 20.10.0; [refer to the official Docker docs for installation.](https://docs.docker.com/engine/install/debian/)
- Option 1
Manually install an updated version of the library with dpkg.
```shell
wget http://ftp.us.debian.org/debian/pool/main/libs/libseccomp/libseccomp2_2.4.4-1~bpo10+1_armhf.deb
sudo dpkg -i libseccomp2_2.4.4-1~bpo10+1_armhf.deb
```
{% hint style="info" %}
This URL may have been updated. Find the latest version by browsing [here](http://ftp.us.debian.org/debian/pool/main/libs/libseccomp/).
{% endhint %}
- Option 2
Add the backports repo for Debian Buster, as seen [here](https://github.com/linuxserver/docker-jellyfin/issues/71#issuecomment-733621693).
```shell
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 04EE7237B7D453EC 648ACFD622F3D138
echo "deb http://deb.debian.org/debian buster-backports main" | sudo tee -a /etc/apt/sources.list.d/buster-backports.list
sudo apt update
sudo apt install -t buster-backports libseccomp2
```
- Option 3
Reinstall/update your OS to a version that still gets updates.
- Distros based on Debian Stretch do not appear to have this package available
- Debian Buster based distros can get the package through backports, as outlined in option 2
{% hint style="info" %}
Raspberry Pi OS (formerly Raspbian) can be upgraded to run with a 64-bit kernel
{% endhint %}
- Symptoms
- 502 errors in __Jellyfin__ as seen in [linuxserver/docker-jellyfin#71](https://github.com/linuxserver/docker-jellyfin/issues/71)
- `Error starting framework core` messages in the docker log for __Plex__. [linuxserver/docker-plex#247](https://github.com/linuxserver/docker-plex/issues/247)
- No WebUI for __Radarr__, even though the container is running. [linuxserver/docker-radarr#118](https://github.com/linuxserver/docker-radarr/issues/118)
- Images based on our Nginx base image (Nextcloud, SWAG, Nginx, etc.) fail to generate a certificate, with a message similar to `error getting time:crypto/asn1/a_time.c:330`
- `docker exec <container-name> date` returns 1970
## My host filesystem is incompatible with my docker storage driver {#storage}
Some host filesystem types are not compatible with Docker's default storage driver (overlay2).
- Symptoms
If your host is affected you may see errors in your containers such as:
```text
ERROR Found no accessible config files
```
or
```text
Directory not empty. This directory contains an empty ignorecommands sub-directory
```
- Resolution
As shown in the [Docker docs](https://docs.docker.com/storage/storagedriver/select-storage-driver/#supported-backing-filesystems), a zfs host filesystem requires the zfs Docker storage driver, and a btrfs host filesystem requires the btrfs Docker storage driver.
Correcting this mismatch will resolve the issue; it is not something that a container change can resolve.
## What is lscr.io {#lscr}
LSCR is a vanity URL for our images, provided to us in collaboration with [scarf.sh](https://about.scarf.sh/). It is not a dedicated Docker registry, but rather a redirection service. As of writing, it redirects to GitHub Container Registry (ghcr.io).
Aside from giving us the ability to redirect to another backend, if necessary, it also exposes telemetry about pulls, historically only available to the backend provider. We base some decisions on this data, as it gives us a somewhat realistic usage overview (relative to just looking at pulls on DockerHub).
We have some blog posts related to how we utilize Scarf:
- [End of an Arch](https://www.linuxserver.io/blog/end-of-an-arch)
- [Unravelling Some Stats](https://www.linuxserver.io/blog/unravelling-some-stats)
- [Wrap Up Warm For Winter](https://www.linuxserver.io/blog/wrap-up-warm-for-the-winter)
### I cannot connect to lscr.io {#lscr-no-connect}
Due to the nature of Scarf as a Docker gateway which gathers usage metrics, some overzealous privacy-focused blocklists will include its domains.
If you want to help us in getting a better overview of how people use our containers, you should add `gateway.scarf.sh` to the allowlist in your blocklist solution.
Alternatively, you can use Docker Hub or GHCR directly to pull your images, although be aware that all public registries gather user metrics, so this doesn't provide you with any real benefit in that area.
If Scarf is on the blocklist, you will get an error message like this when trying to pull an image:
```
Error response from daemon: Get "https://lscr.io/v2/": dial tcp: lookup lscr.io: no such host
```
This is, however, a generic message. To rule out a service interruption, you should also check whether you can resolve the backend provider.
Using dig:
```shell
dig ghcr.io +short
dig lscr.io +short
```
Using nslookup:
```shell
nslookup ghcr.io
nslookup lscr.io
```
If you only got a response from ghcr, chances are that Scarf is on the blocklist.
## I want to reverse proxy an application which defaults to https with a self-signed certificate {#strict-proxy}
### Traefik {#strict-proxy-traefik}
In this example, we will configure a serversTransport rule that we can apply to a service, and tell Traefik to use https on the backend for that service.
Create a [ServerTransport](https://doc.traefik.io/traefik/routing/services/#serverstransport_1) in your dynamic Traefik configuration; we are calling ours `ignorecert`.
```yml
http:
  serversTransports:
    ignorecert:
      insecureSkipVerify: true
```
Then on our `foo` service we tell it to use this rule, and tell Traefik that the backend is running on https.
```yml
- traefik.http.services.foo.loadbalancer.serverstransport=ignorecert
- traefik.http.services.foo.loadbalancer.server.scheme=https
```


@@ -1,7 +1,8 @@
nav:
- Introduction: index.md
- FAQ.md
- general
- images
- How to: general
- Container Images:
- Images: images
- Deprecated Images: deprecated_images
- Frequently Asked Questions: FAQ.md
- misc
- Deprecated Images: deprecated_images


@@ -1 +1,219 @@
--8<-- "FAQ.md"
# FAQ
Here you will find some frequently asked questions.
## My host is incompatible with images based on Ubuntu Jammy {#jammy}
Some x86_64 hosts running older versions of the Docker engine are not compatible with some images based on Ubuntu Jammy.
### Symptoms
If your host is affected you may see errors in your containers such as:
```text
ERROR - Unable to determine java version; make sure Java is installed and callable
```
Or
```text
Failed to create CoreCLR, HRESULT: 0x80070008
```
Or
```text
WARNING :: MAIN : webStart.py:initialize:249 : can't start new thread
```
### Resolution
#### Long-Term Fix
Upgrade your Docker engine to at least version `20.10.10`. [Refer to the official Docker docs for installation/update details.](https://docs.docker.com/engine/install)
#### Short-Term Fix
For Docker CLI, run your container with:
`--security-opt seccomp=unconfined`
For Docker Compose, run your container with:
```yaml
security_opt:
- seccomp=unconfined
```
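For reference, here is a minimal compose sketch showing where `security_opt` sits within a service definition; the service name and image are placeholders, not recommendations:

```yaml
# Hypothetical service -- only the security_opt block is the point here
services:
  myapp:
    image: lscr.io/linuxserver/someimage:latest  # placeholder image
    security_opt:
      - seccomp=unconfined  # short-term fix; upgrade Docker for the long term
```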
## My host is incompatible with images based on rdesktop {#rdesktop}
Some x86_64 hosts have issues running rdesktop based images even with the latest Docker version due to syscalls that are unknown to Docker.
### Symptoms
If your host is affected you may see errors in your containers such as:
```text
Failed to close file descriptor for child process (Operation not permitted)
```
### Resolution
For Docker CLI, run your container with:
`--security-opt seccomp=unconfined`
For Docker Compose, run your container with:
```yaml
security_opt:
- seccomp=unconfined
```
## My host is incompatible with images based on Ubuntu Focal and Alpine 3.13 and later {#libseccomp}
This only affects 32-bit installs of distros based on Debian Buster.
It is due to a bug in the libseccomp2 library (a dependency of Docker itself) which has been fixed, but the fix has not yet been pushed to all the repositories.
[A GitHub issue is tracking this](https://github.com/moby/moby/issues/40734).
You have a few options, as noted below. Option 1 is a short-term fix, while option 2 is considered the best option if you don't plan to reinstall the device (option 3).
### Resolution
If you decide on option 1 or 2, you should only need to restart the container after confirming you have libseccomp2 2.4.4 installed.
If options 1 or 2 did not work, ensure your Docker install is at least version 20.10.0; [refer to the official Docker docs for installation.](https://docs.docker.com/engine/install/debian/)
#### Manual patch
Manually install an updated version of the library with dpkg.
```shell
wget http://ftp.us.debian.org/debian/pool/main/libs/libseccomp/libseccomp2_2.4.4-1~bpo10+1_armhf.deb
sudo dpkg -i libseccomp2_2.4.4-1~bpo10+1_armhf.deb
```
!!! info
This URL may have been updated. Find the latest version by browsing [here](http://ftp.us.debian.org/debian/pool/main/libs/libseccomp/).
#### Automatic Patch
Add the backports repo for Debian Buster, as seen [here](https://github.com/linuxserver/docker-jellyfin/issues/71#issuecomment-733621693).
```shell
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 04EE7237B7D453EC 648ACFD622F3D138
echo "deb http://deb.debian.org/debian buster-backports main" | sudo tee -a /etc/apt/sources.list.d/buster-backports.list
sudo apt update
sudo apt install -t buster-backports libseccomp2
```
#### Move to a compatible OS
Reinstall/update your OS to a version that still gets updates.
- Distros based on Debian Stretch do not appear to have this package available
- Debian Buster based distros can get the package through backports, as outlined in option 2
!!! info
Raspberry Pi OS (formerly Raspbian) can be upgraded to run with a 64-bit kernel
### Symptoms
- 502 errors in __Jellyfin__ as seen in [linuxserver/docker-jellyfin#71](https://github.com/linuxserver/docker-jellyfin/issues/71)
- `Error starting framework core` messages in the docker log for __Plex__. [linuxserver/docker-plex#247](https://github.com/linuxserver/docker-plex/issues/247)
- No WebUI for __Radarr__, even though the container is running. [linuxserver/docker-radarr#118](https://github.com/linuxserver/docker-radarr/issues/118)
- Images based on our Nginx base image (Nextcloud, SWAG, Nginx, etc.) fail to generate a certificate, with a message similar to `error getting time:crypto/asn1/a_time.c:330`
- `docker exec <container-name> date` returns 1970
## My host filesystem is incompatible with my docker storage driver {#storage}
Some host filesystem types are not compatible with Docker's default storage driver (overlay2).
### Symptoms
If your host is affected you may see errors in your containers such as:
```text
ERROR Found no accessible config files
```
or
```text
Directory not empty. This directory contains an empty ignorecommands sub-directory
```
### Resolution
As shown in the [Docker docs](https://docs.docker.com/storage/storagedriver/select-storage-driver/#supported-backing-filesystems), a zfs host filesystem requires the zfs Docker storage driver, and a btrfs host filesystem requires the btrfs Docker storage driver.
Correcting this mismatch will resolve the issue; it is not something that a container change can resolve.
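As a rough illustration of that compatibility rule, here is a sketch that flags a driver/filesystem mismatch. It is a simplification of the full table in the Docker docs, assuming only zfs and btrfs require a matching driver; `check_driver` is a hypothetical helper, not part of any of our tooling.

```shell
# Sketch: flag a mismatch between Docker storage driver and backing filesystem.
# zfs needs the zfs driver and btrfs the btrfs driver; other filesystems are
# assumed fine with overlay2 (a simplification).
check_driver() {
  driver="$1"
  fstype="$2"
  case "$fstype" in
    zfs|btrfs) [ "$driver" = "$fstype" ] && echo ok || echo mismatch ;;
    *) echo ok ;;
  esac
}
check_driver overlay2 zfs  # prints "mismatch"
check_driver zfs zfs       # prints "ok"
```

On a live host you could feed it real values, e.g. `check_driver "$(docker info --format '{{.Driver}}')" "$(findmnt -no FSTYPE /var/lib/docker)"`.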
## What is lscr.io {#lscr}
LSCR is a vanity URL for our images, provided to us in collaboration with [scarf.sh](https://about.scarf.sh/). It is not a dedicated Docker registry, but rather a redirection service. As of writing, it redirects to GitHub Container Registry (ghcr.io).
Aside from giving us the ability to redirect to another backend, if necessary, it also exposes telemetry about pulls, historically only available to the backend provider. We base some decisions on this data, as it gives us a somewhat realistic usage overview (relative to just looking at pulls on DockerHub).
We have some blog posts related to how we utilize Scarf:
- [End of an Arch](https://www.linuxserver.io/blog/end-of-an-arch)
- [Unravelling Some Stats](https://www.linuxserver.io/blog/unravelling-some-stats)
- [Wrap Up Warm For Winter](https://www.linuxserver.io/blog/wrap-up-warm-for-the-winter)
### I cannot connect to lscr.io {#lscr-no-connect}
Due to the nature of Scarf as a Docker gateway which gathers usage metrics, some overzealous privacy-focused blocklists will include its domains.
If you want to help us in getting a better overview of how people use our containers, you should add `gateway.scarf.sh` to the allowlist in your blocklist solution.
Alternatively, you can use Docker Hub or GHCR directly to pull your images, although be aware that all public registries gather user metrics, so this doesn't provide you with any real benefit in that area.
If Scarf is on the blocklist, you will get an error message like this when trying to pull an image:
```text
Error response from daemon: Get "https://lscr.io/v2/": dial tcp: lookup lscr.io: no such host
```
This is, however, a generic message. To rule out a service interruption, you should also check whether you can resolve the backend provider.
Using dig:
```shell
dig ghcr.io +short
dig lscr.io +short
```
Using nslookup:
```shell
nslookup ghcr.io
nslookup lscr.io
```
If you only got a response from ghcr, chances are that Scarf is on the blocklist.
## I want to reverse proxy an application which defaults to https with a self-signed certificate {#strict-proxy}
### Traefik {#strict-proxy-traefik}
In this example, we will configure a serversTransport rule that we can apply to a service, and tell Traefik to use https on the backend for that service.
Create a [ServerTransport](https://doc.traefik.io/traefik/routing/services/#serverstransport_1) in your dynamic Traefik configuration; we are calling ours `ignorecert`.
```yml
http:
  serversTransports:
    ignorecert:
      insecureSkipVerify: true
```
Then on our `foo` service we tell it to use this rule, and tell Traefik that the backend is running on https.
```yml
- traefik.http.services.foo.loadbalancer.serverstransport=ignorecert
- traefik.http.services.foo.loadbalancer.server.scheme=https
```
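Put together, the labels on a hypothetical `foo` container might look like this; the router rule, hostname, and backend port are illustrative assumptions, not values from this guide:

```yaml
labels:
  - traefik.enable=true
  - traefik.http.routers.foo.rule=Host(`foo.example.com`)    # hypothetical host
  - traefik.http.services.foo.loadbalancer.server.port=8443  # hypothetical backend port
  - traefik.http.services.foo.loadbalancer.server.scheme=https
  - traefik.http.services.foo.loadbalancer.serverstransport=ignorecert
```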


@@ -4,10 +4,8 @@ nav:
- running-our-containers.md
- container-customization.md
- docker-compose.md
- support-policy.md
- understanding-puid-and-pgid.md
- updating-our-containers.md
- volumes.md
- fleet.md
- swag.md
- awesome-lsio.md


@ -14,7 +14,8 @@ All of the functionality described in this post is live on every one of the cont
<https://fleet.linuxserver.io>
**NOTE:** While the following support has been added to our containers, we will not give support to any custom scripts, services, or mods. If you are having an issue with one of our containers, be sure to disable all custom scripts/services/mods before seeking support.
!!! note
While the following support has been added to our containers, we will not give support to any custom scripts, services, or mods. If you are having an issue with one of our containers, be sure to disable all custom scripts/services/mods before seeking support.
## Custom Scripts
@@ -41,7 +42,8 @@ echo "**** installing ffmpeg ****"
apk add --no-cache ffmpeg
```
**NOTE:** The folder `/custom-cont-init.d` needs to be owned by root! If this is not the case, this folder will be renamed and a new (empty) folder will be created. This is to prevent remote code execution by putting scripts in the aforementioned folder.
!!! note
The folder `/custom-cont-init.d` needs to be owned by root! If this is not the case, this folder will be renamed and a new (empty) folder will be created. This is to prevent remote code execution by putting scripts in the aforementioned folder.
## Custom Services
@@ -65,9 +67,11 @@ Running cron in our containers is now as simple as a single file. Drop this scri
/usr/sbin/crond -f -S -l 0 -c /etc/crontabs
```
**NOTE:** With this example, you will most likely need to have cron installed via a custom script using the technique in the previous section, and will need to populate the crontab.
!!! note
With this example, you will most likely need to have cron installed via a custom script using the technique in the previous section, and will need to populate the crontab.
**NOTE:** The folder `/custom-services.d` needs to be owned by root! If this is not the case, this folder will be renamed and a new (empty) folder will be created. This is to prevent remote code execution by putting scripts in the aforementioned folder.
!!! note
The folder `/custom-services.d` needs to be owned by root! If this is not the case, this folder will be renamed and a new (empty) folder will be created. This is to prevent remote code execution by putting scripts in the aforementioned folder.
## Docker Mods
@@ -106,7 +110,8 @@ docker create \
The source code for this mod can be found [here](https://github.com/Taisun-Docker/config-mods/tree/master/pia).
**NOTE:** When pulling in logic from external sources practice caution and trust the sources/community you get them from, as there are extreme security implications to consuming files from sources outside of our control.
!!! note
When pulling in logic from external sources practice caution and trust the sources/community you get them from, as there are extreme security implications to consuming files from sources outside of our control.
## We are here to help


@@ -8,58 +8,58 @@ Note that when inputting data for variables, you must follow standard YAML rules
## Installation
- Install Option 1 (recommended)
### Official Install Script
Starting with version 2, Docker started publishing `docker compose` as a Go-based plugin for docker (rather than a Python-based standalone binary). They also publish this plugin for various arches, including x86_64, armhf, and aarch64 (as opposed to the x86_64-only binaries for v1.x). Therefore we updated our recommended install option to utilize the plugin.
Install docker from the official repos as described [here](https://docs.docker.com/engine/install/) or via the convenient [get-docker script](https://docs.docker.com/engine/install/ubuntu/#install-using-the-convenience-script) as described below:
```shell
curl -fsSL https://get.docker.com -o get-docker.sh && \
sh get-docker.sh
```
- Install Option 2 (manual)
### Manual Package Installation
You can install `docker compose` manually via the following commands:
```shell
ARCH=$(uname -m) && [[ "${ARCH}" == "armv7l" ]] && ARCH="armv7" && \
sudo mkdir -p /usr/local/lib/docker/cli-plugins && \
sudo curl -SL "https://github.com/docker/compose/releases/latest/download/docker-compose-linux-${ARCH}" -o /usr/local/lib/docker/cli-plugins/docker-compose && \
sudo chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
```
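The only arch renaming in the command above is `armv7l` to `armv7`; as a standalone sketch, that mapping looks like this (`map_arch` is a hypothetical helper, not part of any install script):

```shell
# Map `uname -m` output to the name used by compose release assets;
# only armv7l differs (an assumption based on the install command above).
map_arch() {
  a="$1"
  [ "$a" = "armv7l" ] && a="armv7"
  echo "$a"
}
map_arch armv7l  # prints "armv7"
map_arch x86_64  # prints "x86_64"
```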
Assuming you already have docker (or at the very least docker-cli) installed, preferably from the official docker repos, running `docker compose version` should display the compose version.
If you don't have docker installed yet, we recommend installing it via the following commands:
```shell
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
```
- v1.X compatibility
#### v1.X compatibility
As v2 runs as a plugin instead of a standalone binary, it is invoked by `docker compose args` instead of `docker-compose args`. There are also some slight differences in how the yaml is interpreted. To make migration easier, Docker released a replacement binary for `docker-compose` on x86_64 and aarch64 platforms. More info on that can be found at the [upstream repo](https://github.com/docker/compose-switch).
- Install Option 3 (docker)
### Container alias
You can install docker-compose using our [docker-compose image](https://github.com/linuxserver/docker-docker-compose) via a run script. Simply run the following commands on your system and you should have a functional install that you can call from anywhere as `docker-compose`:
```shell
sudo curl -L --fail https://raw.githubusercontent.com/linuxserver/docker-docker-compose/v2/run.sh -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```
In order to update the local image, you can run the following commands:
```shell
docker pull linuxserver/docker-compose:"${DOCKER_COMPOSE_IMAGE_TAG:-v2}"
docker image prune -f
```
The above commands will use the v2 images (although invoked by `docker-compose` instead of `docker compose`). If you'd like to use v1 images, you can set an env var `DOCKER_COMPOSE_IMAGE_TAG=alpine` or `DOCKER_COMPOSE_IMAGE_TAG=ubuntu` in your respective `.profile`. Alternatively you can set that var to a versioned image tag like `v2-2.4.1-r1` or `version-alpine-1.27.4` to pin it to a specific docker-compose version.
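The `v2` fallback in the pull command comes from standard shell parameter expansion; a quick sketch of how the `:-` default behaves (no Docker involved):

```shell
# ${VAR:-default} falls back to "default" when VAR is unset or empty
unset DOCKER_COMPOSE_IMAGE_TAG
echo "${DOCKER_COMPOSE_IMAGE_TAG:-v2}"  # prints "v2"
DOCKER_COMPOSE_IMAGE_TAG=alpine
echo "${DOCKER_COMPOSE_IMAGE_TAG:-v2}"  # prints "alpine"
```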
## Single service Usage
@@ -191,7 +191,7 @@ Create or open the file `~/.bash_aliases` and populate with the following conten
alias dcup='docker compose -f /opt/docker-compose.yml up -d' #brings up all containers if one is not defined after dcup
alias dcdown='docker compose -f /opt/docker-compose.yml stop' #brings down all containers if one is not defined after dcdown
alias dcpull='docker compose -f /opt/docker-compose.yml pull' #pulls new images for all containers if one is not specified after dcpull
alias dclogs='docker compose -f /opt/docker-compose.yml logs -tf --tail="50" '
alias dtail='docker logs -tf --tail="50" "$@"'
```


@@ -108,15 +108,13 @@ All synchronized repositories and images returned.
{% endapi-method-spec %}
{% endapi-method %}
{% hint style="info" %}
Any repositories not synchronized with Docker Hub \(e.g. staging or metadata repositories\) will not be returned as part of the API. This also applies to images which the repository owner does not wish to be part of the primary image list.
{% endhint %}
!!! info
Any repositories not synchronized with Docker Hub \(e.g. staging or metadata repositories\) will not be returned as part of the API. This also applies to images which the repository owner does not wish to be part of the primary image list.
## Running Fleet
{% hint style="warning" %}
Fleet is a Java application and requires at least JRE 11.
{% endhint %}
!!! warning
Fleet is a Java application and requires at least JRE 11.
Grab the latest Fleet release from [GitHub](https://github.com/linuxserver/fleet/releases).
@@ -152,9 +150,8 @@ fleet.admin.secret=<a_random_string>
All configuration can be loaded either via the config file, via JVM arguments, or via the system environment. Fleet will first look in the configuration file, then JVM runtime, and finally in the system environment. It will load the first value it finds, which can be useful when needing to override specific properties.
{% hint style="info" %}
If you place a property in the system environment, ensure that the property uses underscores rather than periods. This is due to a limitation in BASH environments where exported variables must not contain this character. E.g. `fleet.app.port=8080` becomes `export fleet_app_port=8080`
{% endhint %}
!!! info
If you place a property in the system environment, ensure that the property uses underscores rather than periods. This is due to a limitation in BASH environments where exported variables must not contain this character. E.g. `fleet.app.port=8080` becomes `export fleet_app_port=8080`
| Property Name | Purpose |
| :--- | :--- |
@@ -169,9 +166,8 @@ If you place a property in the system environment, ensure that the property uses
As well as the base configuration file, Fleet also supports some runtime arguments by means of the `-D` flag. These can be used to direct Fleet to behave in a specific way at runtime.
{% hint style="info" %}
Unlike the properties defined above, these properties are only accessed via the JVM arguments \(`-D`\).
{% endhint %}
!!! info
Unlike the properties defined above, these properties are only accessed via the JVM arguments \(`-D`\).
| Runtime Argument | Purpose |
| :--- | :--- |
@@ -188,6 +184,5 @@ When starting Fleet for the first time it will create a default user in order fo
**Password**: admin
{% hint style="warning" %}
You should change the default password for this user as soon as possible! This can be done via the `Admin` -&gt; `Users` menu options.
{% endhint %}
!!! warning
You should change the default password for this user as soon as possible! This can be done via the `Admin` -&gt; `Users` menu options.


@@ -84,17 +84,17 @@ services:
Our image currently supports three different methods to validate domain ownership:
- **http:**
- Let's Encrypt (acme) server connects to domain on port 80
- Can be owned domain or a dynamic dns address
- **dns:**
- Let's Encrypt (acme) server connects to dns provider
- API credentials and settings entered into `ini` files under `/config/dns-conf/`
- Supports wildcard certs
- Need to have own domain name (non-free)
- **duckdns:**
- Let's Encrypt (acme) server connects to DuckDNS
- Supports wildcard certs (only for the sub-subdomains)
- No need for own domain (free)
The validation is performed when the container is started for the first time. Nginx won't be up until ssl certs are successfully generated.
@@ -127,7 +127,8 @@ If you are using docker-compose, and your services are on the same yaml, you do
For the below examples, we will use a network named `lsio`. We can create it via `docker network create lsio`. After that, any container that is created with `--net=lsio` can ping each other by container name as dns hostname.
> Keep in mind that dns hostnames are meant to be case-insensitive, however container names are case-sensitive. For container names to be used as dns hostnames in nginx, they should be all lowercase as nginx will convert them to all lowercase before trying to resolve.
!!! info
Keep in mind that dns hostnames are meant to be case-insensitive, however container names are case-sensitive. For container names to be used as dns hostnames in nginx, they should be all lowercase as nginx will convert them to all lowercase before trying to resolve.
## Container setup examples
@ -317,7 +318,8 @@ After the container is started, we'll watch the logs with `docker logs swag -f`.
Now we can access the webserver by browsing to `https://www.linuxserver-test.duckdns.org`.
!!! warning

    Due to a DuckDNS limitation, our cert only covers the wildcard subdomains, but it doesn't cover the main url. So if we try to access `https://linuxserver-test.duckdns.org`, we'll see a browser warning about an invalid ssl cert. But accessing it through the `www` (or `ombi` or any other) subdomain should work fine.
## Web hosting examples
@ -765,7 +767,8 @@ Any requests sent to nginx where the destination starts with `https://linuxserve
Same as the previous example, we set a variable `$upstream_app` with the value `mytinytodo` and tell nginx to use the variable as the address. Keep in mind that the port listed here is the container port because nginx is connecting to this container directly via the docker network. So if our mytinytodo container has a port mapping of `-p 8080:80`, we still set `$upstream_port` variable to `80`.
!!! info

    Nginx has an interesting behavior displayed here. Even though we define `http://$upstream_mytinytodo:80/` as the address nginx should proxy, nginx actually connects to `http://$upstream_mytinytodo:80/todo`. Whenever we use a variable as part of the proxy_pass url, nginx automatically appends the defined `location` (in this case `/todo`) to the end of the proxy_pass url before it connects. If we include the subfolder, nginx will try to connect to `http://$upstream_mytinytodo:80/todo/todo` and will fail.
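A hedged sketch of such a location block (upstream name and port are illustrative):

```nginx
location /todo {
    # Using a variable in proxy_pass forces runtime resolution via resolver.conf
    set $upstream_mytinytodo mytinytodo;
    # As described above, nginx appends the matched location (/todo) before
    # connecting, so the effective upstream url becomes http://mytinytodo:80/todo
    proxy_pass http://$upstream_mytinytodo:80/;
}
```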
### Ombi subdomain reverse proxy example
@ -1223,8 +1226,8 @@ This error means that nginx can't talk to the application. There are a few common reasons
In most cases the contents of `/config/nginx/resolver.conf;` should be `...resolver 127.0.0.11 valid=30s;`, if this is not the case, you can:
- Delete it, and restart the container to have it regenerate
- Manually set the content (we won't override it)
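For example (assuming the container is named `swag`):

```shell
# Inspect the generated resolver config
docker exec swag cat /config/nginx/resolver.conf
# expected content is along the lines of: resolver 127.0.0.11 valid=30s;

# If it is wrong, remove it and restart the container so it regenerates
docker exec swag rm /config/nginx/resolver.conf
docker restart swag
```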
## Final Thoughts
@ -1,8 +1,7 @@
# Understanding PUID and PGID
!!! info

    We are aware that recent versions of the Docker engine have introduced the `--user` flag. Our images are not yet compatible with this, so we recommend continuing usage of PUID and PGID.
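For instance (the username is illustrative), the IDs to pass can be looked up on the host with `id`:

```shell
# Look up the uid/gid of the host user that should own the config files
id dockeruser
# example output: uid=1000(dockeruser) gid=1000(dockeruser) groups=1000(dockeruser)

# Hand those values to the container as environment variables
docker run -d --name=heimdall -e PUID=1000 -e PGID=1000 lscr.io/linuxserver/heimdall
```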
## Why use these?
@ -18,7 +18,8 @@ docker stop <container_name>
Once the container has been stopped, remove it.
!!! warning

    Did you remember to persist the `/config` volume when you originally created the container? Bear in mind, you'll lose any configuration inside the container if this volume was not persisted. [Read up on why this is important](volumes.md).
```shell
docker rm <container_name>
```
@ -20,6 +20,7 @@ docker create --name my_container \
The above example shows how the usage of `-v` has mapped the host machine's `/opt/appdata/my_config` directory over the container's internal `/config` directory.
!!! info

    When dealing with mapping overlays, it always reads `host:container`
You can do this for as many directories as required by either you or the container itself. Our rule-of-thumb is to _always_ map the `/config` directory as this contains pertinent runtime configuration for the underlying application. For applications that require further data, such as media, our documentation will clearly indicate which internal directories need mapping.
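As a sketch (paths and image are illustrative), mapping several directories follows the same `host:container` pattern:

```shell
# each -v reads host path on the left, container path on the right
docker create --name=sonarr \
  -v /opt/appdata/sonarr:/config \
  -v /mnt/storage/tv:/tv \
  -v /mnt/storage/downloads:/downloads \
  lscr.io/linuxserver/sonarr
```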
@ -1 +1,37 @@
--8<-- "finances.md"
# Finances
* v0.1 Beta (Work in progress)
* Created 2021-08-18
* Updated 2021-08-18
## Charter
We will at all times attempt to keep a surplus of $6,000 in the bank account, or an amount which covers 3 years of expenses, whichever is higher. All other money will be disbursed by agreement of a general consensus of linuxserver.io staff members.
## Annual Expenses
* DigitalOcean yearly costs (currently paid for) **$1200**
* AWS **~$200**
* Contabo hosting **$287.76**
* Email Hosting **$20**
* Various domains **~$150**
* Docker Pro Plan **$60**
* Various licenses **~$150**
## Votes
In order for money to be approved for a project, the requesting member must go through every effort to bring to vote a fully formed idea that is ready to be actioned. This means that all the legwork is done before bringing an idea to vote, or at least as much as is reasonably possible. A vote will last for 3 days in order to give all team members the opportunity to participate without unnecessarily causing delays. A general consensus will need to be reached in order for it to proceed.
## Acceptable uses of money
* Hardware/Software needed to help the group reach a specific goal
* Stationery + related items for possible conventions
* Convention fees (Both Attendance and Travel)
* Hosting services (Including domain purchases)
* Good will gestures (Example: For users outside the group that have provided help when asked.)
* Food/Drink for LinuxServer.io focused sprints
* Donations to upstream projects
## Links
* [https://opencollective.com/linuxserver#category-BUDGET](https://opencollective.com/linuxserver#category-BUDGET)
@ -15,7 +15,8 @@ theme:
- content.code.copy
- navigation.footer
- navigation.instant
- navigation.prune
# - navigation.prune
- navigation.tabs
- navigation.top
- navigation.tracking
- search.highlight
@ -86,3 +87,7 @@ plugins:
- minify:
minify_html: true
- search
- redirects:
redirect_maps:
'faq.md': 'FAQ.md'
general/awesome-lsio.md: misc/awesome-lsio.md